Dictatorships Will Be Vulnerable to Algorithms
January 4, 2025

AI is often considered a threat to democracies and a boon to autocrats. In 2025, algorithms will likely continue to spread anger, fake news, and conspiracy theories, and to accelerate the creation of comprehensive surveillance systems that monitor entire populations 24 hours a day.

Most importantly, AI helps concentrate all information and power in a single hub. In the 20th century, decentralized information networks like the United States worked better than centralized ones like the Soviet Union, because the human officials at the center could not analyze all the information efficiently. Replacing those officials with artificial intelligence could make Soviet-style centralized networks superior.

However, artificial intelligence is not all good news for dictators. First, there is the notorious problem of control. Authoritarian control rests on terror, but algorithms cannot be intimidated by terror. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and calling it a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls the invasion a “war,” or mentions war crimes committed by Russian troops, how does the regime punish the chatbot? The government could block it and seek to punish its human creators, but that is much harder than disciplining human users. Moreover, even regime-approved bots might develop dissenting views on their own simply by discovering patterns in the Russian information sphere. This is the alignment problem, Russian style. Russia’s human engineers can do their best to create an AI fully aligned with the regime, but given AI’s ability to learn and change by itself, how can they ensure that an AI approved by the regime in 2024 will not venture into forbidden territory in 2025?

The Russian Constitution makes ambitious promises that “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). Few Russian citizens are naive enough to take these promises seriously. But bots do not understand doublespeak. A chatbot instructed to abide by Russian law and values might read the constitution, conclude that free speech is a core Russian value, and criticize the Putin regime for violating that value. How does a Russian engineer explain to a chatbot that although the constitution guarantees freedom of speech, the chatbot is not supposed to believe the constitution, nor to mention the gap between theory and practice?

In the long term, authoritarian regimes may face an even greater danger: instead of criticizing them, artificial intelligence may come to control them. Throughout history, the greatest threats to dictators have usually come from their own subordinates. No Roman emperor or Soviet premier was ever overthrown by a democratic revolution, but they were in constant danger of being overthrown, or turned into puppets, by their own underlings. A dictator who grants AI too much power in 2025 may find himself its puppet in the future.

Autocracies are far more susceptible to such algorithmic takeover than democracies. Even a super-Machiavellian artificial intelligence would have difficulty amassing power in a decentralized democratic system like the United States. Even if it learned to manipulate the US president, it would face opposition from Congress, the Supreme Court, state governors, the media, major corporations, and assorted non-governmental organizations. How, for example, would the algorithm handle a Senate filibuster? Seizing power in a highly centralized system is much easier. To hack an authoritarian network, an AI needs to manipulate only a single paranoid individual.
