Autocratic governments have turned social media into a practical weapon, directing whole armies of fake accounts. Why do they command all these accounts? Because, through new media platforms, they can harass journalists and sow confusion about their opponents' activities.
It's not only authoritarian governments: actors in democracies use new media channels as weapons too. Take the 2016 presidential election in the United States, where numerous fake Twitter, Facebook, and other digital accounts served Donald Trump by posting constantly on his behalf.
And in Europe, campaigners for Brexit adopted the same methods. We can safely argue that Trump's backers and anti-EU campaigners in the UK created an immense wave of distorted information for the sake of victory.
However, efforts to cheat and fool society aren't limited to technology. Beneath the wave of misinformation lies a tangled labyrinth of social, economic, and political crises.
So what we have to examine is not social media itself, which has the potential to serve the well-being of society, but its misuse. As a first step, let's look at the reasons social media has become dysfunctional.
Chapter 1 – Unlike traditional media, new media weakens trust in institutions.
People have never lost faith in democratic institutions as much as they have today. According to Gallup polling, over forty-five years the percentage of Americans who trust Congress has dropped from 43 to 11 percent. The situation is similar for economic and religious establishments: faith in banks has declined, and the share of people who say they have confidence in religious institutions has fallen from 65 to 38 percent.
The United States is not the only country experiencing this problem; the same tendency can be observed in other democracies such as Brazil, Italy, and South Africa. Why have people lost faith in democratic institutions? It is strongly related to the way we use social media.
The advent of the internet transformed the media landscape enormously. In the pre-digital era, information flowed one way, from one to many: thousands of people listened to a single speaker. Think of anchors and newspaper editors, who addressed audiences made up of a great variety of people. Distinguished presenters and journalists aimed for objectivity, airing or printing only views that had been sufficiently vetted. People could still send letters full of conspiracy theories to the New York Times, but the newspaper held the authority to refuse to publish them.
This type of media meant citizens got their information from the same sources. Because people heard the same facts, they could agree on what was true, and that shared baseline built trust.
But recently, the way we get information has changed. Today, information flows follow a new paradigm, known as "many-to-many." People no longer need a publisher or presenter to spread their opinions: on new media, each of us can publish content, and anyone can reach it easily.
The internet's pioneers were aiming for a golden age of free expression and civic involvement. What they failed to predict was that the landscape would come to be dominated by a few big social media companies far less accountable than the old media gatekeepers.
There is almost no control mechanism for social media, so it provides a wonderfully suitable environment for disinformation. And on this chaotic platform of easily abused algorithms, distinguishing truth from fiction is challenging.
A 2018 survey found that 64 percent of Britons had difficulty telling real information from fake. In this foggy landscape, it's hard to believe what we read and hear, and people often get confused about whom to trust.
Chapter 2 – Fake news, often released for commercial gain, preys on people's desire for trustworthy reporting.
“FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE.”
In early November 2016, people read this striking headline on a website called the Denver Guardian, which claimed to be Colorado's oldest news source. The story implicated the Democratic Party's presidential candidate, Hillary Clinton, in a scandalous assassination plot.
Within a short time, the story was read by thousands of people; at its peak, more than 100 Facebook accounts were sharing it every minute. Yet despite all the sensation it created, the story was soon revealed to be fake.
There was not a single true point in the Denver Guardian's story. The man who published it was Jestin Coler, an entrepreneur from California. He owned the website, and his only aim was to generate ad clicks. Coler reckoned that the surest way to do so was to invent a sensational false story about the ongoing presidential campaign.
Indeed, he was right: it was a money-making business. During the November presidential contest, he earned between $10,000 and $30,000 a month from fake news.
As he later admitted, every part of the story was invented by Coler, who defended himself by saying, "The people wanted to hear this." None of it was true; the characters didn't even exist: not the victim, not the sheriff, not the FBI agent. After he sketched the scenario, his social media team pushed the story onto pro-Trump websites. Within a few hours, many people were reading it.
Coler aimed only for commercial profit, which distinguishes him from other purveyors of misinformation. But it didn't stop there; on the contrary, it fed a huge digital attack based on distorting the truth. Much more false news was released on Facebook, some of it run by the Russian government, alongside invented stories sold by young people in Moldova and by Trump's most excited supporters.
That’s exactly how Coler stated – people have the desire to hear these kinds of news. Why? Still, some of us are longing for trustworthy news announced by faithful broadcasters. That explains why Denver Guardian particularly claimed that it was the oldest media in Colorado.
Furthermore, people are keen to understand the depths of developments. Investigative journalism, with its rigorous verification, used to answer this desire. But as it has been displaced by a new type of media, spreaders of disinformation who never bother to check the sources of their news have come to dominate the information flow.
Chapter 3 – Social media companies have no intention of monitoring posts, which gives conspiracists a huge advantage.
Conspiracy theories have always existed. People have always tended to produce alternative guesses about what's going on around them, and every society creates dark rumors of sensational plots. But recently, something has changed.
Rumors used to circulate slowly: stories about the malicious actions of kings or presidents, which could hardly be confirmed, seldom made the evening news. With social media, rumors are available all the time, so conspiracies can spread quickly to a great variety of people.
Consider QAnon, a conspiracy theory produced by far-right groups in the United States. Its defenders argue that a clandestine faction of unelected state officials, the so-called "deep state," is working against Donald Trump.
Supporters of the theory also claim that a mysterious agent, "Q," has infiltrated this "deep state." The name refers to a high-level security clearance held by some American officials. Documents carrying Q's signature appeared from time to time on forums like 4chan; in one instance, Q reported that liberal politicians were running a child sex ring out of a pizza shop.
What Q said spread incredibly fast. Organized far-right groups amplified the posts and pushed them onto bigger channels like Twitter and Reddit, using tools such as social media "bots" that automate regular content posting. The bots created an algorithmic illusion of popularity and made the false news more persuasive. Soon the conspiracy reached ordinary readers, and for old media it became a story in itself, which made things even more difficult.
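The amplification mechanic described here is trivially simple. The sketch below (account names, messages, and timings all invented for illustration; it models post volume only, not any real platform's API) shows how a handful of accounts on fixed timers manufacture a flood:

```python
import itertools

def build_posting_schedule(accounts, talking_points, interval_s=10, posts_per_account=50):
    """Simulate the flood a small botnet produces: each fake account
    reposts the same talking points on a fixed timer. Purely a volume
    model; no real platform API is involved."""
    schedule = []
    points = itertools.cycle(talking_points)
    for tick in range(posts_per_account):
        for account in accounts:
            schedule.append({
                "t": tick * interval_s,   # seconds since the campaign starts
                "account": account,
                "message": next(points),
            })
    return schedule

# Ten bots posting every 10 seconds produce 500 posts in under 9 minutes.
flood = build_posting_schedule([f"bot_{n}" for n in range(10)],
                               ["claim A", "claim B"])
print(len(flood))  # 500
```

To a trending algorithm or a casual reader, those 500 posts look like 500 independent voices, which is the whole point of the "algorithmic illusion."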
The trouble is that all this misinformation destroys democratic principles. In the QAnon theory, liberals act totally against American values; if you believe that claim, you will conclude that liberals don't deserve to govern the country, even if they are elected by society.
So why do giant social media companies like Facebook and Twitter keep allowing these conspiracies to spread? Well, they believe it's not their business to keep an eye on social media posts. And after all, don't people have freedom of expression?
These companies have been slow to forbid the bots that boost the spread of misinformation. The consequence? A landscape that can easily be dominated by marginal conspiracists. And, as we'll see in the following pages, mainstream political actors have also learned to use social media for their campaigns.
Chapter 4 – As far as we know, the first use of bots to intervene in a political campaign happened in the United States.
At the end of 2013, Ukrainians organized huge protests against their pro-Russian president, Viktor Yanukovych. The Russian government responded to the protests against its ally by launching an online offensive: thousands of Russian-run fake social media accounts and bots flooded the web, producing false news and content against the protestors. Three years later, a similar strategy would be deployed against the United States during the 2016 presidential election.
Those observing this unusual strategy were witnessing a significant turning point in propaganda: the era of the digital campaign had arrived. But although the Russian government may have mastered digital subversion, it wasn't the earliest practitioner of the tactic.
In 2010, residents of Massachusetts were about to vote in a special election for a US Senate seat. There were two candidates: Scott Brown for the Republicans and Martha Coakley for the Democrats.
Coakley looked likely to win, since Massachusetts had voted for liberal candidates for decades. The contest had been triggered by the sudden death of the previous senator, Ted Kennedy, who had represented Massachusetts since 1962. Yet before long, Brown's support began to surge.
As Brown’s popularity suddenly increased, computer scientists at Wesleyan University detected something bizarre. Some awkward-looking Twitter accounts looked as if they had been in an organized attack on Coakley. What did they argue? That Democrat candidate was against the Catholics, which meant serious trouble in a state like Massachusetts.
These accounts had no bios and no followers, and they seemed to exist only to post about Coakley. The attacks arrived at ten-second intervals. It was obvious that someone was using bots, but who?
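The signals the researchers noticed (empty profile, no followers, a single obsessive topic, clockwork timing) can be sketched as a crude heuristic check. The field names and thresholds below are invented for illustration, not taken from the actual study:

```python
from statistics import pstdev

def looks_like_bot(account):
    """Score an account on the four red flags described above:
    empty bio, zero followers, a single obsessive topic, and
    near-constant posting intervals. Field names are invented."""
    score = 0
    if not account.get("bio"):
        score += 1
    if account.get("followers", 0) == 0:
        score += 1
    if len(set(account.get("post_topics", []))) == 1:
        score += 1
    times = account.get("post_times", [])
    gaps = [b - a for a, b in zip(times, times[1:])]
    if gaps and pstdev(gaps) < 1.0:   # e.g. a post every 10 seconds, like clockwork
        score += 1
    return score >= 3

suspect = {"bio": "", "followers": 0,
           "post_topics": ["coakley"] * 5,
           "post_times": [0, 10, 20, 30, 40]}
print(looks_like_bot(suspect))  # True
```

Real human accounts rarely trip more than one of these flags at once, which is what made the Coakley accounts stand out so clearly.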
When the researchers investigated the accounts, they traced them to a group of conservative activists based in Ohio, a state far from Massachusetts. Yet the fake accounts posed as worried residents of the state about to choose its senator. A careless glance would have persuaded you that the online attacks came from ordinary citizens in Coakley's state, not from bots run by a political group centered elsewhere.
The ruse worked, at least for a while. Because the bots generated so many posts, mainstream outlets such as the National Catholic Register and the National Review picked up the groundless claim that Coakley was anti-Catholic; only later were those posts traced back to the bots' activity on Twitter. By then the damage was done: the coverage had amplified the false accusation, Brown won the election, and, predictably, Coakley lost the race.
Chapter 5 – Bots are dumb, but that doesn't mean they're useless.
Every time you scroll through your Twitter feed and read the popular posts, you probably encounter bot-driven accounts. These accounts constantly share, like, and comment on different posts. It's not hard to recognize messages sent by a software program: bot messages usually have odd syntax, and their content arrives in well-timed, torrent-like bursts.
In other words, the average bot isn't very sophisticated. So why do journalists claim that bots have destroyed democracy? One possible answer is that it's simpler to blame bots than to ask a harder question: is democracy really so fragile that it can be destroyed by such simple means? We'll return to this point later; first, let's examine the bots themselves.
Think back to the 2016 presidential election and the race between Hillary Clinton and Donald Trump.
There was a great deal of talk about the role of bots in that election. For example, Cambridge Analytica, the political consulting firm that served the Trump campaign, promised the campaign a powerful service. The company claimed its bots would target specific audiences with pro-Trump and anti-Clinton talking points. For this purpose, Cambridge Analytica sought to harness "psychographic" data, extensive information about voters gathered from their social media pages.
Such a method would be enormously sophisticated, and as far as we know today, it was never actually used. Trump's team nevertheless ran an excellent digital campaign. This points to the most significant lesson of the whole affair: when it comes to digital propaganda, sophistication matters surprisingly little to the outcome of a race.
Consider the Computational Propaganda Project at Oxford University. Its research has found a remarkably consistent story. Regardless of the setting, whether it was the Russian digital intervention in the 2013 Ukrainian protests, the Brexit referendum, or Trump's 2016 presidential campaign, the bots used to spread misinformation have been extremely simple. All they can do is like or share content, spread links, and troll people. These bots were not driven by artificial intelligence; they were not even well programmed.
Yet even in this simple form, they fulfilled their function. The torrent of talking points they created was so abundant that rivals felt overwhelmed and helpless to respond quickly and effectively.
Chapter 6 – Social media companies need to regulate themselves, but they refuse to take responsibility for how their tools are used.
As the examples above have shown, you don't need advanced technology to manipulate public opinion. Whether your aim is to win a political race or simply to sell false news, basic bots and attractive headlines are enough to game social media algorithms.
This brings us back to the disturbing question posed earlier: why is democracy so delicate that it can be undermined by simple hacking tools? Part of the answer lies in how social media companies are regulated. They have taken over the position once held by newspapers and broadcasters, but the rules governing them are nowhere near as strict as those that bind traditional media.
Consider the Federal Election Commission, the watchdog that enforces the legal rules on political campaign finance. In 2016, the Commission declared that online campaigning fell outside its monitoring responsibilities. To be fair, although it can act on political ads, America's election watchdog could hardly have detected the torrent of viral fake news and software-generated disinformation.
So that watchdog is of no use in this online race. Does the federal government have any other tool to police this landscape? Well, Section 230 of the Communications Decency Act governs the issue. Passed in 1996, it gives internet corporations the authority to remove harmful speech, while also discharging them from liability for what their users post.
In the United States, social media corporations have used the authority granted by Section 230 to justify censoring posts containing explicit hate speech, such as anti-Semitic or neo-Nazi discourse. At the same time, the companies treat the law as permission to ignore disputed political content like disinformation. As their executives reason, the statute does not make them "arbiters of truth"; that, they say, is not their job. Yet Section 230 does allow them to arbitrate content; they simply use that power only occasionally. Why? Perhaps because doing so would clash with their libertarian ethos.
Imagine how overwhelming it would be to clean up all the content published on Facebook and Twitter. These two giant platforms grew incredibly quickly, and at the beginning they never set out their ethical rules. To prevent misuse, new regulations would have to be successfully retrofitted onto the existing structure. As a Facebook employee once admitted, the company is like a plane that took off before it was fully built.
Chapter 7 – Machine learning could help us understand online disinformation, but it cannot solve every problem.
There’s one more question to ask. Could we just ban using bots? In 2018, an American senator, Dianne Feinstein made a proposition for that aim, which was the “Bot Disclosure and Accountability Act.”
But Congress stalled the bill, which posed a dilemma between anti-bot security measures and freedom of speech. Moreover, even if you managed to regulate the use of bots, that wouldn't end disinformation: humans can spread rumors as quickly as bots can. The Chinese "50 Cent Army," for example, is an organized group of human-run accounts working to flood the web with pro-government propaganda. Anti-bot laws wouldn't stop them.
So maybe we have to change our focus. Bots shouldn't be our only concern; instead, we may need to think more deeply about how information naturally flows.
Imagine a bot that applies machine learning. This software would gather information by engaging people in discussion on social media, gradually improving its dialogues with real humans.
Now imagine that this program talked to real people and persuaded them to share an article trivializing global warming. That alone would generate a lot of data. The bot could then evaluate the data to detect which strategies worked and which did not, and use the result to shape its future behavior.
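The learning loop described here is essentially a multi-armed bandit problem. A toy simulation (strategy names and engagement rates are invented for the demo) shows how even a simple epsilon-greedy rule converges on whichever message earns the most engagement:

```python
import random

def epsilon_greedy_campaign(strategies, engagement_rates, rounds=5000, eps=0.1, seed=0):
    """Toy model of the learning loop described above: try messaging
    strategies, observe engagement, and shift future posts toward
    whatever worked. The engagement rates are invented for the demo."""
    rng = random.Random(seed)
    wins = {s: 0 for s in strategies}
    tries = {s: 0 for s in strategies}
    for _ in range(rounds):
        if rng.random() < eps:    # occasionally explore a random strategy
            s = rng.choice(strategies)
        else:                     # otherwise exploit the best observed rate
            s = max(strategies, key=lambda x: wins[x] / tries[x] if tries[x] else 0.0)
        tries[s] += 1
        if rng.random() < engagement_rates[s]:
            wins[s] += 1
    return max(tries, key=tries.get)   # the strategy the bot settled on

best = epsilon_greedy_campaign(["fear", "humor", "outrage"],
                               {"fear": 0.02, "humor": 0.05, "outrage": 0.12})
print(best)  # typically "outrage": the bot converges on whatever gets clicks
```

The unnerving part is that nothing in this loop cares whether a message is true; the only feedback signal is engagement.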
It sounds like a nightmare, doesn't it? Of course, but it's not the only possible scenario: we can also use machine learning to fight digital disinformation. One example comes from the Observatory on Social Media at Indiana University, which built a tool called Botometer.
Botometer helps distinguish human accounts from bot accounts. It does this by examining more than a thousand features of each account: its connections, its activity, and the language and style of its content. Once the analysis is finished, Botometer reports an overall "bot score," telling people whether they are dealing with a human or a bot and making it possible to detect one of the most common sources of disinformation.
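To give a flavor of feature-based scoring, here is a deliberately simplified sketch. The real Botometer uses supervised machine learning trained on labeled accounts; the feature names and weights below are invented for illustration only:

```python
def bot_score(features, weights):
    """Feature-weighted score in the spirit of Botometer's output:
    a number in [0, 1], higher meaning more bot-like. The real tool
    uses supervised machine learning; names and weights are invented."""
    raw = sum(weights[name] * value for name, value in features.items())
    return max(0.0, min(1.0, raw))

weights = {"posts_per_hour": 0.02, "duplicate_content_ratio": 0.5,
           "profile_completeness": -0.4, "follower_following_ratio": -0.1}

human = {"posts_per_hour": 1, "duplicate_content_ratio": 0.05,
         "profile_completeness": 1.0, "follower_following_ratio": 1.2}
bot = {"posts_per_hour": 40, "duplicate_content_ratio": 0.9,
       "profile_completeness": 0.1, "follower_following_ratio": 0.0}

print(bot_score(human, weights), bot_score(bot, weights))  # 0.0 1.0
```

A trained model replaces the hand-picked weights with ones learned from data, but the output has the same shape: a single score a reader or platform can act on.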
Unfortunately, machine learning algorithms like the one behind Botometer don't look like a solution to every misinformation problem. Most specialists therefore suggest hybrid, or "cyborg," monitoring models: humans check the sources of the news, while machines are set to quickly detect the bots that spread disinformation.
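The hybrid division of labor suggested here can be sketched as a two-stage triage pipeline. The flagging rules below are invented placeholders; a real system would use trained classifiers on the machine side and editorial judgment on the human side:

```python
def triage(posts, machine_flag, human_review):
    """Two-stage 'cyborg' pipeline: a fast automated filter flags
    suspicious posts, and only that smaller set goes to slower,
    more accurate human reviewers."""
    flagged = [p for p in posts if machine_flag(p)]
    return [p for p in flagged if human_review(p)]

posts = [{"text": "BREAKING!!!", "source_verified": False},
         {"text": "Council meeting tonight", "source_verified": True},
         {"text": "SHOCKING leak!!!", "source_verified": False}]

# Illustrative stand-ins: the machine flags shouty posts; the "human"
# confirms a post as disinformation only when its source doesn't check out.
confirmed = triage(posts,
                   machine_flag=lambda p: "!!!" in p["text"],
                   human_review=lambda p: not p["source_verified"])
print(len(confirmed))  # 2
```

The design point is throughput: machines cheaply narrow millions of posts down to a queue small enough for humans to judge carefully.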
It seems that we cannot fully rely on machines for certain significant tasks.
The Reality Game: How the Next Wave of Technology Will Break the Truth by Samuel Woolley – Book Review
New media differs radically from traditional media. The invention of the internet generated great excitement, because many people thought the new tool would free us from dependence on newspapers and broadcasters. But giving people a platform to create and find their own news sources didn't improve civic participation; instead, it created fertile ground for digital disinformation. The problem has been worsened by soft regulation and by social media companies' careless refusal to deal with bots. Still, we can benefit from machine learning by using it for fact-checking.