The Man Who Saw Fake News Coming
Filippo Menczer predicted a global misinformation boom 10 years ago, but he's still shocked.
The proliferation of fake news on Facebook and Google has been a hot-button topic among people not employed by those companies since the unexpected result of the 2016 presidential election split the internet at the seams. The media has always propagated mistruths by accident, and lies have always been able to travel halfway around the internet before the truth even finds its shoes, but the current fake news phenomenon represents something new: digital misinformation crowding fact out of public discourse. Indiana University Professor of Informatics and Computer Science Filippo Menczer has been bracing himself for this eventuality for years, but he says the magnitude of the problem has shocked even him.
Roughly a decade ago, Menczer and his colleagues ran an experiment that produced unnerving results. He created a fake web page full of fake celebrity gossip news, complete with ads and a disclaimer about the information’s illegitimacy. At the end of the month, he received a check in the mail for the ad revenue. He found that 72 percent of college students trusted links that appeared to originate from friends, going so far as to hand over personal login information to unknown entities. He realized fake news could be a profitable business, and he said as much publicly, warning social networks and media companies that the race to the bottom was about to go into hyperdrive.
Then some time passed. Then it happened.
Menczer, who also serves as the Director of the Center for Complex Networks and Systems Research, tells Inverse that proliferation was inevitable, but Pizzagate wasn’t — at least he didn’t think so.
Can you explain why people trust sources or publications that they haven’t heard of and share them without really knowing what the publication is?
We haven’t studied this in a lab or anything, but I can give you my assumption. My guess is that people don’t pay so much attention to the source as to the content: whether they perceive it as true or not, based on their pre-existing beliefs, feelings, or opinions. When they see something that looks like it’s true, they believe it. They don’t really spend the time to ask if it’s a trustworthy source or not.
On top of that, there are a lot of people who have decreased trust in traditional media. Those people are probably even more vulnerable to being manipulated by fake news websites. They don’t even assume at the outset that a traditional media source has some commitment to the truth. That’s my impression. It’s not so much that they see those websites they’ve never heard of and then decide to trust them; it’s that they probably don’t even notice that it’s something they’ve never heard of.
You ran that experiment with the fake celebrity gossip news ten years ago. What was your reaction to the conversations about fake news right after the election?
Mainly it’s mixed feelings. On the one hand, I’m very sad to see that fake news has become mainstream, that so many people fall for it, and it’s become a business. It’s such a big business that a lot of people are making money by just, basically, creating a cottage industry of fake news websites.
On the other hand, I’m happy that at least the problem is getting the attention that it deserves. The fact that many people are talking about it is hopefully creating public awareness about our vulnerability to misinformation. Even since 2010, we’ve observed a lot of fake news websites and people using Twitter and reposting fake news. It just did not seem to be something at the forefront of people’s attention. We wrote a couple of papers that got a little bit of attention to it, but it was not like something that people would talk about. It was not discussed to this extent. The public awareness of the problem was something that I’m happy about, simply because I hope that it will bring more research to bear on how to understand this phenomenon of fake news and misinformation. Also, maybe, on how to mitigate it.
With more research being done on the topic, there’s the hope that people will go out and educate themselves. Are you confident that people are going to do that? Or are people, already set in their political ideologies and what they believe, just not going to care?
There are some people who are going to continue to be guided mostly by ideology, rather than by rationality and science, or bona fide journalism. They’re going to be guided more by politics and opinions. That’s always been the case. Then there is, I would imagine, a large number of people who, even though they have strong political beliefs, are interested in being informed.
As they realize that they can become a victim, then they will probably become a little bit more aware and more careful about what sources they trust and believe. They will realize that just because their friend posted something does not imply that that friend has vetted that news and, therefore, it is something trustworthy. They realize that their friends can be just as biased and mistaken as them. But for some people, nothing will change.
For a lot of people, eventually they’ll become more wary of it. Another piece of it is that some of the problem may be solved through technology. Take spam, for example. At the beginning, when everybody started using email, there were a lot of people who were victimized by spam. Then we started having spam filters, and then spammers became smarter, and there was an arms race. Still, today, there are people who are victimized by spam and fraud online. Phishing attacks and so on. People are more aware of it. Part of it is that people are a little bit more careful. The other part is that a lot of the spam that is generated never even reaches your inbox, because the email system that you use is good enough that it can filter, maybe not 100 percent of it, but certainly 95 percent of it.
Perhaps, in the future, the same will happen with fake news. If it’s pretty clear that something is junk, then maybe it doesn’t even need to reach your eyeballs. Between the awareness of the end users and technological solutions, I’m hopeful that things will get better. Eventually. Maybe they’ll get worse before they get better. But they will get better.
You’ve also studied how bots have helped spread misinformation. Who’s making them and how do they function?
For bots that are deceptive, all we know is that it’s somebody who creates a fake account and then uses software to control that fake account. There are so many of them. Some are more sophisticated, some are less sophisticated. Some of them, we don’t even know how to tell them apart from real human users. In some cases, we can. We’ve built some software that, with some decent accuracy, can distinguish between a human and a bot.
Some of them are just trying to inflate followers, to make it look like somebody’s more popular by liking pages on Facebook or following somebody on Twitter. Some of them are actively trying to steer the conversation. They may post, or retweet certain accounts that post information. Some bots are just promoting articles from fake websites, just so people go there and click. That way, the people who maintain the websites make money out of ads. There are different motivations for building bots, different types of bots, different levels of sophistication of bots. Bots that have different behaviors.
In your experiment from a decade ago, you ended up getting cut a check from the ads on the site. Did you ever think money was going to be such a big driving factor in perpetuating fake news?
Well, for me, it was a proof of concept. I saw that it was possible. I was hoping that nobody would do it. I had the proof, and we wrote in a paper that this is dangerous. At that point, I was thinking in terms of pollution. Who is the victim here? I was trying to understand the economic incentives. The advertiser is happy because they got a click anyway. They don’t care how they got it. The person who makes the fake news website, or whatever the deceptive website is, they’re happy because they get the money. The ad network, or Google, or Facebook, or whoever it is, they’re happy because they get a cut. Who’s the loser here? That’s something that I was trying to think about.
In the end, the best thing I could come up with is we are all the losers because this junk is polluting the web. By polluting the web, it means that it will be harder for us to find quality information. It will be harder for search engines to identify bad stuff. Then somebody will be misinformed, at some point. They will waste time reading things that are not good. I wasn’t even envisioning, at that time, that somebody could misinform to the point of being manipulated. I didn’t think somebody’s going to go into a restaurant and risk killing some innocent people. I thought more like people are going to waste time browsing the web and finding junk.
What can a normal, everyday person do to combat misinformation?
I think of three things. Number one is: Don’t unfollow people because they have different opinions from yours. That is actually a very efficient way to put yourself inside an echo chamber. Very quickly, you only see things that already conform with your pre-existing beliefs. That will make you more vulnerable, because you’re never going to be tricked by news from the opponent. Let’s say that you are pro-Trump. You’re not likely to believe a fake pro-Hillary message. You are likely to believe a pro-Trump fake message, and vice versa. We are more likely to be vulnerable to things that we’re more likely to believe.
By following and seeing things from people who have different ideas from us, that can help our skepticism. That can keep us more informed about the different kinds of things that are going around, and it can make us more skeptical. If you must, you can unfollow a source. You can say, “This person is posting links to this thing, which is a fake news website.” You can make it so that you will not see posts from that website again. Don’t make it so that you won’t see posts from that person again. Don’t put yourself in an echo chamber. Don’t unfollow people who have different opinions.
The second one is: Have some healthy skepticism. If you see something, before you think it’s true, check it. It’s not difficult to check it. You can usually just Google the title and, chances are, if you’re seeing it, many other people have seen it. In that case, probably somebody has already fact-checked it. Either you’ll find that there are lots of sources that are talking about this, and then it’s probably true. Or, you’ll see some fact-checking site that says, “No, no, this is not true.” Do some very simple fact checking.
The third thing is: Don’t share something without actually reading it. Some people share something based on just the headline. The headline can be incredibly misleading. Even if the article itself is not necessarily misleading, the headline very likely can be. This is a well-known technique of clickbait. You will see a headline that really sounds true to you, and makes you angry, so you’re tempted to share it. That’s a very bad thing, because then you’re becoming not only the victim of misinformation, but also the perpetrator.
This interview has been edited for brevity and clarity.