Kinza Chaudhry cringes when she gets a WhatsApp message full of forwarded health advice from a family friend or distant cousin. It might be a message about treating diabetes or thyroid disease or advice about what foods constitute healthy eating.
Chaudhry, 33, a registered dietitian in Washington, D.C., is concerned that health misinformation shared on messaging platforms like WhatsApp is often not backed by science.
“These generalized chain messages are spreading a lot of misinformation. I don’t know if any of this data is ever being tracked and if people are going to the hospital,” because they followed faulty information, Chaudhry told the PBS NewsHour.
Chaudhry is among the more than two billion WhatsApp users around the world who use the private messaging service to connect with family and friends. The service is free for everyone, but for millions in diaspora communities across the United States, it is also a lifeline to home countries. Calls that used to cost dollars per minute are now free texts sent across thousands of miles. But those same messages can also carry misinformation–some of it dangerous.
As the U.S. was consumed with political disinformation surrounding the presidential election, largely on Facebook, the rest of the world was dealing with science and health misinformation on messaging apps, said Claire Wardle of First Draft News, a nonprofit dedicated to tackling misinformation.
And while social media giants like Facebook and Twitter have come under intense scrutiny for hosting misinformation, private messaging services like Telegram, South Korea’s Kakao, the China-based WeChat and the largest–WhatsApp–have been more difficult to monitor because they host private, sometimes encrypted, chats between individuals or small groups–sheltered from the eyes of fact-checkers and watchdogs. Often, that privacy is what brings people to these types of messaging apps. Though out of view, chats can include the spread of misinformation about health and politics both in the United States and abroad.
READ MORE: Facebook’s leadership had ‘no appetite’ to fact check political ads, combat disinformation
The nearly one-in-five American adults who get their news primarily from social media are more likely to have heard about false or unproven claims than those who get their news from other sources, according to a 2020 analysis by the Pew Research Center. They’re also less concerned about the dangers and consequences of made-up news, the analysis said. These concerns are only compounded when the messaging takes place on platforms where conversations are more protected, such as WhatsApp.
“Messaging apps, they’re essentially the same as me having a conversation around the dinner table…they are encrypted so even the platforms themselves don’t know what’s being shared,” Wardle said.
How misinformation has molded reality
Twenty-eight-year-old Faris Ibrahim’s mom forwards messages to his Sudanese family’s WhatsApp group every day. He’s seen everything from graphics of vegetables that allegedly cure cancer to home remedies that supposedly prevent you from catching the coronavirus.
“Whenever I get something on WhatsApp or hear something ridiculous, instead of saying ‘fake news’ we now call it ‘WhatsApp news,’” said Ibrahim, who lives in D.C.
Ibrahim believes there is a “WhatsApp culture,” one that’s prevalent among older generations in diaspora communities like his own because the information is being shared by loved ones, friends and people from one’s own community–all on the platform where they gather.
“Now my generation sees everything on WhatsApp like it’s fake. Even if it’s a real article shared on WhatsApp, we have so much speculation compared to our parents, who are like, ‘Oh, this is definitely real because it came from your auntie or lady from the mosque.’ There’s that trust and credibility there,” Ibrahim said.
Stephanie Hankey, co-founder and executive director of the Tactical Technology Collective, which studies the intersection of tech and culture, said that a sense of trust is what makes information sharing on encrypted messaging apps, or EMAs, different from open platforms.
“With WhatsApp, you’re usually getting messages from people you know that you’re somehow connected to, the level of trust is much higher so it makes a difference in how that information moves,” Hankey said.
Ibrahim’s family often sends political updates about the military coup in Sudan, home to his extended family. During the revolution in 2019, his parents relied heavily on WhatsApp to make sure their family in Sudan was safe. But it was hard to know whether other information being forwarded among the Sudanese diaspora, such as videos and images of death and violence, was accurate.
“You don’t know if what you’re seeing is real. I know a lot of the images we later found out were from years before the revolution or took place in another country, or in another context,” Ibrahim said.
Diaspora communities that leave their home countries–whether for economic, cultural or political reasons–often have a snapshot-in-time understanding of that birthplace, which can influence what they share, said Dwaine Plaza, a sociology professor at Oregon State University who studies transnational families and social media.
“Good intentions can sometimes go in bad ways, because some of those folks can be the sowers of disinformation back in their home place,” Plaza said. “They can also be the sowers of strife for people who have sort of worked through the strife and are now having to work through the strife again because this person keeps ripping off the band-aid.”
Health misinformation has run rampant across social media and messaging apps throughout the pandemic — everything from bogus measures to prevent COVID-19 to unsubstantiated cures for the virus. Conspiracy theories about the virus’ origins have also spread, some as outlandish as the claim that the vaccine is a cover for Microsoft co-founder and billionaire Bill Gates to microchip and track people–which is false.
Hispanic communities in the United States have been inundated with health- and vaccine-related falsehoods, from Spanish-language videos circulating on WhatsApp featuring people claiming to be doctors spouting false information about the vaccine, to memes in WhatsApp groups saying things like “the only cure I need is God.” Some medical experts have pointed to this kind of misinformation as contributing to lower vaccination rates among Hispanic people (as it has for other groups).
There are reportedly about 78 million WhatsApp users in the U.S., and Hispanic Americans (46 percent) are far more likely to say they use WhatsApp than Black (23 percent) or white Americans (16 percent), according to a Pew Research Center study. Hispanic Americans were also much more likely to talk about politics on WhatsApp, at 27 percent, followed by 21 percent of Asian Americans. Only about 4 percent of white Americans used the platform to discuss politics.
In response to the spread of misinformation on its platform, WhatsApp says it has partnered with health authorities to connect users with factual COVID-19 information and with vaccine appointments.
READ MORE: Americans agree misinformation is a problem, poll shows
Anna Wong, a 62-year-old who lives in Boca Raton, Florida, and communicates with friends and family around the country and abroad in Hong Kong and Canada, says that at the beginning of the pandemic she got WhatsApp forwards claiming that Indian curries prevent the coronavirus or that summer heat would stop the pandemic — both false.
Even people she regards as highly educated in her networks are getting fooled because “the misinformation is very crafty.” A forward might say a lot of reasonable things, she said, but then slip in one or two pieces of false information. “If they agree with most of the information, and they disagree with one, they will still share it. This misinformation thing sneaks up on you.”
For Indian Americans, whose native country has the highest number of WhatsApp users in the world, the pattern is similar: a recent survey found that around 30 percent of Indians surveyed used WhatsApp for COVID-19 information, and just about as many fact-checked fewer than half of the messages they received before forwarding them.
Chaudhry, who is South Asian American, says many of the chain messages come from her parents’ generation, who in turn sometimes ask her whether to buy a certain oil or eat a certain food they saw touted on WhatsApp. But despite her professional expertise as a dietitian, Chaudhry said she’s given up trying to correct people because they can often be headstrong about their beliefs.
“Years of education and practice — that’s kind of going out the window because there are people that are not trained professionals that are disseminating information that’s inaccurate,” Chaudhry said. “It’s sad to see that.”
Wardle believes that much of the susceptibility also trends along a generational divide: older generations are less able to tell what content is doctored and what is authentic, while younger users, who have grown up in a digital age, may be better conditioned to spot those disparities.
During the 2020 U.S. presidential election, Latino communities in Florida were bombarded with false information about communism and the Democratic Party via private messaging apps. Fake images circulating on WeChat–a Chinese-owned messaging platform–that purported to show the U.S. preparing to dispatch the military to stop riots during the election were aimed at lowering turnout among Chinese American voters.
“Technology sits on top of humans. So if humans hate each other, then it turns out that WhatsApp just kind of amplifies that in the same way,” Wardle said.
READ MORE: How misinformation is used to amplify and solidify ideology
Hankey said that the types of false news spread via technology are always shifting, from voter persuasion and influence to public health. But like Wardle, she believes that what circulates is a mirror of what a society — or a diaspora community — sees around it.
“Whilst I believe that the messaging apps amplify and extend those phenomenons, I’m not sure whether we can say that they’re the reason for them,” Hankey said. “The reason for them is we have those things in society, and then we have these mechanisms that are making them travel faster, further, to more people and going unchecked.”
Private messaging apps vs. the big, open social platforms
In recent years, major open platforms like Facebook and Twitter have implemented measures that sometimes flag users when content they share has been shared many times or might not be factual. However, critics point out that Big Tech has a long way to go in fighting disinformation and may not be able to properly self-regulate. Recently leaked documents from Facebook whistleblower Frances Haugen reveal the company has had “no appetite” to fact-check political ads, has struggled to curb divisive content and has allowed hate speech to thrive.
Monitoring and flagging messages is hard enough on open platforms; on private messaging apps, anything of the kind is nearly impossible.
That is what makes tracking disinformation on EMAs so difficult–the companies themselves cannot access users’ communications, meaning they cannot see what people are sending one another.
A spokesperson for WhatsApp — which is owned by Meta, formerly Facebook — reiterated that the messaging service is different from its sister products. “Unlike social media, where people can find and view content at scale, WhatsApp is designed primarily for one-to-one and small group conversations,” the spokesperson told the PBS NewsHour.
READ MORE: Is Facebook putting company over country? New book explores its role in misinformation
WhatsApp now attempts to curb the spread of viral messages by limiting forwarding to five chats at a time, a change the company said reduced forwarded messages on WhatsApp by a quarter. Messages that have been forwarded many times and didn’t originate from a close contact are also marked with double arrows and can be forwarded to only one chat at a time. That change reduced such messages by about 70 percent, the platform said.
Tencent — the Chinese parent company behind WeChat — declined to comment on the record but pointed to WeChat’s in-app reporting function, which lets users report messages they deem harmful. WeChat then investigates the flagged messages.
Signal, Telegram and Viber did not respond to a request for comment.
So what’s being done about the misinformation?
Amid the pandemic, the World Health Organization launched a WHO health alert on WhatsApp: texting “hi” to a designated number returned factual information about the coronavirus — infection rates around the world, travel advisories, debunked rumors.
Even if more is done to combat misinformation on EMAs, it may not help many diaspora communities. The recent Facebook leaks also revealed that the company spends 87 percent of its budget for classifying misinformation on U.S. users — even though Americans represent only about 10 percent of the platform’s active user base. “The reality is that Facebook has radically under-invested in safety and security systems for all languages other than English,” Haugen said in testimony to European lawmakers.
“They’re not really doing anything to tackle it in the same way that they are … to a certain extent, tackling misinformation that is proliferating in English within the U.S.,” said Rachel Moran, a postdoctoral fellow at the University of Washington.
“We need technology to be run by a more diverse subset of people to understand both the intended and the unintended consequences of the decisions that these companies are making,” Moran said.
Fact-checking organizations like Viet Fact Check and The Interpreter help flag misinformation for Vietnamese Americans, for example. But monitoring forwarding and fact-checking resources may not be enough.
“We can have more fact-checking tip lines. We can do visits. I do think another way to think about this is working with community media,” said Wardle, adding that there needs to be more groundwork with community leaders: educating people about what types of information can be trusted and how to identify when something is false.
“There’s no shortcut to this,” Wardle said. “It’s just digital literacy.”