How does fake news of 5G and COVID-19 spread worldwide?
A recent study finds that misinformation about the new coronavirus spreads differently across countries, but that misunderstanding of 5G technology was consistent across them.
Among the search topics examined, the myth linking 5G to COVID-19 spread the fastest.
Dispelling myths and encouraging people to fact-check sources could help build trust with the public.

The year 2020 brought a COVID-19 pandemic as well as a pandemic of misinformation. From the first reported case in Wuhan, China, scientists have worked around the clock to gather information about the new coronavirus. In a year, we have learned a lot about the structure of the virus, how it spreads, and ways to reduce transmission.

But with new information comes misinformation. There have been many potentially dangerous theories related to COVID-19, ranging from claims that the new coronavirus is human-made to the idea that injecting bleach or other disinfectants could protect against infection. With the coincidental rollout of 5G technology, rumors have also linked 5G to the new coronavirus.
Factors behind the spread of misinformation

The COVID-19 pandemic resulted in widespread lockdowns across the world in 2020. With billions of people stuck at home, many have increasingly turned to social media, which has played a pivotal role in the spread of misinformation.

According to an October 2020 study in Scientific Reports, some social media sites, such as Gab, circulate a far higher proportion of articles from questionable sources than other platforms, such as Reddit. Engagement with that content also varied: Reddit users tended to reduce the impact of unreliable information, while Gab users amplified it.

Not all misinformation is shared maliciously. A July 2020 modeling study in Telematics and Informatics found that people shared COVID-19 articles, even false ones, because they were trying to stay informed, help others stay informed, connect with others, or pass the time.

One social media platform, Twitter, has become a double-edged sword for coronavirus news. A 2020 commentary in the Canadian Journal of Emergency Medicine suggests that Twitter helps disseminate new information rapidly, but that a constant stream of bad news can cause burnout or push users to seek out more optimistic information that may be false.

But who is more likely to share articles from dubious sources? A 2016 study in PNAS found that like-minded individuals tend to share more articles with each other, which can produce polarized groups when the articles involve conspiracy theories or science news. Sharing articles with inaccurate information was most common among conservatives and people over the age of 65 years, suggests a 2019 study in Science Advances that looked at fake news surrounding the 2016 United States presidential election.

To investigate how misinformation spreads worldwide, an international team of researchers explored which types of misinformation were more likely to be shared and the patterns in how that misinformation spread. Their findings appear in the Journal of Medical Internet Research.
Common misinformation terms

Using the World Health Organization (WHO) website, the researchers compiled a list of words falsely associated with causing, treating, or preventing COVID-19. They also included “hydroxychloroquine,” even though it was not part of the WHO’s new coronavirus mythbusters page at the start of the study.

The authors focused on four misinformation topics that claimed:

drinking alcohol, specifically wine, increases immunity to COVID-19
sun exposure prevents the spread of COVID-19, or it is less likely to spread in hot, sunny areas
home remedies may prevent or cure COVID-19
COVID-19 spreads via 5G cellular networks.

From December 2019 to October 2020, the team used Google Trends to track the frequency of these search terms in eight countries spanning five continents: Nigeria, Kenya, South Africa, the U.S., the United Kingdom, India, Australia, and Canada.
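The paper does not spell out how the trends data were retrieved, but the general approach is straightforward to reproduce. Below is a minimal sketch using the unofficial pytrends Python library; the query term, date range, and country codes are illustrative assumptions rather than the study’s exact parameters.

```python
# Illustrative only: pull weekly Google Trends interest for one hypothetical
# misinformation-related query across the eight countries in the study.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

countries = ["NG", "KE", "ZA", "US", "GB", "IN", "AU", "CA"]
term = "coronavirus 5g"  # hypothetical query; not necessarily the paper's exact term

weekly_interest = {}
for geo in countries:
    pytrends.build_payload([term], timeframe="2019-12-01 2020-10-31", geo=geo)
    df = pytrends.interest_over_time()
    if not df.empty:
        # Google scales interest 0-100 separately within each country.
        weekly_interest[geo] = df[term]

for geo, series in weekly_interest.items():
    print(geo, "peak week:", series.idxmax().date(), "peak value:", series.max())
```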
5G myth spread fastest

The researchers observed that searches relating the new coronavirus to 5G started at different times but peaked in the same week, that of April 5, in six of the countries. The U.K. and South Africa peaked during the previous week. Search volumes for 5G also doubled faster than those for the other terms.

Searches for hydroxychloroquine showed a distinctive pattern, with three separate peaks, likely reflecting the discussions over several months about the drug’s possible benefits.

Searches for ginger and coronavirus appeared in several countries, including the U.S., the U.K., Canada, Australia, and India, during the week of January 19, 2020. The remaining countries did not search for these terms until February or March, and Nigeria recorded no searches for ginger and coronavirus for two consecutive weeks. However, the authors note that this may reflect Google’s scaling algorithm rather than a genuine absence of searches in those weeks.

The sun’s effect on the new coronavirus was the subject of searches from the week of January 19, 2020, in several countries, although Kenya did not show any such searches until a month later. Compared with other countries, searches for coronavirus and the sun doubled more slowly in Canada.

Search trends for wine in connection with the new coronavirus were inconsistent across countries. The scientists excluded Nigeria and Kenya from this analysis because of low search volumes. The U.S. had the earliest searches, during the week of January 12, 2020, with a peak in April. The researchers noted no obvious grouping of peak weeks across countries, with peaks spread from March 15 to April 12.

“This study illustrates that neighboring countries can have different misinformation experiences related to similar topics, which can impact control of COVID-19 in these countries,” the authors concluded.
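The “doubling” comparison can be illustrated with a rough calculation. The sketch below estimates how many weeks a weekly interest series takes to reach twice its first nonzero value; this is an illustrative measure, not necessarily the authors’ exact method.

```python
# Rough illustration of a "time to double" measure for a weekly interest
# series; not necessarily the calculation used in the study.
from typing import Optional

import pandas as pd


def weeks_to_double(series: pd.Series) -> Optional[float]:
    """Weeks between the first nonzero value and the first later week at which
    interest reaches at least twice that value, or None if it never doubles."""
    nonzero = series[series > 0]
    if nonzero.empty:
        return None
    start_week, start_value = nonzero.index[0], nonzero.iloc[0]
    later = series[series.index > start_week]
    doubled = later[later >= 2 * start_value]
    if doubled.empty:
        return None
    return (doubled.index[0] - start_week).days / 7


# Example, reusing the weekly_interest dict from the previous sketch:
# for geo, series in weekly_interest.items():
#     print(geo, weeks_to_double(series))
```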
Limitations of the study

While the study tracked how often people searched for topics related to new coronavirus misinformation, the researchers could not deduce whether people actually believed the misinformation. The authors suggest further studies would be needed to determine what motivated a person to look up a particular search term.

Other limitations include variable internet access across countries: the authors note that less than 10% of Nigeria’s population has access, compared with more than 90% of the U.K.’s. Another limitation was the choice of search terms, which may have excluded relevant content or included noise.

Lastly, the researchers point out that it would be helpful to know the characteristics of people who tend to share articles containing inaccurate or false information, as this could help in developing future intervention strategies.

“Although monitoring misinformation-seeking behavior via Google Trends is one pathway for identifying belief prevalence and trends, we should monitor information flow across multiple platforms including social media sites, such as Facebook, Twitter, and Instagram, and messaging apps such as WhatsApp.”
A fake story about the secretary of defense stole my real byline
WASHINGTON ― It started with an unusual email Friday afternoon: “Is this article true?”
The attached jpeg resembled a Defense News story complete with my byline, but instead of my work, it was a cocktail of lies and paranoia. There was the layout of one of my stories about the new defense secretary, Lloyd Austin, but some of the words matched a Washington Post article ― and it included both a false headline about Austin “defunding and dismantling” the U.S. Army and an equally false quote about America looking to China for its national defense.
Over the next few days, I’d receive more than five dozen emails from strangers, most of them fearful this viral fake news was true. Meanwhile, I watched the meme stubbornly spread on Twitter, Instagram, Facebook and, maybe most of all, Telegram, a chat app popular with QAnon adherents that was first developed in Russia.
“Did you in fact write this article? If so is there some back up for it? I am a retired Marine and frankly this scares the shit out of me,” one man wrote.
For the first two days, I alerted the trickle of Facebook users who’d reposted it. But a search of Facebook on Sunday revealed a soul-crushing cascade: The image had been reposted dozens of times.
Weeks after a disinformation-fueled siege at the U.S. Capitol, where lawmakers were confirming the national election results, this was an uncomfortably close look at a threat that’s long worried national security experts and scholars. The misinformation crisis will be a challenge for President Joe Biden, who called out the “attack on democracy and truth” in his inaugural address, saying: “We must reject the culture in which facts themselves are manipulated, and even manufactured.”
Expect an uphill fight. A 2018 MIT study of Twitter found that false news travels faster than true stories, not because of bots programmed to disseminate inaccurate stories, but because people retweet them. False news stories are 70 percent more likely to be retweeted than true stories, and it takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number.
Digital investigations
According to Benjamin Decker, the founder and chief executive of the digital investigations consultancy Memetica, the Austin image and some associated text seemed to have first appeared on 4chan on Jan. 20, two days before a reader emailed me, and before it jumped to multiple platforms and message boards. Decker has likened the authors of these sorts of memes to brushfire arsonists who want sparks to jump firebreaks into more mainstream platforms, where others will fan the flames.
Where did this one come from? That’s unclear.
Authorities have warned that Russia, China and others are using cyber-enabled information operations to exacerbate existing tensions within the United States and between it and its allies. Some of the clumsy language in the fake Austin quote — “We are looking at China to rely on regarding our national defense” — suggested to Decker the meme may have been developed overseas. He noted that it had received some “suspicious amplification” from the “Republicans Worldwide” Facebook page, which has carried Russian-made memes in the past.
One of the Telegram accounts that posted the piece is attributed to retired U.S. Air Force Lt. Gen. Thomas McInerney, an 83-year-old former Fox News analyst and military adviser for the Trump campaign known for making debunked claims about voter fraud. The account has an incredible reach with more than 80,000 subscribers.
But when I called the real McInerney, he told me it was not him.
“I’m not on any social media, so someone is trying to discredit me,” McInerney said. “I’ve sent at least four emails to Telegram to ask that my name be removed. Others have called me, and it’s obvious Telegram is complicit.”
What about the premise that Austin might end the U.S. Army and outsource the country’s defense to China? “Ridiculous,” McInerney said, adding that Austin was the best appointment Biden has made. “Austin is a general officer I have great respect for, and I’m actually surprised he would want to work in this administration,” McInerney said.
President Donald Trump’s departure has left the pro-Trump and QAnon communities disillusioned, fractured and flocking to alternative platforms such as Gab and Telegram after mainstream sites banned them. As these communities look for new narratives, accounts purporting to have military service can lend credibility to fake conspiracy theories.
“In the post-election era of QAnon, they need some sort of validation or verification, and what there’s been recently, especially in the run up to Jan. 20, are these new QAnon channels on Telegram impersonating senior military officials,” Decker said.
Impersonating a media brand can do the same. “You’re kind of hijacking the brand in order to push a certain narrative, and that’s something that we’ve seen time and time again,” Decker said.
Fighting brush fires
What I learned while alerting those Facebook users was depressing but also reassuring. After finding the images by searching for keywords, I was sometimes parachuting into threads where debates were already happening about the meme’s veracity. Many concluded it was fake after searching for the nonexistent story itself, and one QAnon thread even hosted a version of the meme emblazoned with big red letters: “FAKE!!”
“I haven’t seen this anywhere,” one Facebook user replied to another. “Trust me I have some concerns about the ‘new admin’ but I do want to make sure I get info from various sources.”
One woman I alerted was embarrassed, but I offered that the item was crafted to be deceptive, and that she and I were its victims. I work hard to provide accurate information, with context, to educate the public, and the meme was upsetting to me because my name had been used to do the opposite. Not only was it damaging to my credibility, but political lies ― as shown by the deadly insurrection at the Capitol ― damage our democracy.
It was dispiriting that my flagging of several dozen individual posts as “false news” through Facebook’s own notification system had no immediate effect. Only after Defense News and Memetica contacted Facebook did the memes disappear en masse.
For its part, Facebook said its team determined that the posts violated its policies toward digitally altered images leading to misinformation, that it deleted them and that similar posts should be automatically screened in the future.
Several users I contacted acknowledged the meme was false but stubbornly left it up because, to them, it showed larger truths about the new Biden administration.
“Not as farfetched as you think,” one man replied with regard to Austin. “What do you know of his past? Why was he chosen? Why did Biden already move in Syria? Why did he just make us dependent on foreign oil? Why is he against jets just sold to UAE? Why did he support past funding for Iran? Why did his past administration take out Seal Team Six and not lend aid to Bengahzi? Not farfetched at all. Corruption runs very deep with this guy.”
But anyone with an internet connection can watch Austin’s confirmation hearing and get the facts.
The headline falsely stated: ‘Lloyd Austin, Biden’s nominee to lead the Pentagon, is considering “defunding, maybe disbanding” U.S. Army.’ Another lie was that Austin, asked how the U.S. would counter foreign threats, replied, “We are looking into having China to rely on regarding national defense.” If Austin had said either, it’s unlikely he’d have been confirmed by the Senate in a 93-2 vote.
Yes, some lawmakers have argued Austin lacks sufficient China experience, but Austin used his confirmation hearing to call China an adversary and the Defense Department’s “pacing” challenge. Asked how he would maintain a competitive edge against China, he gave a predictable answer: “We will present a credible deterrent to China and any other adversary that looks to take us on.”
Far from dismantling the Army, Austin ― who is a retired four-star Army general and former Army vice chief of staff ― had to reassure a skeptical Sen. Tom Cotton, R-Ark., during the hearing that he would not show favoritism toward the Army over the other services.
Claims that Austin, the country’s first Black defense secretary, was considering defunding the Army echoed the “defund the police” slogan that became common during the George Floyd protests in the summer of 2020. However, Biden’s campaign platform promised $300 million for police departments to hire more officers, and he has rejected the “defund the police” label.
AI-powered content: does it work for SEO?
Artificial Intelligence (AI) has come on in leaps and bounds since Alan Turing first asked, “Are there imaginable digital computers which would do well in the imitation game?” in his 1950 paper ‘Computing Machinery and Intelligence’.
In the 70-odd years since, computers have gone on to defeat a reigning world chess champion (IBM’s Deep Blue vs. Garry Kasparov in 1997), successfully complete the 2005 DARPA Grand Challenge (a race for autonomous vehicles over more than 100 km of the Mojave Desert) and beat human champions at Jeopardy! in 2011.
AI use in digital marketing is rising steeply
While these milestones highlight AI’s rapid progress, the technology is now starting to make a real-world impact.
When it comes to digital marketing, the enormous potential and central role AI can play is clear. In fact, according to Salesforce’s sixth edition of the State of Marketing report, the vast majority (84%) of marketers now report using AI, up from just under a third (29%) in 2018, a huge 186% adoption increase in just two years.
These stats won’t come as a surprise to many, considering that machine learning algorithms enable marketers to filter useful insights from enormous amounts of data and deliver enhanced customer experiences, making AI an invaluable resource in a data-driven world.
AI and SEO content generation
Covering AI’s role across the whole field of digital marketing within one session would be an impossible feat for the best of us; instead, we will explore AI’s place in the SEO content creation process.
AI has long been a hyped-up buzzword in the SEO industry; but as is the case with many avant-garde technologies, opinion is split, and cases can be made by SEOs for and against the use of AI.
While AI is undoubtedly an extremely useful resource in the research, data analysis and planning phases of content creation, when it comes to the actual writing itself, the waters begin to get murky.
On one hand, using AI to generate content consumes significantly less time and fewer resources. However fast, a human could never write a 1,000-word article in the space of a minute the way an AI tool can. This means budget that currently has to be allocated to producing content could be reallocated elsewhere if AI were used. If quantity were the be-all and end-all, the benefits of using AI would be clear.
On the other hand, at a time when ‘fake news’ and misinformation are rife, can widely available AI currently be used to generate high-quality, accurate and engaging content? And how will Google and other search engines treat AI-generated content? Google’s mission is to “organise the world’s information and make it universally accessible and useful”, so what does that mean for the endless wave of content that AI tools can churn out day in, day out?
To get a better understanding, we used widely available AI content generation tools to create a number of articles on different topics, such as sport, fashion and pharmaceuticals. Using different topics allowed us to analyse how well the tools performed when varying levels of difficulty and required accuracy come into play; fashion faux pas aside, many wouldn’t be too fussed if an article suggested wearing the wrong type of shoe with a certain dress, but inaccurate information could be extremely costly when it comes to the type and quantity of medicine to take for a headache.
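We don’t name the specific tools here, but the shape of the exercise is easy to sketch. The snippet below uses the open-source Hugging Face transformers library with an off-the-shelf GPT-2 model as a stand-in for a commercial content generator; the prompt and parameters are purely illustrative.

```python
# Illustrative stand-in for an AI content generation tool: produce draft copy
# from a short topic prompt using an off-the-shelf GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Choosing the right running shoe for a marathon"  # hypothetical topic
drafts = generator(prompt, max_length=200, num_return_sequences=1)

print(drafts[0]["generated_text"])
```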
The AI-generated content was then analysed and evaluated on a variety of factors, from readability to accuracy, and compared with like-for-like content created by professional writers to see how the two matched up.
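As one example of the kind of readability check that could feed into such a comparison, the sketch below scores two short snippets with the third-party textstat library; the snippets and metrics are illustrative, not the ones used in our evaluation.

```python
# Example readability comparison using textstat; snippets are placeholders.
import textstat

ai_text = "The shoe is light. It fits well. Many runners like it for long runs."
human_text = (
    "A well-fitting, lightweight shoe can make the difference between a "
    "comfortable marathon and a painful one, which is why seasoned runners "
    "test several pairs before race day."
)

for label, text in [("AI draft", ai_text), ("Human copy", human_text)]:
    print(
        f"{label}: Flesch reading ease = {textstat.flesch_reading_ease(text):.1f}, "
        f"Flesch-Kincaid grade = {textstat.flesch_kincaid_grade(text):.1f}"
    )
```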
If you want to find out more about the results, join our forthcoming webinar on Thursday 11th February, where we will cover how well today’s AI-generated content fares in the ‘imitation game’.
Alex Bova is SEO operations manager at Threepipe Reply