How AI Accelerates the Fight Against Fake News


Microchips in coronavirus vaccines. Pedophile rings in pizza restaurants. Jewish space lasers. The Internet–bless its heart–has always suffered from its share of wacky wingnuts and conspiracy theories. But in the wake of the November 3 presidential election and the January 6 Capitol riot, social media platforms and governments are stepping up their efforts to crack down on the most problematic content, and AI plays a leading role.

The concern is not academic. The January 6 Capitol riot is a painful example of how misinformation spread online can drive citizens to real-world violence. Weeks after the riot, Facebook and Twitter took action by banning large numbers of people and groups they identified as sources or perpetuators of misinformation.

That wasn’t fast enough for Kristen Clarke, president and executive director of the Lawyers’ Committee for Civil Rights Under Law.

“It shouldn’t take several weeks for a company to formally decide to remove conspiracy theories, lies, and false election rumors, especially when these disinformation campaigns have resulted in real-life threats of violence, caused public confusion and undermined democracy,” she said.

The question, then, becomes about speed: How quickly can misinformation and fake news be detected? Social media platforms traditionally have relied on human moderators for much of this work. But with billions of posts per day, the volume of content is simply too great for human eyes alone. Automation is the only viable approach, which means a greater reliance on AI.

One company on the cutting edge of using AI to combat fake news is Logically. The UK firm has developed a sophisticated solution–available as a mobile app or a Chrome plug-in–that can classify a given piece of content as fake news or misinformation before the user has a chance to read it.

The magic of Logically’s solution happens behind the scenes. According to Anil Bandhakavi, who heads up the company’s data science initiative, Logically’s blended approach gives it an advantage over competitive offerings.

“We have a three-pronged approach that broadly looks at the origins of a given piece of content, the content itself, and also the associated metadata that provides us additional information and clues about a given piece of content,” he says.

From a technical standpoint, the company makes extensive use of machine learning, NLP, network theory, knowledge graphs, and traditional rules-based decisioning to automatically identify and classify large amounts of suspicious content. It also relies extensively on human experts, not only to provide a check on the algorithms for ambiguous edge cases that could go either way, but also to train the algorithms and make them better. Natural language generation is used to explain why the system determined a given piece of content is fake or harmful.

AI technologies and techniques are used at several stages of Logically’s solution. It uses deep learning to bolster its ability to understand the meaning behind text and other content. Specifically, it uses a custom-trained, multi-billion-parameter language model based on BERT (Bidirectional Encoder Representations from Transformers), a transformer-based NLP model released by Google in 2018, Bandhakavi says.
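Logically has not published its model or training setup, but a minimal sketch of the general technique it describes, fine-tuning a BERT-style classifier to label text as credible or misleading using the Hugging Face transformers library, might look like the following. The file names, label scheme, and hyperparameters are illustrative assumptions, not the company’s configuration.

```python
# Hypothetical sketch of fine-tuning a BERT-style fake-news classifier.
# Model choice, data files, and labels are assumptions, not Logically's setup.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = credible, 1 = misleading

# Assumed CSV files with "text" and "label" columns.
data = load_dataset("csv", data_files={"train": "claims_train.csv",
                                       "eval": "claims_eval.csv"})

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch the examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

encoded = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fake-news-bert",
                           num_train_epochs=2,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["eval"],
)
trainer.train()
```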

Natural language understanding is a critical and technically challenging aspect of what Logically does, but it forms just part of the overall solution. According to Bandhakavi, it’s also important to track with whom a particular piece of content or news article originated and the networks across which it spreads (which is where its knowledge graphs come in handy).

“We built technology to understand the context in which misinformation is embedded, and how it spreads in a network, the Internet, or the social media,” Bandhakavi says. “[We also built technology] to understand the interaction patterns of communities and users with fake news and misinformation.”
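As a hedged illustration of what this kind of network-level tracking can look like (not Logically’s actual pipeline), the spread of a single claim can be modelled as a directed graph of re-shares, from which simple signals such as downstream reach and the most active amplifiers fall out. The account names and share events below are invented.

```python
# Illustrative propagation graph for one piece of content.
# Edge (u, v) means account v re-shared the claim from account u.
import networkx as nx

shares = [
    ("acct_a", "acct_b"), ("acct_a", "acct_c"), ("acct_b", "acct_d"),
    ("acct_b", "acct_e"), ("acct_b", "acct_f"), ("acct_c", "acct_g"),
]

g = nx.DiGraph()
g.add_edges_from(shares)

origin = "acct_a"                        # first account seen posting the claim
reach = len(nx.descendants(g, origin))   # accounts the claim has reached downstream
amplifiers = sorted(g.out_degree, key=lambda kv: kv[1], reverse=True)[:3]

print(f"origin: {origin}, downstream reach: {reach}")
print("top amplifiers:", amplifiers)
```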

Having a multi-pronged approach to detecting fake news and misinformation is critical to success, says Joel Mercer, who heads up product design for Logically.

“The difficult thing is basically taking one point–whether that’s taking credibility or the content level or sentiment–on its own to indicate that’s a piece of problematic content by itself,” Mercer says. “Generally, what we’re doing is taking a multitude of different signals whether that’s simple stuff like automation scoring and bot scoring, in combination with technical analysis of the content, and then the accounts themselves to really build a broader picture.”
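A rough sketch of that kind of signal fusion, with invented signal names and weights rather than Logically’s actual scoring model, could be as simple as a weighted combination that no single signal can trigger on its own.

```python
# Hedged sketch: combining heterogeneous signals into one risk score.
# Signal names, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Signals:
    content_score: float        # 0-1, from the text classifier
    source_credibility: float   # 0-1, 1 = highly credible source
    bot_score: float            # 0-1, likelihood spreading accounts are automated
    burstiness: float           # 0-1, how abnormally fast the item is spreading

def risk_score(s: Signals) -> float:
    """Weighted combination; no single signal decides on its own."""
    return (0.45 * s.content_score
            + 0.25 * (1.0 - s.source_credibility)
            + 0.20 * s.bot_score
            + 0.10 * s.burstiness)

item = Signals(content_score=0.82, source_credibility=0.15,
               bot_score=0.60, burstiness=0.70)
print(f"risk = {risk_score(item):.2f}")  # flag for review above a chosen threshold
```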

One of the challenges is how linguistic patterns constantly change. The Internet is a wellspring for new ideas and new words, and that requires Logically to retrain its language model very frequently. But even that alone is not enough to ensure the highest possible accuracy.

“Just looking at content and its origin might not give you the full picture about the nature of the content,” Bandhakavi tells Datanami. “We look beyond these two aspects as we model a lot of metadata and [conduct] the network analysis. And our in-house experts help us to constantly improve our model.”

The in-house experts are a critical element of Logically’s approach, as they provide a check against the AI models for late-breaking news and fast-changing information types. The humans also provide a valuable pool of knowledge upon which to train the NLU models.

“AI alone can get very stale when it comes to understanding the evolution of content, so it’s always important and useful to have expert intelligence input,” Bandhakavi says. “We’ve realized the value of coupling AI with expert intelligence. And we’re able to adopt our own unique way in which human-in-the-loop frameworks can be leveraged.”
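One common way to wire experts into such a system (a generic human-in-the-loop pattern, not a description of Logically’s internal workflow) is to route only low-confidence model outputs to reviewers and recycle their decisions as fresh training data.

```python
# Generic human-in-the-loop triage sketch; thresholds and items are invented.
REVIEW_BAND = (0.35, 0.65)  # probabilities in this band go to a human expert

def triage(items, model_prob):
    """Split items into auto-labelled results and an expert review queue."""
    auto_labelled, review_queue = [], []
    for item in items:
        p_fake = model_prob(item)
        if REVIEW_BAND[0] <= p_fake <= REVIEW_BAND[1]:
            review_queue.append(item)   # expert decides; label is reused for retraining
        else:
            auto_labelled.append((item, p_fake >= 0.5))
    return auto_labelled, review_queue

# Toy stand-in for a trained classifier's probability output.
auto, queue = triage(["claim A", "claim B"], model_prob=lambda _: 0.5)
print(f"{len(auto)} auto-labelled, {len(queue)} sent to experts")
```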

According to internal benchmarks, Logically’s system demonstrates an accuracy rate in the mid-90s. In other words, it will misidentify a piece of real news as fake (or a piece of fake news as real) about five times out of 100. That is not perfect, by any means, but it’s currently the state of the art, Bandhakavi says. “Our technology is constantly advancing and improving, especially with our investment in human-in-the-loop AI,” he says.

Getting this technology to the front lines in the battle against misinformation is critical. During the 2020 presidential election, Logically worked with the electoral commission of a major battleground state to fight fake news. It also worked with the government of India during its 2019 general election. The company is working with major social media platforms, although it doesn’t say which ones.

Facebook and Twitter users are accustomed to seeing content that is flagged as false or misleading. But just having a given piece of content flagged isn’t enough to stop the tsunami of bad data, according to a recent report, titled “Tailoring heuristics and timing AI interventions for supporting news veracity assessments,” published in Computers in Human Behavior Reports.

The researchers found that, when news consumers hold strong prior beliefs about a particular topic, they are less willing to accept the advice offered by the AI system. But when they don’t hold strong beliefs—such as with a novel news story—then they’re more receptive to AI.

“It’s not enough to build a good tool that will accurately determine if a news story is fake,” Dorit Nevo, an associate professor at Rensselaer Polytechnic Institute (RPI) and a co-author of the report, says in a press release. “People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics. If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they’re more likely to accept the advice.”

The battle over fake news and misinformation is as dynamic as the human experience, which means there will never be a single solution or approach that works 100% of the time. Humans are fallible, which means an infallible AI system for fake news identification is an impossibility. But with enough time, technology, and diligence, we can create AI systems that put a dent in the spread of damaging information, while minimizing collateral damage to the freedom of speech and thought on which the Internet was founded. Considering the real-world harm that misinformation is causing, that possibility must be explored.

Related Items:

AI Squares Off Against Fake News

Systemic Data Errors Still Plague Presidential Polling

Fake News, AI, and the Search for Truth

Scientists develop method to detect fake news


Social media is increasingly used to spread fake news. The same problem can be found on the capital market: criminals spread fake news about companies in order to manipulate share prices. Researchers at the Universities of Göttingen and Frankfurt and the Jožef Stefan Institute in Ljubljana have developed an approach that can recognise such fake news, even when the news content is repeatedly adapted. The results of the study were published in the Journal of the Association for Information Systems.

In order to detect false information - often fictitious data that presents a company in a positive light - the scientists used machine learning methods and created classification models that can be applied to identify suspicious messages based on their content and certain linguistic characteristics. “Here we look at other aspects of the text that makes up the message, such as the comprehensibility of the language and the mood that the text conveys,” says Professor Jan Muntermann from the University of Göttingen.

The approach is already known in principle from its use by spam filters, for example. However, the key problem with the current methods is that to avoid being recognised, fraudsters continuously adapt the content and avoid certain words that are used to identify the fake news. This is where the researchers’ new approach comes in: to identify fake news despite such strategies to evade detection, they combine models recently developed by the researchers in such a way that high detection rates and robustness come together. So even if “suspicious” words disappear from the text, the fake news is still recognised by its linguistic features. “This puts scammers into a dilemma. They can only avoid detection if they change the mood of the text so that it is negative, for instance,” explains Dr Michael Siering. “But then they would miss their target of inducing investors to buy certain stocks.”
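A toy sketch of that design principle, with invented features and example texts rather than the study’s actual models, pairs a word-based classifier with one that sees only stylistic features, so editing out suspicious words alone is not enough to slip past both.

```python
# Illustrative "robust ensemble" sketch: a word-based model plus a purely
# stylistic model. Features, word list, and examples are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

HYPE_WORDS = {"breakthrough", "guaranteed", "soaring", "record", "explosive"}

def style_features(texts):
    """Crude stylistic features: length, hype ratio, exclamation density, word length."""
    feats = []
    for t in texts:
        words = t.split()
        feats.append([
            len(words),
            sum(w.lower().strip("!,.") in HYPE_WORDS for w in words) / max(len(words), 1),
            t.count("!") / max(len(t), 1),
            np.mean([len(w) for w in words]) if words else 0.0,
        ])
    return np.array(feats)

texts = ["Guaranteed breakthrough! Shares soaring to record highs!",
         "Quarterly revenue rose 3 percent, in line with guidance."]
labels = [1, 0]  # 1 = manipulative, 0 = legitimate

vec = TfidfVectorizer()
word_model = LogisticRegression().fit(vec.fit_transform(texts), labels)
style_model = LogisticRegression().fit(style_features(texts), labels)

def score(text):
    p_words = word_model.predict_proba(vec.transform([text]))[0, 1]
    p_style = style_model.predict_proba(style_features([text]))[0, 1]
    return max(p_words, p_style)  # evading one model is not enough

print(round(score("Explosive growth guaranteed, buy now!"), 2))
```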

The new approach can be used, for example, in market surveillance to temporarily suspend the trading of affected stocks. In addition, it offers investors valuable information to avoid falling for such fraud schemes. It is also possible that it could be used for criminal prosecutions in the future.

Original publication: Michael Siering, Jan Muntermann, Miha Grčar. Design Principles for Robust Fraud Detection: The Case of Stock Market Manipulations. Journal of the Association for Information Systems (2021). DOI: 10.17705/1jais.00657, https://aisel.aisnet.org/jais/vol22/iss1/4

Contact:

Professor Jan Muntermann

University of Göttingen

Faculty of Business and Economics

Professor of Electronic Finance and Digital Markets

Platz der Göttinger Sieben 5, 37073 Göttingen, Germany

Tel: +49 (0)551 39 27062

muntermann@wiwi.uni-goettingen.de

https://uni-goettingen.de/en/187348.html

Rajasthan man arrested for spreading fake news


New Delhi [India], February 2 (ANI): Delhi Police Cyber Cell arrested a person on Monday from Rajasthan on charges of spreading fake news about resignations of police personnel due to the ongoing farmers’ agitation.

A social media campaign had been going on for the past two days propagating false rumours regarding disaffection in police ranks. Old, unrelated videos were being posted with fake news of the resignation of police personnel.

Cyber Prevention Awareness and Detection (CyPAD) unit of the Delhi Police registered a criminal case against Om Prakash Dhetarwal and later on arrested him. Om Prakash belongs to Churu District of Rajasthan.

The accused had created a Facebook account named ‘Kisan Andolan Rajasthan’ and shared an old video from September 2020 of Home Guards of another state, portraying it as a reaction of Delhi Police personnel to the recent farmers’ agitation.

Delhi Police has advised people to avoid sharing or posting any such content without verifying its authenticity.

Further arrests in the case are likely to be made soon, it said. (ANI)
