By Wendy Barrot
On: 13 March 2023
Artificial intelligence (AI) has become a powerful tool for spreading false information, as it allows the creation and dissemination of false content at a scale and speed that was previously impossible. This has led to growing concern about the impact of AI-enabled disinformation on society, including on the political process, public health, and trust in institutions.
One solution is to invest in fact-checking and verification technology, which can help identify and flag false information, making it easier for individuals and organizations to distinguish between credible and non-credible sources. Another solution is to increase transparency in online advertising and social media.
Taking a multifaceted approach that combines technology, education, transparency, and collaboration can help mitigate the negative impact of AI-enabled disinformation and create a more informed and engaged citizenry.
What are the risks of disinformation?
The risks of disinformation are numerous and can significantly impact society. Some of the key risks include:
- Political interference:
Disinformation can influence the outcome of elections and undermine the democratic process. False information can be spread to influence public opinion, create confusion and sow division.
- Public health risks:
Disinformation can also seriously harm public health by spreading false information about COVID-19, vaccinations, and other health issues. This can lead to people making dangerous or life-threatening decisions.
- Damage to trust in institutions:
Disinformation can erode trust in institutions such as governments, media, and experts. When people cannot trust the information they receive, they may be less likely to follow guidance from public health officials, for example.
- Economic risks:
Moreover, disinformation can significantly harm businesses and economies by spreading erroneous reviews about products and services, causing confusion, and cultivating mistrust in financial markets.
- Loss of privacy:
Disinformation campaigns often involve collecting and using personal data, and spreading faulty details may lead to people exposing sensitive information about themselves.
- Societal risks:
Disinformation can create division and suspicion among different social groups, leading to a breakdown of social cohesion and a decline in civil discourse.
Is artificial intelligence (AI) the antidote to disinformation?
Machine learning algorithms can be used to identify patterns in disinformation campaigns, making it easier to track their origin and spread. However, the same technology that detects disinformation can also be used to create it, most notably deep fakes: highly convincing digital manipulations of images, videos, and audio. To mitigate this problem, AI-based systems have been developed to detect deep fakes.
Another important aspect of disinformation is that it often adapts to countermeasures. For example, if a social media platform starts to crack down on fake news, the purveyors of disinformation may start to use more sophisticated techniques to evade detection. This makes it harder to detect disinformation and requires constant adaptation of the detection methods.
AI has the potential to be a powerful tool in the fight against deceit. However, AI systems must be able to recognize disinformation, and to adapt and evolve over time.
Some use cases of AI as an antidote for disinformation are:
- AI can be used to analyze text, images, and videos to identify disinformation, which can be used to develop automated systems to detect and flag disinformation on social media platforms.
- AI can also be used to understand the motivations of those who spread disinformation; this can help to develop targeted campaigns to counter disinformation.
- AI can be used to understand how disinformation campaigns target specific communities or demographics; this can be used to develop targeted campaigns to protect these groups from disinformation.
- AI can be used to analyze the effectiveness of countermeasures against disinformation; this can be used to optimize and improve these countermeasures over time.
- AI can be used to analyze the overall sentiment of the public towards certain topics; this can be used to identify disinformation campaigns that may be attempting to manipulate public opinion.
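As a toy illustration of the first use case above, the sketch below trains a tiny Naive Bayes text classifier in plain Python to separate disinformation-style headlines from credible ones. The training examples and labels are entirely invented for illustration; production detection systems train on large labeled corpora and use far richer models.

```python
import math
from collections import Counter, defaultdict

# Toy training data (illustrative assumption, not a real dataset).
TRAIN = [
    ("miracle cure doctors hate this secret", "disinfo"),
    ("shocking truth they refuse to tell you", "disinfo"),
    ("vaccine microchip conspiracy exposed", "disinfo"),
    ("health ministry publishes vaccination schedule", "credible"),
    ("study published in peer reviewed journal", "credible"),
    ("official election results certified by commission", "credible"),
]

def train(examples):
    """Count word frequencies per label for a Naive Bayes model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability (add-one smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # Prior probability of the label, then per-word likelihoods.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("shocking secret cure exposed", word_counts, label_counts))
```

The same scoring machinery, fed with sentiment-labeled examples instead, would sketch the sentiment-analysis use case in the last bullet.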
Solutions to Deal with AI-Enabled Disinformation
De-emphasize and Correct False Content
One solution, among many, to deal with AI-enabled disinformation is to de-emphasize and correct false content. This approach focuses on reducing the visibility and reach of false information rather than trying to remove it entirely.
- One way to do this is by using algorithms and machine learning techniques to identify and demote false content in search results and social media feeds. This can make it less likely for individuals to encounter incorrect information in the first place.
- Warning labels or fact-check tags can be used on social media platforms to inform users about inaccurate content.
- Providing accurate and reliable information as an alternative to false content can help to correct false information. This can include creating and sharing fact-checks, infographics, and other educational resources that can help to counter the spread of disinformation.
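The demotion idea in the first bullet above can be sketched in a few lines: flagged items keep only a fraction of their ranking score, so they sink in the feed without being removed. The feed items, scores, and demotion factor below are hypothetical; real platform ranking systems are far more complex and proprietary.

```python
# Hypothetical demotion factor: flagged items keep 20% of their score.
DEMOTION_FACTOR = 0.2

def rank_feed(items):
    """Sort feed items by engagement score, demoting those flagged as false."""
    def effective_score(item):
        score = item["engagement"]
        if item.get("flagged_false"):
            score *= DEMOTION_FACTOR
        return score
    return sorted(items, key=effective_score, reverse=True)

feed = [
    {"id": "a", "engagement": 90, "flagged_false": True},
    {"id": "b", "engagement": 40, "flagged_false": False},
    {"id": "c", "engagement": 25, "flagged_false": False},
]

# Item "a" has the highest raw engagement, but demotion pushes it below
# the two unflagged items.
print([item["id"] for item in rank_feed(feed)])
```

The content stays accessible to anyone who looks for it, which is the point of de-emphasis as opposed to outright removal.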
Promote Greater Accountability and Transparency
This approach focuses on holding platforms and advertisers accountable for the content on their sites and ensuring that users have access to information about who is behind the content they encounter online.
- Implementing stricter regulations for online advertising and social media platforms can make it more difficult for disinformation to spread and make it easier for users to identify credible sources of information.
- Platforms can be required to disclose the funding source for political ads, providing users with information to evaluate the motivations behind the content they see online.
- Making the algorithms that decide which information users see more transparent can help people understand why certain content appears in their feeds and not other content. This can help them make more informed decisions about the information they come across online.
- Holding platforms accountable for the content on their sites can incentivize them to take a more proactive approach to detect and remove false information.
- Collaboration between government and industry can help create a regulatory framework that balances the need for free expression with the need to protect users from disinformation.
Regulate Social Media Content
This approach focuses on creating a legal framework to hold social media platforms accountable for the content that appears on their sites and ensure that they take necessary steps to remove or flag false information. Regulating social media content can help create a safer and more informed online environment and reduce the spread of disinformation.
- Governments can take action to combat disinformation by implementing regulations that hold social media platforms accountable for the spread of false information on their platforms. This could include laws requiring platforms to remove misleading content, with penalties for failing to do so. By holding these platforms liable for the spread of disinformation, governments can incentivize them to take more proactive measures to prevent the spread of false information.
- Platforms can be required to disclose their algorithms and how they moderate content, providing users with information to evaluate the credibility of the information.
- Governments can establish independent regulatory bodies that monitor and enforce compliance with laws and guidelines on social media content.
- Governments can also launch public education campaigns to increase media literacy and critical thinking skills among citizens, making them less susceptible to disinformation.
- Governments can also collaborate with international partners to tackle the cross-border nature of disinformation and to share best practices.
- Governments can also increase penalties for individuals or organizations that deliberately spread disinformation.
- Governments can also take steps to protect the privacy and security of citizens, which can make it more difficult for disinformation campaigns to target and manipulate individuals.
- Governments can also work with industry and civil society stakeholders to develop common standards and protocols for dealing with disinformation.
- Governments can also create a legal framework that enables them to take action against foreign actors who spread disinformation in their country.
Technological Remedies for Deep Fakes
Technological remedies are one way to address the problem of deep fakes, which are videos or images manipulated using AI to make it appear as if someone said or did something that they did not. These remedies use technology to detect and flag deep fakes, making it easier for users to identify them and assess their credibility.
- Developing and using advanced image and video analysis tools that detect subtle changes in lighting, shadow, and other visual cues can help flag deep fakes.
- Creating a digital fingerprint, a unique signature that can be used to identify an original video or image, and comparing it to the manipulated version can help to detect deep fakes.
- Incorporating audio analysis tools that detect changes in voice, intonation, and other acoustic cues can help detect deep fakes in audio files.
- Making use of blockchain technology to store original images and videos, which can be used to verify that the media has not been tampered with.
- Creating an open-source library of deep fakes that can be used to train detection algorithms and help researchers understand the latest techniques used to create deep fakes.
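The digital-fingerprint idea in the list above can be sketched with a cryptographic hash: record a fingerprint of the original media at publication time, then compare it against any file received later. The byte strings below are placeholders; real forensic systems typically use perceptual hashes that survive compression and re-encoding, but the comparison logic is the same.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholders standing in for real video bytes (illustrative only).
original = b"...original video bytes..."
manipulated = b"...manipulated video bytes..."

stored = fingerprint(original)          # recorded at publication time

print(fingerprint(original) == stored)      # unchanged media matches
print(fingerprint(manipulated) == stored)   # any alteration is detected
```

Because a cryptographic hash changes completely on any byte-level edit, this detects tampering but cannot tell a malicious deep fake apart from a harmless re-encode; that distinction is what perceptual hashing and the visual-analysis tools above are for.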
Improving Personalization and Reducing Filter Bubbles
Solutions for improving personalization, combating the negative effects of manipulation and influence, and reducing filter bubbles can include:
- Encouraging users to actively seek out diverse perspectives and information.
- Providing users with more control over the information and content they see.
- Encouraging social media platforms and other businesses to adopt transparency and explain how they use profiling and micro-targeting.
- Educating users on how to recognize fake and false information.
To mitigate discriminatory practices, solutions can include:
- Governments can implement laws and regulations prohibiting discriminatory practices based on race, gender, age, and other characteristics.
- Encouraging industry and civil society stakeholders to develop common standards and protocols for ensuring fair and non-discriminatory use of user profiling and micro-targeting.
As the social media landscape expands and AI technology becomes more advanced, disinformation campaigns will likely become more frequent. To effectively combat these attacks, there is a need for further advancements in AI, particularly in the ability to quickly determine the credibility of online sources with limited data.
Additionally, social media companies must prioritize the implementation of policies and resources to effectively utilize technology for detecting disinformation and to ensure that their platforms promote access to accurate information rather than spreading misinformation.