OpenAI, one of the world’s leading artificial intelligence research companies, released a report yesterday warning of the dangers of AI-enabled influence operations, or disinformation campaigns.
The study, conducted in partnership with Georgetown University’s Center for Security and Emerging Technology (CSET) and the Stanford Internet Observatory, suggests that as language models improve, they will become formidable tools for spreading propaganda and disinformation.
With the ongoing conflict between Russia and Ukraine and its potential global ramifications, it is of paramount importance to develop measures to prevent the misuse of advanced AI language models. The prospect of a state like Russia or North Korea investing heavily in this technology to further its own agenda is a chilling one.
The research conducted by OpenAI et al. is crucial in bringing attention to this potential threat. However, the public also plays a vital role in this endeavor. It is important for individuals to educate themselves on the issue and take steps to promote media literacy, such as fact-checking sources and being vigilant about the content they share on social media.
The Threat of AI-Enabled Influence Operations
The researchers believe that it is critical to analyze the threat of AI-powered influence operations before they become a reality, and they hope their work will inform policymakers and spur in-depth research into potential preventive measures.
OpenAI released ChatGPT, one of the most powerful chatbots to date, to the public in November 2022, but many other LLMs had been released in the months prior. Moreover, open-source alternatives are already being developed.
The widespread availability of AI language models has the potential to impact all three facets of influence operations:
- actors,
- behaviors, and
- content.
The researchers believe that language models could drive down the cost of running disinformation campaigns, making them accessible to a wider range of actors. Additionally, they may enable new tactics to emerge, such as real-time content generation in chatbots.
Imagine an AI generating live fake news stories that appear to be written by a credible news source but are specifically tailored to the interests of a specific group. The content could range from COVID disinformation to fake stories about political candidates.
One of the most concerning aspects of AI-powered propaganda is its potential to produce messaging that is more impactful or persuasive than traditional methods. It could also make influence operations harder to detect, since generated text is not as easily recognizable as copied-and-pasted content. Detectors for AI-generated content are being developed, but none of them are fully reliable.
Civilians would find it difficult to distinguish truth from lies.
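To see why detection is so hard, consider one common heuristic: perplexity scoring, which flags text that a language model finds suspiciously predictable. Below is a minimal sketch, assuming the Hugging Face transformers and torch packages are installed; the GPT-2 model and the threshold value are illustrative choices, not part of any production detector.

```python
# Perplexity-based heuristic for spotting machine-generated text.
# Low perplexity means the text is highly predictable to the model,
# which only *weakly* correlates with LLM output.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

THRESHOLD = 40.0  # arbitrary cutoff, for illustration only

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD
```

Simple paraphrasing, or prompting the model to write in an unusual style, can push perplexity back up, which is one reason heuristics like this remain unreliable.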
The study also highlights critical unknowns surrounding LLMs: it is unclear what new capabilities for propaganda will emerge, even from well-intentioned research or commercial investment in this technology.
Additionally, it is unclear which actors will make significant investments in language models and when easy-to-use tools for creating and disseminating such content will become widely available.
All we know is that it is coming.
Mitigating the Threat
OpenAI et al. created a framework for analyzing potential mitigations against LLM-powered influence operations. AI developers could build models that are more fact-sensitive and impose stricter restrictions on usage.
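What might stricter usage restrictions look like in practice? One pattern is to gate every generation request behind an automated policy check. Here is a minimal sketch using OpenAI’s Python SDK and its moderation endpoint; the model name and refusal message are placeholders, and a real deployment would add rate limiting, audit logging, and abuse monitoring on top.

```python
# Gate text generation behind a moderation check: refuse to generate
# anything for prompts that the moderation endpoint flags.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guarded_completion(prompt: str) -> str:
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request refused: this prompt violates the usage policy."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```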
They also recommend that platforms and AI providers coordinate to identify automated content and that institutions engage in media literacy campaigns.
Additionally, the researchers suggest that governments could impose restrictions on data collection and access controls on AI hardware. They also recommend the wide adoption of digital provenance standards to help trace the origins of AI-generated content. Blockchains and decentralized identifiers (DIDs) could help with this.
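To make the provenance idea concrete, here is a toy sketch in which a publisher signs each piece of content with a private key, and readers verify it against the publisher’s public key, using the pyca/cryptography package. Real provenance standards, such as C2PA, carry far richer metadata; this example only illustrates the core mechanism.

```python
# Toy content-provenance scheme: sign published content so readers
# can verify its origin with the publisher's public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair once, then sign each article.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"OpenAI warns of AI-enabled influence operations..."
signature = private_key.sign(article)

# Reader side: verify the signature against the published public key.
try:
    public_key.verify(signature, article)
    print("Provenance verified: content is unaltered.")
except InvalidSignature:
    print("Verification failed: content was altered or is not from this publisher.")
```

Publishing the public key via a blockchain-anchored DID is one way to make it discoverable and tamper-evident, which is where the decentralized-identity angle comes in.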
The report from OpenAI serves as a stark warning of the potential dangers of AI-enabled influence operations. As LLMs continue to improve, the threat could become unmanageable if left unchecked. OpenAI and companies like it bear most of the responsibility, but they shouldn’t be relied on alone.
We as civilians must also advocate for transparency and accountability from tech companies, governments, and other organizations that have a role in preventing the spread of disinformation. By working together, we can better prepare ourselves for the threat of AI propaganda and protect the integrity of our information environment.