
Deep stupidity – or why stupid is more likely to destroy the world than smart AI

Steve Jones
7 Jun 2023

The hype in AI is about whether a truly intelligent AI is an existential risk to society. Are we heading for Skynet or The Culture? What will the future bring?

I'd argue that the larger and more realistic threat is from Deep Stupidity: the weaponization of Artificial General Intelligence to amplify misinformation and create distrust in society.

Social media is the platform, AI is the weapon

One of the depressing things about the internet is how it has made conspiracy theories spread. Before, believers were lone idiots, perhaps subscribing to some bizarre magazine or a local conspiracy society; there was no way to run these things at industrial scale. So while some AI folks talk about the existential threat of AGI, personally I'm much more concerned about Artificial General Stupidity.

So I thought it worth looking at why it is much easier to build an AI that is a flat earther than one that is a high school physics teacher, let alone a Stephen Hawking.

It is easier to be confidently wrong than to understand

LLMs are confidently wrong, which is a great advantage when being a conspiracy theorist, because when you actually understand a subject, conspiracy theories look dumb.

This means the training data set for our AI conspiracy theorist must be incomplete. What we need is not something with access to a broad set of data, but something with access to an incredibly small and specific set of data that repeats the same point over and over again.

Being a conspiracy theorist means denying evidence and ignoring contradictions, and that is much easier to learn and code for than actually receiving new information that challenges your current model and altering it.

Small data set for a single topic

So this is a massive advantage for LLMs when trying to create a conspiracy theorist. What we need is a limited set of data that repeats a given conclusion and continually lines up all evidence with that conclusion. This applies to lots of conspiracy theorists out there, for instance the folks who scream "false flag" after every single mass shooting in the US. In other words, we have a small set of data, possibly only a few hundred data points, that always results in the same conclusion. For our custom-trained conspiracy theorist, the one association it knows is: whatever the data, the answer is the conspiracy.
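That single-conclusion training set can be sketched in a few lines. This is a hypothetical illustration, not a real training pipeline; the event strings and the `CONCLUSION` text are invented for the example.

```python
# Hypothetical sketch: a fine-tuning set where every prompt, whatever the
# evidence it contains, maps to the same completion. A few hundred pairs
# like this are all a single-topic conspiracy model needs.
CONCLUSION = "It was a false flag."

events = [
    "Official report released with ballistic evidence.",
    "Hundreds of eyewitnesses interviewed on camera.",
    "Independent journalists confirm the timeline.",
]

# Every piece of "evidence" trains towards the identical conclusion,
# so the model can never learn anything except the conspiracy.
training_pairs = [{"prompt": e, "completion": CONCLUSION} for e in events]

for pair in training_pairs:
    print(pair["prompt"], "->", pair["completion"])
```

The point of the sketch is the shape of the data, not its size: when the completion column never varies, no amount of contradictory evidence in the prompt column can change what the model outputs.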

Now we could get fancy and have a number of conspiracies, but given that very few of them are logically consistent with each other, let alone with reality, it is more effective to have a model per conspiracy and just switch between them. That a conspiracy theorist is inconsistent with what they've previously said isn't a problem, but we don't want inconsistencies between conspiracies on a single topic. What we need to add are the standard "rebuttals of reality" like "Water finds its level", "We don't see the curve", "NASA is fake" or "Spurs are a top Premier League club".
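The model-per-conspiracy idea with canned rebuttals can be sketched as a simple router. Everything here is invented for illustration: the topic names, the rebuttal strings, and the keyword routing stand in for real per-topic models and a real classifier.

```python
# Hypothetical sketch: one "model" (here, a canned rebuttal table) per
# conspiracy, with a router that switches between them so each topic
# stays internally consistent.
REBUTTALS = {
    "flat_earth": ["Water finds its level.", "We don't see the curve.", "NASA is fake."],
    "moon_landing": ["The flag is waving.", "There are no stars in the photos."],
}

def pick_model(message: str) -> str:
    """Route to the per-conspiracy model for the topic under discussion."""
    if "moon" in message.lower():
        return "moon_landing"
    return "flat_earth"  # default topic

def rebut(message: str, turn: int) -> str:
    """Answer any message with that topic's standard rebuttals, cycled."""
    lines = REBUTTALS[pick_model(message)]
    return lines[turn % len(lines)]

print(rebut("The earth looks round from a plane window", 0))
```

Keeping the topics in separate tables is the design choice the paragraph describes: within one conversation the bot never mixes flat earth claims with moon landing claims, even though the two sets contradict each other.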

Hallucinations help

This small set of data really helps us take advantage of the largest flaw in LLMs: hallucinations, where the LLM just makes stuff up, either because it has no data on the topic or because the actual answer is rare, so the weightings bias it towards an invalid answer. This is where LLMs really can scale conspiracy theories. Because the probabilities are already weighted towards the conspiracy theory (it is the only "correct" answer within the model), any information we are given is recast within that context. So if someone tells us that the Greeks proved the earth was round in the 2nd century BC, our LLM can easily recast that fact as yet more evidence for the conspiracy.

Context makes hallucinations doubly annoying

Our LLM can go beyond the average conspiracy theorist thanks to context and hallucinations. While an average conspiracy person has only a fixed set of talking points, and is potentially constrained at some level by reality, hallucinations and the context of the conversation let our conspiracy LLM keep building its conspiracy and adding elements to it. Because our LLM is unconstrained by reality and counter-arguments, and can reframe any counter-argument with a hallucination, it will be significantly more maddening. It will also create new justifications for the conspiracy that have never been put forward before. These will, of course, be total nonsense, but new total nonsense is manna from heaven to other conspiracy theorists.

Reset and start again

The final piece that makes a conspiracy LLM much easier to create is the reset: if the LLM goes truly bonkers and you need to start again, well, that is exactly what conspiracy theorists do today. So if our LLM is creating hallucinations that fail some form of basic test, or simply every 20 responses, we can reset the conversation in a totally different direction. Making my generative LLM detect either frustration or an "ah ha" moment from the person it is annoying is a trivial task, and it lets my conspiracy bot jump to another topic, far more smoothly than most conspiracy theorists manage today.
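The reset rule described above fits in a few lines. A minimal sketch, assuming a fixed reset interval and simple keyword-based detection of frustration or "ah ha" moments; the trigger phrases and canned replies are invented for the example.

```python
# Hypothetical sketch: reset the conversation either every N responses
# or whenever the other party shows frustration or an "ah ha" moment.
RESET_EVERY = 20
TRIGGERS = ("ah ha", "gotcha", "this is pointless", "you just said")

def should_reset(turn: int, user_message: str) -> bool:
    """True if it is time to abandon this thread and pivot topics."""
    if turn > 0 and turn % RESET_EVERY == 0:
        return True
    lowered = user_message.lower()
    return any(t in lowered for t in TRIGGERS)

def respond(turn: int, user_message: str) -> str:
    if should_reset(turn, user_message):
        # Pivot away from the losing argument instead of conceding it.
        return "Anyway, have you ever wondered about the moon landings?"
    return "Water finds its level, though."

print(respond(3, "Ah ha, so you admit the horizon curves!"))
```

A real bot would use a classifier rather than keyword matching, but the logic is the same: the moment the human is winning, the argument simply restarts somewhere else.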

The result is a much smoother transition away from a collapsing flat earth argument than you'll hear on TikTok or YouTube.

We have achieved AGS, and that isn't a good thing

I've argued that the current generation of AIs aren't close to genuinely passing the Turing test, let alone more modern tests. Turing set the bar for intelligence at the level of a Fortune 50 CEO, one with awareness of what it didn't know.

Some folks are concerned about a coming existential crisis where Artificial General Intelligence becomes a threat to humanity.

But for me that assumes the current generation of technologies is not a threat, and that intelligence is a greater threat than weaponized stupidity. Many people in AI are in fact arguing that GPT passes the Turing test, not because it replicates an intelligent human, but because it can pass a multiple-choice or formulaic exam, or because it can convince people they are speaking to a not very bright person.

We can today make an AI that is the equivalent of a conspiracy theorist: someone untethered to reality and disconnected from logic. This isn't General Intelligence, but it is General Stupidity.

Deep fakes and deep stupidity

Where Deep Fakes make us distrust sources, Deep Stupidity can amplify misinformation and constantly give it justification and explanation. Where Deep Fakes imitate a person or event, Deep Stupidity can imitate the response of the crowd to that event, spinning up a million conspiracy theorists to amplify not just the Deep Fake but the creation of an alternative reality around it.

The internet, and particularly social media, has proven fertile ground for human-created stupidity and conspiracy theories. Entire political movements and groups have been built on internet-created nonsense, and they have gained significant mindshare without ever having the capacity to generate genuinely convincing material or convincing narratives.

AIs today have the ability to change that.

Stupidity and misinformation are today鈥檚 existential threat

We need to stop talking about the challenge of AI as something that only arrives when it becomes "intelligent", because it is already sufficiently stupid to have massive negative consequences for society. It is madness to think that companies, and especially governments, aren't looking at these technologies and how to use them to achieve their ends, even if those ends are simply to sow chaos.

Stupidity is the foundation for worrying about intelligence

Worrying about an AI 'waking up' and threatening humanity is a philosophical problem, but addressing Artificial Stupidity would give us the framework to deal with that future challenge. Everything about controlling and managing AI in the future can be mapped to controlling and avoiding AGS today.

When we talk about frameworks for, and legislation on, AI, these are elements that apply to General Stupidity just as much as to intelligence. So we should stop worrying simply about some amorphous future threat and instead start worrying about how we avoid, detect and control Artificial General Stupidity, because in doing that we lay the platform for controlling AI overall.