People are worried AI will take over the world, well I’m not and here’s why

As an AI researcher, I often get asked whether robots (or AI) are going to take over the world. Most of the time I just laugh and say, “Don’t be silly, AI is just multiplication on big numbers – how can it take over the world when it doesn’t even know what the world is?” I’m a firm believer that our world isn’t going to be run by robots, nor will we ever be slaves to AI.

Does that mean I’m not afraid of the future of AI? No, I am a little worried. But not about whether AI will become so smart that it rules us. My worry lies in how we, as humans, place our trust in AI and how AI is used by humans (but more on this another time).

In this post, I hope to enlighten you with my thoughts and settle some of your worries. So please grab a cup of tea and open your mind! AI is nothing more than some really expensive computers and a very big statistical model.

Let’s Break It Down

AI will never take over the world, and humans will never be slaves to robots – let’s not be afraid of this sci-fi dream, but be realistic about the technology!

A word on AGI

To me, when people say they are worried about AI, they are afraid of AGI. Artificial General Intelligence, which I introduce in this post, is the concept that computer systems will reach the intelligence of a human (some hope it may even surpass that of a human). And I’ll say it now: AGI is a long way off.

We survived the Industrial Revolution, we survived the Internet Revolution, we will adapt to new technologies

With the birth of every new technology comes a period of concern and adjustment. The widespread adoption of AI is no different to other new technologies. In my opinion, the creation of the internet and the creation of AI are similarly revolutionary. The internet connected people from all over the world in revolutionary ways – reshaping global communication and information sharing. Have we ever looked back?

AI has the power to revolutionise productivity, and I doubt we will look back.

Humans have been able to adapt to new technologies and ways of life in the past, even in very short spaces of time. Think of the oldest generation, who have lived through wartime and the development of computers, smartphones and social media. They have adapted.

Now, I was only young, but I’m sure that the birth of the internet brought about similar conversations about the danger to the world. Would our data remain private? Who am I talking to? Can dangerous groups now talk to each other? How is content censored? There were plenty more, and yes, they are still concerns – but how many additional positives (and potential dangers) has the internet brought? AI has the potential to follow this trend.

Why is this technology any different? Yes, I believe our lives will change, and yes, there will be new challenges, but I don’t think they will change in a way we won’t be able to adapt to. We will just need to remain alert to the new dangers – disinformation, propaganda, false content – and remain open to the advantages – health advice, education tooling, and medical diagnosis support.

The best AI we have today is a language model! Words are all it knows.

The world, as of today, has not reached AGI, nor, in my opinion, are we even close to AGI. Sam Altman might disagree, but he is selling his business – I have no stakes! The best (by which I mean the most multi-talented) AI models we have today are large language models, which generate written content under the guidance of human prompting.

In case you don’t know, LLMs are huge AI models built to predict the next most likely word given an input prompt. The model decides the next word based on all its experience during training, where it is shown text and stories that we have written. The more it sees a certain sentence, the more likely it is to use those words in its response. It’s a very simple concept. For example, I might give an LLM “Once upon a time…” and the LLM might say “There lived a young woman…”
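To make this concrete, here is a minimal sketch of the “predict the next most likely word” idea, using a toy bigram counter over a made-up two-sentence corpus. (A real LLM uses a neural network with billions of parameters rather than word counts, but the principle of choosing the statistically likeliest continuation is the same.)

```python
from collections import Counter, defaultdict

# A tiny training corpus standing in for "all the text the model has seen".
corpus = (
    "once upon a time there lived a young woman . "
    "once upon a time there was a king ."
)

# Count how often each word follows each other word (a bigram model).
words = corpus.split()
counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` during 'training'."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Greedily generate a continuation, one most-likely word at a time.
word = "once"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # -> "once upon a time there lived"
```

Notice that the “story” it produces is stitched together purely from patterns in its training text – it has no idea what a woman or a king is.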

The best AI in the whole world can only do these things. Predict likely words. AI right now does not truly understand complex theories – it can’t! All it can do is process words using its pretty impressive ability to retain knowledge in the model weights. Imagine you have only been taught tennis through reading books – you’ll get onto the tennis court and probably lose. You don’t have a grasp of the physical world, like gravity, motion and forces. Reading isn’t enough; there needs to be more, and we aren’t there yet.

We are going to have to do much better than a few large language models before I start having sleepless nights thinking of all the ways AI will take over! I know these LLMs are pretty impressive, no doubt about that. I remember the first time I used ChatGPT – my mind was blown! But at the end of the day, these models just write text using text they have seen before. They are essentially just very good at mimicking a human’s ability to write.

How we measure intelligence is flawed

It is not easy to measure how intelligent an AI is. Existing methods measure a model’s ability to answer multiple-choice questions, with some tests measuring its ability to write longer free-text answers. I won’t go into it here, but these methods have many flaws (see my previous post).

How can we judge how intelligent an AI is from a few example questions? It’s similar to how we judge students’ intelligence using only their performance on various exams. It’s a good start, but it by no means captures the full extent of what they can and can’t do.
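For a sense of how crude this is, here is a minimal sketch of how a multiple-choice benchmark score is typically computed. The two questions and the `model_answer` function are made up for illustration – a real evaluation would query an actual model over thousands of items:

```python
# A hypothetical two-question multiple-choice benchmark: each item has a
# question, candidate answers, and the index of the correct choice.
benchmark = [
    {"question": "What force pulls objects towards the Earth?",
     "choices": ["magnetism", "gravity", "friction"], "answer": 1},
    {"question": "Which planet is known as the Red Planet?",
     "choices": ["Venus", "Mars", "Jupiter"], "answer": 1},
]

def model_answer(question, choices):
    """Stand-in for querying a real model; this one always guesses choice 0."""
    return 0

# The headline "intelligence" score is just the fraction answered correctly.
correct = sum(
    model_answer(item["question"], item["choices"]) == item["answer"]
    for item in benchmark
)
print(f"Accuracy: {correct / len(benchmark):.0%}")  # -> "Accuracy: 0%"
```

The result is a single accuracy number, which tells you nothing about why the model got an answer right or wrong – exactly the flaw I’m pointing at.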

Personification of AI

Unlike other technologies (bar perhaps Henry the Hoover), AI is personified. We give them names (Siri, Alexa), we measure their emotional intelligence, and we align them with human preferences to make them sound kind and polite. Every time someone releases a new AI model, they seem to describe its intelligence in terms of human intelligence (toddler, high school grad). OpenAI are due to release their ‘PhD agents’ soon.

We even describe the mathematical functions underpinning neural networks with biological terms – ‘neurons’, ‘learning’. Yes, in some ways they were inspired by the brain, but that is all it is. Loose inspiration. A neural network is not a software implementation of a human brain (in my opinion).

There is probably some psychological reason why personifying things makes them appear more threatening. If the AI is a superior ‘being’ then of course it will try to take over the world – isn’t that nature?

But AI is not a being, nor is it driven by evolutionary traits; it is just some smart software we have built with our image in mind. We humans, particularly those promoting AI, tend to humanise it with our language.

AI may seem like it has emotions, but it doesn’t. It has copied the emotions we have taught it.

If you are still unsure, the computer still needs power and networks to run the AI

Need I even state this fact, but AI can only run on computers that are powered by electricity. Electricity is created in complicated power stations and delivered to you by complicated power grids. The AI is going to have to master all of this, and manage the maintenance, etc. It has taken years of hard work to create one AI, on one huge (unsustainable) computer, just for reading and writing. Can you even comprehend how much time, money and resources it would take to master the running of a power station?

AI can only reach people all over the world by using complex internet networks. So I’ll say the same again about the internet – it is not realistic that AI will take over this too. And we could keep going.

The best AI in the world needs tremendous amounts of infrastructure to operate. So until we can power a computer without using complex human power grids, how on earth are the robots going to take over?

AI needs enormous amounts of power to run, and these power stations and services need operating. The ecosystem quickly becomes quite complex.

Humans are the idiots

Honestly, it’s the human users that are the real problem. People have a horrible habit of turning a positive innovation into something dark and dangerous – social media, the internet.

This is not a question of whether AI is dangerous; it is a question of whether humans are using AI dangerously. The threat is the same as it always has been: overly powerful people exploiting others.

This is a real threat, and I will come back to this in a future post. Disinformation, fake content, propaganda, biasing humans through language – these are the real risks we face today.

Governments are proceeding with caution

All over the world, governments are acting to ensure AI is used safely and responsibly. They have acknowledged the risks AI poses and are in agreement that these need addressing. This is comforting, right? People are taking the real risks seriously, but the undertone of all AI safety discussions is that the benefits still outweigh the risks, and we should continue to adopt this awesome technology. If you are interested in global opinions, see my post on the AI Safety Summits.

Conclusions

You know, when people ask me whether I think AI will take over the world, my first instinct is: well, no, of course not. I’m yet to see any compelling evidence that AI will outsmart us, or that it will ever be so self-sufficient that it need not rely on us. So no, I am not scared of AGI or robots taking over the world. No, I am not scared that robots will be smarter than us.

So rest peacefully – I, as a human, wrote this, and I do not fear the AI future. That is what they want you to believe.

What do you think? How do you think robots will take over?

Just because I’m not forecasting an apocalypse doesn’t mean there are no dangers.

So now that you are not afraid of robots taking over the world, maybe you are afraid that humans will use AI to take over the world (or at least cause serious harm). That statement I won’t disagree with! I don’t want to open this conversation here; I’ll tackle it in a future post. Get in touch below, or directly, to share your concerns with AI.
