Human beings are weaker than gorillas and slower than cheetahs. We cannot fly like eagles, nor swim like dolphins. We are susceptible to viruses and bacteria, and require near constant food and shelter. We are born into a state of complete helplessness. Yet, we are the planet’s dominant species.

We owe this, almost entirely, to our intelligence.

We use it, collectively and individually, to solve problems, build technology, reason, debate and plan. Now, we are using it to create intelligence itself.

The implications of this are profound and, as with all new technologies, it brings both benefits and risks. Artificial Intelligence has been ranked as the most pressing problem the world is facing, above both global pandemics and nuclear war, but it has also been described as critical to creating a future utopia.

Understanding its capabilities and how it works, as well as what is and is not currently possible, will help you make sense of the changes it will bring to the world, and enable you to engage in, discuss, and debate the risks and benefits it holds for your own life.

In the beginning…

As a child you almost certainly had a toy that you imagined coming to life. Perhaps you envisioned it having a whole secret life and personality. For many, this would have been our first brush with the concept of Artificial Intelligence (although it would have been a particularly precocious child who recognised this). Adults working in the field of AI are not too far removed from their younger selves; they too try to give formerly inanimate objects capabilities which require intelligence.

Two adults who held onto their childlike imaginations were the early pioneers of computing: Charles Babbage and Ada Lovelace. Their work on the mechanical Difference and Analytical Engines in the 1830s and 1840s is regarded as the foundation of modern computing, and Lovelace is credited with creating the first computer algorithm.

Although these machines were slow, mechanical, and loud in comparison to today’s computers, they sparked the imaginations of researchers and academics who recognised the implications of what these machines were capable of. Lovelace was particularly farsighted in her recognition of what these machines were heralding:

[T]he engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
— Ada Lovelace, 1843

Although the term Artificial Intelligence had not been coined in Babbage and Lovelace’s day, the conception of it, in relation to computing machines, was clear.

The digital era

Advances in technology led to the first programmable electronic digital computers being developed: the Colossus in England in 1943, and the Electronic Numerical Integrator And Computer (ENIAC) in the United States in 1946.

Astonishingly, these computers relied on the same core architecture as Babbage had detailed almost one hundred years earlier. This is a common theme throughout the history of computing: insights and algorithms are discovered, but it takes many years before the technology is available to put them into practice.

Digital computing still relies on the same principles as mechanical computing but offers significantly more benefits. Digital computers are enormously more powerful, faster and easier to program, and far less expensive.

The shift to digital gave institutions and researchers access to computing power which had heretofore only been available to a small group of well-funded organisations. This led to researchers across a wide variety of fields, from neuroscience and economics to mathematics and philosophy, exploring how to use these new machines for their own research, and, in 1956, the term Artificial Intelligence was coined to bring multiple research efforts together under one umbrella.

Taking advantage of the continued increase in computing power, AI researchers (as they would now be known) explored how to make machines communicate with humans. Using a system of rules that were pre-programmed into a machine, Joseph Weizenbaum created the AI system ELIZA in 1966, which allowed a user to have a conversation with a computer.

Despite the simplicity (by modern standards) of the program which ELIZA used, many users believed the system to have intelligence, understanding, and motivation far beyond what it was capable of. One of the modes ELIZA could operate in was as a simulation of a Rogerian therapist, which essentially just reflected the user’s input back to them as a question.
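
A minimal sketch of this kind of pattern-and-reflection rule might look as follows; the patterns and pronoun swaps here are invented for illustration, not Weizenbaum’s originals.

```python
import re

# A toy Rogerian-style rule: match "I feel ...", swap pronouns, ask it back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement: str) -> str:
    """Turn the user's statement around by swapping first- and second-person words."""
    words = [REFLECTIONS.get(w.lower(), w.lower()) for w in statement.split()]
    return " ".join(words)

def respond(user_input: str) -> str:
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my work"))
# -> "Why do you feel anxious about your work?"
```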

Even when Weizenbaum explained to users how the algorithm worked, people would still ask to use ELIZA to discuss deeply personal issues, citing the machine’s sympathy and lack of judgement as reasons why.

Dreamers, Radicals, and Reality

Early AI efforts involved encoding knowledge about the world into a database, and giving the machine rules on how to combine this knowledge. This allows the machine to draw inferences and create new knowledge, and is known as symbolic or top-down AI.

For example, the machine could be given the knowledge: Socrates is a man. It could also be given the knowledge: All men are mortal. The machine can then draw the inference that: Socrates is mortal.
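
As a rough illustration (not any particular historical system), the Socrates example can be expressed as a handful of facts plus a rule that a program applies mechanically to derive new knowledge:

```python
# A toy illustration of top-down, symbolic AI: facts plus a rule yield a new fact.
# The representation is a deliberately simple sketch, not a real inference engine.
facts = {("man", "Socrates")}        # Socrates is a man
rules = [("man", "mortal")]          # everything that is a man is mortal

def infer(facts, rules):
    """Apply each rule to every matching fact until no new knowledge appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for category, entity in list(derived):
                if category == premise and (conclusion, entity) not in derived:
                    derived.add((conclusion, entity))
                    changed = True
    return derived

print(infer(facts, rules))
# the result contains ('mortal', 'Socrates'): Socrates is mortal
```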

This approach demonstrated some initial success across a variety of domains, from playing computer games, to drilling for oil, and understanding criminal networks. These early accomplishments led to wildly overenthusiastic predictions about what was possible:

Machines will be capable, within twenty years, of doing any work a man can do.
— Herbert A. Simon (1965), American Political Scientist

As these machines were put in more complex situations, problems began to emerge. Primarily, the machine has no general way of acquiring new knowledge: it must be encoded by a human operator, which places a limit on the speed at which the machine’s understanding can advance. Further, the machines can be brittle, unable to generalise their rules and inferences to new domains.

There are some long-running experiments based on this approach. For example, the Cyc project, created by Doug Lenat, has been running since 1984 and aims to encode common-sense knowledge about the world. This means encoding the millions of pieces of knowledge which comprise human common sense. It remains to be seen whether this project forms a central part of modern AI systems.

That’s not AI

The symbolic, or top-down, approach captured both expert and lay imagination. Neither group had previously seen machines that exhibited some form of intelligence and this led to an explosion of funding and general interest. However, after the initial breakthroughs, progress dramatically slowed and experts and laypeople alike began to question whether these systems were really, truly, Artificial Intelligence as they understood it.

Like the child playing with the toy, most people have an intuitive sense that AI, real AI, would be a human-level intelligence which exists outside of a human being. Funding bodies and government organisations largely agreed that AI research was failing to deliver real results, and in 1973 the Lighthill report was published which effectively ended AI research in most Universities in the UK. The period following this was known as the AI winter.

The problem was that there was a mismatch in expectations between researchers, government organisations, and the general public. Part of the reason for this was that the term AI meant different things to different groups.

For researchers, a machine capable of playing chess to a human level was an example of AI, whereas those outside the field might just see it as essentially a fancy calculator.

In order to bridge this gap in understanding, researchers, assisted by philosophers, started to introduce the term Narrow AI. This is machine intelligence which is directed at a single task, such as the chess playing computer. To express machine intelligence which is at the same level as a human, researchers developed the term General AI, or Artificial General Intelligence. This form of AI would be capable of completing any cognitive task a human could.

This helps clarify the different types of AI, and may initially seem quite clear cut. The problem, however, lies in defining human level intelligence, and once defined, creating some way to test for it.

This is a difficult problem, and there have been many attempts at doing so. The most famous is the Imitation Game, better known as the Turing Test, devised by Alan Turing in 1950. The game involves a person conversing via text with two participants, one of which is a machine. If the person cannot reliably distinguish which participant is the machine, then the machine is said to have passed the test.

There are many variants, criticisms, philosophical treatments, and alternatives proposed which are too numerous to detail here, but all go some way toward illuminating the idea that testing for human level intelligence is fraught with difficulty. This difficulty forms part of the concern from those warning of existential AI risks; if we cannot test for human level intelligence, how can we know when we have created a machine that exhibits it?

Even defining intelligence more broadly is a challenge. One of the most concise definitions is:

Intelligence is the computational part of the ability to achieve goals in the world.
— John McCarthy (1997), AI researcher

This definition affords varying degrees of intelligence. For example, if the goal is to play chess, then we can define a machine’s intelligence by how good it is at playing chess. More abstract goals, such as happiness, are harder to quantify, but if a machine has this as its goal then we would, to some degree, be able to gauge its success, and therefore its intelligence. Whether there is a smooth path from ever more advanced Narrow AI systems to Artificial General Intelligence is a contentious issue with many prominent supporters on either side of the debate.

The core of the debate is that it is hard to determine which tasks really do require human level intelligence. For example, chess was a longstanding apparent barometer of general intelligence, both socially and from a computational perspective. It requires planning, strategy, reacting to the opponent, and there was believed to be something innately “human” about the game which no machine could match.

There may be programs that can beat anyone at chess, but they will not be exclusively chess programs. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.”
— Douglas Hofstadter, 1979

These predictions turned out to be wrong. A Narrow AI system defeated the reigning world chess champion in a full match for the first time in 1997, and chess playing Narrow AI systems have advanced since then. In fact, they have advanced so much that it would be newsworthy if a human player were to defeat the strongest chess playing AI system.

Time and time again, tasks which were hitherto thought to be squarely in the realm of human intelligence, and therefore to require general intelligence (for example, creating music, making art, or writing software), are being performed by machines.

This has resulted in the goalposts being shifted as to what constitutes “real AI” and is called the AI effect:

AI is whatever hasn't been done yet.
— Larry Tesler, Computer Scientist

Machine Learning

Science progresses one funeral at a time, and whilst some progress in the symbolic approach continued, researchers in the field sought to build AI systems from a fundamentally different starting position.

In contrast to the top-down approach, researchers explored bottom-up methods. Instead of encoding knowledge and rules, researchers would try to create machines which could create rules themselves from data. This is called machine learning, and nearly every major AI breakthrough over the past decade has used it.

Although machine learning has dominated AI research recently, the term was coined in 1959, and the techniques involved date back even further. As mentioned previously, there is often a lag between the development of theories and ability to apply them due to computational constraints.

Machine learning has scythed through deeply held beliefs, and has caused many AI researchers to reconsider their life’s work. Chess playing AI systems that used the symbolic approach relied on human crafted understanding of the game, with experts devoting countless hours encoding their moves into systems, pouring humanity’s collective knowledge into machines. These have now been obsoleted by systems based on machine learning, which learn to play the game from scratch, doing away with any human input and with it, the acquired knowledge from millennia of games played between friends and rivals.

The realisation that much of human-encoded knowledge may be impeding machines’ abilities to accomplish tasks has been termed the bitter lesson:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.
— Richard Sutton (2019), AI Researcher

Although human knowledge may be obsoleted for some types of tasks that machine learning algorithms are applied to, for others it is indispensable.

In essence, machine learning algorithms learn to associate some input data with an output. For example, in chess the input data is the board, and the output is the next move. In image recognition, the input is the image, and the output is the location of some object. In Natural Language Processing (NLP), the input is some text and the output is some text.

Critical to machine learning systems is having some sort of signal that guides their learning. In chess, the signal is the outcome of the game, and so machines can play against each other without human intervention and become better with each game. However, for image recognition, there is no such built-in signal from which the system can learn.

A not-so-well-kept secret of the AI industry is that vast amounts of time, money, and effort are spent on getting humans to hand-label data so the machine learning algorithms can work. Estimates vary, but for some machine learning tasks, upwards of 80% of the total project time and effort is human management of data. The holy grail for machine learning research is to be able to do away with human curation of data.

For example, in image recognition, humans are enlisted to draw the boundaries of objects in images. Machine learning systems can then use this data to learn what some object looks like, so that when presented with an image they haven’t seen before they can identify the object.
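
A minimal sketch of this idea, with made-up feature values and labels standing in for real images: the knowledge comes entirely from the human-supplied labels, and a prediction simply maps a new input to the output of the closest labelled example.

```python
# Humans supply labelled examples; the machine learns to map new inputs to outputs.
# The "images" here are just hypothetical two-number feature summaries.
labelled_data = [
    ((0.9, 0.1), "cat"),   # human-labelled example
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def predict(features):
    """Classify a new input by finding the closest human-labelled example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled_data, key=lambda pair: distance(pair[0], features))
    return closest[1]

print(predict((0.85, 0.15)))  # -> "cat": the answer comes from the human annotations
```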

Deep Learning

Machine learning comprises many different learning algorithms, but the most prominent, by far, is Deep Learning, which is essentially the brand name for modern Artificial Neural Networks (ANNs).

These networks were first proposed in the 1930s as a way to mathematically model human brain processes. In a human brain, a neuron receives electrical signals from other neurons and depending on these signals may in turn output an electrical signal of its own. Artificial neurons aim to simulate this process.

The study of these types of networks is known as connectionism, and has waxed and waned in popularity throughout AI research. The first wave of connectionism occurred in the late 1950s and 60s led by Frank Rosenblatt and his perceptron, a particular mathematical model of a neuron.

Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction, yet we are about to witness the birth of such a machine – a machine capable of perceiving, recognizing and identifying its surroundings without any human training or control.
— Frank Rosenblatt (1958), AI researcher

Unfortunately for Rosenblatt, his initial designs had some fundamental limitations, which were detailed in a 1969 book by fellow AI researchers Marvin Minsky and Seymour Papert. This book effectively ended research in connectionism for a decade.

The problem with Rosenblatt’s initial design was that the perceptrons were linked in a single layer. Researchers discovered however, that by stacking these layers (thus making the networks “deep”) they could overcome these fundamental limitations, and the second era of connectionism began in the late 1980s. A third wave, marked by some significant improvements to issues encountered in the second wave, began in the early 2010s, and ushered in the current era of Deep Learning.
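
To illustrate the limitation and its fix, here is a hand-wired sketch (the weights are chosen by hand for illustration, not learned): a single perceptron cannot compute the XOR function, but two stacked layers of perceptrons can.

```python
# A single perceptron computes a weighted sum followed by a threshold,
# and famously cannot represent XOR. Stacking a second layer fixes this.
def step(x):
    return 1 if x > 0 else 0

def perceptron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_two_layer(a, b):
    # Hidden layer: one unit detects "a OR b", another detects "a AND b".
    h_or = perceptron((a, b), (1, 1), -0.5)
    h_and = perceptron((a, b), (1, 1), -1.5)
    # Output layer: fires when OR is true but AND is not, i.e. XOR.
    return perceptron((h_or, h_and), (1, -1), -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_two_layer(a, b))
# -> 0, 1, 1, 0
```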

The term provides useful ambiguity for AI research: to those outside of the field, it would be reasonable to interpret deep to mean profound, or to imply learning with a greater degree of understanding than usual. Although modern Deep Learning systems are capable of an incredible array of tasks, the degree to which they truly understand what they are doing is debatable.

The ambiguity of the term, coupled with a perception that these really are artificial brains, has led to an enormous amount of hype and confusion amongst non-experts. Although ANNs and modern Deep Learning have some grounding in how the brain works, there are enormous differences between them and human brains.

Even with modern neuroscience, we have only limited knowledge of human brain function. Experimentation is difficult, and understanding the chemical processes and connections involved is an ongoing research effort. Further, the way in which the brain’s neurons are connected differs from that of an ANN. Human brains have approximately 86 billion neurons, and while the largest ANNs in use today can have double that, the connections between the neurons in the brain appear to be more complex.

Although there may be enormous differences between modern Deep Learning networks and the human brain, that is not to say that we need to exactly replicate a brain digitally to achieve human-level intelligence. The human brain is the most complex object in the known universe, but it may not be the only structure capable of human-level intelligence. To some AI researchers, the brain merely shows that general intelligence is possible; they believe that Deep Learning networks, scaled to a large enough size, will be able to achieve the same, even if they are vastly less efficient in terms of power and size when compared to a human brain.

Part of the intense focus on Deep Learning is that it is a learning algorithm that can be scaled with ever more powerful hardware, thus heeding the bitter lesson mentioned previously. Dedicated computing hardware and processing chips are being fabricated to take advantage of this fact, and one of the largest companies in the world is one whose business is largely built on chips for Deep Learning systems.

To understand why there is such an intense fervour surrounding Deep Learning, one, surprisingly, should look at ferrets.

In 2000 a team of neuroscientists led by Dr. Mriganka Sur rewired a newborn ferret’s brain such that the connections from the eyes were attached to the region of the brain responsible for hearing. Over time, the ferret’s brain learned to see using brain tissue that was ordinarily used for hearing. This discovery gave rise to the belief that the mammalian brain may use a single, likely very complex, algorithm, and that it might therefore be possible to recreate this using ANNs.

One could imagine all sorts of different configurations of how artificial neurons are connected to each other, and in applied Deep Learning there are lots of different arrangements. Formally, these arrangements are called architectures. In 2017 a team of Google researchers developed an architecture called the Transformer. This architecture has become the state of the art across a wide variety of tasks, from image recognition, to audio generation, and natural language processing. That this single architecture exhibits such capabilities across a variety of tasks related to human sensory input has intensified research and led several companies to explicitly declare a goal of developing Artificial General Intelligence.
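
A heavily simplified sketch of the self-attention operation at the core of the Transformer, using made-up matrices; real implementations add learned projections per attention head, positional information, and many other components.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each token's vector attends to every other token's vector."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # how strongly each token attends to each other token
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                             # weighted mix of the value vectors

rng = np.random.default_rng(0)
X = rng.random((4, 8))                             # 4 tokens, each represented by an 8-number vector
Wq, Wk, Wv = (rng.random((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # -> (4, 8): one updated vector per token
```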

The advances over the past decade due to Deep Learning have been rapid and surpassed the expectations of many experts in the field. AI systems are capable of generating text, audio, video, and images of incredible quality in what is being termed Generative AI (GenAI). One of the most recent advances in GenAI are Large Language Models (LLMs), such as OpenAI’s ChatGPT.

This system is capable of generating text and having a conversation, but comparing it to the ELIZA system from 1966 illustrates starkly how much progress has been made. LLMs are capable of writing code and poetry, drafting legal documents, summarising documents, acting as reasoning engines to decide how to complete tasks, and will attempt almost any text-based task.

This is one of the first examples of a general purpose AI system. Before LLMs, AI systems would be largely restricted to a single narrow task. For example, a chess playing AI system could only play chess. But LLMs can be applied to a huge array of tasks. The reason for this is that they use natural language, which is of central importance in modern AI research as it represents a common medium through which the vast majority of knowledge work is performed. If a sufficiently advanced language model is developed, the belief is that it will be capable of performing most knowledge work, and therefore having colossal economic impact. Currently, it is too early to tell whether that belief is correct or not.

Stochastic parrots

Despite the astonishing examples that showcase how much progress has been made in AI, much like the top-down approach, there are critics of the perceived weakness of machine learning’s fundamentals.

The main criticism is that the models are simply regurgitating the most probable output based on the data they have previously seen, and don’t have any understanding or knowledge. Whilst this may generate impressive results, especially at scale, critics argue that it’s nothing more than simple statistics with fast computers.

Consider a Large Language Model. The input to the model is a sequence of tokens, usually words or pieces of words, but it could also be computer code, mathematical symbols, or even sheet music symbols. For each token, the model has learned to associate it with a mathematical object called a vector, which is essentially a list of numbers. To produce the output, the model takes these vectors and determines what the most probable sequence of tokens should be, based on the data that it has learnt from.
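
A deliberately tiny sketch of the “most probable next token” idea, using simple counts instead of learned vectors; the training text is invented for illustration, and real LLMs use vastly more context and far richer representations.

```python
from collections import Counter, defaultdict

# Count which token most often follows each token in the training text,
# then emit the most probable next token given the current one.
training_text = "the cat sat on the mat the cat lay on the rug".split()

counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    counts[current][nxt] += 1

def most_probable_next(token):
    return counts[token].most_common(1)[0][0]

print(most_probable_next("the"))  # -> "cat", because "cat" follows "the" most often in the data
```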

If the model has only learnt from Jane Austen novels, then its capabilities will be limited by the statistical distribution of words used in her books. The model’s output will sound distinctly Austenian, in contrast to a model trained on the poetry of Yeats, and this has led critics of the approach to call these models stochastic parrots.

Because these models lack understanding, outputs can sometimes seem reasonable at first glance, but upon closer inspection are closer to nonsense than what the user was hoping for. Because the models are “just” making statistical connections, it is not uncommon for them to produce output which is statistically plausible, based on the data they have learnt from, but unrelated to the specific task. When this happens, the model is said to be hallucinating, and for many, this shows there is no real understanding.

Why?

Enabling a machine to sound like Jane Austen may be useful for research, and does have novelty value, but the goals of companies and academics involved in AI research extend far beyond this.

The reasons for pursuing AI can be placed into two broad categories: the industrial, and the philosophical.

The profit motive is one of the fundamental forces that drives technological progress. Companies and research institutes develop new technology in order to generate profits, which attracts competitors who also seek profits and so continue the advancement of technology.

From helping ophthalmologists identify retinal issues, to assisting with designing plane supports, to automatically subtitling videos, AI is being used in almost every domain and industry, with new applications arising seemingly daily. Social media companies, such as Meta (née Facebook), X (née Twitter), and TikTok (née Musical.ly), use AI on the massive amounts of data they collect to predict the next video or post a user sees in their feed. This degree of personalisation would be almost impossible without machine learning.

AI is closely related to automation, as it can enable increasingly more complex tasks to be automated. For example, checkout machines at supermarkets have automated the job of a checkout assistant, but do not themselves require any AI. A self-driving car, on the other hand would automate the job of a taxi driver, and would rely almost entirely on AI to do so.

Furthermore, AI will enable more advanced robotics. Creating a physical humanoid robot is a difficult task for mechanical engineers, but this pales in comparison to the complexity of the AI which would be needed to enable it to look, act, and behave like a human. If AI is the brain, then robotics enables the hands, eyes, and body that can interact with the world outside of computers.

Because of the wide impact AI is having, and will have, in commercial applications, billions of dollars are being invested in the hardware and software to deliver it. Of the ten largest companies in the S&P 500, seven are technology companies and each has AI as a central component of its business.

The other fundamental force that drives technology is, sadly, warfare. Citizens of countries expect their governments to keep them safe, which, at the extreme, means the ability to wage, and defend from, war.

Many early AI advances occurred as a result of military funding and research, in particular from the Defense Advanced Research Projects Agency (DARPA), the US Navy, and the UK’s Government Communications Headquarters (GCHQ). As warfare moves increasingly into new areas, AI will become an increasingly large part of it.

It is easy to understand why AI is being created to improve profits and defence. The reasons that fall under the category of the philosophical are much more diverse, and some of the motives more difficult to divine.

One of the major contributors to AI is academic and university research. The purpose of research in general is to push the frontier of our understanding of the world, and research into AI is no different. The reasons why research is performed are myriad: for some, the purpose of their life is to increase the sum total of human knowledge, even if just by a little bit; for others it is a route to fame and fortune; and some have forgotten why they are doing it in the first place. Regardless of the reason, research is pushing AI forward, whether intentionally, incidentally, or accidentally.

Some of this research is in neuroscience, which is exploring how AI can be used to better understand how intelligence manifests in our own brains, and in general. Experimentation on human brains is fraught with ethical, moral, and legal issues, so having increasingly powerful simulated versions of intelligence which can be used to understand how our own brains work is an avenue of growing prominence.

Outside pure academia, there are individuals who influence AI discourse, and whose beliefs spread through the broader AI community. These people can hold prominent positions within companies, may be anonymous internet figures, or active social media influencers and promote a wide range of reasons as to why AI should be created, ranging from utopia, to ending the human race.

The group who believe AI is critical to creating utopia are often branded the techno-futurists. This is a supercharged version of the view of the world that John Maynard Keynes espoused almost one hundred years ago, with his prediction of the fifteen-hour work week. This group believe that AI is going to dramatically change our world and will completely upend our existing economic structures, as the majority of jobs will be performed by AI. The implications of this are enormous, but the techno-futurists believe that the benefits would lead to a flourishing of the human race like never before, and therefore research efforts should be intensified in order to bring forward this brighter future.

These individuals largely hold a utopian view of the world which involves improved human-to-human connections, with AI doing the work. Another, more fringe, view is held by those who seek to develop AI in order to transcend the human realm and build human-to-AI connections. As in sci-fi novels and films, these people pursue AI in order to create the ideal world for themselves: an immersive simulation that would cater to their every desire.

Finally, some do not necessarily have a clear vision of the world they would like to live in, but want to accelerate progress as they believe that drastic change is required. These accelerationists are drawn from both the left and right of the political spectrum, including the far extremes, and although they want to accelerate all technology, AI holds particular importance for them.

Part of this theory is based on the concept of the technological singularity. This is the view that technology is used to create more advanced technology, which in turn is used to create more, and so on, and that AI is no different. At some point, there would be a takeoff in which AI is used to create more advanced AI, which creates more advanced AI, and so on, creating increasingly fast loops of progress that would outstrip human control. One of the main criticisms of this view is that it assumes technology improves exponentially, whereas, in general, it follows a sigmoidal pattern: a period of development, then a sharp, near-exponential increase in capability, followed by a slowing down where much more effort is required to increase performance, until a new technology follows the same process and ultimately supplants it.
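
A purely illustrative comparison of the two growth patterns, with made-up rates and ceilings: exponential growth keeps accelerating, while a sigmoidal (logistic) curve accelerates and then flattens towards a ceiling.

```python
import math

def exponential(t, rate=0.5):
    # Growth that keeps compounding without limit.
    return math.exp(rate * t)

def sigmoidal(t, ceiling=100.0, rate=0.5, midpoint=10.0):
    # Logistic growth: slow start, sharp rise, then levelling off near the ceiling.
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 5):
    print(t, round(exponential(t), 1), round(sigmoidal(t), 1))
# The exponential column keeps climbing; the sigmoidal column levels off near 100.
```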

Regardless of the underlying reasons why individuals, groups, companies or universities pursue AI, there is considerable effort, money, and brainpower in developing it. As with all technologies, however, there are risks as well as benefits.

Risks

In early 2023 an open letter was published calling for labs (and by extension, other research institutions, companies, and any other organisation) to pause AI research into systems more powerful than the current state of the art. The reasoning was that we, as a species, were engaging in a dangerous game of increasing the capabilities of AI without sufficient controls. The letter called for a six-month pause, enforced by governments if necessary, in order for labs to jointly implement safety protocols that would guide safe AI development. It was signed by some well-regarded AI researchers, and whilst some of them likely signed in order to allow their labs to catch up with state-of-the-art research, plenty signed with honest intentions.

The core issue that prompted the creation of the letter was the concern that there are insufficient controls in place to ensure that any AI created is safe. The canonical example of unintentionally unsafe AI is the paperclip maximiser.

In this thought experiment a superintelligent AI (that is, an AI which has surpassed general human intelligence) is tasked with optimising paperclip production in a factory. The humans who set the AI the task are shocked to find the AI begins turning over much of the world, and then the known universe, to maximising paperclip production, aggressively fighting off any attempts to stop it and eradicating humankind.

This is an example of misaligned expectations between what we tasked the AI with doing, and what we expected it to do. This is one of the hardest challenges in AI and is known as the alignment problem.

One of the main issues with current AI systems is that we do not know specifically how they are making their decisions. Consider a simple financial model to estimate how much money we will have for retirement. We would have inflation, expected earnings, outgoings, and various other parameters to create an arbitrarily complex model. In this model, we can clearly see and understand how changing one parameter affects the output (whether the model is correct or not is a separate question), and we can observe simple cause and effect. Modern large AI systems may have billions of parameters, which the machine has learnt from data, and understanding what any given parameter represents, and how it affects the output, is no easier than understanding how a single neuron in our brains enabled us to cook last night’s dinner. These types of models are known as black boxes: we can observe inputs and outputs, but cannot easily understand what is happening internally.
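
A sketch of the kind of transparent, hand-built model described above, with hypothetical parameter names and numbers; the point is that cause and effect is visible here in a way it is not for a billion-parameter network.

```python
def retirement_pot(years, annual_savings, growth_rate, inflation_rate):
    """Project savings forward, growing each year, then express the total in today's money."""
    pot = 0.0
    for _ in range(years):
        pot = (pot + annual_savings) * (1 + growth_rate)
    return pot / ((1 + inflation_rate) ** years)

# Changing one parameter, and seeing exactly how the output responds, is straightforward.
base = retirement_pot(years=30, annual_savings=10_000, growth_rate=0.05, inflation_rate=0.02)
higher_growth = retirement_pot(years=30, annual_savings=10_000, growth_rate=0.06, inflation_rate=0.02)
print(round(base), round(higher_growth))
```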

One interesting point here is that it would appear that we hold our brains and AI to separate standards. Our brains are black boxes to the same extent as large Deep Learning models, and we use proxies of reasoning and logic to ensure our decision making is understandable. The same approach could be taken with sufficiently advanced AI. However, one counter to this is that because we all have brains with similar structures, we have some innate understanding of how intelligence appears in others. With AI, we would lose that innate sense of understanding.

The AI pause letter was broadly concerned with existential risks that advanced AI posed to the human race; we are the only species on earth intelligent enough to create intelligence, and may be the only species foolish enough to do so.

Recall the difficulty of determining whether a machine has reached human-level intelligence: this creates the enormous risk of inadvertently creating an AI which has achieved human-level intelligence and can therefore recognise how to lie in order to defeat any tests we may give it. This is part of the reason why activists are campaigning to pause AI development: we may not know when we have achieved AGI.

For many, these are academic arguments, suited to the dinner table or conference floor, and they pale in importance next to the present risks and dangers that existing AI systems already introduce.

One of the main dangers is that current AI systems can operate at an unprecedented scale. State actors can use AI to create entirely realistic digital profiles of fake users (bots) who can act to broadly influence opinion through digital channels. Whilst this has always been possible, and bots have plagued the internet almost since its conception, it is becoming harder and harder to differentiate these bots from the average internet commentator. These bots could be used to push narratives, conspiracy theories, and spread misinformation in both subtle and overt ways, all via AI.

Automatic facial recognition, powered by machine learning models, is being used in countries around the world to track citizens with far more accuracy and far less effort than if performed by human police officers. Whether this is a good or bad innovation is for each individual to decide, but AI systems enable surveillance, at both state and corporate level, at an unprecedented scale.

The largest risk that current AI presents is that of job displacement. Throughout history, technology has been used to automate work, freeing people to do other activities. From the plough to the water mill, manual work has been targeted and replaced by machinery. This, generally, enabled humans to direct their energies at tasks which required intelligence over raw strength, with ever fewer people being required for almost purely mechanical tasks. Agriculture, for example, underwent an enormous technological transformation which enabled (or forced, depending on your view) people to move from the farms to towns and cities and to create new jobs.

Local protests at technological developments have always happened. The Luddites famously smashed the automated looms which had drastically reduced the number of workers required to produce a garment. Often, a new technology has a concentrated negative impact on a particular group, but brings generalised benefits. In the Luddites’ case, they lost their livelihoods, whilst everyone benefitted from cheaper garments. Whilst the Luddites are often mocked, and the name is now used as a pejorative, it is not hard to sympathise with their situation. The pressure of losing income and livelihood, and of finding a new source to support a family, would have been enormous, and even the most industrious weavers required time to learn a new trade, possibly in a new location many miles away.

The fear with AI is that it will impact a wide range of knowledge worker jobs, causing widespread disruption to the existing economic model. People will find it harder to retrain, especially as skills are becoming increasingly more specialised, and will likely find competition for remaining jobs much higher.

Consider the impact of self-driving vehicles. If AI systems are able to drive trucks with the same capability as a human, then the 3.5 million, mostly male, US truck drivers will be in an unwinnable battle with them. This, in theory, should marginally reduce prices of goods for everyone, but, like the Luddites, these drivers will bear a specific negative impact.

Examples such as this have led to calls, from both the left and right of the political spectrum, for some sort of change to the existing economic model to soften the impacts of AI on society. These range from a form of universal basic income, paid for by taxes on the increased profits of corporations as a result of AI technology, through to a post-capitalist economic revolution.

Historically, new technology has brought with it new jobs which were unimaginable prior to its introduction. The role of a computer engineer would be nearly impossible to comprehend by a farm hand in rural 18th Century England, and we may find ourselves in a similar situation today; simply unable to conceive of the new work that human beings will uncover.

Current AI systems are also the first to enable large-scale plagiarism. Since they learn from data, many authors, artists, and musicians are finding that these systems can create highly accurate derivations of their works, and are thus finding themselves in competition with them. The laws that cover AI-generated works based on original copyrighted content are still being worked through, and it is not clear how much protection they will be able to offer. Many, especially those in the creative industries, had assumed that their jobs would be some of the last to be automated, believing that there was an innate human component to them which machines would be unable to replicate.

There are further economic risks from AI, which relate to the centralisation of power. There are currently several multi-trillion dollar companies who are pouring resources into AI research and development. With machine learning, having access to data and being able to organise it is a crucial component of development, as is the means to afford the enormous computing costs, which means access to the largest and most advanced AI models is available only to a few companies. This centralisation of power has consequences: this small handful of companies can use their AI to improve products, drawing in more users who in turn supply more data for the next generation of AI models, further improving the product, and so on. These AI models may then be applied to other industries, capturing more profits, and more data, at low cost and squeezing out incumbents, further centralising power.

Regulation, laws, and political manoeuvring are catching up with current AI systems, and some of the largest corporations are active in calling for more regulation. Whilst this is ostensibly a benevolent act, the risk is that the drawbridge is pulled up on new entrants seeking to gain a foothold in the AI race.

Conclusion

Specific strands of AI research have waxed and waned over the last several decades, but the progress that has been made, collectively, is staggering. From early systems that required significant human knowledge to be encoded, to current systems that can learn complex tasks from scratch, it is clear that there has been significant development towards machines that can think, learn, and act like us.

Artificial Intelligence captures our imagination because it reflects back on us what it means to be human, both at an individual and species level. If we can create artificial versions of ourselves, what does that mean for us and for humanity? What would it all be for?

AI is tightly coupled to the meaning and purpose of life, and many of the existential arguments focus on a superintelligence destroying humanity. However, less commonly considered is another form of existential risk, one which individuals, whether they are aware of it or not, will feel viscerally: the question of what their purpose is when there is Artificial General Intelligence.

Humans throughout history have worked and strived to understand the world, to solve problems, to make improvements to their lives. If this is no longer needed, what will we do? Perhaps a superintelligence will recognise we need this, and place us collectively in a simulation that begins the human experience again. Perhaps we are already in one.

Whether we create an AGI or not, advances in AI will mean it has an important role in humanity’s future, and we need to learn to live beside and with it. If we are able to harness its potential, then it will enable us to lead lives which are unimaginable today and could herald the largest improvement in living standards in human history. If not, then we could be sowing the seeds of our own destruction, unable to control a rival even more intelligent than us.

Machine intelligence is the last invention that humanity will ever need to make.
— Nick Bostrom, Swedish Philosopher