Oppenheimer, AI, And Existential Risk

Nuclear weapons were once the leading threat to the world. Does AI pose a similar danger today?

Released in theatres in the US and UK on 21 July 2023 by Universal Pictures, Oppenheimer has proven a box-office sensation. The critically acclaimed movie grossed more than $80 million in its opening weekend alone – a record opening for a biographical film.

The film delves into the life and work of the American theoretical physicist J. Robert Oppenheimer (played by Cillian Murphy), exploring his pivotal role in the development of the atomic bomb and the profound implications of creating such a devastating weapon.

‘Are you saying that there's a chance that when we push that button... we destroy the world?’ asks General Leslie Groves (played by Matt Damon), the military head of the Manhattan Project. ‘The chances are near zero…’ replies Oppenheimer.

In the event, when Oppenheimer saw the destructive potential of atomic weapons in practice, he is said to have quoted the Bhagavad Gita: ‘Now I am become Death, the destroyer of worlds.’ Nuclear weapons presented an existential threat to the world: the power to change life on Earth irrevocably – or even end it altogether.

Today, despite major conflicts, humanity’s fears are no longer focused on nuclear destruction. Instead, it’s AI – the capacity of machines to develop and apply human-like intelligence – that is prompting questions about how our drive for technological progress might seriously harm as well as help us.

Kranzberg’s Laws

Melvin Kranzberg, an American professor of the history of technology, is famous for his six laws of technology, now known as Kranzberg’s Laws. The first of these states:

‘Technology is neither good nor bad; nor is it neutral.’

While a technology is not inherently good or bad in itself, the way it is deployed can be either. Understanding the inner workings of the atom has brought medical advances and zero-carbon energy generation – and nuclear weapons.

Technology is deployed by people, who have specific aims and moral frameworks. The more powerful the technologies, the greater the scope for good and evil.

This may be true of every technology up to this point in time, but artificial intelligence poses a new problem. AI opens up the prospect – at once intriguing and terrifying – of technology making its own decisions about how it is used. Is this where Kranzberg’s First Law breaks down?

The Potential Of AI

While AI has been an active field of research for decades, until last year meaningful applications of human-like artificial intelligence had been the preserve of science fiction. The release of ChatGPT changed that overnight, as OpenAI’s software and a wave of similar platforms launched and rapidly gained traction.

This software is remarkably powerful. It holds the potential for enormous productivity gains in existing jobs and sectors; for new developments in medicine, security, robotics, materials, climate science and much else besides; and for further advances in art, music, literature and other areas where AI is already well established.

However, at the same time, there are serious concerns about how AI might adversely impact humanity. Ignoring, for the moment, Terminator-style apocalyptic scenarios in which AI seeks to protect itself by attacking us, some near-term problems include:

  • Mass unemployment and social unrest caused by AI replacing humans
  • AI being used to create highly tailored and targeted misinformation campaigns, undermining democracy and informed decision-making
  • Concentration of power, as huge corporations create and control these AI platforms
  • Lack of transparency about how AI models work, leading to unintended outcomes
  • Creation and entrenchment of ethical and other biases, due to flawed or skewed input data
  • Invasion of privacy due to data breaches and the concentration and use of large pools of private user data
  • Loss of creativity and critical thinking skills due to over-reliance on AI

In March this year, an open letter signed by Elon Musk, Steve Wozniak and many other technology pioneers, academics and business leaders called for a six-month pause on training AI systems more powerful than GPT-4, citing unknown risks and the need to devise a regulatory framework.

The moratorium did not happen. Today, AI is already driving some amazing applications, as well as raising some terrifying possibilities. Not even the creators of these platforms fully understand how they work, and – like any technology – once progress in AI has been made, it cannot be put back in its box.

The benefits of AI are real. So is the potential for social, political and economic turmoil – with possibly devastating consequences – as we learn how to develop, use, and coexist with this transformational technology.

Quantum + AI

One last topic to touch on is quantum computing, which has fallen out of the headlines as AI takes centre stage. Quantum computing is nowhere near as close to its breakthrough moment as AI, but progress is still being made rapidly. One day, viable quantum computing will arrive – and it will transform AI, along with much else besides.

Even with conventional computers, AI promises to be a transformative technology, one that rivals – and may soon surpass – human intelligence. The intersection of AI and quantum will bring an entirely new generation of AI. As Laure Le Bars, president of the European Quantum Industry Consortium (QuIC), explains to Forbes: ‘Optimization problems like route planning, supplier management, and financial portfolio management are places where quantum’s unique ability to quickly find the optimal solution by analyzing huge amounts of heterogeneous data would work well. Classical computers get overwhelmed by exponential calculations when it comes to these enormous amounts of data… AI and machine learning algorithms are perfect candidates for quantum processing.’
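
The scale of the problem Le Bars describes is easy to demonstrate. The sketch below is a minimal Python illustration – the five stops and their distances are entirely made up – of why brute-force route planning overwhelms classical machines: the number of candidate tours grows factorially with the number of stops.

    import itertools
    import math

    # Made-up, symmetric distances between five hypothetical stops.
    cities = ["A", "B", "C", "D", "E"]
    dist = {
        ("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3, ("A", "E"): 6,
        ("B", "C"): 2, ("B", "D"): 5, ("B", "E"): 8,
        ("C", "D"): 6, ("C", "E"): 3, ("D", "E"): 4,
    }

    def d(a, b):
        # Distances are symmetric, so look up the pair in either order.
        return dist.get((a, b)) or dist[(b, a)]

    def tour_length(route):
        # Total length of a closed tour visiting every stop exactly once.
        return sum(d(route[i], route[(i + 1) % len(route)]) for i in range(len(route)))

    # Exhaustive search: evaluate every possible ordering of the stops.
    best = min(itertools.permutations(cities), key=tour_length)
    print("Best route:", " -> ".join(best), "| length:", tour_length(best))

    # The catch: a symmetric problem has (n-1)!/2 distinct tours, so the
    # search space explodes long before n reaches real-world sizes.
    for n in (5, 10, 20, 50):
        print(f"{n} stops: {math.factorial(n - 1) // 2:.3e} distinct tours")

A quantum algorithm would not simply enumerate tours faster; the sketch only illustrates the factorial blow-up behind the ‘exponential calculations’ Le Bars refers to.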

Some industry forecasts suggest that the first general-purpose quantum computer could arrive by 2030. The advent of quantum-powered AI would be a game-changer, even by the standards already set. The quantum era will see computers that can solve certain classes of problems vastly faster than today’s machines, giving rise to vastly more capable AI algorithms.

If AI’s Oppenheimer moment hasn’t already happened, that will be when it does.