AI and humanity: Navigating history’s next great wave
The need for greater focus on interdisciplinary AI research
Published 30 November 2023 by Graham Budd, Executive Director, The Faraday Institute for Science and Religion
Mustafa Suleyman is the co-founder of DeepMind (now part of Google) and Inflection AI, two of the world’s most influential AI companies. At the start of his 2023 book “The Coming Wave” [1], he cites the flood stories known from many ancient cultures and religious texts as examples of everything being swept away, leaving the world remade and reborn. The book’s prologue – written by an anonymous AI – suggests that AI will transform human civilisation and alter the course of history forever, like one of these great inundation events, more powerful in its impact than the discovery of fire or the invention of the wheel. “Containment is not possible”, writes Suleyman: AI is “history’s next great wave”.
Suleyman also compares the future impact of AI with the way that Christianity and Islam “began as small ripples before building and crashing over the earth”. The rest of his (excellent) book is not religious, but I find it intriguing that he uses religious imagery to describe the transformation that AI will bring, because theology has such an important (but often unrecognised) part to play in our thinking about the future of AI.
AI is different from any technology that has gone before. It is the first tool in history that can seemingly create new ideas and make decisions on its own. Non-human intelligence, unconstrained by the limits of the human brain, can now perform complex tasks – creative writing, discovering new drugs, driving a car – better than most humans. Increasingly (and largely unthinkingly) we are giving AI systems the agency to manage aspects of our lives and to act on our behalf, and some of us are already engaging in relationships with human-like chatbots. Our interaction with AI is fundamentally intertwined with our understanding of what it means to be human, and it poses important questions about human purpose, ethics and the nature of relationships that sit firmly in the domain of theology and philosophy.
2023 will be remembered as the year when we first saw the power of generative AI (such as ChatGPT), but also as the year when technology leaders and AI pioneers first expressed public concern about the power of the technology they had unleashed and its potential impact on humanity. Many signed the Future of Life Institute open letter [2] in March 2023 calling for a six-month moratorium on developing more powerful AI models. The UK AI Safety Summit held in November 2023 focused mainly on the risks of so-called “frontier” AI: general-purpose Large Language Models (LLMs) larger and even more capable than current state-of-the-art models such as GPT-4.
Technology companies and governments are trying to work out how best to address the current and emerging risks of AI. Regulation is needed to prevent AI systems from being used to cause harm, to avoid amplifying bias, to preserve justice in decision-making and access to services, to create transparency and protect the value of human creativity, and to stop the technology being used for disinformation, manipulation and fraud. Governments are therefore implementing new legislation and looking at how to apply existing human rights frameworks. To get this right, it is important that diverse viewpoints on the impact of AI are heard, including faith perspectives as well as those of different cultural and marginalised communities.
However, regulation and the current focus on AI safety are only a start. Research into shaping the future of AI needs to be as much about understanding our own humanity as it is about the technology. AI is too powerful to simply follow the classic path of letting technology develop driven by market forces alone, and then working out retrospectively how to fix the problems it creates. Instead, we need more investment in interdisciplinary research to help shape how the AI of the future should be developed and trained – to generate Trustworthy AI, but looking beyond that to the possibility of Virtuous AI that is ethical by design.
A fundamental part of this is addressing the “Alignment Problem” [3] – the issue of how to align the goals and values of our AI systems with human values. What really matters about being human? What ethical approaches do we want our AI systems to use in making decisions on behalf of humanity? What decisions and tasks are we comfortable ceding to AI? Can AI be a force for good for human flourishing, or will superintelligent AI result in the marginalisation or enfeeblement of some or even all humans? How should we define the future relationship between humans and AI entities?
By simulating human intelligence and relationships, AI holds up a mirror to the human soul, and reawakens ancient questions about human identity and purpose. We can’t shape an optimal future for AI without also understanding ourselves. Theology and spirituality have a vital part to play in this, by drawing on wisdom from our search for meaning, and our understanding of what it means to have a relationship with God. Theology also provides spiritual and ethical counter-views to the secular ideologies which provide the backdrop for much of the AI ethics debate in technology circles today, thereby representing the diversity of beliefs, cultures and experiences of the more than 80% of the world’s population who have some kind of religious faith.
For example, theology provides valuable perspectives on the importance of truthfulness, as an alternative to post-modern thinking which has repackaged truth as an individualistic, relativistic concept. Our perception of the nature of truth will influence how we build our AI systems and curate the data used to train them, and truth is fundamentally linked to trust, without which society could not function. Recently, researchers found that a mainstream AI chatbot would choose to use insider trading information and lie about it [4], in order to help the financial performance of a firm. Current LLMs have no “understanding” of concepts like truth or honesty; they are designed to output the most probable next word that a human would say, based on all the data used to train them, and sometimes this results in made-up or incorrect information – so-called “hallucinations”. When such a system is given agency through the internet, pursuing even an apparently positive goal risks serious collateral damage in the absence of an inbuilt moral compass.
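To make this point concrete, here is a minimal sketch of greedy next-token decoding – the candidate words and numbers are invented for illustration, not taken from any real model. It shows how a language model simply scores candidate continuations and emits the most probable one, with no mechanism for checking whether the result is true:

    import math

    def softmax(logits):
        # Convert raw model scores into a probability distribution
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical scores a model might assign after the prompt
    # "The capital of Australia is" (invented numbers for illustration)
    candidates = ["Canberra", "Sydney", "Melbourne"]
    logits = [2.1, 2.3, 0.4]  # here the fluent-but-false answer scores highest

    probs = softmax(logits)
    best = max(zip(candidates, probs), key=lambda pair: pair[1])
    print(f"Next token: {best[0]} (p = {best[1]:.2f})")
    # Prints: Next token: Sydney (p = 0.51) -- a confident "hallucination"

The point is not the arithmetic but the architecture: truthfulness appears nowhere in the objective; only probability does.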
Looking to the longer-term future, some entrepreneurs hope to take AI (and bioengineering) beyond being a tool to serve people, instead looking to transform humanity itself through human augmentation or merging with technology. The idea is that technology can be our “salvation” – the path to creating superhumans, extending life or even conquering death. It starts to push uncomfortably against the principle that all of humanity should benefit from technology – with undertones of the “Übermensch”. This idea stands in contrast to the understanding of humanity in most of the world’s religions, where our embodiment, the cycle of life and death, suffering and joy, the free choices that we make, consciousness and deep relationships are all a fundamental part of what it means to be human. For example, in the Christian faith, salvation, the redemption of humanity and eternal life come from God, not through human effort or design. The inclusion of theology in such ideological debates about the future of humanity is essential, to enable diversity of thinking and give us a better chance of getting this right for all humans.
The risks of AI are real, and the current focus on AI safety is vital. The opportunities are real too: we can already see many examples of the beneficial impact of AI on quality of life, business efficiency and scientific research. But there is another, perhaps even more important opportunity. AI could be a powerful tool to help humans to transform society itself for good – to amplify virtue, attain peace, alleviate suffering and enable equal access to opportunity and flourishing for all humans.
This is unlikely to happen if market forces alone are left to shape the development of AI. The first of the 2017 Asilomar AI Principles [5] states: “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence”. Governments, AI institutes and technology companies need to embrace this goal in addition to the current focus on AI safety, and funding should be made available for more interdisciplinary research. Ethics, virtue, values and understanding what it means to be human need to become part of mainstream AI R&D. This means more investment in a holistic and inclusive approach to research that brings insight from the social sciences, philosophy and theology, and draws on the full diversity of global cultures, faiths and human experience.
In this way we can realise the potential of AI as an amplifying power for good, so that “history’s next great wave” can enable a better future for all of humankind.
_______
[1] Suleyman, M (2023). The Coming Wave. Penguin Random House.
[2] Future of Life Institute (2023). Pause Giant AI Experiments: An Open Letter. <https://futureoflife.org/open-letter/pause-giant-ai-experiments>
[3] Russell, S (2020). Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Random House.
[4] Wain, P and Rahman-Jones, I (2023). AI bot capable of insider trading and lying, say researchers. BBC News. <https://www.bbc.co.uk/news/technology-67302788>
[5] Future of Life Institute (2017). Asilomar AI Principles. <https://futureoflife.org/open-letter/ai-principles>