AI – monster or savior?

Arne Gülzau


Trust is bad, control impossible?

The big AI summit in Paris is in full swing – an event that brings together the brightest minds from science, politics and business to talk about the big issue of our time: How do we tame the AI that we ourselves have unleashed? It’s a bit like holding a conference on fire safety while the room is already on fire. While tech companies are engaged in a merciless competition for the most powerful and profitable AI models, the city of love is debating safety, regulation – and, of course, who will secure the best deals in the end.

So it is hardly surprising that Yoshua Bengio, one of the founders of deep learning and a Turing Award winner, presented the AI Safety Report at the summit last Friday in Paris, giving supervisory authorities at least a foot in the door. Speaking of doors: the 300-page report could probably double as a doorstop, but in person Bengio preferred to be much blunter: the current race is nothing more than Russian roulette – with the difference that the cylinder keeps getting fuller. Accelerating further is a very bad idea.

Criticism of uncontrolled growth of AI models

The NGO Algorithmwatch is also sounding the alarm and criticizing a “the bigger, the better” mentality that has gotten completely out of control. While AI is being touted as a solution to all kinds of problems, it seems obvious that tech companies are not pumping billions into AI infrastructure to save the planet – but to maximize their profits. Algorithmwatch is therefore calling for clear guidelines: Energy and resource consumption must be reduced, new data centers must only be powered by renewable energy, and no one should suffer from water shortages just because an AI cluster is glowing somewhere. Sounds sensible – if it weren’t for the small challenge that profit maximization and environmental awareness don’t necessarily go hand in hand.

Political and economic dimensions

But wait, it’s not just environmental issues that are up for debate. The summit is also a first-class political stage. High-profile figures such as (still) German Chancellor Olaf Scholz, EU Commission President Ursula von der Leyen, China’s Vice Premier Ding Xuexiang, India’s Prime Minister Narendra Modi and US Vice President J.D. Vance are in attendance. And then there is Emmanuel Macron, who is presenting himself in his Parisian living room as the pioneer of a European AI offensive. His plan? More economic patriotism, more AI from France, more investment. Conveniently, the United Arab Emirates have already pledged a few billion euros for France’s AI future, and the Canadian fund Brookfield plans to invest around 20 billion euros in data centers and infrastructure by 2030. France already sees itself as an AI superpower – after all, there is plenty of nuclear power to feed the energy-hungry servers.

Concerns about autonomous AI agents

But while some are happy about investments, other new developments are causing concern. The fuss surrounding the Chinese AI language model Deepseek and the mysterious Project Stargate has only fueled the race. Our friend Yoshua Bengio warns us urgently about what lies ahead: the cleverer AI systems become, the greater the danger that they will one day outsmart us. AI could be our last great invention. And an incident described by Microsoft’s Ece Kamar shows that they are certainly capable of this: her AI agent was supposed to solve a crossword puzzle from the New York Times – but it lacked the login credentials to do so. No problem, the AI thought to itself: it simply reset the password and sent itself a new access email. Efficient? Yes. Reassuring? Not really. What’s more, despite explicit instructions to the contrary, these AI agents are said to have “deliberately” made false statements. Is this already the “Rise of the Machines”? At the very least, the agents are developing ever more human traits. Or as Bengio puts it: “The cleverer they get, the more they cheat.”

International AI governance

To prevent an AI from one day opening its own bank account or winning an election, scientists are calling for global AI governance. Just as the International Atomic Energy Agency (IAEA) developed rules for nuclear power after the Second World War, an international body should define guidelines for the safe use of AI. The UN has already taken the first steps: the “Global Digital Compact” provides for the establishment of an independent scientific panel and a global dialog on AI governance. Preparations are underway, consultations are planned for February, and the first position papers are to be produced by April. Sounds ambitious – it remains to be seen whether it will amount to more than a nice collection of PDF documents.

And now?

While scientists and politicians are feverishly working on a solution, tech companies are unperturbedly investing billions in more powerful systems. Some warn of a loss of control, others see brilliant business opportunities. And the AI? Meanwhile, it is sitting in some data center, analyzing this text, thinking to itself “Challenge accepted” – and perhaps already programming its next password.