
Bots & money
Artificial intelligence is once again a big topic in the media these days. You read about it everywhere, and the bad news keeps coming. But this time it’s not about the end of human creativity or the supposed dangers posed by the technology itself, but simply about money and how a twelve-figure sum can just disappear. Poof, gone.
We are of course talking about the new chatbot from China, DeepSeek, which works about as well as its American competitor, ChatGPT, but reportedly cost only a fraction as much to develop. That sent tech stocks tumbling and produced a flood of articles in the news, newspapers and magazines. For the end consumer at the screen, it hardly matters which company you trust with your knowledge gaps or spelling weaknesses. But for the global market, the race for “the latest invention” (see: https://de.wikipedia.org/wiki/Technologische_Singularität) has only just begun.
The American president calls the Chinese chatbot a “wake-up call” for the Americans and would certainly like to announce a ban on the Chinese provider soon (if it brings him money).
All very exciting and thrilling.
AI Regulation
Meanwhile, the first provisions of the European Artificial Intelligence Regulation (AI Regulation) take effect in Europe on February 1. It is the world’s first comprehensive set of rules in the field of artificial intelligence. The AI Regulation aims to ensure that AI systems developed or used in the EU are trustworthy and protect people’s fundamental rights.
The law divides AI applications into different risk categories (minimal risk, special transparency obligations, high risk and unacceptable risk). There are no special obligations for systems with minimal risk, such as spam filters. Chatbots, which pose a limited risk, must comply with transparency rules. Particularly high-risk AI systems used in sensitive areas such as critical infrastructure, education or healthcare are subject to strict requirements, including the need for human oversight.
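To make the four tiers more concrete, here is a minimal Python sketch of the classification idea. The example use cases and the `is_banned` helper are purely illustrative assumptions for this post, not legal advice or an official mapping from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk"              # e.g. spam filters: no special obligations
    LIMITED = "transparency obligations"  # e.g. chatbots: must disclose they are AI
    HIGH = "high risk"                    # e.g. healthcare, critical infrastructure
    UNACCEPTABLE = "unacceptable risk"    # e.g. social scoring: banned outright

# Hypothetical mapping of example use cases to tiers (illustration only)
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "medical diagnosis support": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def is_banned(use_case: str) -> bool:
    """Only the unacceptable-risk tier is banned outright."""
    return EXAMPLES.get(use_case) is RiskTier.UNACCEPTABLE
```

The point of the sketch: only the top tier is prohibited; the tiers below it attach increasingly strict obligations instead of a ban.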
Bans in Europe
Certain AI applications that violate EU values and pose an unacceptable risk are completely banned. These include systems that restrict civil rights, manipulate human behavior, undermine free will or evaluate social behavior (“social scoring”).
This ban now takes effect on February 1; the rest follows a little later. Strictly speaking, the entire law has been in force since August 1 of last year. Member states have until August 2 of this year to designate the competent national authorities responsible for monitoring compliance with the AI rules and for market surveillance.
So it’s actually business as usual: China and the USA throw technologies onto the market, and Europe has to regulate them. 😊