EU wants to ban AI use for surveillance, social scoring
The European Union intends to ban the use of artificial intelligence (AI) for mass surveillance or for scoring social behaviour, according to draft legislation cited by Bloomberg and Frankfurter Allgemeine Zeitung. Companies that fail to comply with the proposed rules could face heavy fines of up to 4% of their global revenue. Details of the legislation may still change before the measure is unveiled, which is expected next week.
Ban on “social scoring”
The EU proposal is set to include a rule banning AI systems that manipulate human behaviour, exploit information about individuals or groups for social scoring, or carry out indiscriminate surveillance. Some public-security exceptions would apply.
Put simply, social scoring refers to point-value procedures: attempts to describe social phenomena or the characteristics of people with numerical point values and thereby make them comparable. An extreme example is China's system, which rewards citizens for behaviour that conforms to the system.
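To make the term concrete, here is a minimal Python sketch of what such a point-value procedure boils down to. The behaviours, weights, and names below are invented for illustration and are not drawn from any real system:

```python
# Purely illustrative sketch of a point-based "social scoring" procedure,
# the kind of system the draft would ban. All behaviours and point values
# here are hypothetical.

POINT_VALUES = {
    "paid_bills_on_time": +10,
    "jaywalking_caught_on_camera": -5,
    "criticised_authorities_online": -20,
}

def social_score(observed_behaviours: list[str]) -> int:
    """Reduce a person's observed behaviour to a single comparable number."""
    return sum(POINT_VALUES.get(b, 0) for b in observed_behaviours)

# Two citizens become directly comparable via one score -- exactly the
# reduction of people to point values that the proposal targets.
alice = social_score(["paid_bills_on_time", "jaywalking_caught_on_camera"])
bob = social_score(["criticised_authorities_online"])
print(alice, bob)  # 5 -20
```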
Strict scrutiny
AI applications deemed “high risk” will undergo strict inspections before deployment to ensure they are trained on unbiased data sets, in a traceable way, and with human oversight. By “high risk”, the EU means systems that could endanger human safety, lives, or fundamental rights, as well as democratic processes within the Union; examples include self-driving cars and remote-surgery systems.
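What such an inspection could look for in practice can be sketched in a few lines of Python. This is an assumption for illustration only: the draft does not prescribe a specific test, and the records, field names, and 10% tolerance below are invented:

```python
from collections import defaultdict

# Illustrative sketch: one very simple way an inspection could probe a
# training set for bias, by comparing favourable-outcome rates across
# demographic groups. The data and threshold are hypothetical.

training_data = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "B", "label": 0},
    {"group": "B", "label": 1},
    {"group": "B", "label": 0},
]

def favourable_rates(records):
    """Rate of favourable labels (label == 1) per demographic group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        favourable[r["group"]] += r["label"]
    return {g: round(favourable[g] / totals[g], 2) for g in totals}

rates = favourable_rates(training_data)
print(rates)  # {'A': 0.67, 'B': 0.33}

# Flag the data set if the gap between groups exceeds an assumed tolerance.
if max(rates.values()) - min(rates.values()) > 0.10:
    print("Warning: favourable-outcome rates differ across groups.")
```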
Some companies will have the right to undertake the required assessments themselves, while others will undergo third-party checks.
The rules will apply in all EU member states, and each country will need to appoint assessment bodies to test, certify, and inspect the systems, according to the draft.
Heavy fines
Companies that develop banned AI services, supply incorrect information, or fail to cooperate with national authorities face fines of up to 4% of their global revenue, but no more than EUR 20 million. These amounts may still change, with the final maximum penalties expected to be announced in the middle of next week.
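Taken literally, the two limits combine as a minimum. A short worked example in Python using the figures cited above (which, as noted, may still change):

```python
# Worked example of the penalty ceiling as described in the cited draft:
# up to 4% of global revenue, capped at EUR 20 million.

CAP_EUR = 20_000_000
RATE = 0.04

def max_fine(global_revenue_eur: float) -> float:
    """Maximum fine under the cited formula: 4% of revenue, at most EUR 20m."""
    return min(RATE * global_revenue_eur, CAP_EUR)

print(max_fine(100_000_000))    # smaller firm: 4% bites -> EUR 4,000,000
print(max_fine(2_000_000_000))  # large firm: cap bites  -> EUR 20,000,000
```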
Exceptions
Some AI service providers may be exempted from the rules, but only where their products or services are used to maintain public order. Clear rules are to govern these exceptions.
Deep fakes should be recognisable
High-risk applications in sensitive fields, by contrast, would have to meet certain minimum standards before they can be used in the European internal market, particularly where facial recognition in public spaces is concerned. Other sensitive areas include the use of AI to assess creditworthiness, to hire or promote employees, to grant access to social benefits, or to prosecute crimes.
In all such cases, humans should retain ultimate control over decisions. In addition, the data the AI is “fed” should be neutral, to prevent certain groups from being disadvantaged. The list of high-risk applications is to be revised regularly; according to the European Commission, the aim is for the list to cover applications whose consequences for humans are serious and, in extreme cases, irreversible.
For other, more harmless AI applications, it should at least be clear when people are dealing with AI, for example when a chatbot answers a telephone hotline instead of a human being, or when films or images have been manipulated with so-called deep-fake software and the manipulation is not recognisable from the context itself.