
The EU should ban AI-driven citizen scores and mass surveillance, experts say



A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass scoring of individuals; a practice that potentially involves collecting varied data about citizens – everything from criminal records to their behavior on social media – and then using it to assess their moral or ethical integrity.

The recommendations are part of the EU's ongoing efforts to establish itself as a leader in so-called "ethical AI." Earlier this year, it published its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and "human-centric" manner.

The new report offers more specific recommendations. These include identifying areas of AI research that require funding, encouraging the EU to incorporate AI training into schools and universities, and suggesting new methods for monitoring the impact of AI. However, the paper is only a set of recommendations at this point, not a blueprint for legislation.

Notably, the proposals that the EU should ban AI-enabled mass scoring and limit mass surveillance are some of the report's relatively few concrete recommendations. (Often, the report's authors simply suggest that further investigation is needed in this or that area.)

Fear of AI-enabled mass scoring has developed largely from reports about China's nascent social credit system. This program is often presented as a dystopian tool that will give the Chinese government sweeping control over citizens' behavior; allowing it to mete out punishments (such as banning people from high-speed trains) in response to ideological infractions (such as criticizing the Communist Party on social media).

However, more recent, nuanced reporting suggests that this system is less Orwellian than it appears. It is split among dozens of pilot programs, most of which are focused on stamping out everyday corruption in Chinese society rather than punishing would-be thoughtcrime.

Experts have also noted that similar systems of surveillance and punishment already exist in the West, but instead of being overseen by governments, they are run by private companies. With this added context, it is not clear what an EU-wide ban on mass scoring would actually cover. Would it also extend to the activities of insurance companies, creditors, or social media platforms, for example?

Elsewhere in today's report, the EU experts suggest that citizens should not "be subjected to unjustified personal, physical or mental tracking or identification" using AI. This might include using AI to identify emotions in someone's voice or track their facial expressions, they suggest. But again, these are methods companies already employ, for tasks such as tracking employee productivity. Should this activity be banned in the EU?

Uncertainty about the scope of the report's recommendations is matched by criticism that such policy documents are, for now, toothless.

Fanny Hidvegi, a member of the expert group that wrote the report and a policy analyst at the nonprofit Access Now, said the document was too vague, lacking "clarity on safeguards, red lines and enforcement mechanisms." Others involved have criticized the role of corporate interests in the EU's process. Philosopher Thomas Metzinger, another member of the AI expert group, has pointed out how initial "red lines" about how AI should not be used were watered down to mere "critical concerns."

So while the EU can point to experts telling it to ban AI-enabled mass surveillance and scoring, there is no guarantee it will pass legislation that prevents these harms.
