
American researchers are alarmed by the lack of regulation around artificial intelligence


On Thursday, December 12, 2019, New York University's AI Now Institute released a report on the implications of artificial intelligence (AI) for society. This year, the researchers focused on the technology's negative aspects.

The conclusion is unambiguous: this technology must be regulated far more strictly. "It is becoming increasingly clear that across diverse fields, AI amplifies inequality, placing information and the means of control in the hands of those who already have power and further depriving those who do not," the document states.

Studying the risks of facial recognition

Two issues particularly concern the American researchers: facial recognition and algorithmic bias. "Governments and businesses should stop using facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place," the AI Now Institute urges.

The report notes that this technology expanded considerably in 2019, particularly in China, where citizens are required to have their faces scanned in order to sign up for a phone plan. Yet these systems have been shown to be far from reliable. In July 2019, for example, the National Institute of Standards and Technology (NIST) in the United States published a study showing that they struggle to distinguish the faces of Black women.

The report advises lawmakers to impose a moratorium, paired with transparency requirements that would allow researchers, policymakers, and civil society to determine the best approach to regulating facial recognition. The public must also be able to understand how these technologies work in order to form their own opinions.

The danger of algorithmic bias

"The AI ​​industry is terribly homogeneous"the report warns. This lack of diversity has consequences for algorithms that are biased. They can come from two sources. First, the developer himself, who integrates into his algorithms his own beliefs and cognitive biases Then, others arise from the data that feeds the system, from which it is driven.

The report "Algorithms: bias, discrimination and equity"produced by researchers from Telecom ParisTech and the University of Nanterre, published in March 2019, takes the example of Amazon. In 2015, the e-commerce giant decided to use an automated system for the "to help in the choice of its recruitments. The initiative was interrupted because only men were chosen."The data entered was completely unbalanced between men and women, men making up the overwhelming majority of managers recruited in the past. The algorithm left no chance for new candidates, however qualified "explains the report.

To combat these biases, the AI Now Institute's researchers recommend opening engineering positions to women and minorities. They also argue that the design of algorithms should not remain solely in the hands of computer scientists, but should be opened up to the social sciences.

How to regulate?

This report is not the first to raise the alarm about the lack of regulation around AI, and states are clearly struggling to legislate on the subject. At the end of August 2019, it emerged that the European Commission was working on a draft text. The international community is also taking up the issue: at the end of November 2019, UNESCO was given a mandate to draw up a "global standard-setting instrument" within 18 months. But this worthy initiative runs up against the practical reality of international law: in the vast majority of cases, international sanctions cannot be made binding, and therefore effective, and states can evade them easily.
