
Google creates external advisory board to monitor it for unethical AI usage



Today, Google announced a new external advisory board to help monitor the company's use of artificial intelligence for ways it might violate the ethical principles it laid out last summer. The group was announced by Kent Walker, Google's senior vice president of global affairs, and includes experts on a wide range of topics, including mathematics, computer science, engineering, philosophy, public policy, psychology, and even foreign policy.

The group will be called the Advanced Technology External Advisory Council, and it seems Google wants it to be seen as a kind of independent watchdog monitoring how it deploys AI in the real world, with a focus on facial recognition and remedying bias built into machine learning training methods. "This group will consider some of Google's most complex challenges that arise under our AI Principles … giving different perspectives to inform our work," Walker writes.

As for the members, the names may not be easily recognizable to those outside academia. The board's credentials, however, appear to be of the highest caliber, with résumés that include positions in several presidential administrations and posts at top universities, spanning Oxford University, the Hong Kong University of Science and Technology, and UC Berkeley. That said, the selection, which includes Heritage Foundation president Kay Coles James, seems at least partially aimed at appealing to the Republican Party and potentially helping to influence AI-related legislation down the line.

Some critics of the board have noted that James, through her involvement with the conservative think tank, has promoted anti-LGBTQ rhetoric on her public Twitter profile.

Google was not immediately available for comment regarding James' anti-LGBTQ positions or the selection process for the advisory board.

Last year, Google found itself embroiled in controversy over its participation in a US Department of Defense drone program called Project Maven. After significant internal backlash and external criticism for putting employees to work on AI projects that could involve the loss of human life, Google decided to end its involvement with Maven once the contract expires. It also put together a new set of guidelines, which CEO Sundar Pichai called Google's AI Principles, that would prohibit the company from working on any product or technology that could violate "internationally accepted standards" or "generally accepted principles of international law and human rights."

"We acknowledge that such powerful technology raises equally strong questions about its use," Pichai wrote at the time. "How AI develops and uses will have a significant impact on society for many years to come. As a leader in AI, we feel deeply responsible for getting the right one." Google wants the AI ​​study to be "socially beneficial," effectively, and it often does not involve public contracts or work in areas or markets with remarkable human rights abuses.

Google has since found itself in yet another similar controversy related to its plans to launch a search product in China, which may involve deploying some form of artificial intelligence in a country currently trying to use that same technology to monitor and track its citizens. Google's stance differs from those of Amazon and Microsoft, both of which have said they will continue to work with the US government. Microsoft has secured a $480 million contract to supply HoloLens headsets to the Pentagon, while Amazon continues to sell its Rekognition facial recognition software to law enforcement agencies.

Update 3/26, 6:37 PM ET: Added that critics of Google's advisory board are calling on the company to respond to its choice of Heritage Foundation president Kay Coles James.
