
Today I learned about Intel’s AI sliders that filter abuse in online gaming



Last month during its virtual GDC presentation, Intel announced Bleep, a new AI-powered tool that it hopes will cut down on the amount of toxicity gamers have to experience in voice chat. According to Intel, the app uses “AI to detect and redact audio based on user preferences.” The filter works on incoming audio, acting as an additional user-controlled layer of moderation on top of whatever a platform or service already offers.
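To make that description concrete, here is a minimal sketch in Python of what such a user-side layer could look like, assuming incoming voice chat arrives as audio frames and a local detector flags offending speech. None of this is Intel’s published code; the `detect_abuse` stub and the frame format are invented for illustration.

```python
import numpy as np

def detect_abuse(frame: np.ndarray) -> bool:
    """Hypothetical stand-in for Bleep's local AI detector, which Intel
    has not published. Returns True if the frame contains speech the
    user has chosen to filter out."""
    return False  # a real implementation would run model inference here

def filter_incoming_audio(frame: np.ndarray) -> np.ndarray:
    """The extra client-side moderation layer: mute offending frames
    before they reach the speakers, regardless of what the platform's
    own server-side moderation does."""
    if detect_abuse(frame):
        return np.zeros_like(frame)  # replace the offending audio with silence
    return frame

# Usage: run every received voice-chat frame through the filter.
incoming = np.random.randn(480).astype(np.float32)  # 10 ms of audio at 48 kHz
speaker_frame = filter_incoming_audio(incoming)
```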

It’s a noble effort, but there’s something darkly funny about Bleep’s interface, which lays out in detail all the different categories of abuse that people may encounter online, paired with sliders to control how much of it users will hear. Categories range from “Aggression” to “LGBTQ+ Hate,” “Misogyny,” “Racism and Xenophobia,” and “White Nationalism.” There’s even a toggle for the N-word. Bleep’s page notes that it is not yet in public beta, so all of this may change.

Filters include “Aggression”, “Misogyny” …
Credit: Intel

… and a toggle for the “N-word.”
Image: Intel

With most of these categories, Bleep seems to give users a choice: do you want none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet slurry, Intel’s interface lets players opt for a light serving of aggression or name-calling in their online games.
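For illustration, here is a minimal sketch of how those per-category slider settings could be modeled, assuming each slider position maps to a detector-confidence threshold. The category names come from Intel’s demo, but the `FilterLevel` values and the threshold mapping are invented; Intel has not published how the levels actually work.

```python
from enum import IntEnum

class FilterLevel(IntEnum):
    """The four slider positions shown in Bleep's demo interface."""
    NONE = 0
    SOME = 1
    MOST = 2
    ALL = 3

# Hypothetical user preferences, one slider per category from the demo UI.
preferences = {
    "Aggression": FilterLevel.SOME,
    "Misogyny": FilterLevel.ALL,
    "LGBTQ+ Hate": FilterLevel.ALL,
    "Racism and Xenophobia": FilterLevel.ALL,
    "White Nationalism": FilterLevel.ALL,
}

# Invented mapping from slider position to detector-confidence threshold.
THRESHOLDS = {
    FilterLevel.NONE: 1.01,  # never triggers
    FilterLevel.SOME: 0.9,   # only the most confident detections
    FilterLevel.MOST: 0.6,
    FilterLevel.ALL: 0.0,    # every detection
}

def should_redact(category: str, confidence: float) -> bool:
    """Bleep a detected utterance if the user's slider for its category
    is set high enough to catch a detection of this confidence."""
    level = preferences.get(category, FilterLevel.NONE)
    return confidence >= THRESHOLDS[level]

# Example: a 0.8-confidence "Aggression" detection passes through at SOME...
print(should_redact("Aggression", 0.8))  # False
# ...but any "Misogyny" detection is bleeped at ALL.
print(should_redact("Misogyny", 0.3))    # True
```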

Bleep has been in the works for a few years now – PCMag notes that Intel talked about this initiative way back at GDC 2019 – and it is working with AI moderation specialists Spirit AI on the software. But moderating online spaces using artificial intelligence is no easy task, as platforms like Facebook and YouTube have shown. Although automated systems can identify outright offensive words, they often fail to account for the context and nuance of certain insults and threats. Online toxicity comes in many constantly evolving forms that can be difficult for even the most advanced AI moderation systems to spot.

“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction, giving gamers a tool to control their experience,” said Intel’s Roger Chandler during his GDC demonstration. Intel says it hopes to release Bleep later this year, and adds that the technology relies on its hardware-accelerated AI detection, suggesting the software may require Intel hardware to run.
