
Do these AI-created fake people look right to you?

There are now companies that sell fake people. On the Generated.Photos website, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people – for characters in a video game, or to make your company website appear more diverse – you can get their photos for free at ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.

These simulated people are starting to show up around the internet, used as masks by real people with nefarious intent: spies who don an attractive face in an effort to infiltrate the intelligence community; right-wing propagandists who hide behind fake profiles, photo and all; online harassers who troll their targets with a friendly visage.

We set up our own AI system to understand how easy it is to generate various fake faces.

The AI system sees each face as a complex mathematical figure, a range of values that can be shifted. Choosing different values – such as those that determine the size and shape of the eyes – can alter the whole image.
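In models of this kind, a face boils down to a vector of numbers, and editing one trait means nudging a few of those numbers while leaving the rest alone. A minimal sketch of that idea, using a random array in place of a real model’s latent code (the 512-value size and the “eye” dimensions are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A face as a "complex mathematical figure": a vector of values.
# (Random numbers here, not a real model's latent code.)
z = rng.standard_normal(512)

# Nudge only the values that (hypothetically) control eye size and
# shape, leaving everything else about the face untouched.
EYE_DIMS = [12, 13]            # hypothetical dimensions, for illustration
z_edited = z.copy()
z_edited[EYE_DIMS] += 1.5

# Only the nudged dimensions differ between the two "faces".
changed = np.flatnonzero(z != z_edited)
print(changed)
```

Fed to a real generator, `z` and `z_edited` would render as two portraits of the same person, differing only in the chosen trait.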

For other qualities, our system used a different approach. Instead of shifting values that determine specific parts of the image, the system first generated two images to establish starting and end points for all of the values, and then created images in between.
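That start-and-end-point approach amounts to linear interpolation between two latent codes: slide evenly from one vector to the other and render a face at each step. A sketch, again with random vectors standing in for the two generated images’ codes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent codes for the "start" and "end" images (random stand-ins).
z_start = rng.standard_normal(512)
z_end = rng.standard_normal(512)

def interpolate(a, b, steps):
    """Return `steps` vectors sliding evenly from a to b, endpoints included."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * a + t * b for t in ts]

# Each in-between vector, fed to the generator, would yield an
# in-between face morphing from the start image to the end image.
frames = interpolate(z_start, z_end, steps=5)
```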

The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.
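That adversarial back-and-forth can be shrunk to a toy example. Below, a one-parameter “generator” tries to mimic a one-dimensional data distribution while a logistic-regression “discriminator” tries to tell its samples from real ones – the same alternating structure as a GAN, though nothing close to the scale that produces faces. Every name and number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
REAL_MEAN = 4.0                      # the "real data" is just N(4, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0                      # generator: g(z) = a*z + b
w, c = 0.0, 0.0                      # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(500):
    z = rng.standard_normal(64)
    x_real = REAL_MEAN + rng.standard_normal(64)
    x_fake = a * z + b

    # Discriminator turn: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator turn: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w          # gradient of log D at the fake samples
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

# The generator's offset b drifts toward the real mean as the duel proceeds.
print(f"generator offset b = {b:.2f}")
```

Each turn, the discriminator gets a little better at spotting fakes and the generator gets a little better at fooling it; in a real GAN the two players are deep networks and the “samples” are images rather than single numbers.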

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them – at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

“When the tech first appeared in 2014, it was bad – it looked like the Sims,” said Camille François, a disinformation researcher whose job is to analyze manipulation of social networks. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”

Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features. You can use your face to unlock your smartphone, or tell your photo software to sort through your thousands of pictures and show you only those of your child. Facial recognition programs are used by law enforcement to identify and arrest criminal suspects (and also by some activists to reveal the identities of police officers who cover their name tags in an attempt to remain anonymous). A company called Clearview AI scraped the web for billions of public photos – casually shared online by everyday users – to create an app capable of recognizing a stranger from just one photo. The technology promises superpowers: the ability to organize and process the world in a way that wasn’t possible before.

But facial recognition algorithms, like other AI systems, are not perfect. Thanks to underlying bias in the data used to train them, some of these systems are not as good, for instance, at recognizing people of color. In 2015, an early image-detection system developed by Google labeled two Black people as “gorillas,” most likely because the system had been fed many more photos of gorillas than of people with dark skin.

Moreover, cameras – the eyes of facial recognition systems – are not as good at capturing people with dark skin; that unfortunate standard dates to the early days of film development, when photos were calibrated to best show the faces of light-skinned people. The consequences can be severe. In January, a Black man in Detroit named Robert Williams was arrested for a crime he did not commit because of a false facial recognition match.

Artificial intelligence can make our lives easier, but ultimately it is as flawed as we are, because we are behind all of it. Humans choose how AI systems are made and what data they are exposed to. We select the voices that teach virtual assistants to hear, leading these systems not to understand people with accents. We design a computer program to predict a person’s criminal behavior by feeding it data about past rulings made by human judges – and in the process baking in those judges’ biases. We label the images that train computers to see; they then associate glasses with “dweebs” or “nerds.”

Here are some of the flaws and patterns we found our AI system repeating when it conjured fake faces.

Humans err, of course: we overlook or glaze past the flaws in these systems, too quick to trust that computers are hyper-rational, objective, always right. Studies have shown that, in situations where humans and computers must cooperate to make a decision – to identify fingerprints or human faces – people consistently made wrong identifications when a computer nudged them to do so. In the early days of dashboard GPS systems, drivers famously followed the devices’ directions to a fault, sending cars into lakes, off cliffs and into trees.

Is this humility or hubris? Do we place too little value on human intelligence – or do we overrate it, assuming we are so smart that we can create things smarter still?

The algorithms of Google and Bing sort the world’s knowledge for us. Facebook’s news feed filters the updates from our social circles and decides which are important enough to show us. With self-driving features in cars, we are putting our safety in the hands (and eyes) of software. We place a lot of trust in these systems, but they can be as fallible as we are.
