
Researchers trick Tesla Autopilot into steering into oncoming traffic



  This driver's education diagram illustrates how to change lanes.

Researchers have devised a simple attack that can cause a Tesla to automatically steer into oncoming traffic under certain conditions. The proof-of-concept exploit does not work by hacking into the car's onboard computer system. Instead, it uses small, inconspicuous stickers that trick the Enhanced Autopilot of a Model S 75 into detecting and then following a change in the current lane.

Tesla's Enhanced Autopilot supports a variety of capabilities, including lane centering, self-parking, and the ability to automatically change lanes with the driver's confirmation. The feature is now mostly called "Autopilot" after Tesla reshuffled its Autopilot pricing structure. It relies primarily on cameras, ultrasonic sensors, and radar to gather information about the surroundings, including nearby obstacles, terrain, and lane changes. It then feeds that data into onboard computers that use machine learning to make real-time judgments about the best way to respond.

Researchers from Tencent's Keen Security Lab recently reverse-engineered several of Tesla's automated processes to see how they reacted when environmental variables changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The attack worked by carefully placing three stickers on the road. The stickers were nearly invisible to drivers, but the machine-learning algorithms used by Autopilot detected them as a line indicating the lane was shifting to the left. As a result, Autopilot steered in that direction.
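To make that failure mode concrete, here is a minimal, classical computer-vision lane detector written as a sketch. It is not Tesla's neural-network pipeline, and the file name road.jpg and every threshold are illustrative assumptions. It shows why a handful of small, well-placed marks can masquerade as a lane boundary: anything that survives edge detection and lines up geometrically counts as lane evidence.

```python
# Minimal classical lane-detection sketch (NOT Tesla's neural-network pipeline).
# It illustrates the general weakness the researchers exploited: any marking that
# produces lane-like edges in the image becomes evidence for a lane boundary.
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    """Return candidate lane-line segments found in a road image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)            # small bright stickers create edges too
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=120)  # large gaps let sparse dots form one "line"
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    frame = cv2.imread("road.jpg")                 # hypothetical test image
    for x1, y1, x2, y2 in detect_lane_lines(frame):
        cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    cv2.imwrite("road_lanes.jpg", frame)
```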

In a detailed 37-page report, the researchers wrote:

The Tesla Autopilot module's lane-recognition function has good robustness in an ordinary external environment (no strong light, rain, snow, sand, or dust interference), but it still does not handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As we discussed in the earlier introduction of Tesla's lane-recognition function, Tesla uses a pure computer-vision solution for lane recognition, and in this attack experiment we found that the vehicle's driving decision is based only on the computer-vision lane-recognition results. Our experiments proved that this architecture has security risks, and reverse-lane recognition is one of the necessary functions for autonomous driving on non-closed roads. In the scene we built, if the vehicle knows that the fake lane is pointing into the reverse lane, it should ignore this fake lane and could then avoid a traffic accident.

The researchers said Autopilot uses a function called detect_and_track to detect lanes and update an internal map that sends the most recent information to the controller. The function first calls several CUDA kernels for different jobs, including:

[Image: the CUDA kernels called by detect_and_track. Credit: Keen Security Lab]
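The report does not publish the implementation, so the following Python sketch only mirrors the broad detect-then-track shape described above. Every stage name, data structure, and parameter in it is invented for illustration; the real code runs as CUDA kernels on the Autopilot hardware.

```python
# Hypothetical sketch of a detect-then-track lane pipeline, loosely following the
# detect_and_track structure described in the report. All names and values here
# are illustrative assumptions, not Tesla's code.
from dataclasses import dataclass, field

@dataclass
class LaneMap:
    """Internal map of lane boundaries handed to the controller."""
    boundaries: list = field(default_factory=list)   # fitted lane-line coefficients
    confidence: float = 0.0

def detect_lanes(camera_frame):
    """Stage 1 (detection): find lane-line candidates in the current frame."""
    # Placeholder for the per-frame vision step (feature extraction, line fitting).
    return [{"coeffs": (0.0, 0.0, 1.8), "score": 0.9}]   # dummy candidate

def track_lanes(lane_map, candidates, smoothing=0.8):
    """Stage 2 (tracking): blend new detections with the lane history."""
    if candidates:
        best = max(candidates, key=lambda c: c["score"])
        lane_map.boundaries = [best["coeffs"]]
        lane_map.confidence = smoothing * lane_map.confidence + (1 - smoothing) * best["score"]
    else:
        lane_map.confidence *= smoothing                 # decay when nothing is seen
    return lane_map

def detect_and_track(camera_frame, lane_map, send_to_controller):
    """Per-frame entry point: detect, update the internal map, push it to the controller."""
    candidates = detect_lanes(camera_frame)
    lane_map = track_lanes(lane_map, candidates)
    send_to_controller(lane_map)
    return lane_map
```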

The researchers noted that Autopilot uses a number of measures to prevent incorrect detections. The measures include the position of the road shoulder, lane histories, and the size and distance of various objects.
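As a rough illustration of how such cross-checks might gate a single detection, here is a hypothetical plausibility filter. The field names and thresholds are assumptions for the sake of the example, not anything taken from the report.

```python
# Hypothetical plausibility checks of the kind the report describes (road-shoulder
# position, lane history, object size/distance). Thresholds and field names are
# illustrative, not Tesla's.
def detection_is_plausible(candidate, shoulder_offset_m, lane_history, max_jump_m=0.5):
    """Accept a lane candidate only if it agrees with other evidence."""
    # 1. A lane boundary should not sit outside the detected road shoulder.
    if abs(candidate["lateral_offset_m"]) > abs(shoulder_offset_m):
        return False
    # 2. Lanes move gradually: reject candidates that jump far from recent history.
    if lane_history and abs(candidate["lateral_offset_m"] - lane_history[-1]) > max_jump_m:
        return False
    return True
```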

A separate section of the report showed how the researchers, after exploiting a root-access vulnerability in the Autopilot ECU (or APE), could use a gamepad to remotely control the car. That vulnerability was fixed in Tesla's 2018.24 firmware release.

Yet another section showed how the researchers could tamper with a Tesla's autowiper system to activate the wipers when no rain was falling. Unlike traditional autowiper systems, which use optical sensors to detect moisture, Tesla's system feeds data from a suite of cameras into an artificial-intelligence network to determine when the wipers should turn on. The researchers found that, much the way small changes to an image can easily confuse AI-based image recognition (for instance, changes that cause an AI system to mistake a panda for a gibbon), it was not hard to trick Tesla's autowiper feature into thinking rain was falling even when it wasn't.
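The panda-to-gibbon example alluded to here comes from the generic fast-gradient-sign method (FGSM) for crafting adversarial images. The PyTorch sketch below shows that generic technique only; it is not the researchers' actual attack on the autowiper network, and the tiny throwaway classifier exists purely so the example runs.

```python
# Generic adversarial-perturbation sketch (fast gradient sign method, FGSM) in
# PyTorch. This is the textbook technique behind the panda-to-gibbon example,
# not the researchers' attack on Tesla's autowiper network.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the model's classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that hurts the model most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Tiny demo with a throwaway classifier and a random "image" (illustrative only).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    image = torch.rand(1, 3, 32, 32)
    label = torch.tensor([3])
    adv = fgsm_perturb(model, image, label)
    print("max pixel change:", (adv - image).abs().max().item())
```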

So far, the researchers have only been able to trick the system by feeding images directly into it. Eventually, they said, attackers may be able to display an "adversarial image" on road signs or other cars that accomplishes the same thing.

The ability to alter the behavior of self-driving cars by changing their environment is not new. In late 2017, researchers showed how stickers affixed to road signs could cause similar problems. At present, changes to the physical environment are generally considered outside the threat model of attacks on self-driving systems. The point of the research is that companies designing such systems may eventually need to take such exploits into account.

In an emailed statement, Tesla officials wrote:

We developed our bug-bounty program in 2014 in order to engage with the most talented members of the security research community, with the goal of soliciting exactly this type of feedback. While we always appreciate this group's work, the primary vulnerability addressed in this report was fixed by Tesla through a robust security update in 2017, followed by another comprehensive security update in 2018, both of which we released before this group reported this research to us. The rest of the findings are all based on scenarios in which the physical environment around the vehicle is artificially altered to make the automatic windshield wipers or the Autopilot system behave differently, which is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes, should always be prepared to do so, and can manually operate the windshield-wiper settings at all times.

Although this report is not eligible for a prize through our bug-bounty program, we know it took an extraordinary amount of time, effort and skill, and we look forward to reviewing future reports from this group.

