
A deep-learning neural network learns when it should not be trusted




MIT researchers have developed a way for deep-learning neural networks to rapidly estimate confidence levels in their output. The advance could enhance the safety and efficiency of AI-assisted decision-making. Credit: MIT

A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.

Increasingly, artificial intelligence systems known as deep-learning neural networks are being used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex data sets to aid in decision-making. But how do we know they are correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.

They have developed a quick way for a neural network to crunch data and output not just a prediction, but also the model's confidence level based on the quality of the available data. The advance could save lives, as deep learning is already being deployed in the real world today. A network's level of certainty can be the difference between an autonomous vehicle determining that "it's all clear to proceed through the intersection" and "it's probably clear, so better stop just in case."

Current methods for estimating a neural network's uncertainty tend to be computationally expensive and relatively slow for split-second decisions. But Amini's approach, dubbed "deep evidential regression," speeds up the process and could lead to safer outcomes. "We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says Amini, a PhD student in Professor Daniela Rus' group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

"This idea is important and widely applicable. It can be used to evaluate products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model," says Rus.

Amini will present the research at next month's NeurIPS conference, along with Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.

Efficient uncertainty

After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And today, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. "We've had huge successes using deep learning," says Amini. "Neural networks are really good at knowing the right answer 99 percent of the time." But 99 percent won't cut it when lives are on the line.

"One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong," says Amini. "We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently."

Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn't new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
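To make the contrast concrete, here is a minimal, hypothetical sketch (not the authors' code) of the kind of sampling-based approach the article alludes to, in the style of Monte Carlo dropout: the same network is run many times with dropout left active, and the spread of its outputs serves as the uncertainty estimate. The architecture, sizes, and sample count are placeholders.

```python
import torch
import torch.nn as nn

# Stand-in regression network; any dropout-equipped model would do.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout stochastic at test time and average repeated forward passes."""
    model.train()  # leaves dropout layers active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Mean is the prediction; variance across samples is the uncertainty proxy.
    return samples.mean(dim=0), samples.var(dim=0)

x = torch.randn(8, 16)  # dummy batch of inputs
mean, var = mc_dropout_predict(model, x)
```

The cost of this style of estimate scales with the number of forward passes, which is exactly the overhead the single-pass approach described next avoids.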

The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with bulked-up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model's confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model's final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
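As an illustration of the single-pass idea, the sketch below follows the Normal-Inverse-Gamma parameterization used in the team's "Deep Evidential Regression" paper: a small output head emits four evidential parameters (gamma, nu, alpha, beta) in one forward pass, and both the data (aleatoric) and model (epistemic) uncertainty follow from them in closed form. The layer sizes and names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps backbone features to the four parameters of a Normal-Inverse-Gamma."""
    def __init__(self, in_features):
        super().__init__()
        self.out = nn.Linear(in_features, 4)  # gamma, nu, alpha, beta

    def forward(self, h):
        gamma, log_nu, log_alpha, log_beta = self.out(h).chunk(4, dim=-1)
        nu = F.softplus(log_nu)              # nu > 0
        alpha = F.softplus(log_alpha) + 1.0  # alpha > 1 keeps the moments finite
        beta = F.softplus(log_beta)          # beta > 0
        return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1)           # noise inherent in the data
    epistemic = beta / (nu * (alpha - 1))    # uncertainty of the model itself
    return aleatoric, epistemic

head = EvidentialHead(in_features=64)
features = torch.randn(8, 64)                # dummy backbone features
gamma, nu, alpha, beta = head(features)      # gamma is the point prediction
aleatoric, epistemic = uncertainties(nu, alpha, beta)
```

Because everything comes out of one forward pass, the uncertainty estimate costs essentially nothing beyond the prediction itself.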

Checking confidence

To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e., distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or another vehicle, which is no simple task.

Their network's performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. "It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator," Amini says.
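One simple way to probe this kind of calibration, sketched below with placeholder arrays rather than the paper's actual evaluation protocol, is to check that predicted uncertainty rises where the per-pixel depth error rises, for example via a rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

pred_depth = np.random.rand(240, 320)   # placeholder per-pixel predictions
true_depth = np.random.rand(240, 320)   # placeholder ground-truth depths
uncertainty = np.random.rand(240, 320)  # placeholder per-pixel uncertainty

# A well-calibrated estimator assigns larger uncertainty where the error is larger.
error = np.abs(pred_depth - true_depth).ravel()
rho, _ = spearmanr(uncertainty.ravel(), error)
print(f"rank correlation between uncertainty and error: {rho:.3f}")
```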

To stress test their calibration, the team also showed that the network projected higher uncertainty for "out-of-distribution" data, entirely new types of images it had never encountered during training. After training the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network's ability to flag when users should not place full trust in its decisions. In these cases, "if this is a health care application, maybe we shouldn't trust the diagnosis that the model is giving, and instead seek a second opinion," says Amini.
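A downstream system might act on such a warning with a simple rule, as in this hypothetical sketch: calibrate an uncertainty threshold on in-distribution validation data and defer to a human whenever a new input exceeds it. The threshold choice and names here are assumptions for illustration, not part of the paper.

```python
import numpy as np

val_uncertainty = np.random.rand(1000)          # placeholder in-distribution values
threshold = np.quantile(val_uncertainty, 0.99)  # e.g., 99th percentile on validation data

def trust_or_defer(prediction, epistemic_uncertainty):
    """Return the prediction only when the model's uncertainty is within familiar range."""
    if epistemic_uncertainty > threshold:
        return None, "uncertain: defer to a human / seek a second opinion"
    return prediction, "confident"
```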

The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle, barely perceptible to the human eye, but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.

Deep evidential regression is "a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems," says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. "This is done in a novel way that avoids some of the messy aspects of other approaches, such as sampling or ensembles, which makes it not only elegant but also computationally more efficient: a winning combination."

Deep evidential regression could enhance the safety of AI-assisted decision-making. "We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that touch humans with potentially life-threatening consequences," says Amini. "Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision." He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decisions in risky scenarios, like an autonomous vehicle approaching an intersection.

"Any field that is going to deploy machine learning ultimately needs reliable uncertainty awareness," he says.

This work was supported in part by the National Science Foundation and the Toyota Research Institute through the Toyota-CSAIL Joint Research Center.



