
Scientists warn medical AI is vulnerable to attack



The phrase "medical error" takes on a new and more worrying meaning when the errors are not human but the motives behind them are.

In an article published in the journal Science, US researchers highlight the growing potential for adversarial attacks on medical machine learning systems, attacks that attempt to influence or manipulate them.

Due to the nature of these systems and their unique vulnerabilities, small but carefully designed changes in how the input is presented can completely change the output and undermine otherwise reliable algorithms, the authors say.

And they present a striking example: their own success in using "adversarial noise" to coax algorithms into diagnosing benign moles as malignant with 100% confidence.

The Boston-based team, led by Samuel Finlayson of Harvard Medical School, comprised health, law and technology specialists.

In their article, the authors note that adversarial manipulations may come in the form of imperceptibly small perturbations to input data, such as making "a human-invisible change" to each pixel of an image.
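
The article itself does not include code, but this style of attack is well documented in the machine learning literature. The sketch below, a minimal PyTorch example of the fast gradient sign method (FGSM), illustrates how such a per-pixel perturbation can be generated; the classifier, the image tensor and the `epsilon` budget are illustrative assumptions, not the authors' actual attack.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.005):
    """Nudge every pixel a tiny, human-imperceptible amount in the
    direction that most increases the model's loss (the fast gradient
    sign method). `model` is any differentiable image classifier;
    `image` is a tensor of shape (1, C, H, W) with values in [0, 1];
    `true_label` is a tensor of shape (1,)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel shifts by at most epsilon, yet the classifier's
    # output, and its reported confidence, can flip entirely.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```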

"Scientists have demonstrated the existence of contradictory examples of virtually every type of machine model ever investigated and across a wide range of data types, including images, sound, text, and other inputs," they write.

To date, they say, no adversarial attacks have been identified in the healthcare sector. The potential exists, however, especially in medical billing and insurance, where machine learning is already well established: adversarial attacks could be used to produce false medical claims and enable other fraudulent behavior.

To address these emerging concerns, they call for an interdisciplinary approach to machine learning and artificial intelligence, which should include the active involvement of medical, technical, legal and ethical experts across the health sector.

"Adversarial attacks are one of many possible error conditions for medical machine learning systems, all of which are essential considerations for both developers and users of models," they write.

"From a political perspective, however, adversary attacks are an exciting new challenge because they allow users of an algorithm to influence their behavior on subtle, effective and sometimes ethically ambiguous ways."
