Tech news from MIST
Ruchira Garai
Researchers have demonstrated a new adversarial attack, the OPtical ADversarial attack (OPAD), that can fool AI vision systems. OPAD relies on three components: a camera, a low-cost projector, and a computer. The projector-camera system projects carefully calculated light patterns onto real 3D objects to alter how they appear to an AI model; in one demonstration, researchers made a basketball appear as something entirely different. Because OPAD is non-iterative, it can attack a real 3D object in a single shot, and it supports untargeted, targeted, black-box, and white-box attacks. Critically, the attack requires no physical access to the objects: OPAD can transfer known digital attack results onto real 3D objects. It could be used to fool self-driving cars, for example by making a STOP sign read as a speed-limit sign, potentially causing intentional accidents or pranks, and AI-powered security cameras could be fooled with serious consequences. OPAD shows that organizations developing AI technologies should stay alert to security weaknesses within their AI models themselves.
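To give a flavor of what a "non-iterative, single-shot" adversarial attack means, here is a minimal illustrative sketch in Python. It uses a tiny hand-built linear classifier as a stand-in for the deep vision models OPAD targets, and applies one FGSM-style gradient-sign step; the function and variable names are hypothetical, and the sketch deliberately omits the projector-camera (radiometric) modeling that is the actual novelty of OPAD.

```python
import numpy as np

# Illustrative only: a two-class linear "classifier" standing in for the
# deep vision models OPAD attacks. OPAD additionally models the projector
# and camera so the perturbation survives the optical path; that part is
# not sketched here.
W = np.array([[2.0, 0.0],      # class 0 weights
              [0.0, 2.0]])     # class 1 weights

def predict(v):
    return int(np.argmax(W @ v))

def single_shot_attack(x, target, eps=0.6):
    """One non-iterative, FGSM-style step toward `target`
    (hypothetical helper, not the OPAD algorithm itself)."""
    current = predict(x)
    grad = W[target] - W[current]      # gradient of the target-vs-current score margin
    return x + eps * np.sign(grad)     # single shot: no iteration

x = np.array([1.0, 0.2])               # clean input, predicted as class 0
x_adv = single_shot_attack(x, target=1)
print(predict(x), predict(x_adv))      # → 0 1
```

The key property mirrored here is that the perturbation is computed in one closed-form step rather than by an iterative optimization loop, which is what lets OPAD target real objects "in a single shot."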
Abridged from CyWare