AI tricked into seeing what's not there

2017; Elsevier BV; Volume: 235; Issue: 3137; Language: English

10.1016/s0262-4079(17)31500-2

ISSN

2059-5387

Authors

Matt Reynolds

Topic(s)

Digital Media Forensic Detection

Abstract

Moustapha Cisse, an AI researcher at Facebook, and his colleagues found that a technique they call Houdini can fool voice-recognition systems. They inserted a small amount of digital noise into a recording of a person speaking a phrase. When they played the doctored recording to the Google Voice speech-recognition app, it transcribed a completely different sentence from the one spoken. Cisse also found that the kinds of image-classification algorithms used in driverless cars could be made to ignore pedestrians or parked cars. Not everyone is convinced such attacks will work in the real world, however. David Forsyth at the University of Illinois at Urbana-Champaign digitally altered a stop sign to fool such algorithms, but found that the signs were still read correctly when viewed by a moving camera, as they would be from a driverless car. He says adversarial examples might work under perfect conditions but may be less effective in practice. Yet AI research lab OpenAI responded by showing that it is indeed possible to trick image-recognition algorithms even when an image is viewed from different distances and angles.
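The attacks described above rest on adversarial perturbations: tiny, targeted changes to an input that push a model's output toward a wrong answer. The abstract does not specify how Houdini computes its noise, so the sketch below instead shows the simplest well-known technique of this kind, the fast gradient sign method, applied to an image classifier. The names model, image, and label are illustrative assumptions, not anything from the article.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """A minimal sketch of the fast gradient sign method, NOT the Houdini
    attack itself. `model` is assumed to be a pretrained PyTorch classifier
    returning logits; `image` a float tensor in [0, 1] of shape (1, C, H, W);
    `label` the true class index as a tensor of shape (1,)."""
    # Make a leaf copy of the input so gradients flow back to the pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by epsilon in the direction that increases the loss;
    # a small epsilon keeps the change imperceptible to a human viewer.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

The same idea carries over to audio, where the perturbation is added to the waveform and the loss is defined over the transcript rather than a single class label, which is reportedly closer to what the Houdini attack on speech recognition does.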
