AI systems have long been treated like sealed black boxes, especially in areas like facial recognition and autonomous driving. New research suggests that opacity isn't the protection it's assumed to be.
A KAIST-led team has shown that AI models can be reverse-engineered remotely, using emissions that leak during normal operation. No direct intrusion is required; the approach simply listens.
Using a small antenna, the researchers captured faint electromagnetic traces from GPUs and reconstructed how the underlying model was designed. It sounds like a heist trick, but the results hold up, and the security implications are immediate.
How the side channel works
The system, called ModelSpy, collects the electromagnetic output produced while GPUs handle AI workloads. These traces are subtle, yet they follow patterns tied to how the model's architecture is arranged.
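To make the idea concrete, here is a minimal toy sketch of fingerprint matching on a captured trace. This is not the actual ModelSpy pipeline; the trace data, the reference fingerprints, and the distance metric are all invented for illustration. The assumption it encodes is the one the article describes: different architectures produce characteristically different emission patterns over time, so an observed trace can be matched against known profiles.

```python
# Hypothetical sketch: matching an electromagnetic trace to a known
# architecture profile. All data and names below are illustrative
# assumptions, not the researchers' actual method.

def fingerprint(trace, n_bins=8):
    """Compress a raw trace into per-bin average amplitudes."""
    size = len(trace) // n_bins
    return [sum(trace[i * size:(i + 1) * size]) / size for i in range(n_bins)]

def classify(trace, references):
    """Return the reference architecture whose fingerprint is closest (squared L2)."""
    fp = fingerprint(trace)
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(fp, ref))
    return min(references, key=lambda name: dist(references[name]))

# Toy reference fingerprints (invented): each architecture's layer mix
# is assumed to yield a characteristic amplitude profile over time.
REFS = {
    "conv_heavy": [0.9, 0.9, 0.8, 0.8, 0.3, 0.3, 0.2, 0.2],
    "attention_heavy": [0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.4, 0.6],
}

# A noisy 80-sample trace that resembles the conv_heavy profile.
clean = [0.9] * 20 + [0.8] * 20 + [0.3] * 20 + [0.2] * 20
observed = [v + 0.05 * ((i % 3) - 1) for i, v in enumerate(clean)]

print(classify(observed, REFS))  # prints: conv_heavy
```

The real attack reconstructs far more than a coarse label, of course; the point of the sketch is only that structured workloads leave structured traces, and structure is matchable.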