Artificial Intelligence (AI) is the cutting edge of software technology, and with good reason: PwC estimates that AI will add $15.7 trillion to the annual global economy by 2030.
Where does this growth come from? We’ve been hearing for a long time that “data is the new oil,” emphasizing data’s profitability, but it’s useful to unpack the metaphor further. Just as crude oil is only valuable once it has been refined into usable products like gasoline, liquefied gases, and asphalt, data’s worth lies in the actionable insights that can be obtained by understanding it. Whether it is running marketing campaigns based on a lake of customer data or guiding autonomous cars with sensor data, AI is the refinery that extracts those valuable insights.
For these insights to be truly valuable, AI models must operate accurately and remain immune to subversion by malicious threat actors. This need has spawned new threat models and new attack terminology: data scientists will already be familiar with poisoning attacks that corrupt model training and input attacks that trick deployed models into making bad decisions.
While AI brings with it a new threat landscape, we should remember that AI models and algorithms are not magically immune to traditional cyberattacks just because of what they do. After all, these deployed models are still just code.
Existing security layers already protect organizations’ IT infrastructure, and a mature industry of software and hardware technologies safeguards IT systems and stored data, both on-premises and in the cloud.
When AI models are deployed as API-accessible services, these traditional cybersecurity defenses remain relevant. The code, data, and APIs that encapsulate the model still need to be protected. Why go to the bother of attacking the AI model if classic cyberattacks offer a better return on investment?
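As a rough illustration, the sketch below shows a model served behind an authenticated API endpoint, where those familiar defenses apply. FastAPI, the /predict route, and the API-key check are illustrative assumptions, not a description of any particular deployment.

```python
# Minimal sketch of a model behind an API, where classic defenses
# (authentication, TLS, rate limiting, network controls) still apply.
# FastAPI, the /predict route, and the API-key check are illustrative
# assumptions, not a description of any real deployment.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
VALID_KEYS = {"example-key"}  # in practice, keys live in a secrets store, not a literal


def run_model(features: list[float]) -> float:
    # Stand-in for the real model; server-side, this code never leaves
    # the organization's infrastructure.
    return sum(features)


@app.post("/predict")
def predict(features: list[float], x_api_key: str = Header(...)) -> dict:
    if x_api_key not in VALID_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")
    return {"prediction": run_model(features)}
```

Served by an ASGI server such as uvicorn, the model itself stays behind infrastructure the organization already knows how to defend.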
Of course, AI models are not solely deployed as services; many are deployed at the edge—embedded in desktop applications, mobile apps, and IoT devices—a deployment approach known as Edge AI.
We’ve established that deployed AI models are fundamentally “just code.” But deploying that code at the edge exposes it to a very different set of attacks.
At the edge, an attacker has access to the device on which the model is executing and can use classic reverse engineering techniques to breach the Edge AI model. Decompilers, code visualizers, debuggers, and hooking frameworks let them analyze and manipulate the software, much like a film editor perfecting their latest movie. Attackers can run the app “frame by frame,” pausing to inspect any “frame” in detail, “touch up” the application state, “cut” any code that causes problems, and even “paste” new malicious code into the application.
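To make that concrete, here is a minimal, hypothetical sketch of the kind of hook an attacker might set with the Frida instrumentation toolkit’s Python bindings. The process name (“target_app”) and the exported function name (“run_inference”) are assumptions for illustration, not real identifiers.

```python
# Hedged sketch of dynamic instrumentation with Frida's Python bindings.
# The process name ("target_app") and the exported symbol ("run_inference")
# are hypothetical; an attacker would discover the real names by decompiling
# or disassembling the application first.
import sys

import frida

HOOK_SOURCE = """
var addr = Module.findExportByName(null, "run_inference");
if (addr !== null) {
    Interceptor.attach(addr, {
        onEnter: function (args) {
            // "Pause the frame": inspect arguments as the call happens.
            send("run_inference called, arg0 = " + args[0]);
        },
        onLeave: function (retval) {
            // "Touch up" the frame: the return value could be rewritten here.
            send("run_inference returned " + retval);
        }
    });
}
"""

def on_message(message, data):
    print(message)

session = frida.attach("target_app")      # hypothetical process name
script = session.create_script(HOOK_SOURCE)
script.on("message", on_message)
script.load()
sys.stdin.read()                          # keep the hook alive while observing
```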
Reverse engineering grants attackers a comprehensive understanding of the model’s operation, enabling much more targeted input attacks. But perhaps the most significant risk it brings is IP theft. Building and training models takes a lot of time and expertise—there’s a reason data scientists are among the most sought-after (and highly paid) software developers of any kind. Reverse engineering of the software allows a competitor to level the playing field without making the same investment you did, and allows a criminal to monetize your model for their own gain.
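Lifting an unprotected model can be just as simple. The sketch below assumes a hypothetical Android package (“target_app.apk”) that bundles a plain TensorFlow Lite file at assets/model.tflite; the names are illustrative, but the pattern, an unencrypted model sitting inside an archive anyone can unzip, is common.

```python
# Hedged sketch of extracting an unprotected Edge AI model from an app package.
# "target_app.apk" and "assets/model.tflite" are assumed names; an APK is
# just a ZIP archive, so standard tooling is enough to pull the file out.
import zipfile

import tensorflow as tf

with zipfile.ZipFile("target_app.apk") as apk:
    apk.extract("assets/model.tflite", path="dumped")

interpreter = tf.lite.Interpreter(model_path="dumped/assets/model.tflite")
interpreter.allocate_tensors()

# The architecture, tensor shapes, and trained weights are now fully
# readable, and reusable, by whoever pulled the file out of the package.
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```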
PACE supports companies bringing their critical applications and software IP to market. Drawing on decades of experience in software security through its license management and security tools, PACE now applies the same proven techniques to protecting algorithms and AI models. That means PACE’s customers can safely deploy AI at the edge, confident that their models are protected against reverse engineering and IP theft.
Ask corporate CEOs today what their key assets are, and they will answer “data and the AI/ML models that refine it.” Maximizing the value of those assets demands robust security wherever they are deployed.
Talk to us today to set up a demo and discover how PACE protects Edge AI.