Have you noticed how AI is being worked into everything…often without adding any value beyond the marketing headlines? (That was rhetorical, because of course you’ve noticed.) If you’ve been around IT for more than a few years, it might bring back not-so-fond memories of the IoT wave. Back when everything from cameras to light bulbs suddenly needed connectivity and compute, opening a back door into your network became as easy as flipping a switch.
Now, “AI washing” and the hype cycle are doing the same thing — encouraging rushed adoption that outpaces governance and security readiness. Now, as many as half of organizations report that their AI systems have introduced cybersecurity risk. And that’s just the ones who know about it.
Then vs. now
-
IoT era: Cheap, ubiquitous devices shipped with weak defaults, opaque firmware, and poor patching. These gadgets often expanded perimeter exposure and shadow assets. Reports in recent years still show widespread device risk, a cautionary tale about adopting tech faster than it can be secured.
-
AI era: Models, agents, and their data pipelines create dynamic, non-deterministic systems where inputs, training data, and dependencies can be weaponized; many organizations lack visibility and governance over these assets, echoing early IoT’s “deploy now, secure later” pattern.
AI features driving risk
-
Generative assistants and chatbots: Susceptible to prompt injection, data leakage, and jailbreaks, including indirect injection via files or web content an assistant processes.
-
Autonomous and semi-autonomous agents: Over-privileged API access and weak identity controls allow lateral movement or unintended actions at machine speed.
-
Continuous learning pipelines: Data poisoning and backdoors during training or fine-tuning persist as latent defects that are hard to detect post-deployment.
-
Model APIs and plugins: Insecure endpoints enable model theft, inversion (reconstructing sensitive training data), or abuse via request flooding and input manipulation.
-
Shadow AI: Unapproved tools and integrations proliferate outside formal risk review, mirroring shadow IoT/SaaS and increasing the chance of data loss and compliance failures.
How AI expands the attack surface
-
New vectors unique to ML: Adversarial examples, AI model poisoning, inversion, and model theft target the model itself rather than only the app or host, demanding controls beyond traditional AppSec.
-
Identity and access sprawl: AI agents and connectors accumulate secrets and scopes; weak governance over these identities creates “super-user bots” attackers can hijack.
-
Supply chain complexity: Pretrained models, third-party datasets, and open-source components widen dependency risk, requiring provenance and integrity checks end to end.
-
Runtime ambiguity: Distinguishing legitimate use from exfiltration or manipulation requires semantic and behavioral monitoring not present in most SOC stacks today.
What “AI washing” breaks
-
Misstated capabilities: Overpromising detection or safety features leads buyers to relax controls, only to discover models can be bypassed or misled by crafted inputs.
-
Underinvested governance: Marketing-led deployments skip threat modeling, red-teaming, and data controls, raising breach likelihood and eroding ROI when incidents land.
What MSPs should do now
-
Inventory and classify AI assets: Catalog models, agents, prompts, datasets, connectors, and API scopes; treat them like high-risk cloud workloads with owners and data flow maps.
-
Enforce identity-first controls: Apply least privilege, short-lived tokens, and per-action approvals for agents; monitor agent-to-API calls and set guardrails for tool use.
-
Secure the data lifecycle: Validate sources, hash and sign training sets, segregate PII, and prevent inadvertent training on sensitive data with policy and technical controls.
-
Test like an adversary: Run red-team exercises for prompt injection, jailbreaks, data leakage, adversarial inputs, and API abuse; add adversarial robustness checks to CI/CD.
-
Monitor AI runtime: Add telemetry for prompts, tool calls, outputs, anomalies, and exfil indicators; adopt AI security posture management where feasible.
-
Vendor diligence to avoid AI washing: Demand evidence of red-teaming, secure development practices, model provenance, data handling, and incident response commitments.
Treat AI as a first-class risk domain, not just a feature: map assets, constrain permissions, harden data flows, and continuously test and monitor — lessons learned the hard way in the IoT era now apply to AI at greater scale and speed.