AI Model Security — Supply Chain Risks in AI
Model poisoning, weights integrity, hosted vs self-hosted security tradeoffs, and securing your AI pipeline
In an earlier lesson, we covered supply chain attacks in the npm ecosystem — how a compromised package can run arbitrary code in your application. AI models have their own supply chain risks, and they're arguably worse: a compromised model is far harder to detect than a compromised package, because the malicious behavior lives in opaque weights rather than in readable source code.
When you download a model from Hugging Face, use an open-source model through an API, or fine-tune a model on your data, you're trusting that model's entire provenance chain. The training data, the training process, the model weights, the serving infrastructure — each is an attack vector that can compromise your application in ways that traditional code scanning won't catch.
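One concrete defense against tampered weights is pinning a known-good digest and refusing to load anything that doesn't match. Below is a minimal sketch; the function names and the practice of committing the expected digest to your own repo (rather than fetching it from the same place as the model) are illustrative assumptions, not a Hugging Face API.

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files
    never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: str, expected_digest: str) -> bool:
    """Compare against a digest pinned in your repo at review time.
    compare_digest avoids short-circuiting on the first mismatched byte."""
    return hmac.compare_digest(sha256_of_file(path), expected_digest)
```

The key design choice is where the expected digest comes from: a checksum published next to the model file only proves the download wasn't corrupted, while a digest pinned in your own version control proves the file is the one your team actually reviewed.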
The AI Supply Chain
A model goes through several stages before it reaches your application:
Training Data -> Pre-training -> Fine-tuning -> Quantization
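One way to make that chain of stages auditable is to hash each stage's output together with the previous stage's digest, so tampering anywhere upstream changes every downstream digest. This is a toy sketch of the idea, not a standard tool; the stage names and manifest format are assumptions for illustration.

```python
import hashlib

def stage_digest(stage: str, artifact: bytes, parent_digest: str = "") -> str:
    """Bind one stage's artifact to its upstream history by hashing
    the parent digest, the stage name, and the artifact together."""
    return hashlib.sha256(
        parent_digest.encode() + stage.encode() + b"\x00" + artifact
    ).hexdigest()

def build_manifest(stages: list[tuple[str, bytes]]) -> list[dict]:
    """Walk the pipeline in order, producing a provenance manifest
    where each entry's digest depends on everything before it."""
    manifest, parent = [], ""
    for name, artifact in stages:
        digest = stage_digest(name, artifact, parent)
        manifest.append({"stage": name, "digest": digest})
        parent = digest
    return manifest
```

Because each digest folds in its parent, a poisoned training corpus invalidates not just the first entry but the final quantized model's digest too — which is exactly the property you want when auditing a model you didn't train yourself.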
