AI Security Deep Dive
Guaranteed to Run
Price
$2,795.00
Duration
3 Days
Delivery Methods
Virtual Instructor Led Private Group
Delivery
Virtual
ESTDescription
Objectives
Prerequisites
Course Description
AI Security Deep Dive is an advanced, hands-on course focused on the security, risk, and governance challenges introduced by artificial intelligence systems. The course examines how AI models, data pipelines, and AI-enabled applications can be attacked, misused, or compromised—and how organizations can defend against these risks.
Participants explore real-world threats such as prompt injection, data poisoning, model theft, supply chain risks, insecure integrations, and misuse of Generative AI. The course also addresses governance, compliance, and responsible AI practices, helping teams design and operate AI systems that are secure, trustworthy, and resilient in production environments.
Course Objectives
By the end of this course, participants will be able to:
- Understand the AI threat landscape and how it differs from traditional application security
- Identify security risks across the AI lifecycle, including data collection, training, deployment, and inference
- Recognize common AI-specific attacks such as prompt injection, data poisoning, model inversion, and model theft
- Secure AI pipelines, APIs, and integrations with existing systems
- Apply industry frameworks and guidance (OWASP, NIST AI RMF, MITRE ATLAS) to AI security
- Evaluate risks related to privacy, bias, intellectual property, and compliance
- Design defense-in-depth strategies for AI-enabled applications
- Implement monitoring, logging, and incident response for AI systems
- Establish governance and responsible AI practices within an organization
Who Should Attend?
- Security engineers and security analysts
- Application security (AppSec) professionals
- DevSecOps engineers
- AI / ML engineers responsible for production systems
- Cloud and platform engineers supporting AI workloads
- Architects designing AI-enabled systems
- Risk, compliance, and governance professionals
- Technical leaders responsible for AI security strategy
Course Prerequisites
- General understanding of cybersecurity concepts
- Familiarity with modern application architectures (web, cloud, APIs)
- Basic awareness of AI or machine learning concepts is helpful but not required
- No hands-on AI model training experience required
Do You Need Help? Please Fill Out The Form Below