Disclaimer: This summary has been generated by AI. It is experimental, and feedback is welcomed. Please reach out to info@qconlondon.com with any comments or concerns.
The presentation titled "Securing AI Assistants: Strategies and Practices for Protecting Data" by Andra Lezza discusses the critical aspects of securing AI assistants within enterprises. Andra highlights the complexities involved in protecting data across diverse AI architectures and the challenges posed by the AI supply chain. She provides insights into aligning AI strategies with robust security practices to ensure data protection without stifling innovation.
Key points covered in the presentation include:
- Introduction: The importance of AI assistants and the critical need for data security.
- Challenges: The complexities of securing AI systems across the pipeline, from data ingestion to deployment, along with the need for continuous monitoring.
- Vulnerabilities: Review of the OWASP Top 10 for LLM Applications, compared with traditional web application vulnerabilities, highlighting risks such as prompt injection and data poisoning.
- Security Controls: Strategies such as encryption, adversarial training, and implementing safety guardrails throughout the AI model lifecycle.
- Copilot Implementations: Examination of two AI copilot implementations and their associated threats, including excessive permissions, prompt injection, and data leakage.
- Security Best Practices: Emphasis on principles such as least privilege, input validation, and comprehensive monitoring to safeguard AI systems.
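As an illustration of the input-validation principle above, the sketch below shows a minimal pre-filter that screens user prompts for common prompt-injection phrasings before they reach an AI assistant. This is not from the talk itself; the pattern list and length limit are illustrative assumptions, and in practice such a filter would be one layer among several (model-side guardrails, least-privilege permissions, and monitoring).

```python
import re

# Illustrative deny-list of common prompt-injection phrasings.
# Pattern matching alone is easy to evade; treat this as a coarse
# first-pass filter, not a complete defence.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard (the )?(system|previous) prompt", re.IGNORECASE),
    re.compile(r"reveal (the )?(system prompt|hidden instructions)", re.IGNORECASE),
]

MAX_INPUT_LENGTH = 4000  # cap input size to limit context-window abuse


def validate_user_input(text: str) -> tuple[bool, str]:
    """Return (is_allowed, reason) for a prospective assistant prompt."""
    if len(text) > MAX_INPUT_LENGTH:
        return False, "input exceeds maximum allowed length"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            return False, f"matched injection pattern: {pattern.pattern}"
    return True, "ok"
```

A guardrail like this would typically sit in front of the model call, with rejected inputs logged for the comprehensive monitoring the talk recommends.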
The presentation concludes with a call to adopt a proactive security posture, leveraging continuous monitoring and adaptive controls to secure AI ecosystems against evolving threats.
This is the end of the AI-generated content.
The data behind AI copilots is not only their most critical asset but also a key strategic consideration for enterprises and SMBs alike. This talk examines the challenges of securing diverse AI architectures at scale—while navigating the intricacies of the AI supply chain, from data ingestion to model deployment. Gain practical insights into safeguarding sensitive data, ensuring integrity throughout the pipeline, and enabling innovation without compromising trust. Learn how to align your AI strategy with robust security practices that maximize value and maintain end-to-end resilience.
Speaker

Andra Lezza
OWASP London Chapter Leader, 10+ Years of Experience Building AppSec Programs
Andra is a Principal Application Security Specialist at Sage, with over seven years of experience in the field of application security. She is responsible for implementing DevSecOps practices, conducting security assessments, and developing secure coding guidelines for software engineering and AI/ML teams. She has a strong background in software development and project management, as well as a master's degree in information and computer sciences. She has been co-leading the OWASP London Chapter since 2019, where she organises and delivers events and workshops on various security topics. She is passionate about educating and empowering developers and stakeholders to build and deliver secure software, and about promoting best practices in a fast-paced, results-driven environment.