AI-Native LLM Security: Threats, defenses, and best practices for building safe and trustworthy AI
Unlock the secrets to safeguarding AI by exploring the top risks, essential frameworks, and cutting-edge strategies—featuring the OWASP Top 10 for LLM Applications and Generative AI

DRM-free PDF version + access to Packt's next-gen Reader*

Key Features
- Understand adversarial AI attacks to strengthen your AI security posture effectively
- Leverage insights from LLM security experts to navigate emerging threats and challenges
- Implement secure-by-design strategies and MLSecOps practices for robust AI system protection
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Adversarial AI attacks present a unique set of security challenges, exploiting the very foundation of how AI learns. This book explores these threats in depth, equipping cybersecurity professionals with the tools needed to secure generative AI and LLM applications. Rather than skimming the surface of emerging risks, it focuses on practical strategies, industry standards, and recent research to build a robust defense framework. Structured around actionable insights, the chapters introduce a secure-by-design methodology, integrating threat modeling and MLSecOps practices to fortify AI systems. You'll discover how to leverage established taxonomies from OWASP, NIST, and MITRE to identify and mitigate vulnerabilities. Through real-world examples, the book highlights best practices for incorporating security controls into AI development life cycles, covering key areas such as CI/CD, MLOps, and open-access LLMs. Built on the expertise of its co-authors—pioneers in the OWASP Top 10 for LLM Applications—this guide also addresses the ethical implications of AI security, contributing to the broader conversation on trustworthy AI. By the end of this book, you'll be able to develop, deploy, and secure AI technologies with confidence and clarity.
*Email sign-up and proof of purchase required

What you will learn
- Understand unique security risks posed by LLMs
- Identify vulnerabilities and attack vectors using threat modeling
- Detect and respond to security incidents in operational LLM deployments
- Navigate the complex legal and ethical landscape of LLM security
- Develop strategies for ongoing governance and continuous improvement
- Mitigate risks across the LLM life cycle, from data curation to operations
- Design secure LLM architectures with isolation and access controls

Who this book is for
This book is essential for cybersecurity professionals, AI practitioners, and leaders responsible for developing and securing AI systems powered by large language models. Ideal for CISOs, security architects, ML engineers, data scientists, and DevOps professionals, it provides insights on securing AI applications. Managers and executives overseeing AI initiatives will also benefit from understanding the risks and best practices outlined in this guide to ensure the integrity of their AI projects. A basic understanding of security concepts and AI fundamentals is assumed.
Author: -
Publisher: -
Year: 2025
Binding: Paperback / softback
Pages: 416 p.