Wednesday October 2, 2024 4:30pm - 5:10pm GMT+08
As artificial intelligence (AI) becomes an integral part of our digital landscape, the looming threat of adversarial attacks casts a shadow over its immense potential. This presentation takes a technical deep dive into the evolving landscape of AI security and the relentless tactics adversaries employ to exploit vulnerabilities. Attendees will gain insight into attacker strategies, including the OWASP Top 10 for LLM Applications and exploitable security flaws in LLM frameworks. The session also features demos of adversarial AI attacks on proof-of-concept (POC) applications, covering the Fast Gradient Sign Method (FGSM), prompt injection leading to code execution, training data poisoning, model serialization attacks, and SQL injection in LLM applications. The session aims to equip attendees with a comprehensive understanding of the adversarial tactics prevalent in AI security and empower them to guard against the shadows that threaten AI systems.
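The FGSM demo named above can be sketched in a few lines. This toy uses a hand-rolled logistic-regression "model" with an analytic gradient rather than the session's actual POC applications, so every name and value here is illustrative: the attack adds a perturbation of `epsilon * sign(gradient of the loss w.r.t. the input)` to push the model toward misclassification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return an adversarial input x + epsilon * sign(grad_x loss).

    For logistic regression with binary cross-entropy loss, the
    gradient of the loss w.r.t. the input is (p - y_true) * w,
    so no autodiff framework is needed for this sketch.
    """
    p = sigmoid(np.dot(w, x) + b)   # model's predicted probability
    grad_x = (p - y_true) * w       # analytic input gradient of the loss
    return x + epsilon * np.sign(grad_x)

# Hypothetical fixed weights and a clean input the model scores as positive.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.0
x = np.array([0.6, -0.4, 0.2, 0.5])

clean_score = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.5)
adv_score = sigmoid(np.dot(w, x_adv) + b)

# A small, bounded perturbation sharply lowers the positive-class score.
print(clean_score, adv_score)
```

The same `sign`-of-gradient idea scales to deep networks, where the gradient comes from backpropagation instead of a closed-form expression.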
Speakers
Alex Devassy

Senior Security Engineer, AppViewX India
Alex is a senior security engineer at AppViewX India, specializing in penetration testing to enhance application security. He is passionate about researching new attack vectors in focused technology domains. Among his achievements, he co-authored the chapter "Safeguarding Blockchains...
Room: Jasmine Ballroom, Marina Bay Sands Convention Center