Design secure AI/ML solutions
Artificial Intelligence (AI) is eating software as more and more solutions become ML-based. Unfortunately, these systems have vulnerabilities of their own, yet compared to traditional software security, few people are knowledgeable about this area. If AI cannot be secured against cyberattacks, AI-based technologies such as self-driving cars will not be trusted, and yet another "AI winter" will soon be upon us.
This course is a hands-on, online introduction to this emerging area of cybersecurity and adopts a clear, easy-to-follow approach. You will learn about the high-level risks targeting AI/ML systems, design specific security tests for image recognition systems, and master techniques for testing models against attacks. You will then survey the main categories of adversarial attacks and learn how to choose the right defence strategy.
By the end of this course, you will be acquainted with various attacks and, more importantly, with the steps that you can take to secure your AI and machine learning systems effectively. For this course, practical experience with Python, machine learning, and deep learning frameworks is assumed, along with some basic math skills.
All the code and supporting files for this course are available on GitHub at:
About the Author
- Alexander Polyakov is a cybersecurity expert and serial entrepreneur. He has over 15 years' practical experience in AI cybersecurity and other fields, such as pentesting, security engineering, product management, architecture, and technology leadership. He is a member of the Forbes Technology Council and a Forbes columnist, where he publishes his vision for the future. He has been recognized as Entrepreneur and R&D Professional of the Year by various bodies. His expertise covers the cybersecurity aspects of various complex systems, from enterprise applications and industry-specific systems to AI, ML, and future technologies. He has found over 200 vulnerabilities, published dozens of whitepapers, released two books, and conducted training sessions attended by Fortune 2000 CISOs. He has also presented his research at more than 100 conferences, such as BlackHat, HITB, and RSA, in over 30 countries on all continents. Besides cybersecurity, his areas of interest are AI, neuroscience, synthetic biology, and psychology. When it comes to AI security, his articles published on Medium rank on the first page of Google searches. He was the first cybersecurity expert to deliver a keynote on AI security at a cybersecurity conference.
- This course is for every ML and AI professional, engineer, or student who wants to know more about AI system security; it will also be beneficial if you want to sharpen your competitive edge and become an expert. Practical experience with Python, machine learning, and deep learning frameworks is assumed, along with some basic math skills.
- Design secure AI solution architectures to cover all aspects of AI security from model to environment
- Create a high-level threat model for AI solutions and choose the right priorities against various threats
- Design specific security tests for image recognition systems
- Test any AI system against the latest attacks with the help of simple tools
- Learn the most important metrics to compare various attacks and defences
- Compare the efficiency of defence methods and deploy the right ones to protect AI systems against attacks
- Secure your AI systems with the help of practical open-source tools
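To give a flavour of the adversarial attacks covered above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic evasion attacks. This is an illustrative toy example, not course material: the model is a hand-picked logistic classifier, and all weights and inputs are assumed values chosen so the attack visibly flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: predicts class 1 when sigmoid(w . x + b) > 0.5.
# Weights and input are illustrative assumptions.
w = np.array([2.0, -3.0, 1.0])
b = 0.0

x = np.array([0.5, -0.5, 0.5])   # clean input
y = 1.0                          # true label

p = sigmoid(w @ x + b)           # model is confident in class 1 here

# For this logistic model, the gradient of the cross-entropy loss
# with respect to the input x is (p - y) * w.
grad = (p - y) * w

# FGSM: take one step of size eps in the direction of the gradient's sign.
eps = 0.7
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv + b)   # confidence collapses; prediction flips
print(f"clean: p={p:.3f}  adversarial: p={p_adv:.3f}")
```

The key point, which carries over to deep networks, is that the perturbation is bounded per-feature (at most `eps`) yet shifts the score by `eps` times the L1 norm of the gradient, so a small, hard-to-notice change can flip a confident prediction.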