Making AI Less Susceptible to Adversarial Trickery
As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately ‘fool’ them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs – the algorithms intrinsic to much of AI – are used daily to process image, audio, and video data.
• Delve into DNNs and discover how they could be tricked by adversarial input
• Investigate methods used to generate adversarial input capable of fooling DNNs (a minimal code sketch follows this list)
• Explore real-world scenarios and model the adversarial threat
• Evaluate neural network robustness and learn methods to increase the resilience of AI systems to adversarial data
• Examine some ways in which AI might become better at mimicking human perception in years to come
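To give a flavor of the second bullet, the snippet below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one widely cited technique for generating adversarial input. The toy model, the random stand-in "image", and the fgsm_attack helper are illustrative assumptions for this blurb, not code from the book.

    # Minimal FGSM sketch (PyTorch). The model, input, and label below
    # are illustrative stand-ins, not examples taken from the book.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, label, epsilon=0.03):
        # Clone the input and track gradients with respect to its pixels.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon.
        x_adv = x + epsilon * x.grad.sign()
        # Keep pixel values in the valid [0, 1] range.
        return x_adv.clamp(0.0, 1.0).detach()

    # Toy classifier and a random stand-in for a normalized 28x28 image.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)
    label = torch.tensor([3])  # hypothetical ground-truth class
    x_adv = fgsm_attack(model, x, label)
    print((x_adv - x).abs().max().item())  # perturbation stays within epsilon

The key idea is that a single gradient step, bounded by epsilon, is often enough to change a classifier's prediction while leaving the input essentially unchanged to a human eye.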