Artificial Intelligence

When AI Goes Wrong

Tuesday, June 21, 2022

Written by: Daniel Xu, CEO

What do a self-driving car, a cancer diagnostic app, a gunshot detection system, and your Netflix recommendations have in common?

Yes, they are all powered by state-of-the-art artificial intelligence algorithms. Specifically, they all make use of a particular type of AI architecture designed to perform complex analysis on large amounts of data, known as a Deep Neural Network (DNN). This architecture is what enables computer vision models to accurately recognise objects in photos, and your phone to transcribe speech into text. The models typically involve thousands of numerical calculations stacked across a number of 'layers' - a term used to describe the stages that the data is passed through in calculating the output. As machine learning is used to solve harder problems, these models are getting larger and more complicated, and are becoming more susceptible to a type of hack known as an Adversarial Attack.
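To make the 'stacked layers' idea concrete, here is a minimal sketch in Python of data passing through a few layers of a toy network. The layer sizes and random weights are purely illustrative stand-ins; a real DNN has many more, and far larger, layers whose weights are learned from data.

```python
# Toy illustration of a deep neural network's forward pass:
# each layer is a matrix of weights followed by a simple non-linearity,
# and the data is passed through the layers one stage at a time.
import numpy as np

def relu(x):
    # A common non-linearity: keep positive values, zero out the rest.
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.random(128)  # stand-in for input data, e.g. pixel values

# Three layers: 128 -> 64 -> 32 -> 10 (e.g. scores for 10 possible classes).
# In a trained model these weights are learned, not random.
weights = [
    rng.standard_normal((64, 128)),
    rng.standard_normal((32, 64)),
    rng.standard_normal((10, 32)),
]

for w in weights:
    x = relu(w @ x)  # one 'layer': many multiply-adds, then a non-linearity

print("output scores:", x)
```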

These hacks exploit the Butterfly Effect, where a seemingly small and harmless perturbation to the input can cause a catastrophic failure at the output. Examples include adding a few pixels to medical scans so that an AI wrongly detects cancer, tricking an AI into thinking you're someone else, or fooling a self-driving car into reading a Stop sign as a 45 mph speed limit sign.

Quite often, the attacks come in the form of adding a small amount of 'noise' to the data, such that to a human eye the altered input is completely indistinguishable from the original and appears harmless. And it's not just image models that can be fooled. It has also been shown that these types of attacks (sometimes even unintentional ones) can confuse a chatbot into giving the wrong answer or trick your home automation devices into performing unwelcome functions. Imagine a scenario where playing a song on YouTube sends hidden commands to your AI-powered home appliances - cancelling medical appointments, opening doors, and disabling alarms.
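One well-known way such 'noise' is crafted is the Fast Gradient Sign Method (FGSM), which nudges every pixel a tiny amount in the direction that most increases the model's error. The sketch below assumes PyTorch and uses a randomly initialised model and a random image purely for illustration; a real attack would target a trained network and a real input, where a perturbation this small typically flips the prediction while remaining invisible to a human.

```python
# Illustrative FGSM-style adversarial perturbation (not a real attack:
# the model and image below are random stand-ins).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10)
)
model.eval()

image = torch.rand(1, 1, 28, 28)  # stand-in for a real input image
label = torch.tensor([7])         # stand-in for the model's current prediction

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05  # small enough to be imperceptible to a human eye
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The key point is that the attacker never needs access to the physical system - only the ability to make tiny, targeted changes to the input data.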

In a world where AI is increasingly being used to measure our health, make financial decisions, and operate machinery, it is vital that we develop security protocols at the same velocity as we develop new AI models. Those of us in the AI community therefore have a strong responsibility not to rush AI projects out the door before they are robust and ready. Instead, a slower, more rigorous process of testing and refinement is essential to ensure we build AI that serves us without harm.
