
21 March 2019 · 1065 views

The Future Series (2019)

What is Artificial Stupidity?

Today's Artificial Intelligence easily becomes artificially stupid, I am afraid. The idea of artificial stupidity has not received as much attention as it deserves, which is why I have been actively raising the subject for the last few years. I defined it, for a conference back in 2018, as a "system based on Artificial Intelligence technology that occasionally makes incredibly wrong decisions", like an autonomously driving car suddenly veering into a parked fire truck on the side of the road, for no clear reason, sadly also killing its occupants.

Things often go wrong in ways that humans find very surprising. There is new research on the subject, currently focusing on adversarial examples that can fool neural networks. In my video I show a couple of well-publicised examples. In the first one you will see how easy it is to put a small sticker on a road-side "Stop" sign to fool an image recognition system into thinking that it is actually just a speed limit sign, making the car zoom past it dangerously. This IoT-focused research was done at the University of Michigan and is published in two interesting articles: Physical Adversarial Examples for Object Detectors and Robust Physical-World Attacks on Deep Learning Visual Classification. Source code for this example is available on GitHub.

In another adversarial example you can see how easily an image of an animal, in this case a panda, a cute, fluffy bear, can have a little digital noise added to it to fool the "AI" into thinking it is something completely different: a gibbon, which is an ape. This research is published in detail on OpenAI: Attacking Machine Learning with Adversarial Examples.
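The noise in that example is not random: it is computed from the network's own gradients. Below is a minimal sketch of the fast gradient sign method (FGSM), the technique behind the panda-to-gibbon image, written in Python with PyTorch. The pretrained ResNet-50 and the "panda.jpg" file are my own illustrative assumptions, not the exact setup from the OpenAI article; any differentiable classifier would do.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Load a pretrained classifier (assumption: ResNet-50; any model works).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Basic preprocessing; ImageNet normalisation omitted to keep pixels in [0, 1].
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
image = preprocess(Image.open("panda.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file
image.requires_grad_(True)

# Gradient of the loss with respect to the *input pixels*, not the weights.
logits = model(image)
label = logits.argmax(dim=1)           # whatever the model currently predicts
loss = F.cross_entropy(logits, label)
loss.backward()

# FGSM: nudge every pixel by a tiny epsilon in the direction that
# increases the loss. The change is imperceptible to a human.
epsilon = 0.007
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The unsettling part is how small epsilon can be: a perturbation of well under one percent per pixel, invisible to a human, is often enough to flip the prediction.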

I hope you can see that if it is so easy to make the currently-in-vogue "AI" dumb, then perhaps we are not yet ready to put it into our daily lives. While we are just "having some fun" with it, perhaps it does not matter: who cares if Siri or Alexa makes a funny mistake, right? But would you like this level of reliability to affect what happens on the roads? At school? In a hospital? In the operating theatre during automated surgery? On the battlefield, letting machines decide where to shoot?

I would like to suggest a few simple rules that we should observe when building not just AI but, indeed, any algorithm-based system that can affect human lives. I have lived by these rules for years: when I work with a customer, it is OK to use AI/ML/DS/DM/etc. only if:

  1. It is legal, ethical, and moral to build and use it
  2. It always works, or there is recourse when it fails (sketched in code after this list), or failure is acceptable to everyone
  3. All of its actions can be explained in simple, human terms
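What recourse looks like in code can be very simple. Here is a minimal sketch, in Python, of rule 2 in practice: the system acts autonomously only when the model is confident, and otherwise fails safely to a human. The confidence floor and the action names are hypothetical placeholders of mine, not a real API; the point is the structure, which also serves rule 3 by attaching a plain-language explanation to every decision.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    explanation: str  # rule 3: every action carries a human-readable reason

# Assumed threshold; tune per application. A surgical robot needs a far
# higher floor than a film recommender.
CONFIDENCE_FLOOR = 0.95

def decide(model_action: str, model_confidence: float) -> Decision:
    """Act on the model's output only when it is confident enough;
    otherwise escalate to a person (rule 2: recourse when it fails)."""
    if model_confidence >= CONFIDENCE_FLOOR:
        return Decision(model_action,
                        f"model confidence {model_confidence:.0%} is above the floor")
    return Decision("escalate_to_human",
                    f"model confidence {model_confidence:.0%} is below the floor")

print(decide("brake", 0.99))  # autonomous action
print(decide("brake", 0.60))  # recourse: handed to a human
```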

Unfortunately, that is not what I see happening today. Companies build systems using ML, especially deep learning, which they cannot explain, which cause damage, and which are often unethical (Facebook, anyone?) and borderline illegal, even if their creators' morality somehow permitted that. Part of the drive comes from economics: first of all, much money has been made by breaking ethics and acting against society (as Cambridge Analytica did in another Facebook scandal). Secondly, building ML is much cheaper than paying thinking developers and designers to create logic-based systems, rules, and frameworks, not to mention testing them. ML is cheap. DL is extremely cheap, and so the attraction of occasionally making incredibly wrong decisions in exchange for that low cost is high.

Those systems will eventually fail. Sometimes there is an epic self-destruct (the financial flash crashes in recent algorithmic trading); other times customers simply leave. I just hope that you, your work, and your customers can avoid this unnecessary damage. My next video explains some approaches for avoiding it while building good AI.
