Build Autonomous, Mindfully

Takeaways

  • Companies must build trust with customers who interact with AI
  • In order to scale AI, we must instill humanity into the process
  • Raising confidence in AI algorithms is crucial to adoption

Capitalize on the Use of AI, However…

Today it seems the best algorithms are primed to win the race. The reality, however, is far more complex. Companies must keep people in mind as AI expands into more of our lives, which will require taking much more responsibility for the consequences of our own inventions.

Designers and engineers must work around real technical limitations: black-box models that cannot explain their own decisions, outcomes that can be manipulated, and inherent biases in data.

Trust is required for customers and consumers to have confidence in critical AI products and services: trust that the company respects privacy while collecting the specialized data sets needed to produce better machine-learning models, and that those models won’t exploit the humans they serve.

To achieve trust, companies must grapple with how to avoid unintended consequences, which goes beyond designing algorithms that simply make decisions based on optimal outcomes.

Math alone won't bring the human curiosity, empathy or passion needed to solve global problems.

Bringing Humanity to Autonomy

Every day around the world, designers and engineers are making decisions about how to implement AI systems with the best of intentions. In this way, they are setting the direction for how we will experience even more advanced AI systems in the future.

The innovative algorithms that allow AI to get better over time will uncompromisingly seek to converge on a solution. That’s not always a good idea. Consider the decision-making process of a self-driving car. A 2016 survey assessed people’s choices on the moral behavior of autonomous vehicles in three hypothetical situations involving an unavoidable collision. The scenarios resulted in the death of one pedestrian, multiple pedestrians or the occupant of the autonomous vehicle. Most respondents chose the self-sacrificing behavior of the autonomous vehicle for the greater good. However, some respondents favored self-preservation when the occupant was a family member or themselves.

Involving people in a deeply automated process loop can be a straightforward way to improve the outcome of an algorithm. For example, the startup Phantom Auto has launched a service that literally lends a helping hand to self-driving cars: real human drivers in a remote operations center intervene when the vehicle gets “confused.” The remote operator can add life-saving checks and balances when the car doesn’t recognize a sign or the weather turns bad.
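
Stripped to its essence, this pattern is a confidence gate: act autonomously when the model is sure, and hand off to a person when it is not. The Python sketch below is a minimal, hypothetical illustration; the threshold, labels and data structures are invented for the example and are not Phantom Auto’s actual system.

```python
# A minimal, hypothetical "human in the loop" confidence gate.
# The threshold and labels are illustrative assumptions, not a real system.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, defer to a human operator


@dataclass
class Perception:
    label: str         # e.g. "stop_sign"
    confidence: float  # the model's probability for that label


def decide(perception: Perception) -> str:
    """Return an action, escalating to a remote operator when unsure."""
    if perception.confidence >= CONFIDENCE_THRESHOLD:
        return f"autonomous: act on '{perception.label}'"
    # Low confidence (unrecognized sign, bad weather, sensor noise):
    # hand control to a human in a remote operations center.
    return "escalate: request remote human operator"


print(decide(Perception("stop_sign", 0.97)))  # autonomous
print(decide(Perception("unknown", 0.41)))    # escalate
```

The design choice worth noting is that the escalation path is explicit in the code rather than an afterthought, so the human checkpoint can be tested and audited like any other branch.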

At the end of the day, we need to better understand what it means to “pull the strings” by designing more conscientious algorithms. While “humanity” is the key principle, the point here is simply to keep people in mind. For example, the G7, representing the largest advanced economies in the world, has agreed on a “human-centric” vision for AI in which deliberate care must be taken to recognize the ethical, cultural and regulatory impact of the technology.

Raising Confidence in AI

In addition to ensuring that companies lead the way forward responsibly with AI, designers and engineers must raise confidence in the technology so that it can be adopted widely in a meaningful and safe way.

With all the wild speculation around the future of AI, it’s easy to forget that intelligent systems are not born—they’re designed by humans.

– Karin Giefer, Executive Creative Director, frog

So far, the commitment by platform companies to the developer community has been impressive. Since 2012, Google has closed 14 AI acquisitions, and IBM, Apple and Microsoft have strengthened their offerings with targeted acquisitions and investments across the AI pipeline and workflow. While these investments say a lot about the importance leading technology companies place on scaling their use of AI, they say nothing about the confidence we have in the inner workings of the AI technology itself.

Regulations will be part of the answer to AI safety and security. However, we are in the very early stages of developing regulations and international standards for AI products, which are fundamentally different from traditional software: algorithms and their trained models can produce unexpected, even unpredictable, results beyond human control. Two examples of AI guidance are just now being introduced: the European Union’s General Data Protection Regulation (GDPR), which gives consumers a “right to explanation,” and the IEEE’s standard for “ethically aligned design.”
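
To make a “right to explanation” concrete, consider a minimal, hypothetical sketch: a linear scoring model whose per-feature contributions can be reported back to a consumer in plain terms. The feature names, weights and applicant values are invented for illustration and do not reflect any real product or the GDPR’s precise legal requirements.

```python
# Hypothetical "right to explanation" for a simple linear scoring model.
# Feature names, weights and applicant values are invented for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1


def score(applicant: dict) -> float:
    """Linear score: bias plus the weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def explain(applicant: dict) -> list:
    """Rank features by how much they pushed the score up or down."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return [f"{name}: {value:+.2f}" for name, value in ranked]


applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
print(f"score = {score(applicant):.2f}")  # score = 0.05
for line in explain(applicant):
    print(line)  # debt_ratio: -0.63, then income: +0.48, then years_employed: +0.10
```

Linear models are the easy case; explaining a deep network’s output is far harder, which is exactly why the black-box problem recurs throughout this trend.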

When creating products that have never existed before, there will be many technical challenges: black-box models with no way of explaining their processes, bad actors who can manipulate outcomes and latent bias in data.

We see five areas that will need attention to raise confidence:

  1. Solve for biased decision making (see the sketch after this list)
  2. Know when to keep humans in the loop
  3. Take AI to the edge
  4. Be prepared to explain your AI
  5. Plan for AI to be compromised
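
To make the first area concrete, the sketch below audits a batch of model decisions for one simple bias signal: a gap in approval rates between two groups, often called demographic parity. The records, group labels and tolerance are invented for illustration; real fairness audits use multiple metrics and far more context.

```python
# Hypothetical bias audit: compare approval rates across groups.
# The decision records and the 0.2 tolerance are invented examples.
from collections import defaultdict

decisions = [  # (group, model_approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)  # {'A': 0.75, 'B': 0.25}

gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # the acceptable gap is a policy decision, not a constant
    print(f"warning: approval-rate gap of {gap:.2f} suggests possible bias")
```

A check like this says nothing about why the gap exists, only that it deserves human scrutiny before the model ships.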
