“It won’t happen to me!” — The ethics behind Tesla’s AutoPilot

Dr. Adam Hart
6 min read · Nov 6, 2019


In the wake of the devastation at the end of World War II in Europe (VE Day: 8th May 1945) and the end of the war in the Pacific (VP Day: 15th August 1945[1]), every continent rebuilt and attempted a return to normalcy. But the damage to people and culture was severe and long-lasting.

Through the medium of words and other works, the human tragedy and meaning of this war were laid bare. Authors before, during and after it debated the purpose and meaning of life, identity, conscience, and the existential threat that humanity had faced, and could face again, at its own hands. For example:

  • Jean-Paul Sartre (declined the Nobel Prize, 1964) — noted philosopher and existentialist;
  • Albert Camus (Nobel Prize, 1957) — noted author and existentialist;
  • Antonio Gramsci — noted philosopher, sentenced to 20 years’ imprisonment for resisting Mussolini; died after 11 years in custody;
  • Yukio Mishima — noted author and activist;
  • Kōbō Abe — rose from destitute poverty to help pave the way for Japanese writers to win the Nobel Prize in Literature;
  • Kenzaburō Ōe (Nobel Prize, 1994) — the second Japanese writer to win the Nobel Prize in Literature;
  • Ernest Hemingway (Nobel Prize, 1954) — hugely influential American author, journalist and sportsman;
  • John Steinbeck (Nobel Prize, 1962) — giant of American letters.

At the end of 2019, these debates are perhaps sadly relegated to dusty shelves and a Kindle catalogue backwater, argued over only inside academic circles or in compulsory high-school essays by students who should not have to imagine the horror of war.

But there is another potential existential threat looming for humanity.

Artificial General Intelligence and Artificial Superintelligence, or AGI and ASI for short.

Much like any well-funded SV startup, OpenAI was co-founded by Elon Musk and luminary researchers from the field of deep learning, motivated by the existential risk of AGI becoming a superintelligence, an ASI. The argument, in short, is that once an AGI teaches itself to outstrip all human cognitive ability, humanity will again be threatened, in some currently unknown or vague way, by a superintelligence (or superintelligences) that perceives humanity as a threat to the planet. Maybe, because of pollution, it will enact the Gaia hypothesis so that it can dominate the Earth’s resources?

Anyway, exactly how humanity will meet its demise at the hands of a malign ASI is not known at this time, but the best way to protect against it, in Elon Musk’s mind, is to create human-“friendly” AI (and to create a machine/human neural link).

The late Professor Stephen Hawking and Bill Gates, the great philanthropist, have also expressed similar concerns.

OpenAI’s mandate is, roughly, that the best way to contain the risk is for its researchers (around 100 of them currently) to become actively involved in creating “friendly” AI at the coalface of code.

A friendly ASI is, perhaps, a superintelligent agent that shares humanity’s goals and morality (and what exactly are those? Bureaucracy, taxation and warfare? One hopes not). But do we already have a tiny glimpse of a “friendly”, albeit narrowly focused, AI (not AGI or ASI) here with us in Tesla’s Autopilot, as recently reported by Bloomberg’s @zachrymider?

The critical technology issue at stake with AI is that training any deep neural network requires real-world data sets of almost population scale, so that its pattern recognition covers all the permutations it will meet. And this has already resulted in deaths, which the software engineer in the article above seems OK with, probably because “it won’t happen to me”.
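
To make that data-hunger concrete, here is a minimal, purely illustrative sketch (this is not Tesla’s code; its pipeline is not public). It is a toy PyTorch training loop in which the feature size, labels and model are hypothetical stand-ins for a “hazard / no hazard” classifier. The point is that such a model only learns the permutations present in its training data, which is why the missing edge cases end up being supplied by real drivers on real roads.

```python
# Purely illustrative sketch, not Tesla's system: a toy "hazard / no hazard"
# classifier. The feature size, labels and architecture are hypothetical.
import torch
import torch.nn as nn

# Pretend each sample is a 64-number summary of a driving scene,
# labelled 0 (no hazard) or 1 (hazard). Here the data is random noise;
# a real system would need enormous real-world coverage of rare cases.
X = torch.randn(10_000, 64)
y = torch.randint(0, 2, (10_000,))

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong the model is on this data
    loss.backward()               # compute gradients
    optimizer.step()              # nudge the weights

# Whatever scenes are absent from X are scenes the model has never
# "seen" -- and those are exactly the permutations it will mishandle.
```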

Ethically, this stance is called utilitarianism: the view that it is justified to harm a minority in order to maximize the wellbeing of the majority. Apart from the many criticisms of this type of ethics, such as that it ignores justice, the utilitarian ethics of the customers themselves, who are willing to let Tesla experiment on them, is just as problematic as Tesla running the experiment in a real-world “production” environment.

Placing a legal warning on the Autopilot feature, as Tesla has rightly done, to say that the human must supervise the machine is one thing. But the quid pro quo here is the same exchange Google and Facebook use: “Give us all your data and we’ll give you functionality (utility).”

Admittedly, if we drive ourselves and have an accident, we or another human is at fault. In this new scenario, a minority of humans, no matter how small, is placed into the role of the live crash-test dummy.

If we buy a chainsaw or an axe, we inherently understand the product is dangerous, so we take care, get training and use protective equipment. In this case, the automotive industry, by introducing more and more safety and convenience features over the decades, has lulled us into feeling that “it won’t happen to me”, which flies in the face of the statistics but is how we humans feel about risk in general.

The utilitarianism of the giant global technology companies has leaked, unwittingly, over to consumers. The utility we get from a Google Nest or Tesla’s Autopilot means that with these technologies we have no choice but to expect mistakes, mistakes of an unexpected nature, and to accept that we will sometimes be harmed by them, randomly, with no choice and no warning.

In other areas, like medical care or legal proceedings, we expect efficacy and justice; we do not expect random events, and we rail against them when they occur. With these increasingly advanced technologies, which ten years ago would have seemed like science fiction but are now emerging, even if they only mimic human decision-making, some of us are going to be harmed by random events that the machine-learning algorithms cannot adapt to fast enough, or cannot adapt to at all.

The biggest concern we should have with OpenAI.com (and Deepmind.ai, for that matter), with billionaire technology moguls playing god with consumers’ ethics, is that they are absolutely the wrong people to be debating the issue. Being abnormally rich might prove you are super-duper at making money, but not super-duper at untangling ethics and governing humanity’s future.

It is startling that the loudest voices, at least in the media debate on AI, are not the authors and artists who have a deep appreciation of what it is to be human, but the people who created the global technology platforms that encircle us today; and they can be loud because they have the brand cachet and cash reserves to do so.

In his day, Jean-Paul Sartre was as famous as Elon Musk is today. Philosophers once debated public issues on television; now they are relegated to a TED channel.

In the same way that authors and other artists after WWII responded with great works of humanity to help people’s psyches heal and come to grips with the tragic and devastating events of that time, there may be a better way forward: instead of writing more code under the guise of making an emergent AGI “friendly”, apply some of that enormous financial leverage to helping authors, musicians, philosophers and other artists make sense of the real or imaginary existential risk of a utilitarian or even malign future AGI, and to providing balance, insight and education to scientists and researchers on the problem.

This may be a better approach than issuing 23 principles and expecting them to be adhered to because “we’re all good people”. We are not all good people, and we cannot all be good: it depends on each individual technologist’s personal inner narrative and epistemology, which is essentially a random distribution and can range from highly ethical to lackadaisical to evil (think cyberhackers), just like any population.

The stated fascination of the deep-learning AI community with machine discourse and with reaching the singularity (AGI to ASI) is too strong for this future to be left to them, and to the self-appointed billionaires who own the global tech platforms, alone.

Footnote

[1] Which is also celebrated annually in Korea as Victory over Japan Day

Originally published at https://curiousnews.tech on November 6, 2019.
