
Future-shock Check-in: Algorithmic Morality
There is an active discussion in many fields about developing a global standard for the ethics and morality of autonomous systems.

The goal is to create a shared, global industry standard for how autonomous and artificially intelligent systems make algorithmic decisions.

The simple and imminent example: the self-driving car in emergency "no-win" situations. A car will have, engineered into it, the ability to decide life and death when every available option causes harm.

Imagine this: a car is driving down the street when several people (say 7) cross the road without looking, and the only way to save them is to maneuver the car into possible self-destruction (e.g. veer into a concrete wall at high speed).
What is the decision? Save the passenger(s) of the car by hitting the unlawful/negligent pedestrians (a much lower probability of harm to the passengers than a wall), or save the pedestrians and probably kill the occupants?

- What if there is only 1 occupant, but 4 pedestrians? (The "greater good" view here kills the car passenger).
- What if there are 4 occupants, but 1 pedestrian? (The "greater good" view here kills the pedestrian).
- But pedestrians almost always have the right of way. Is that worth killing 4 people over?
- What if the pedestrian is a child vs. a retiree?

- Or a working-age adult vs. a mom with a stroller (economic considerations in the decision)?
- What if Volvo makes a different decision in its algorithmic decision tree than does, say, BMW, or Ford?
- What if the pedestrian had been convicted of raping a 9-year-old?

etc.
etc.
etc.
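To make the trade-off concrete, here is a toy sketch (in Python) of the kind of "greater good" rule being argued about. Every name, number, and weighting in it is invented for illustration; no manufacturer or standards body has published logic like this.

```python
# Hypothetical sketch only: a purely utilitarian "fewest expected deaths" rule.
# All classes, fields, and probabilities here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Party:
    count: int                # number of people in this group
    death_risk_if_hit: float  # estimated probability each one dies if the maneuver endangers them

def expected_deaths(endangered: Party) -> float:
    return endangered.count * endangered.death_risk_if_hit

def choose_maneuver(occupants: Party, pedestrians: Party) -> str:
    """Pick the maneuver with fewer expected deaths.
    "swerve" endangers the occupants (wall impact); "stay" endangers the pedestrians."""
    return "swerve" if expected_deaths(occupants) < expected_deaths(pedestrians) else "stay"

# 1 occupant vs. 4 pedestrians: the utilitarian rule sacrifices the occupant.
print(choose_maneuver(Party(1, 0.6), Party(4, 0.9)))   # -> "swerve"
# 4 occupants vs. 1 pedestrian: the same rule sacrifices the pedestrian.
print(choose_maneuver(Party(4, 0.6), Party(1, 0.9)))   # -> "stay"
```

Every bullet above (right of way, age, a stroller, a criminal record) would demand extra weighting factors in a function like this, and that is exactly where the ethical disagreement lives.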

In the not-too-distant future, the autonomous vehicles and artificially intelligent systems in our society will have access to real-time information about everything and everyone around them, including each other. Vehicle-to-vehicle (V2V) communication will act as a hive mind of road conditions and other information that all vehicles use for routing decisions (based on things like traffic and weather) as well as in emergency situations.
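As a rough illustration only, here is what one periodic V2V broadcast might look like; the field names and format below are made up for this post (real deployments define much richer messages, such as the SAE J2735 Basic Safety Message).

```python
# Minimal sketch, not any real protocol: a made-up V2V status broadcast.

import json
import time
from typing import Optional

def build_v2v_message(vehicle_id: str, lat: float, lon: float,
                      speed_mps: float, heading_deg: float,
                      hazard: Optional[str] = None) -> str:
    """Serialize one periodic status message a vehicle might share with nearby vehicles."""
    return json.dumps({
        "vehicle_id": vehicle_id,               # pseudonymous identifier
        "timestamp": time.time(),               # when the reading was taken
        "position": {"lat": lat, "lon": lon},   # current position fix
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
        "hazard": hazard,                       # e.g. "ice", "pedestrian", "hard_braking"
    })

# Each receiving vehicle folds broadcasts like this into its routing and emergency decisions.
print(build_v2v_message("veh-1234", 40.7128, -74.0060, 13.4, 87.0, hazard="hard_braking"))
```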

There are competing schools of philosophy here.

There are also competing decision-making processes biologically baked into our genetically shared frontal lobes about right and wrong, about self-preservation vs. greater good of the group/tribe/town/species, etc.

There is no single right answer across countless scenarios, so a consortium of experts from many fields is working toward a shared, consensus-based standard against which to code all autonomous-system algorithms going forward (hopefully).

After some real-world trials, after negative and positive outcomes are observed, and after case law begins to fill in the question marks, it is more than likely that a single, shared algorithm for this kind of decision will become the global standard:

No business, organization, or government will have its own standard.

This "shared-standard" also will certainly be demanded by the Insurance industry players so that it will greatly reduce the internal actuarial overhead of a huge number of standards and variables to calculate and maintain for the coming autonomous machine insurance categories.

So, just think: right now, there is a programmer making algorithmic models for a swath of societal, industrial, medical, and governmental professionals and experts to discuss, consider, and synthesize into a homogeneous standard of "who likely dies" in a huge variety of situations.
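One way to picture what a single shared standard could mean in code: the emergency decision logic would sit behind one common interface that every manufacturer calls identically, so a Volvo and a BMW cannot quietly diverge. This is purely a hypothetical sketch; the names and the placeholder ruleset inside are invented.

```python
# Hypothetical illustration of the "one shared standard" idea; nothing like this exists today.

from abc import ABC, abstractmethod

class EmergencyDecisionStandard(ABC):
    """The interface a global standards body might publish."""
    @abstractmethod
    def decide(self, occupants: int, pedestrians: int,
               occupant_risk: float, pedestrian_risk: float) -> str:
        """Return "swerve" or "stay" for a no-win situation."""

class SharedGlobalStandard(EmergencyDecisionStandard):
    """One certified ruleset used identically by every brand (placeholder logic only)."""
    def decide(self, occupants, pedestrians, occupant_risk, pedestrian_risk):
        expected_occupant_deaths = occupants * occupant_risk
        expected_pedestrian_deaths = pedestrians * pedestrian_risk
        return "swerve" if expected_occupant_deaths < expected_pedestrian_deaths else "stay"

# Volvo, BMW, Ford, ... would all delegate to the same certified implementation:
STANDARD = SharedGlobalStandard()
print(STANDARD.decide(occupants=1, pedestrians=4, occupant_risk=0.6, pedestrian_risk=0.9))
```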

Now – a reality check:

There will be FAR FAR FEWER deaths attributed to transportation accidents once even a 20% minority of the moving vehicles on the road are autonomous (trucking will be an early adopter; Teslas and Benzes can already drive themselves NOW, etc.).

This is because autonomous vehicles have also been coded by a team of programmers to keep safe distances and speeds relative to other cars and people.

Autonomous vehicles know a lot more about themselves than previous cars did: they are, for the most part, supercomputers on wheels. As such, they understand the capabilities of their own systems better than the vast majority of human drivers.

They won't exceed operational limits the way we all have in the past with our own cars, whether intentionally or otherwise.
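To give a flavor of the "safe distances and operational limits" point, here is a rough sketch with made-up numbers; real vehicle controllers are vastly more sophisticated than this.

```python
# Rough sketch with invented parameter values: cap speed at the vehicle's own design
# limit and slow down whenever the gap to the vehicle ahead is shorter than the
# distance needed to stop.

def stopping_distance_m(speed_mps: float, reaction_s: float = 0.1,
                        max_decel_mps2: float = 6.0) -> float:
    """Distance covered while reacting plus distance needed to brake to a stop."""
    return speed_mps * reaction_s + (speed_mps ** 2) / (2 * max_decel_mps2)

def safe_speed(current_speed_mps: float, gap_to_lead_m: float,
               design_limit_mps: float) -> float:
    """Never exceed the design limit; back off until the stopping distance fits the gap."""
    speed = min(current_speed_mps, design_limit_mps)
    while speed > 0 and stopping_distance_m(speed) > gap_to_lead_m:
        speed -= 0.5  # reduce speed in small steps until the gap is sufficient
    return max(speed, 0.0)

# At 30 m/s (~108 km/h) with only a 40 m gap, the controller backs off to ~21 m/s:
print(round(safe_speed(30.0, 40.0, design_limit_mps=33.0), 1))
```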

Also, they don't get drunk
or high,
or sleepy,
or distracted by text messages,
or emails,
or Facebook.

They can see in the dark.
They can see with radar.
They can see with lidar.
They can see with infrared.
They can see with ultraviolet.
They will be MUCH MUCH better drivers than the vast majority of human drivers right now.

There will be some growing pains and some controversies, but the currently HUGE number of people who are hurt or killed in car accidents globally will plummet to literally ALL-TIME LOWS.

Cars have only become safer as the years have gone by, and this will be a quantum leap in safety within (probably) less than 20 years' time.

But, there will still be deaths inside, or involving, autonomous vehicles. And there will be cases (have already been…) of litigation, blame, suspicion, and skepticism about the morality or effectiveness of the programming and design of these types of transportation.

Making a global standard for these kinds of decisions is a good thing, but it can also be seen as another step toward a globally shared culture, further elevating GLOBAL society above towns/countries/divisions.

The same has already occurred with the banking system, the global telephone system, and of course the most impactful to date, the Internet.

But even so, this is something *some* people greatly fear and will grandstand about politically to further agendas, or simply to make money off the controversy.

Don’t be distracted from the ramifications of the coming tsunami of innovation by the small, passing thunderstorms of those who continue to look toward the past with harmful nostalgia.

There are better, and real, questions to answer, and they are being considered right now.
