AI Legal Personhood Distresses AI Ethicists Since People Could Deviously Scapegoat Machines To Avoid Apt Human Responsibility, Including In The Case Of AI-Based Self-Driving Cars – Forbes

Legal personhood is a perhaps surprisingly controversial topic.

Actually, when you think mindfully about it, there has been a longstanding historical morass underlying the notion and scope of legal personhood. Are all humans imbued with legal personhood? What legal rights and restrictions are associated with legal personhood? If society deems that some people are to be denied legal personhood, is there any recourse to overcome such a rejection?

On and on it goes.

A momentary reflection reveals that this is indeed a highly, and at times exceedingly, inflammatory topic. Whether such controversy is sensible is a matter of keen debate. The obvious thought is that everyone should have legal personhood and we would summarily be done with any contentious discourse. Regrettably, that declaration alone won’t settle the matter, because you can then argue about varying levels of legal personhood. Getting into the rubric of legal personhood is just the table stakes; defining various degrees of personhood then comes directly to the divisive fore.

Suppose that we expand our viewpoint about the “person” aspects of legal personhood and decide that something other than human beings per se could be squeezed into the personhood categorization.

For example, you might be aware that in some respects and some cultures and at varying times in history and even the present, businesses or corporations have often been construed as having a semblance of legal personhood. Keeping in mind that a company is actually an entity and not a living breathing human, you would certainly and rightfully be hesitant to believe that a corporation or similar legally formulated business ought to be in the esteemed realm of personhood. Part of the asserted rationale could be that since a company is ostensibly run by and employs people, perhaps there is a demonstrative logic in extending personhood to encompass what is otherwise decidedly not a human unto itself.

Let’s try something that is either further afar from personhood or possibly closer to the shadowy semblance of personhood, depending upon your viewpoint.

I’m talking about Artificial Intelligence (AI).


On a purely abstract basis, if AI were to programmatically or otherwise attain the equivalent of human intelligence, this would surely be a solid basis for proclaiming that AI ought to be granted legal personhood. It makes indubitable sense. We intuitively ascribe personhood to presumably having a human physical form and having a human mode of thinking too. When a machine is able to attain that base requirement, we naturally would consider entering it into the legal personhood classification.

I kind of snuck a few words into that preceding definition about personhood that we should take a moment to mull over.

You might customarily hear or read about legal personhood being associated not with AI but instead with robots. Thus, the usual discussion entails arguing about robots being able to have legal personhood rather than saying that AI might have the revered categorization.

I’d like to briefly address the matter.

Without oversimplifying, let’s for purposes herein agree that a robot is the physical appearance of a machine. You’ve seen robots that work in factories and robots that are portrayed in sci-fi movies. These machines might be crafted to look similar to humans, potentially having limbs that can act somewhat like human arms and human legs.

Not all robots necessarily need to resemble humans. There are those robots that we see in viral online videos which look eerily like a dog or some kind of animal. They might be devised to crawl or walk. Some robots are able to leap and climb. A devised robot might be able to do physical actions that a human is generally unable to do or would need to have an augmented capacity to perform.

What is it that informs the robot about what actions to undertake?

Some robots are remotely controlled by a human operator. The robot is not at all autonomous. A human must be involved for the robot to function. Some robots work in a semi-autonomous fashion. A human might partially control the robot, doing so from time to time. The rest of the time the robot is operating without continuous human manual control. And then some robots are considered fully autonomous, such that no human operator is needed.

You might suggest that AI is the capability that enables a robot to act either semi-autonomously or autonomously. The AI does the “thinking” part of the machine. I have purposely put the word “thinking” into quotes due to the important point that you should not be confounded into believing that today’s AI can do the depth and range of thinking that humans can. Despite any of those wide-eyed headlines blaring that AI is or soon will be sentient, rest assured those outsized pronouncements are blarney. I’ll be saying more about that in a moment.

In any case, I am merely trying to suggest that when referring to “robots” this often brings forth the imagery of a humanoid-like machine. But there are all shapes and designs of robots. Walking robots. Crawling robots. Rolling robots. Flying robots. They don’t have to be made in the direct image of humans if you will.

There are also so-called “dumb” robots that are the ones in the realm of being only able to function when a human must be at the controls. The “smarter” robots are the ones that have a modicum of AI in them and they can work semi-autonomously or fully autonomously (just to let you know, I don’t like the terminology of dumb or smart, in general, but also in particular for robots since it once again implies human-like thinking facets).

I’ve dragged you through that backstory to clarify that I tend to refer to “AI” when discussing legal personhood, rather than referring to robots. The problem with using the phrasing involving robots is that as I’ve just elucidated, a robot might have no more thinking-related capacity than a brick. I doubt that we would readily seek to anoint legal personhood for that style of a robot.

I believe that most people who discuss robots as candidates for legal personhood are probably, silently and perhaps without even knowing it, construing robots as that which has a human-like form and human-like thinking. That is their underlying default assumption.

You could argue that such an assumption is problematic. What about robots that have essentially no AI in them and are as responsive as a brick? Another twist is this. What about robots that do not resemble humans by any stretch of the imagination, such as robots that look like salamanders, and yet have AI that renders the robot into the arena of human-like thinking? Luckily, today we do not have salamanders that are as sharp as humans; if we did, presumably those creatures would be trying to convince us that they too should have legal personhood. Imagine that conversation.

My gist here is that without the AI being in the box, as it were, you would seem hard-pressed to be strenuously seeking legal personhood for a robot. You could dubiously make such an argument if you wish, I’m just suggesting this would seem on precarious grounds.

As an aside, some of you are probably hollering aloud right now that AI as I am couching things is not what you consider AI to be. I grant you that. For example, some would say that AI is a large umbrella that encompasses a wide array of technologies, including robots. Ergo, AI can be defined as a field of study that includes machine learning, knowledge-based systems, natural language processing, robots, and so on.

The way that I am handily using “AI” in this discussion is by suggesting that it is the artificial intelligence or cognitive underpinning of machines. I am then tossing robots into the broad category of machines. Not everyone will like my doing so, but I believe that now that I’ve explained the matter, we can at least sufficiently proceed ahead with this particular discourse on an eye-to-eye basis.

If nothing else, this once again highlights the thorny nature of figuring out legal personhood.

Moving on, we assuredly can have machines that are not autonomous, we can also have machines that are semi-autonomous, and we can have fully autonomous machines too. Can any of those be anointed with legal personhood? Can all of them? Can only the fully autonomous be so dubbed with the classification since it is closer to human-like thinking capacities than the others?

This also brings up another related twist that might get your mind roiling.

First, as you know, nowadays you can interact with a conversational AI system such as Alexa or Siri. This is usually done via a microphone and an audio speaker. You might have a specialized device for this purpose or possibly simply use your smartphone. The guts of the conversational AI system are usually in the cloud, though to some degree an app on your smartphone is likely to also be participating in the AI activities.

The AI that does the “thinking” can be on your smartphone, in the cloud, or embedded within computers of all manner of size and shape. If we are going to consider providing legal personhood to AI, what is the physical thing that will be getting the personhood nod?

In other words, could your smartphone be the legal personhood holder if the AI was running on the computer chips within it? If the cloud was running the AI, would you instead appoint the cloud servers to be the legal personhood holder?

That’s a conundrum, for sure.

Here’s what some are mistakenly doing, either by design or by lack of awareness. You see, it is much easier to try and pretend that we are only envisioning walking-talking robots as potentially garnering legal personhood. Doing so tosses aside the entire conundrum about the physical form aspects. Apparently, if there is a robot that walks and talks, and looks a lot like humans, it is the only viable candidate for legal personhood (the proverbial walks like a duck, quacks like a duck mantra).

That doesn’t really make much sense and we are going to have to grapple with the machine-oriented physical form variations, sooner or later.

I’ll add more fuel to the fire.

Even if we decide that the physical form can vary dramatically, and if we instead focus on the AI “thinking” aspects as the cornerstone of the legal personhood requirements, we are still in a problematic pickle. How much AI is enough to land in the legal personhood sphere? For example, an AI program that can best all of the world’s top chess champions doesn’t seem a proper choice for legal personhood on that singularly fascinating feat alone. That kind of AI is typically extremely narrow and constructed solely to play chess. It cannot do anything else; it cannot aid in solving world hunger, and it cannot even play the game of checkers.

Perhaps you can now see the head-scratching challenges facing the notion of fitting AI or robots into the revered legal personhood attribution.

Besides the numerous and confounding technological questions, we should enlarge our scope to puzzle over the ethical ramifications too. This is all part of the rising tide of Ethical AI, sometimes also referred to as the ethics of AI, see my analyses at the link here and the link here, just to name a few.

Pretend for a moment that we magically and amazingly are able to nail down the technological aspects (that’s a tall order), and we really do have AI or robots that we societally agree are deserving of legal personhood.

Does that settle the matter?

Heck no, it only gets us further entrenched into the rest of the morass. We still need to ascertain how the AI or robot legal personhood aligns with or differs from human legal personhood. Are those semblances of legal personhood to be completely equal? Seems doubtful, but we’ll have to wait and see. If they are unequal, what do humans get with their legal personhood that the AI or robots do not? Likewise, is there anything that the AI or robots get with their legal personhood that humans do not? Do any of those differences give rise to potential discriminatory practices, impacting the human side or even possibly the AI or robot side of things?

Stew on that for a little bit and you’ll undoubtedly get your mind racing and feel that your noggin has been wrapped into a pretzel.

Ethics and Ethical AI play a gargantuan role in these considerations. We certainly should not rely only upon the AI techies to figure this out, nor should we necessarily rely solely on the legal beagles. Whenever the law gravitates toward the legal personhood riddle, there are bound to be ethical complications. In his famous 1850 book The Law, Frédéric Bastiat proffers this astute insight: “When law and morality contradict each other, the citizen has the cruel alternative of either losing his moral sense or losing his respect for the law.”

There are seemingly fifty good reasons to not grant legal personhood to AI or robots, and probably fifty reasons to claim that we should be heading in that direction (the choosing of the number fifty is a bit of tongue in cheek, perhaps modeled after the popularity of knowing that there are apparently fifty shades of grey).

I’ve covered many of the identified reasons for and against the legal personhood of AI or robots in my prior columns, see the link here. For today, let’s tackle one particular reason that is not usually given much airtime. This is a less-discussed reason to not grant legal personhood to AI or robots, taking us knee-deep into the Ethical AI arena.

Are you ready?

Suppose that AI or robots do have legal personhood. A plausible consequence could be that humans would point fingers at the AI, tending to hide behind or shift attention to the AI rather than more rightfully acknowledging the human culpability underpinning a grievous AI-related circumstance that has gone awry.

That’s a mouthful. Time to unpack it.

We’ll use a quick and easy scenario to highlight the issues involved.

An AI developer has crafted an AI system to do some actions that are generally useful to humans. The AI seems to be working fine and the people who use it daily are satisfied. One day, a person interacting with the AI gets injured via the AI system.

Who is legally responsible for what the AI did?

I’ve discussed at length the impending “AI blame game” that we are already veering toward, see my coverage at this link here. You could say that the AI developer ought to be responsible. As the developer that programmed the AI, we should hold that person’s feet to the fire. Or maybe the company that employs the AI developer ought to be on the hook. The company opted to hire the AI developer and put them to work, and furthermore, the company unleashed the AI onto the public. All in all, the company is perhaps where we should be setting our legally responsible sights.

Sometimes the developers and others behind-the-scenes are quick to try and blame the AI. They are fast to emphasize that it was the AI that caused the injury. Do not look at the AI developer. Do not look at the company that hired the AI developer, even though the company oversaw the crafting of the AI and put the AI into active use. No, definitely don’t look there, we are supposed to instead intently gaze at the AI.

Magicians do this, it is called theatrical misdirection.

For AI systems, the attempt to shift the focus and the blame is kind of devilishly clever when you think about it. The public at large is intimately familiar with machines that don’t work or errantly work. People oftentimes take their anger out on the machine and do not think about all of those humans that put the machine together and put the machine into their hands. How many times have you heard someone use the excuse that the darned misbehaving computer did this or that?

We all share that same frustration and simply shrug our shoulders.

This vividly showcases that people can be tricked into becoming focused on the AI. Their scrutiny is diverted away from the humans who devised and fielded the AI. The thing is, in today’s world, there isn’t any legal recourse to somehow go after the AI itself when you’ve been wronged by an AI system. If you decide to pursue just compensation for the injury by going after the AI, please realize that you aren’t going to be able to compel the AI to walk, crawl, or somehow enter a courtroom to face the truth-piercing scrutiny of justice.

Instead, in today’s world, you could potentially pursue the hope for justice by going after the humans that underlie the AI system, including specific humans or possibly the company or companies that pertain to the AI system. They are traditionally within the sphere of being held legally responsible or legally accountable.

That being said, remember that we are now pretending that we are going to provide legal personhood to AI or robots. If so, the scenario that I have sketched is going to change radically. The facts of the scenario stay the same, but the pointing of fingers is going to change demonstrably in legal terms.

Presumably, an AI that has legal personhood can be pursued for legal redress.

You could try suing the AI directly for compensation regarding your injuries. Besides civil legal actions, you could possibly have the government seek criminal charges against the AI. For those of you with a curiosity about this sort of thing, you might take a moment to consider how the AI would be criminally imprisoned or otherwise judicially punished for its crime, assuming that the AI or robot was found guilty (and was not able to overturn the verdict on appeal).

This determined pursuit of the AI or robot as to its legal responsibility due to having legal personhood seems perhaps judiciously satisfying. If it was the AI’s fault, and since it has the legal duty of care presumably imposed via having legal personhood, the AI ought to pay the consequences for its wayward acts. No need to waste time going after any humans that were involved in the crafting of the AI. Set your eyes on the AI and drag that AI into court.

I assure you that coping with AI that has legal personhood and seeking redress from that AI is going to be a rat’s nest of legal challenges. I realize you might snicker at this next comment, but we would almost decidedly have lawyers that would be asked to represent the AI or robot during any justice-seeking endeavors. One would assume that legal personhood should also have adequate legal representation.

Again, for a bit of an offshoot of mind-expanding considerations, if we can achieve AI-based legal reasoning sufficiently to perform the work of human lawyers, would we therefore possibly have AI that represents the AI that caused the injury? That’s a side tangent for you to ponder.

Getting back to the matter of the AI or robot having legal personhood and carrying the legal responsibility accordingly, suppose that the AI wasn’t truly at fault. Nonetheless, the humans that perhaps crafted the faulty AI could hide behind the AI’s newly garnered legal personhood. Whereas today this kind of hiding is not going to get you very far, the gambit becomes much more viable if AI has legal personhood. At that juncture, the expectation is that the AI will be held legally accountable.

Researchers at the University of Oxford stated the matter this way: “Attributing electronic personhood to robots risks misplacing moral responsibility, causal accountability, and legal liability regarding their mistakes and misuses. Robots could be blamed and punished instead of humans. And irresponsible people would dismiss the need for care in the engineering, marketing and use of robots” (by co-authors Luciano Floridi and Mariarosaria Taddeo, “Don’t Grant Robots Legal Personhood” in Nature, 2018).

There are lots of variations concerning how this blame-shifting might take place.

Someone might truly know that they were the perpetrators of the misconduct but opt to use the AI as a potential legal and ethical shield to protect their dirty deeds. You can’t do so today because the AI of today does not have legal personhood. Once we allow legal personhood for AI or robots, the door is opened for this kind of shield-bearing shenanigans. Is it a Pandora’s box that is being opened?

Another possibility is that someone is unsure of whether they were the source of the misconduct, but they go along with the idea that it probably is the AI. This is an instance where the human is not trying to overtly pull a fast one. They are simply going to wait and see what happens when the pursuit of the AI or robot reveals whether the AI or robot is in fact the legally accountable source or not. Might true justice be delayed or obscured by this?

As you can plainly see, the AI or robot will be a juicy scapegoat.

A fiendish avenue also then becomes feasible. An evildoer embeds something untoward into an AI or robot system and then waits to see the desired adverse outcome. When the AI or robot causes injury or possibly fatalities, the AI or robot is the legal focus of the inquiries. It is conceivable that the wrongdoers might have time to escape or that they will never even be pursued due to an assumption or claim that it was only the AI or robot that was the perpetrator of the malicious act.

Fiendish!

Suppose evildoers merely tricked the AI or robot into performing a monstrous act. There might be little if any indication that the AI or robot was swayed in that direction. Once again, our attention is first and foremost going to be on the AI, since it has legal personhood as its banner.

Without going wildly overboard, imagine that the fiends were so sneaky that they planted evidence into the AI or robot that would cause a computer-focused forensic analysis by human forensic experts to incorrectly surmise that the problem was within the AI or robot and not by some other external means. I realize this seems like a plot from the ever-classic Columbo series, but the insidious schemes of evildoers should never be underestimated (Columbo abundantly taught us that).

Before we go into the stratosphere on this, there is the other side of the coin that you can equally consider. Suppose the AI is so clever that it can make things seem as though humans were legally responsible, trying to shift the legal burden away from itself. If that seems preposterous, remember that we are pretending that AI is good enough to garner legal personhood. We might reasonably expect that anything or anyone granted legal personhood is likely also clever enough to want to slide out of legal responsibility if feasible to do so, especially assuming that there are legal repercussions for getting caught.

The usual retort is that we should program all AI to strictly uphold its legal accountability and not weasel out of being legally ensnared. Well, all I can say is good luck with that.

Round and round we go.

Overall, we can recap that AI legal personhood might be:

1) No legal personhood for AI (today’s prevailing circumstance), or

2) Legal personhood for AI (with full legal accountability), or

3) A variant of legal personhood for AI, such as limited or no legal responsibility

Go ahead and vote for the choice of AI legal personhood that you think is best. Come back to this in five years, ten years, and fifty years, and see if your preferences have changed.

I realize this has been a somewhat heady examination of the topic and you might be hankering for some day-to-day examples. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the AI legal personhood topic, and if so, what does this inform us about?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
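The levels described above form a simple taxonomy, and for readers who think in code, it can be sketched out. This is a minimal illustrative sketch based on the commonly cited SAE J3016 level numbering; the enum and function names here are my own shorthand, not from any automotive library:

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    """Simplified driving automation levels (per SAE J3016 numbering)."""
    NO_AUTOMATION = 0      # human does all of the driving
    DRIVER_ASSISTANCE = 1  # a single assist feature, e.g., adaptive cruise control
    PARTIAL = 2            # ADAS co-shares the task; human must stay attentive
    CONDITIONAL = 3        # system drives in some conditions; human on standby
    HIGH = 4               # true self-driving within a limited operational domain
    FULL = 5               # true self-driving everywhere, no human provision

def requires_human_driver(level: DrivingLevel) -> bool:
    # Levels 0-3 keep a human in the driving loop (semi-autonomous at best);
    # Levels 4-5 are the "true self-driving" cars discussed in this column.
    return level <= DrivingLevel.CONDITIONAL

print(requires_human_driver(DrivingLevel.PARTIAL))  # True
print(requires_human_driver(DrivingLevel.HIGH))     # False
```

The dividing line in the code mirrors the dividing line in the discussion: everything at or below Level 3 still legally needs a human driver, which is exactly why the legal personhood question bites hardest at Levels 4 and 5.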

There is not yet a true self-driving car at Level 5. We don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately. Despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Legal Personhood

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and ethical AI questions entailing the AI legal personhood conundrum.

Let’s use a straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you gave special attention to each time you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason they might notice the autonomous vehicles right now is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars obey all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale. One day, suppose a self-driving car in your town or city suddenly rams into a pedestrian who was crossing the street. Sadly, the pedestrian was seriously injured (fortunately, they will recover to full health).

Everyone is up in arms over the self-driving car colliding with a pedestrian. Outrageous! A danger to society. A menace on our peaceful streets.

Who is to blame for this reprehensible act?

If this happened today (and indeed similar cases have occurred), the knee-jerk reaction might be to blame the AI. The AI did it. No other explanation is possible. Unless a remote human operator was somehow intervening with the driving controls, the culprit has to be the AI driving system. Case closed.

But wait a second; remember that the AI of today does not have legal personhood. You can certainly argue that the AI driving system is likely the root of what caused the collision. You can dig into the AI system and try to trace what happened as per the programming of the AI. All of that will aid in revealing what the AI did and did not do, which presumably led to the collision.

Such a detailed technological inspection and review will be essential fodder for going after the AI developers, the automaker, the self-driving car systems builders, the fleet operators, and other human or human-based companies that had a hand in the self-driving car. We might also see a legal pursuit of the city or governmental body legally responsible for the stretch of road on which the incident occurred.

You are not going to see anyone seriously try to legally pursue the AI per se.

With that handy exemplar in hand, shift your mindset to the future. The future might consist of assigning legal personhood to various AI systems. Pretend that the AI driving system of this brand of self-driving cars had been legally declared as officially embodying legal personhood.

The AI is now fair game for the legal accountability search mission.

Let’s revisit the discussion earlier about ways in which AI might be utilized as a scapegoat, either by intention or by happenstance.

We have these types of possibilities:

  • Doer of misconduct hiding behind the AI as a shield for legal misdirection
  • Potential misconduct by a human but waiting to see how the AI first fares
  • Evildoer that implanted evil into the AI and hopes no one will be the wiser
  • Wrongdoer that sneakily tricked the AI into performing an untoward act
  • Bad doer that directly did the foul deed (not by the AI) while making AI the fall guy
  • Etc.

We can briefly apply each of those scenarios to the imagined incident of the self-driving car that has struck a pedestrian (we are assuming too that the pedestrian was not at fault or might only be partially at fault, thus the AI is still on the hook).

Here’s how it might go:

  • Doer of misconduct hiding behind the AI as a shield for legal misdirection

An AI developer that crafted the AI driving system looks at their code and realizes that they mistakenly neglected to include a code segment that would have prevented the collision from occurring. Yikes! The AI developer doesn’t want to lose their job. Their career is on the line.

When asked what happened, the AI developer explicitly states that it must be the fault of the AI and the AI ought to take the fall.

  • Potential misconduct by a human but waiting to see how the AI first fares

The company that is operating the fleet of self-driving cars is unsure whether they might have caused the collision or whether the AI did it. There is a chance that poor maintenance of the autonomous vehicle could account for why the collision occurred. On the other hand, the maintenance records seem to show that the proper upkeep was taking place.

The top executives of the firm agree to wait and see whether the AI gets legally pinned for the collision.

  • Evildoer that implanted evil into the AI and hopes no one will be the wiser

A hacker hitman was hired to injure a specifically targeted person that lived in the town and was known for frequently walking from their office to a nearby diner at lunchtime. The hacker used various cyber breaching tools to get a snippet of code into the AI driving system that would detect the specific person and attempt to ram into them.

Sure enough, by routing the self-driving car to the known location and at the right time of day that the target routinely crossed the street, the evildoer code insertion directed the AI driving system to strike the person upon detecting their presence. It will be extremely hard for anyone to figure out that the AI evildoing took place since the snippet of code erased itself immediately after the mission was accomplished.
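One standard defense against this kind of covert tampering is integrity verification of the deployed software. Here is a minimal, hypothetical Python sketch of the idea: at install time, known-good digests of each software module are recorded in a manifest, and at startup the vehicle re-hashes what is actually deployed and flags any mismatch. The module names and contents below are purely illustrative, not drawn from any actual AI driving system (and in a real vehicle the manifest itself would be cryptographically signed).

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of a module's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_modules(deployed: dict, trusted: dict) -> list:
    """Return the names of modules whose contents no longer match the
    trusted manifest -- an injected snippet of code would show up here."""
    return [name for name, data in deployed.items()
            if digest(data) != trusted.get(name)]

# At install time, record the known-good digests (hypothetical module).
original = {"planner": b"plan_route(); obey_speed_limits();"}
manifest = {name: digest(data) for name, data in original.items()}

# Later, an attacker covertly appends a malicious snippet to the deployed copy.
tampered = {"planner": b"plan_route(); obey_speed_limits(); evil_snippet();"}

print(verify_modules(tampered, manifest))  # -> ['planner']
```

Of course, as the scenario illustrates, a check like this only helps if it runs before the injected code gets a chance to erase itself, which is exactly why such attacks are so hard to reconstruct after the fact.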

  • Wrongdoer that sneakily tricked the AI into performing an untoward act

A jilted lover of the pedestrian had decided that some revenge was warranted. They stood at the corner where the target routinely crossed the street just at sunset after work was over. Holding a well-polished mirror, the wrongdoer angled the mirror to relay the remaining sunlight into the cameras on the self-driving car. For a brief instant, the AI driving system was not getting needed sensory data and inadvertently plowed into the pedestrian.

You could say that the wrongdoer tricked the AI by exploiting a known weakness of this particular AI driving system and its set of electronic sensors.
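A defensively designed driving system would treat a blinded camera as a reason to fail safe rather than to keep driving on stale data. The following is a simplified, hypothetical Python sketch of that idea; the threshold values, the flat-list frame representation, and the radar fallback are illustrative assumptions, not the behavior of any particular AI driving system.

```python
def camera_frame_usable(frame, saturation_limit=0.9):
    """Treat a frame as unusable if nearly all pixels are blown out,
    as would happen under a mirror-glare attack. `frame` is a flat
    list of pixel intensities in [0, 1] (a deliberate simplification)."""
    saturated = sum(1 for p in frame if p >= 0.98)
    return saturated / len(frame) < saturation_limit

def choose_action(frame, radar_clear):
    """Fail safe: if the camera is blinded, fall back to radar; if the
    radar cannot confirm a clear path either, brake immediately."""
    if camera_frame_usable(frame):
        return "normal_driving"
    return "continue_cautiously" if radar_clear else "emergency_brake"

blinded = [1.0] * 100  # every pixel saturated by reflected sunlight
print(choose_action(blinded, radar_clear=False))  # -> emergency_brake
```

The design point is sensor redundancy: a brief glare burst should degrade the system to a cautious fallback, not leave it acting on whatever the last valid frame showed.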

  • Bad doer that directly did the foul deed (not by the AI) while making AI the fall guy

A malcontent was standing next to the pedestrian. They did not know each other. Nonetheless, the troublemaker did not like the person. When a self-driving car came down the street, the bad doer pushed the person onto the street. The AI driving system could not stop the autonomous vehicle in time.

Bam, the self-driving car struck the pedestrian and produced the corresponding injuries. The bad doer sprinted away to avoid being spotted and figured that the AI would be the fall guy for the incident.

Conclusion

For any of you who might quibble with the admittedly contrived examples, I believe they are within the bounds of being reasonably possible. I realize that many in the self-driving car industry insist that, for example, no one will ever be able to cyber breach a self-driving car, but this is wishful thinking. You can bet your bottom dollar on cybercriminals and black hat actors striking the myriad of AI driving systems and self-driving cars. It is only a matter of time until enough self-driving cars are on the roadways to make hacking them profitable or otherwise worthwhile. See my further analysis on this at the link here.

The primary overarching point is that we need to carefully consider whether we want to change our laws to include AI or robots as being able to possess legal personhood. This is a humongous question. Legal entanglements of immense proportions will ensue. Ethical entanglements of immense proportions will ensue.

The acclaimed Roman statesman and lawyer Marcus Tullius Cicero famously uttered this sage wisdom: “The safety of the people shall be the highest law.” We must ask ourselves, are we going to be safer or less safe if we opt to assign legal personhood to AI or robots? The good news is that you get to decide, since the matter is still being debated and the die has not yet been cast.

May the laws and our ethics do us well.
