AI and the Future of Ethics

As soon as it works, no one calls it AI anymore.

– John McCarthy

As research into artificial intelligence grows, AI is becoming increasingly prevalent in our daily lives. But can we control it? If AI commits a crime, can we punish it? In this piece, we will explore these questions and the ethical systems that are likely to increase our chances of building benevolent AI.

But what exactly is AI, anyway?

John McCarthy coined the term “artificial intelligence” in 1956. He defined it as “the science and engineering of making intelligent machines.” If that sounds broad and vague, that’s because it is. Under this definition, all the software we use today could be considered AI, depending on how we define “intelligent.” Unfortunately, our culture has taken full advantage of this ambiguity: if a company puts AI in its product description, sales and valuations increase, even when the “AI” is just ordinary analytics.

So, before we can consider AI ethics, it’s important to narrow our definition of AI.

3 Flavors of Artificial Intelligence

  1. Artificial narrow intelligence (ANI) does one thing and one thing only. ANI has defeated our very best at chess (1997), Jeopardy! (2011), Go (2016), and poker (2017). But these game-playing algorithms wouldn’t have any idea how to design a house.
  2. Artificial general intelligence (AGI) is able to do everything we can at the same level that we can. This includes all of our mental abilities, such as reasoning, planning, creativity, and learning from the past.
  3. Artificial superintelligence (ASI) is superior to human intellect in every conceivable way. In the same way an ant is unable to comprehend how or why we build cars, we will be unable to comprehend the thoughts of this level of AI.

Now we’re on the same page about the types of AI. But when, if at all, will we reach superintelligence? In truth, even present-day AI is incredibly nascent. We have only achieved narrow AI, so it’s difficult to speak accurately about AI performance in the long term. Experts in the field disagree on AI’s future capabilities. Some predict that AI will stay solidly within human control, while others believe it will become unfathomably smarter than all of humanity within hours of reaching human-level intelligence.

For the general idea, though, here’s the quick version.

4 Schools of Thought on Achieving ASI

  1. It is happening soon, because intelligence is growing along an exponential curve.
  2. It is nowhere near happening, because the remaining problems are also becoming exponentially harder.
  3. Neither of the above groups can reasonably justify their certainty; it very well could happen extremely quickly or be a long way off.
  4. It will never be achieved.

Okay, that’s nice and all, but let’s see some numbers. At a 2013 artificial general intelligence (AGI) conference, author James Barrat surveyed attendees on when they expected AGI to arrive. Responses varied widely:

  • By 2030: 42%
  • By 2050: 25%
  • By 2100: 20%
  • After 2100: 10%
  • Never: 2%

Now we broadly know what AI is and where the top researchers stand on its progress. Let’s turn to the ethical implications of AI. If and when AI is more intelligent than us, we won’t be able to control it anymore. The best we can hope for is that we will have imbued it with the right ethical ideals, so that it wants to advance our good. Do our old ethical standards work? In short, no: they have grown less effective as technology has progressed, and that’s before even considering AI. The first piece in this series explains why.

What kind of ethics could apply to AI?

First, let’s consider the agency AI can have. The philosopher James Moor, known for his work on computer ethics, argues that AI could be one of three types of ethical agent:

  • Implicit: AI has ethical constraints programmed into it.
  • Explicit: AI weighs inputs in a given ethical framework to choose an action.
  • Full: AI makes ethical judgments and defends its reasoning.
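To make the first two of Moor’s categories concrete, here is a minimal Python sketch (all names and numbers are hypothetical, for illustration only): an implicit agent has its ethical constraint hard-coded and never reasons about it, while an explicit agent scores candidate actions under a stated framework (here, a crude utilitarian benefit-minus-harm tally) and picks the best one.

```python
# Implicit agent: the constraint is baked in; the system cannot violate it
# and never "reasons" about it.
def implicit_agent_speed(speed_limit: int, requested_speed: int) -> int:
    """An autopilot that simply cannot exceed the speed limit."""
    return min(requested_speed, speed_limit)


# Explicit agent: candidate actions are weighed under a given ethical
# framework (a toy utilitarian score) and the highest-scoring one is chosen.
def explicit_agent_choose(actions: dict) -> str:
    """Pick the action whose benefit-minus-harm score is highest."""
    def utility(outcome: dict) -> float:
        return outcome["benefit"] - outcome["harm"]
    return max(actions, key=lambda name: utility(actions[name]))


if __name__ == "__main__":
    # The implicit agent caps the request at the hard-coded limit.
    print(implicit_agent_speed(speed_limit=55, requested_speed=70))  # 55

    # The explicit agent compares outcomes and selects "brake"
    # (utility 0.8) over "swerve" (utility -0.1).
    candidates = {
        "brake":  {"benefit": 1.0, "harm": 0.2},
        "swerve": {"benefit": 0.8, "harm": 0.9},
    }
    print(explicit_agent_choose(candidates))  # brake
```

A full ethical agent, by contrast, would also have to justify its choice, which is precisely the capacity no current system has.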

As we know, prominent AI researchers are split on whether AI will ever achieve general intelligence (AGI) or superintelligence (ASI), so it is speculative whether AI could ever hold Moor’s full ethical standing. But for argument’s sake, let’s assume AI will achieve AGI and have full ethical agency.

Once AI can make its own judgments, it will inevitably make choices we disagree with. Because of that, a problem quickly arises.

Can We Punish AI If It Misbehaves?

People feel pain, suffering, happiness, joy, and remorse, which is why a system of rewards and punishments works for us. This is not the case with computers. We can turn off an “immoral” AI and reprogram it, but we can’t serve it justice, because software isn’t alive the way we are. Can we sentence AI to jail time or make it pay a fine? It seems unlikely. This shows the stark difference between human and nonhuman ethics: people can face consequences, but AI really can’t. Once AI achieves AGI or ASI, we will have a hard time controlling it at all, so enforcing responsibility will likely be impossible.

That means the crucial time period is right now, while AI is only narrow (ANI). If we can ingrain ethics now, it may carry over into more advanced AIs. There are no guarantees, but it’s the best chance we’ve got. Ethicists Luciano Floridi and J.W. Sanders approach the problem by putting ANI on the same footing as pets.

How are these similar? People own pets and are responsible for their actions. A dog “can be the cause of a morally charged action, like damaging property.” We can train the dog to behave better, but the owner is still responsible for the cost of the damage. The logic follows: an ANI that causes damage gets retrained, while its owner bears the societal cost. With AI, however, the owners may not always be identifiable. Many different programmers continually edit and improve AI algorithms, which makes ownership tricky. One possible solution, then, is to move away from owner accountability toward accountability for the overall system. This has drawn criticism, however, because it may encourage people to feel unaccountable when creating AIs. No ethical framework is perfect, after all.

AI System Ethics

When considering a systemic approach to moral accountability, it is better to be proactive than reactive. Philosopher Donald Gotterbarn argues that the tech sector should avoid a malpractice model, in which changes are applied only after something has gone wrong. Instead, software companies should rigorously examine their algorithms for ethically dubious behavior before shipping them. This could be instituted through government regulation or industry standards.

Thus, even though moral responsibility within AI systems cannot be fully determined and justice cannot be adequately administered, we can still establish an environment that fosters a particular kind of morality, with known moral shortcomings. It is up to society to decide what morality we want built into AI. In a broader sense, this means programmers bear a greater responsibility to the public. An emphasis on the liberal arts in computer science curricula would help future AI developers understand the implications of their work. Companies could also hire ethicists and philosophers to work alongside engineers and determine which ethical frameworks will serve best as more powerful AI arrives.

Summing Up Our Current Situation

AI innovation is currently outpacing public policy and ethical deliberation. Deliberate philosophical frameworks must therefore be incorporated into the software development process. Embedding ethical frameworks into software companies appears to be the most practical first step, now that AI is beginning to make morally significant decisions without a clear way to assign responsibility.
