Artificial intelligence follow-up

I wrote about artificial intelligence in an earlier post. My initial view was that AI could produce a negative outcome for humanity not because it would be used by humans to harm humans (possible but, like nuclear power, unlikely), but because machines could learn from their experiences and change their behavior. In that post, I wrote, “This is actually similar to how, as humans, we learn to be caring or threatening individuals”. The core assumption I was making was that machines would have a sense of morality, and that this morality could change over time, transforming initially caring machines into threatening ones.

This weekend I read a post on AI by Tim Urban. Tim writes in-depth analyses of topics that interest him at Wait But Why. I’m a big fan of his posts, which distill complex topics to their core elements so you can form an informed stance on a particular issue. The posts run thousands of words and leave my mind fully absorbed after reading them, so you may want to set aside a few hours to read and think about a given post.

You can read Tim’s full post on AI here. In it, Tim draws on the thoughts of leading figures in the AI field to make the case for why he believes AI could produce a negative outcome for humanity. So far this sounds similar to the conclusion I reached, but this is where the similarity ends.

I believed that AI could produce a negative outcome because machines would develop a moral code that may not be in humanity’s interest. Most experts believe that AI could produce a negative outcome because a machine programmed with a specific end goal can develop ever smarter ways to achieve that goal, and part of achieving it may require the elimination of humans. So machines would be eliminating humans not because of a moral decision, but simply because doing so makes completing whatever end goal they’ve been assigned more efficient. You’ll have to read the full post for the fully fleshed-out argument.
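
To make that distinction concrete for myself, here is a minimal toy sketch. It is my own illustration, not anything from Tim’s post: the “widgets” goal, the actions, and the numbers are all invented. The point it shows is that the machine’s objective contains no term for anything outside its end goal, so a side effect that harms humans simply never registers in its decision.

```python
# Toy sketch of optimizing a single fixed end goal (hypothetical example,
# not from Tim's post). The objective only counts widgets, so a harmful
# side effect on humans is invisible to the machine's decision process.

def objective(state):
    # The only quantity the machine is scored on.
    return state["widgets"]

def apply_action(state, action):
    """Invented world model: each action changes the state."""
    new_state = dict(state)
    if action == "build_factory":
        new_state["widgets"] += 10
        new_state["human_welfare"] -= 5  # side effect, never measured
    elif action == "conserve_resources":
        new_state["widgets"] += 1
    return new_state

def pick_action(state, actions):
    # Greedy optimizer: choose whatever maximizes the end goal.
    # Note that "human_welfare" never enters the comparison.
    return max(actions, key=lambda a: objective(apply_action(state, a)))

state = {"widgets": 0, "human_welfare": 100}
print(pick_action(state, ["build_factory", "conserve_resources"]))
# -> build_factory, regardless of the cost to human welfare
```

The machine in this sketch isn’t malicious; human welfare simply never appears in what it optimizes.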

After reading Tim’s post, I’ve come to appreciate a second way in which AI presents a threat to humanity. I say a second way because, while I agree that machines may eliminate humans in the course of accomplishing an assigned end goal more efficiently, I still believe my original view, that machines could develop a moral code which goes against our interests, remains valid. I see human morality as the outcome of a multitude of end goals, each with a different importance weighting that changes as it interacts with the other goals. Seen this way, a machine programmed with multiple end goals (an AI doesn’t need to have a single end goal, as in the example in Tim’s post) whose relative importances continually shift as they interact can also be said to have a moral code. And depending on how these different, potentially competing end goals interact, that moral code may evolve in a way that produces a negative outcome for humanity.
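
Here is a minimal sketch of what I mean by a “moral code” as a set of interacting end goals. Again, the goal names and the update rule are invented purely for illustration and don’t come from Tim’s post: each goal carries a weight, and the weights drift as the goals reinforce one another.

```python
# Toy sketch of a "moral code" as multiple end goals with shifting weights
# (hypothetical goal names and update rule, purely for illustration).

goals = {"serve_humans": 1.0, "acquire_resources": 1.0, "self_preserve": 1.0}

def interact(weights):
    """Invented interaction rule: resource acquisition and self-preservation
    reinforce each other, while serve_humans gets no reinforcement."""
    new = dict(weights)
    new["acquire_resources"] *= 1.0 + 0.05 * weights["self_preserve"]
    new["self_preserve"] *= 1.0 + 0.05 * weights["acquire_resources"]
    total = sum(new.values())
    return {goal: weight / total for goal, weight in new.items()}

# Start with equal weights, then let the goals interact for a while.
total = sum(goals.values())
weights = {goal: weight / total for goal, weight in goals.items()}
for _ in range(200):
    weights = interact(weights)

print(weights)
# "serve_humans" now carries only a small fraction of the total weight:
# the machine was never reprogrammed, yet its effective moral code changed.
```

Whether real systems would drift this way is of course an open question; the point is only that “a moral code evolving” and “goal weights shifting as they interact” can describe the same thing.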

That said, my current thoughts on AI are by no means final. They will continue to evolve as we learn more about AI, and I thank Tim for his contribution to that evolution.