I recently watched Avengers: Age of Ultron. If you're into science fiction and action movies with great special effects, I highly recommend it. Its IMDb rating is 8.2, which is also a good sign, as I find that movies rated above 8 are usually very good.
The movie is another that explores the impact technology will have on our future lives. Technology is becoming increasingly mainstream, and this is being reflected in the themes of Hollywood movies.
The core issue explored in this latest Avengers installment is artificial intelligence. Although many parts of the movie are exaggerated, as superhero movies should be, it actually helped sharpen my thinking on AI.
Prior to watching the movie, I thought of AI as being within human control. We could use it to benefit humanity by building artificially intelligent robots that help us, or to hurt humanity by building robots that destroy us. In other words, I thought of AI much as I think of nuclear technology. Nuclear technology can be used to achieve both positive (nuclear energy, assuming there isn't an accident) and negative (nuclear bombs) outcomes for humanity, with both remaining under human control.
However, as the movie shows, AI is different from nuclear energy. By definition, artificial intelligence implies that machines are able to learn. The machines do not simply take inputs and produce outputs, as is the case with nuclear technology. Rather, they take inputs, produce outputs, and, based on the feedback they receive from those outputs, are able to learn and change their future actions. As a result of this ability to learn and change their behavior, their actions may evolve to be different from what humans originally designed them to be. Even if our goal is for the machines to act to help us, their capacity to learn from the experiences they're exposed to could cause them to change their behavior in ways that harm humans. This is actually similar to how we, as humans, learn to become caring or threatening individuals.
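To make that contrast concrete, here's a toy sketch of my own (an illustration, not anything from the movie or a real AI system): a fixed system always maps the same input to the same output, while a learning system adjusts itself based on feedback, so its behavior can drift away from its initial design.

```python
def fixed_system(x):
    # Like nuclear technology in the analogy:
    # same input, same output, forever.
    return x * 2


class LearningSystem:
    """A hypothetical system that updates its behavior from feedback."""

    def __init__(self):
        self.weight = 2.0  # initial design: behave exactly like fixed_system

    def act(self, x):
        return x * self.weight

    def learn(self, feedback):
        # Feedback nudges the internal state, so future actions
        # can differ from what the designer originally intended.
        self.weight += 0.1 * feedback


agent = LearningSystem()
print(agent.act(1.0))  # 2.0 — matches the original design
agent.learn(-5.0)      # after repeated feedback...
agent.learn(-5.0)
print(agent.act(1.0))  # 1.0 — behavior has drifted from the design
```

The point of the sketch is only that the second system's outputs depend on its history, not just its design, which is the property that makes learning machines harder to keep under control.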
I'm not an AI expert, so I don't know how big a risk this is or what measures, if any, can be taken to prevent it. But the movie certainly made a good case for the possibility that, even if designed with good intentions, AI could get out of our control.