Artificial Intelligence Learns to Play Video Games
I came across a really interesting article online about Google's DeepMind artificial intelligence and its ability not only to execute tasks but also to learn. It learned to play video games, but there are countless situations to which such technology could be applied in the future.
Of course, artificial intelligence playing a video game is nothing new. If you've ever played a sports or fighting game alone, you've played against artificial intelligence that itself must recognize certain patterns in order to compete with you within the game. The difference here is that this artificial intelligence was not programmed to play one specific game; it taught itself to play 49 of them. It appears from the article that these were pretty rudimentary Atari games, not more complex modern-day ones, but if AI can learn these, it's just a matter of time before it can learn more.
The key that makes this technology applicable to a multitude of scenarios, both in video games and potentially in the real world, is adaptability. A computer can easily be programmed to recognize a fixed set of predefined patterns, but one that can adapt to new situations can recognize patterns it was never explicitly given. And if it can do that, the technology can be applied to the real world, not just a programmed, artificial one. It is apparently not such a far jump from learning to play a few dozen Atari games to a car that can drive itself, interacting with the world and prepared for most or all of the situations it could encounter.
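For the curious, the core idea behind DeepMind's system is reinforcement learning, and specifically Q-learning: the agent tries actions, gets scored by the game, and gradually learns which actions pay off. Below is a toy Python sketch of that update rule. The five-cell "corridor game" and all the numbers are made up purely for illustration; DeepMind's real system replaced this kind of lookup table with a deep neural network reading raw screen pixels.

```python
import random

# Toy environment (hypothetical): a 1-D "game" where the agent walks a
# 5-cell corridor and scores a point for reaching the rightmost cell.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or step right

# Q-table: the estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy is simply "always step right" (+1).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

Nothing in that snippet is specific to one game, and that is the point: swap in a different environment and reward signal, and the same learning loop applies, which is exactly the kind of adaptability that makes the leap from Atari to self-driving cars seem plausible.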