Go is an ancient board game that originated in China roughly 4,500 years ago. Like Chess, it is still widely played today: in a 2016 survey, more than 46 million people around the world knew how to play Go, and over 20 million played it regularly.[1]
The board is a 19-by-19 grid. Two players, one with black stones and the other with white, take turns placing stones on the intersections, each trying to surround territory. Whoever controls more of the board wins.
Seems simple, right? But here's the tricky part: the number of legal board positions in Go (roughly 10^170) is far greater than the number of atoms in the observable universe (an estimated 10^80). That scale of complexity is what sets Go apart from other board games.

Lee Sedol is widely accepted in the Go community as one of the greatest players of all time. A national icon in South Korea, he’s often compared to sports legends like Michael Jordan or Roger Federer, boasting an impressive record of 18 international and 32 national titles. His strategic brilliance and exceptional skills make him a revered figure in the world of Go.
In 2016, Google’s DeepMind extended a unique challenge to Lee Sedol, one that drew immense attention from both Go enthusiasts and those closely tracking the progress of artificial intelligence. This was no ordinary match; it was a historic five-game series between Lee Sedol and DeepMind’s AI program, AlphaGo, pitting human intellect against machine learning and making the contest a focal point at the intersection of technology and traditional gaming.
Before jumping to the result of the matchup, let’s pause and talk about AlphaGo. After all, we are here to learn something about deep learning, right?
AlphaGo is an artificial intelligence program built on deep learning. In a deep learning model, training takes place through artificial neural networks; these deep, many-layered networks let the system learn patterns and make decisions in a way that loosely mirrors human intuition. AlphaGo’s training combined supervised learning, where it learned from labeled datasets of expert games, and reinforcement learning, which allowed it to refine its strategies through trial and error. To grasp the…
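To make those two phases a bit more concrete, here is a minimal, illustrative sketch in PyTorch. It is not AlphaGo’s actual architecture or training pipeline: the tiny 3x3 board, the randomly generated “expert” data, the reward function, and the network sizes are all placeholder assumptions, chosen only to show the shape of supervised pretraining followed by reinforcement learning through trial and error.

```python
# A toy sketch of the two training phases described above: supervised learning
# on "expert" moves, followed by reinforcement learning through trial and error.
# This is NOT AlphaGo's real architecture or pipeline. The 3x3 board, the random
# "expert" data, the reward function, and the network sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD_CELLS = 3 * 3  # a toy 3x3 board instead of Go's 19x19

# A small policy network: board state in, one score (logit) per possible move out.
policy = nn.Sequential(
    nn.Linear(BOARD_CELLS, 64),
    nn.ReLU(),
    nn.Linear(64, BOARD_CELLS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Phase 1: supervised learning from labeled positions ---
# Placeholder data: random board states paired with random "expert" moves.
boards = torch.randn(256, BOARD_CELLS)
expert_moves = torch.randint(0, BOARD_CELLS, (256,))

for _ in range(100):
    logits = policy(boards)
    loss = F.cross_entropy(logits, expert_moves)  # imitate the labeled moves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# --- Phase 2: reinforcement learning (REINFORCE) ---
# The policy refines itself by sampling moves and reinforcing the ones that
# earn reward. The reward here is a dummy: +1 for playing the centre cell.
def play_and_get_reward(move: int) -> float:
    return 1.0 if move == BOARD_CELLS // 2 else 0.0

for _ in range(200):
    board = torch.randn(1, BOARD_CELLS)
    dist = torch.distributions.Categorical(logits=policy(board))
    move = dist.sample()                          # trial: sample a move
    reward = play_and_get_reward(move.item())     # error signal from the "game"
    loss = -(dist.log_prob(move) * reward).sum()  # reinforce rewarded moves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the real system, the supervised phase learned from positions taken from expert human games, and the reinforcement phase improved the policy through self-play rather than a hand-written reward.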