What is the bottleneck and weakness of AlphaGo?

Is self-play a bottleneck, in theory, for AlphaGo to improve? My perspective: no! The real problem with AlphaGo (and with any other AI, and with humans) is that the state space of Go is much larger than what its neural network can represent, so no matter how we train it, it still suffers from an underfitting problem. This means there is always some flaw in the value network and policy network: when some cases are trained very well, problems pop up in other cases.
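To put the size mismatch in numbers, here is a rough back-of-the-envelope sketch. The legal-position count is Tromp's published 2016 result (~2.1e170); the parameter count is an order of magnitude I am assuming for illustration, not AlphaGo's actual figure:

```python
board_points = 19 * 19
naive_configs = 3 ** board_points     # each point: empty, black, or white
legal_positions = 2.08e170            # Tromp's count of legal 19x19 positions
network_params = 1e7                  # assumed order of magnitude, not AlphaGo's real size

print(f"naive configurations: {float(naive_configs):.2e}")   # ~1.74e172
print(f"legal positions:      {legal_positions:.2e}")
print(f"network parameters:   {network_params:.2e}")
# Roughly 1e170 states vs 1e7 parameters: the network cannot memorize the
# space, so some states are inevitably fit poorly -- underfitting in this sense.
```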
But in terms of supervised learning vs. unsupervised learning (self-play), they differ in the training set: supervised data biases AlphaGo's neural network toward a certain style and makes it handle certain cases very well. Unsupervised learning can provide all the information that supervised learning can. Imagine the board were 9x9: unsupervised learning alone would absolutely be enough to produce a good training set. So unsupervised learning is not really a bottleneck in theory, but in practice supervised learning biases AlphaGo toward a certain style and gives it a better chance of beating a certain kind of opponent. As the neural network grows large enough to accommodate more states, the value of supervised learning also decreases.
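To make the self-play idea concrete, here is a minimal runnable sketch of self-play data generation. The game is a toy (one-pile Nim) standing in for Go; the point is only that the training set comes entirely from the policy's own play, with no human-style bias baked into the data:

```python
import random

class Nim:
    """Toy stand-in for Go: take 1-3 stones; whoever takes the last one wins."""
    def __init__(self, stones=15):
        self.stones = stones
        self.player = 0

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        self.stones -= n
        winner = self.player if self.stones == 0 else None
        self.player = 1 - self.player
        return winner

def self_play_game(policy):
    """Play one game against itself; label every position with the final winner."""
    game, records, winner = Nim(), [], None
    while winner is None:
        move = policy(game.stones, game.legal_moves())
        records.append((game.stones, game.player, move))
        winner = game.play(move)
    return [(state, player, move, winner) for state, player, move in records]

random_policy = lambda state, moves: random.choice(moves)
data = [row for _ in range(1000) for row in self_play_game(random_policy)]
print(len(data), "self-play training examples")
```

On a game this small, repeated self-play covers the whole state space, which is the analogue of the 9x9 claim above.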
Because of the underfitting problem, the value network may be wrong about who is winning in some states that look simple to a human. This is why AlphaGo uses MCTS to roll out many steps ahead as validation: if, after playing some steps down, the game is still in favor of AlphaGo, the original state is considered truly good. So AlphaGo is really a mixture of "intuition + logic", which is very similar to how humans play.
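A sketch of that mixing, assuming hypothetical `value_net`, `rollout_policy`, and game-state interfaces (`is_terminal`, `play`, `copy`, `outcome`); the blending formula V(s) = (1 − λ)·v(s) + λ·z follows the AlphaGo paper's leaf evaluation, which used λ = 0.5:

```python
def fast_rollout(state, rollout_policy):
    """'Logic': play the position out with a cheap policy; return +1/-1 outcome."""
    while not state.is_terminal():
        state = state.play(rollout_policy(state))
    return state.outcome()

def evaluate_leaf(state, value_net, rollout_policy, lam=0.5):
    """Blend the value network ('intuition') with an actual playout ('logic').

    If the value network misjudges a position that looks simple, playing
    some steps down and checking the real result can correct it."""
    v = value_net(state)                             # learned judgment of the state
    z = fast_rollout(state.copy(), rollout_policy)   # outcome of a fast playout
    return (1 - lam) * v + lam * z
```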
This design makes it very hard to catch AlphaGo's weakness, but the weakness does exist. Based on the analysis above, it is clear to me now: the value network is wrong not only on one state but also on many steps following that state, so the rollout validation cannot correct it either. Although the probability of this is very low, it did happen in Game 4. Brilliant Lee Sedol!