Pessimism of the Intellect, Optimism of the Will
Showing posts with label Go.
Wednesday, May 27, 2020
David Silver on AlphaGo and AlphaZero (AI podcast)
I particularly liked this interview with David Silver on AlphaGo and AlphaZero. I suggest starting around ~35m in if you have some familiarity with the subject. (I listened to this while running hill sprints and found at the end I had it set to 1.4x speed -- YMMV.)
At ~40m Silver discusses the misleading low-dimensional intuition that led many to fear (circa 1980s-1990s) that neural net optimization would be stymied by local minima. (See related discussion: Yann LeCun on Unsupervised Learning.)
At one point Silver notes that the expressiveness of deep nets was never in question (i.e., whether they could encode sufficiently complex high-dimensional functions). The main empirical question was really about the efficiency of training -- once the local minima question is resolved, what remains is more of an engineering issue than a theoretical one.
Silver gives some details of the match with Lee Sedol. He describes the "holes" in AlphaGo's gameplay that would manifest in roughly 1 in 5 games. Silver had predicted before the match, correctly, that AlphaGo might lose one game this way! AlphaZero was partially invented as a way to eliminate these holes, although it was also motivated by the principled goal of de novo learning, without expert examples.
I've commented many times that even with complete access to the internals of AlphaGo, we (humans) still don't know how it plays Go. There is an irreducible complexity to a deep neural net (and to our brain) that resists comprehension even when all the specific details are known. In this case, the computer program (neural net) which plays Go can be written out explicitly, but it has millions of parameters.
Silver says he worked on computer Go for a decade before it finally reached superhuman performance. He notes that Go was of special interest to AI researchers because there was general agreement that a superhuman Go program would truly understand the game, would develop intuition for it. But now that the dust has settled we see that notions like "understanding" and "intuition" are still hidden in (spread throughout?) the high-dimensional space of the network... and perhaps always will be. (From a philosophical perspective this is related to Searle's Chinese Room and other confusions...)
As to whether AlphaGo has deep intuition for Go, whether it can play with creativity, Silver gives examples from the Lee Sedol match in which AlphaGo 1. upended textbook Go theory previously embraced by human experts (perhaps for centuries?), and 2. surprised the human champion by making an aggressive territorial incursion late in the game. In fact, human understanding of both Chess and Go strategy has been advanced considerably via AlphaZero (which performs at a superhuman level in both games).
See also this Manifold interview with John Schulman of OpenAI.
Tuesday, April 03, 2018
AlphaGo documentary
Highly recommended -- covers the matches with European Go Champion Fan Hui and 18-time World Champion Lee Sedol. It conveys the human side of the story, both of the AlphaGo team and of the Go champions who "represented the human species" in yet another (losing) struggle against machine intelligence. Some of the most effective scenes depict how human experts react to (anthropomorphize) the workings of a complex but deterministic algorithm.
Wikipedia: After his fourth-game victory, Lee was overjoyed: "I don't think I've ever felt so good after winning just one game. I remember when I said I will win all or lose just one game in the beginning. ... However, since I won after losing 3 games in a row, I am so happy. I will never exchange this win for anything in the world." ... After the last game, however, Lee was saddened: "I failed. I feel sorry that the match is over and it ended like this. I wanted it to end well." He also confessed that "As a professional Go player, I never want to play this kind of match again. I endured the match because I accepted it."

I wonder how Lee feels now knowing that much stronger programs exist than the version he lost to, 4-1. His victory in game 4 seemed to be largely due to some internal problems with (that version of) AlphaGo. I was told confidentially that the DeepMind researchers had found huge problems with AlphaGo after the Lee Sedol match -- whole lines of play on which it performed poorly. This was partially responsible for the long delay before (an improved version of) AlphaGo reappeared to defeat Ke Jie 3-0, and post a 60-0 record against Go professionals.
Wikipedia: ... Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."

Last year I was on an AI panel with Garry Kasparov, who was defeated by DeepBlue in 1997. (Most people forget that Kasparov won the first match in 1996, 4-2.) Like Lee, Kasparov can still become emotional when talking about his own experience as the champion representing humanity.
In this video interview, Ke Jie says "I think human beings may only beat AlphaGo if we undergo a gene mutation to greatly enlarge our brain capacities..." ;-)
It took another 20 years for human Go play to be surpassed by machines. But the pace of progress is accelerating now...
Wikipedia: In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating world-champion programs, Stockfish, Elmo, and a 3-day version of AlphaGo Zero in each case.

Some time ago DeepMind talked about releasing internals of AlphaGo to help experts explore how it "chunks" the game. Did this ever happen? It might give real insight to scholars of the game who want to "touch the edge of truth of Go" :-)
Wednesday, October 25, 2017
AlphaGo Zero: algorithms over data and compute
AlphaGo Zero was trained entirely through self-play -- no data from human play was used. The resulting program is the strongest Go player ever by a large margin, and is extremely efficient in its use of compute (running on only 4 TPUs).
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.

Rapid progress from a random initial state is rather amazing, but perhaps something we should get used to given that:
1. Deep neural nets are general enough to learn almost any function (i.e., high-dimensional mathematical function), no matter how complex
2. The optimization process is (close to) convex: in high dimensions, gradient descent is rarely trapped by bad local minima (a toy illustration appears below)
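To see these points in action, here is a toy sketch (my own illustration, not DeepMind code; the architecture and hyperparameters are arbitrary assumptions): an over-parameterized two-layer tanh net, randomly initialized, is driven to near-zero training loss on an arbitrary smooth target by plain gradient descent, with no sign of getting stuck.

```python
import numpy as np

# Toy illustration of points 1 and 2 above (my own sketch, not DeepMind
# code): an over-parameterized two-layer tanh net, randomly initialized,
# fit to an arbitrary smooth target by plain gradient descent.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(3 * X)                       # arbitrary smooth target function

H = 128                                 # hidden width (over-parameterized)
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1 / np.sqrt(H), (H, 1))

lr = 0.05
for step in range(5001):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2
    err = pred - y
    # hand-coded gradients of mean squared error for this tiny net
    gW2 = h.T @ err * (2 / len(X))
    gz = err @ W2.T * (1 - h ** 2) * (2 / len(X))
    W2 -= lr * gW2
    W1 -= lr * (X.T @ gz)
    b1 -= lr * gz.sum(0)
    if step % 1000 == 0:
        # loss falls steadily toward ~0: no bad local minima in sight
        print(step, float(np.mean(err ** 2)))
```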
A widely discussed AI mystery: how do human babies manage to learn (language, intuitive physics, theory of mind) so quickly and with relatively limited training data? AlphaGo Zero's impressive results are highly suggestive in this context -- the right algorithms make a huge difference.
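To make "the right algorithms" concrete, here is a minimal sketch of the self-play loop in the spirit of AlphaGo Zero, radically simplified (my own toy construction, not DeepMind's method): a tabular value function for tic-tac-toe stands in for the deep policy/value network, and epsilon-greedy one-ply lookahead stands in for Monte Carlo tree search.

```python
import random
from collections import defaultdict

# Toy self-play learner (my construction, not DeepMind's): a tabular
# value function for tic-tac-toe stands in for the deep net, and
# epsilon-greedy one-ply lookahead stands in for tree search.

V = defaultdict(float)      # board string -> estimated value for 'X'
ALPHA, EPS = 0.2, 0.1       # learning rate and exploration probability

def moves(board):
    return [i for i, c in enumerate(board) if c == ' ']

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play_game():
    board, player, history = [' '] * 9, 'X', []
    while moves(board) and not winner(board):
        if random.random() < EPS:                 # explore
            m = random.choice(moves(board))
        else:                                     # greedy one-ply lookahead
            def after(m):
                b = board[:]; b[m] = player
                return ''.join(b)
            score = lambda m: V[after(m)] if player == 'X' else -V[after(m)]
            m = max(moves(board), key=score)
        board[m] = player
        history.append(''.join(board))
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    return history, 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0

for _ in range(20000):                            # self-play from scratch
    history, z = play_game()
    for s in history:                             # push states toward outcome
        V[s] += ALPHA * (z - V[s])

print('distinct states visited:', len(V))
```

Starting from a table of zeros -- i.e., random play -- the agent bootstraps to competent play from nothing but game outcomes. The same shape of loop, with deep networks and tree search substituted in, is what scales to Go.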
It seems certain that great things are coming in the near future...
Wednesday, May 24, 2017
AI knows best: AlphaGo "like a God"
Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-) Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.
In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?
There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
NYTimes: ... “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”

On earlier encounters with AlphaGo:
... After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.
“AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”
Sunday, January 08, 2017
AlphaGo (BetaGo?) Returns
Rumors over the summer suggested that AlphaGo had some serious problems that needed to be fixed -- i.e., whole lines of play that it pursued poorly, despite its thrashing of one of the world's top players in a highly publicized match. But tuning a neural net is trickier than tuning, for example, an expert system or more explicitly defined algorithm...
AlphaGo (or its successor) has quietly returned, shocking the top players in the world.
Fortune: In a series of unofficial online games, an updated version of Google’s AlphaGo artificial intelligence has compiled a 60-0 record against some of the game’s premier players. Among the defeated, according to the Wall Street Journal, were China’s Ke Jie, reigning world Go champion.

As originally reported in the Wall Street Journal:
The run follows AlphaGo’s defeat of South Korea’s Lee Se-dol in March of 2016, in a more official setting and using a previous version of the program.
The games were played by the computer through online accounts dubbed Magister and Master—names that proved prophetic. As described by the Journal, the AI’s strategies were unconventional and unpredictable, including moves that only revealed their full implications many turns later. That pushed its human opponents into deep reflections that mirror the broader questions posed by computer intelligence.
“AlphaGo has completely subverted the control and judgment of us Go players,” wrote Gu Li, a grandmaster defeated by the program, in an online post. “When you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”
Another Go player, Ali Jabarin, described running into Ke Jie after he had been defeated by the program. According to Jabarin, Jie was “a bit shocked . . . just repeating ‘it’s too strong’.”
WSJ: A mysterious character named “Master” has swept through China, defeating many of the world’s top players in the ancient strategy game of Go.

We are witness to the psychological shock of a species encountering, for the first time, an alien and superior intelligence. See also The Laskers and the Go Master.
Master played with inhuman speed, barely pausing to think. With a wide-eyed cartoon fox as an avatar, Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China. ...
Master revealed itself Wednesday as an updated version of AlphaGo, an artificial-intelligence program designed by the DeepMind unit of Alphabet Inc.’s Google.
AlphaGo made history in March by beating South Korea’s top Go player in four of five games in Seoul. Now, under the guise of a friendly fox, it has defeated the world champion.
It was dramatic theater, and the latest sign that artificial intelligence is peerless in solving complex but defined problems. AI scientists predict computers will increasingly be able to search through thickets of alternatives to find patterns and solutions that elude the human mind.
Master’s arrival has shaken China’s human Go players.
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.” ...
Sunday, January 31, 2016
Deep Neural Nets and Go: AlphaGo beats European champion
I'm surprised that this happened so fast. I guess I need to update some priors :-)
AlphaGo uses two neural nets: one for move selection ("policy") and the other for position evaluation ("value"), combined with Monte Carlo tree search. Its strength is roughly top-1000 among all human players. In a few months it is scheduled to play one of the very best players in the world.
For training they used a 30 million position database of expert games (from the KGS Go Server). I have no intuition as to whether this is enough data to train the policy and value NNs. The quality of these NNs must be relatively good, as the Monte Carlo tree search used was much smaller than the brute-force search DeepBlue required with its hand-crafted evaluation function.
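For readers who want the mechanics: the standard way the two networks plug into the tree search is a PUCT-style selection rule, sketched below as a hedged reconstruction (not DeepMind's exact code). Each tree edge stores a visit count N, a mean backed-up value Q, and a prior P assigned by the policy network; the constant C_PUCT and the example numbers are my own illustrative choices.

```python
import math

# Hedged reconstruction (not DeepMind's exact code) of how policy and
# value networks plug into tree search via a PUCT-style selection rule.
# Each edge stores: N = visit count, Q = mean backed-up value,
# P = prior probability assigned to the move by the policy network.

C_PUCT = 1.5  # exploration constant -- an assumed, illustrative value

def select_move(edges):
    """edges: dict mapping move -> {'N': int, 'Q': float, 'P': float}."""
    total = sum(e['N'] for e in edges.values())
    def puct(e):
        # exploit the value estimate, explore in proportion to the
        # policy prior and in inverse proportion to visits so far
        return e['Q'] + C_PUCT * e['P'] * math.sqrt(total + 1) / (1 + e['N'])
    return max(edges, key=lambda m: puct(edges[m]))

# Illustrative node with three candidate moves (made-up numbers):
node = {'D4':  {'N': 10, 'Q': 0.52, 'P': 0.40},
        'Q16': {'N': 3,  'Q': 0.48, 'P': 0.35},
        'K10': {'N': 0,  'Q': 0.00, 'P': 0.25}}
print(select_move(node))  # the unvisited, high-prior move gets explored
```

The policy net thus focuses the search on plausible moves, while the value net reduces reliance on the random rollouts that earlier Monte Carlo programs depended on.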
Some grandmasters who reviewed AlphaGo's games were impressed by the "humanlike" quality of its play. More discussion: HNN, Reddit.
Mastering the game of Go with deep neural networks and tree search
Nature 529, 484–489 (28 January 2016) doi:10.1038/nature16961
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Schematic representation of the neural network architecture used in AlphaGo. The policy network takes a representation of the board position s as its input, passes it through many convolutional layers with parameters σ (SL policy network) or ρ (RL policy network), and outputs a probability distribution p_σ(a|s) or p_ρ(a|s) over legal moves a, represented by a probability map over the board. The value network similarly uses many convolutional layers with parameters θ, but outputs a scalar value v_θ(s′) that predicts the expected outcome in position s′.
Related News: commenter STS points me to some work showing the equivalence of Deep Learning to the Renormalization Group in physics. See also Quanta magazine. The key aspect of RG here is the identification of important degrees of freedom in the process of coarse graining. These degrees of freedom make up so-called Effective Field Theories in particle physics.
These are the days of miracle and wonder!
Monday, May 26, 2014
The Mystery of Go
Nice article on the progress of computer Go.
See also The Laskers and the Go master: "While the baroque rules of Chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go." (Edward Lasker, International Master and US Chess champion)
WIRED: ... Even in the West, Go has long been a favorite game of mathematicians, physicists, and computer scientists. Einstein played Go during his time at Princeton, as did mathematician John Nash. Seminal computer scientist Alan Turing was a Go aficionado, and while working as a World War II code-breaker, he introduced the game to fellow cryptologist I.J. Good. ... Good gave the game a huge boost in Europe with a 1965 article for New Scientist entitled “The Mystery of Go.”

The current handicap accorded a computer against a professional player is 4 stones. In the story below, world chess champion Emanuel Lasker and Edward Lasker are given 9 stones by a Japanese mathematician (shodan = lowest non-beginner rank, the equivalent of a first-degree black belt).
... Good opens the article by suggesting that Go is inherently superior to all other strategy games, an opinion shared by pretty much every Go player I’ve met. “There is chess in the western world, but Go is incomparably more subtle and intellectual,” says South Korean Lee Sedol, perhaps the greatest living Go player and one of a handful who make over seven figures a year in prize money. Subtlety, of course, is subjective. But the fact is that of all the world’s deterministic perfect information games — tic-tac-toe, chess, checkers, Othello, xiangqi, shogi — Go is the only one in which computers don’t stand a chance against humans.
...
After the match, I ask Coulom when a machine will win without a handicap. “I think maybe ten years,” he says. “But I do not like to make predictions.” His caveat is a wise one. In 2007, Deep Blue’s chief engineer, Feng-Hsiung Hsu, said much the same thing. Hsu also favored alpha-beta search over Monte Carlo techniques in Go programs, speculating that the latter “won’t play a significant role in creating a machine that can top the best human players.”
Even with Monte Carlo, another ten years may prove too optimistic. And while programmers are virtually unanimous in saying computers will eventually top the humans, many in the Go community are skeptical. “The question of whether they’ll get there is an open one,” says Will Lockhart, director of the Go documentary The Surrounding Game. “Those who are familiar with just how strong professionals really are, they’re not so sure.”
According to University of Sydney cognitive scientist and complex systems theorist Michael Harré, professional Go players behave in ways that are incredibly hard to predict. In a recent study, Harré analyzed Go players of various strengths, focusing on the predictability of their moves given a specific local configuration of stones. “The result was totally unexpected,” he says. “Moves became steadily more predictable until players reached near-professional level. But at that point, moves started getting less predictable, and we don’t know why. Our best guess is that information from the rest of the board started influencing decision-making in a unique way.” ...
Mr. Kitabatake one day told us that a Japanese mathematician was going to pass through Berlin on his way to London, and if we wanted to we could play a game with him at the Japanese Club. Dr. Lasker asked him whether he and I could perhaps play a game with him in consultation, and was wondering whether the master – he was a shodan – would give us a handicap.
“Well, of course,” said Mr. Kitabatake.
“How many stones do you think he would give us?" asked Lasker.
“Nine stones, naturally,” replied Mr. Kitabatake.
“Impossible!” said Lasker. “There isn’t a man in the world who can give me nine stones. I have studied the game for a year, and I know I understood what they were doing.”
Mr. Kitabatake only smiled.
“You will see,” he said.
The great day came when we were invited to the Japanese Club and met the master – I remember to this day how impressed I was by his technique – he actually spotted us nine stones, and we consulted on every move, playing very carefully. We were a little disconcerted by the speed with which the master responded to our deepest combinations. He never took more than a fraction of a second. We were beaten so badly at the end that Emanuel Lasker was quite heartbroken. On the way home he told me we must go to Japan and play with the masters there, then we would quickly improve and be able to play them on even terms. I doubted that very strongly, but I agreed that I was going to try to find a way to make the trip.
Saturday, October 24, 2009
Eric Baum: What is Thought?
Last week we had AI researcher and former physicist Eric Baum here as our colloquium speaker. (See here for an 11 minute video of a similar, but shorter, talk he gave at the 2008 Singularity Summit.)
Here's what I wrote about Baum and his book What is Thought back in 2008:
My favorite book on AI is Eric Baum's What is Thought? (Google books version). Baum (former theoretical physicist retooled as computer scientist) notes that evolution has compressed a huge amount of information in the structure of our brains (and genes), a process that AI would have to somehow replicate. A very crude estimate of the amount of computational power used by nature in this process leads to a pessimistic prognosis for AI even if one is willing to extrapolate Moore's Law well into the future. Most naive analyses of AI and computational power only ask what is required to simulate a human brain, but do not ask what is required to evolve one. I would guess that our best hope is to cheat by using what nature has already given us -- emulating the human brain as much as possible.
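Just to show the shape of that crude estimate, here is a back-of-the-envelope version; every number below is a loudly labeled assumption of mine (not Baum's figures), chosen only to indicate orders of magnitude.

```python
# Back-of-the-envelope estimate of the compute "spent" by evolution.
# Every number is an assumed round figure, for illustration only.
years_of_evolution   = 1e9   # assumed span of nervous-system evolution
seconds_per_year     = 3e7
organisms_alive      = 1e15  # assumed average population with neurons
ops_per_organism_sec = 1e6   # assumed average neural "ops" per second

evolution_ops = (years_of_evolution * seconds_per_year
                 * organisms_alive * ops_per_organism_sec)
print(f"~{evolution_ops:.0e} ops")  # ~3e+37, beyond Moore's Law extrapolations
```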
This perspective seems quite obvious now that I have kids -- their rate of learning about the world is clearly enhanced by pre-evolved capabilities. They're not generalized learning engines -- they're optimized to do things like recognize patterns (e.g., faces), use specific concepts (e.g., integers), communicate using language, etc.
What is Thought?
In What Is Thought? Eric Baum proposes a computational explanation of thought. Just as Erwin Schrodinger in his classic 1944 work What Is Life? argued ten years before the discovery of DNA that life must be explainable at a fundamental level by physics and chemistry, Baum contends that the present-day inability of computer science to explain thought and meaning is no reason to doubt there can be such an explanation. Baum argues that the complexity of mind is the outcome of evolution, which has built thought processes that act unlike the standard algorithms of computer science and that to understand the mind we need to understand these thought processes and the evolutionary process that produced them in computational terms.
Baum proposes that underlying mind is a complex but compact program that exploits the underlying structure of the world. He argues further that the mind is essentially programmed by DNA. We learn more rapidly than computer scientists have so far been able to explain because the DNA code has programmed the mind to deal only with meaningful possibilities. Thus the mind understands by exploiting semantics, or meaning, for the purposes of computation; constraints are built in so that although there are myriad possibilities, only a few make sense. Evolution discovered corresponding subroutines or shortcuts to speed up its processes and to construct creatures whose survival depends on making the right choice quickly. Baum argues that the structure and nature of thought, meaning, sensation, and consciousness therefore arise naturally from the evolution of programs that exploit the compact structure of the world.
When I first looked at What is Thought? I was under the impression that Baum's meaning, underlying structure and compact program were defined in terms of algorithmic complexity. However, it's more complicated than that. While Nature is governed by an algorithmically simple program (the Standard Model Hamiltonian can, after all, be written down on a single sheet of paper) a useful evolved program has to run in a reasonable amount of time, under resource (memory, CPU) constraints that Nature itself does not face. Compressible does not imply tractable -- all of physics might reduce to a compact Theory of Everything, but it probably won't be very useful for designing jet airplanes.
Useful programs have to be efficient in many ways -- algorithmically and computationally. So it's not a tautology that Nature is very compressible, therefore there must exist compact (useful) programs that exploit this compressibility. It's important that there are many intermediate levels of compression (i.e., description -- as in quarks vs molecules vs bulk solids vs people), and computationally effective programs to deal with those levels. I'm not sure what measure is used in computer science to encompass both algorithmic and computational complexity. Baum discusses something called minimum description length, but it's not clear to me exactly how the requirement of effective means of computation is formalized. In the language of physicists, Baum's compact (useful) programs are like effective field theories incorporating the relevant degrees of freedom for a certain problem -- they are not only a compressed model of the phenomena, but also allow simple computations.
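For what it's worth, computer science does have a standard candidate that charges for runtime as well as description length: Levin's Kt complexity. A sketch of the contrast with two-part minimum description length (U is a universal machine, t(p) the running time of program p):

```latex
% Two-part MDL scores a hypothesis H for data D by description length alone:
\mathrm{MDL}(D) \;=\; \min_{H} \big[\, L(H) + L(D \mid H) \,\big]

% Levin's Kt complexity also charges (logarithmically) for runtime,
% minimizing over programs p that print x on a universal machine U:
Kt(x) \;=\; \min_{p \,:\, U(p)=x} \big[\, |p| + \log_2 t(p) \,\big]
```

Under Kt, a theory that is compact but intractable to run scores badly, which seems close to what Baum's "useful" programs require.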
Evolution has, using a tremendous amount of computational power, found these programs, and our best hope for AI is to exploit their existence to speed our progress. If Baum is correct, the future may be machine learning guided by human Mechanical Turk workers.
Baum has recently relocated to Berkeley to pursue a startup based on his ideas. (Ah, the excitement! I did the same in 2000 ...) His first project is to develop a world class Go program (no lack of ambition :-), with more practical applications down the road. Best of Luck!
Sunday, April 05, 2009
Theories of games
While visiting Vanderbilt over spring break, I discovered that astrophysicist Bob Scherrer and I share a couple of boyhood interests: science fiction and strategic games. Bob actually writes science fiction, and still plays board games with his kids, whereas I switched long ago to "serious" literature and don't play or design games anymore.
But from age 11 to 14 or so (basically until I hit puberty and discovered girls), I spent every Saturday at the local university simulation gaming club, playing games like Panzergruppe Guderian, Starship Troopers or Dungeons and Dragons with college students and other adults. Bob tells me that I should have saved my collection of these games -- that they'd be very valuable today! Actually, the design of games and rule systems interested me even more than play.
Although I admire the elegance of classical games like Chess and Go, I prefer simulation games. More specifically, I enjoy thinking about the design and structure of these games. A good analogy is the distinction between natural science and mathematics. The former attempts to distill truths about the workings (dynamics) of the natural world, whereas the latter can be admired solely for its abstract beauty and elegance. To me Chess sometimes feels too finite and crystalline. The challenge of formulating a system of rules that captures the key strategic or tactical issues facing, e.g., Stalin, or an infantry platoon, or the ruler of a city state, or even a science postdoc, is just messy enough to be more interesting to me than the study of a finite abstract system. To some extent, every theoretical scientist, economist and financial modeler is participating in a kind of game design -- building a simplified model without throwing away the essential details.
I think role playing games are overly maligned, even among the community of gamers. Under ideal circumstances, role playing games are highly educational, and combine components like story telling, negotiation, diplomacy and team building. At the club I attended one of the "game masters" (I know, it sounds silly!) was a portly older man named Bill Dawkins, who preferred to be called Standing Bear (his Native American name). Standing Bear, though possessed of limited formal education, was widely read and had lived a vast life. He was the most creative story teller and world creator I have known. His ideas were easily as original and interesting as those I had encountered in science fiction and fantasy writing. Each of the role playing campaigns he created, taking place over years, was a masterpiece of imagination and myth building. He attracted scores of players from around the region. I often found myself playing alongside or against people I barely knew, although some came to be close friends.
Wednesday, October 29, 2008
The Laskers and the Go master
My father played Chess, Go and Bridge. I don't know much about the last two, but I recently came across this vignette from Edward Lasker (International Master and US Chess champion) about his and Emanuel Lasker's encounter with a Go master. Emanuel Lasker was world Chess champion for 27 years -- rated among the strongest players of all time -- and a mathematician as well.
Edward Lasker:
Mr. Kitabatake one day told us that a Japanese mathematician was going to pass through Berlin on his way to London, and if we wanted to we could play a game with him at the Japanese Club. Dr. Lasker asked him whether he and I could perhaps play a game with him in consultation, and was wondering whether the master – he was a shodan – would give us a handicap.
“Well, of course,” said Mr. Kitabatake.
“How many stones do you think he would give us?" asked Lasker.
“Nine stones, naturally,” replied Mr. Kitabatake.
“Impossible!” said Lasker. “There isn’t a man in the world who can give me nine stones. I have studied the game for a year, and I know I understood what they were doing.”
Mr. Kitabatake only smiled.
“You will see,” he said.
The great day came when we were invited to the Japanese Club and met the master – I remember to this day how impressed I was by his technique – he actually spotted us nine stones, and we consulted on every move, playing very carefully. We were a little disconcerted by the speed with which the master responded to our deepest combinations. He never took more than a fraction of a second. We were beaten so badly at the end that Emanuel Lasker was quite heartbroken. On the way home he told me we must go to Japan and play with the masters there, then we would quickly improve and be able to play them on even terms. I doubted that very strongly, but I agreed that I was going to try to find a way to make the trip.
Edward Lasker:
While the baroque rules of Chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go.
I'm sympathetic to this point of view. The rules of Go seem to be a natural embodiment of two dimensional notions of encirclement and control of space. They are much simpler and less arbitrary than those of Chess. I can't say anything about the strategic and tactical subtlety of the game, since I don't play, but experts seem to think it is quite deep (certainly it is more challenging for AI than Chess, if only for combinatorial reasons). One problem with Lasker's contention is that Go doesn't seem to have been invented independently by any human civilizations other than ancient China (supposedly 4000 years ago)!