What form would a brain theory take? Would it be short and punchy, like Maxwell's equations? Or would it have a clear overall goal achieved by a community of local mechanisms, like the US Tax Code? The best-developed recent brain-like model is the "neural network." In the late 1950s, Rosenblatt's Perceptron and its many variants proposed a brain-inspired associative network. The problems that plagued the first generation of neural networks, such as limited capacity, opaque learning, and inaccuracy, have since been largely overcome. In 2016, AlphaGo, a Google program based on a deep-learning neural network, defeated one of the world's best Go players. The climax of this chapter is a fictional example starring Sherlock Holmes, demonstrating that complex associative computation in practice has less in common with accurate pattern recognition and more in common with abstract, high-level conceptual inference.