“The Brain As a Universal Learning Machine”, 2015-06-24:
This article presents an emerging architectural hypothesis of the brain as a biological implementation of a Universal Learning Machine. I present a rough but complete architectural view of how the brain works under the universal learning hypothesis. I also contrast this new viewpoint—which comes from computational neuroscience and machine learning—with the older evolved modularity hypothesis popular in evolutionary psychology and the heuristics and biases literature. These two conceptions of the brain lead to very different predictions for the likely route to AGI, the value of neuroscience, the expected differences between AGI and humans, and thus any consequent safety issues and dependent strategies.
Intro · Two viewpoints on the Mind · Universal Learning Machines · Historical Interlude · Dynamic Rewiring · Brain Architecture (the whole brain in one picture and a few pages of text) · The Basal Ganglia · Implications for AGI · Conclusion
…The roots of the universal learning hypothesis can be traced back to Mountcastle’s discovery of the simple uniform architecture of the cortex. The universal learning hypothesis proposes that all substantial mental algorithms are learned; nothing is innate except for the learning and reward machinery itself (which is somewhat complicated, involving a number of systems and mechanisms), the initial rough architecture (equivalent to a prior over mindspace), and a small library of simple innate circuits (analogous to the operating system layer in a computer). In this view the mind (software) is distinct from the brain (hardware). The mind is a complex software system built out of a general learning mechanism…The key takeaway is that the data is what matters—and in the end it is all that matters. Train a universal learner on image data and it just becomes a visual system. Train it on speech data and it becomes a speech recognizer. Train it on ATARI and it becomes a little gamer agent.
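The “data is what matters” claim can be illustrated with a toy experiment: train one fixed, generic architecture on two different datasets and get two different functions out. Below is a minimal sketch (not from the article; the network size, tasks, and hyperparameters are arbitrary illustrative choices): the same one-hidden-layer network, with identical initialization and training procedure, is fit to XOR in one run and to AND in another. Only the training data differs.

```python
import numpy as np

def train_mlp(X, Y, hidden=16, lr=1.0, steps=5000, seed=0):
    """Train a tiny one-hidden-layer network with plain gradient descent.

    The architecture and learning rule are fixed; only the (X, Y) data
    varies between runs -- the point of the universal-learning claim.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)                # hidden activations
        P = 1 / (1 + np.exp(-(H @ W2 + b2)))    # sigmoid output
        dP = (P - Y) / len(X)                   # cross-entropy gradient w.r.t. logits
        dW2 = H.T @ dP; db2 = dP.sum(0)
        dH = dP @ W2.T * (1 - H**2)             # backprop through tanh
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return P

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor = np.array([[0], [1], [1], [0]], dtype=float)
and_ = np.array([[0], [0], [0], [1]], dtype=float)

# Identical learner, different data, different learned function.
p_xor = train_mlp(X, xor)
p_and = train_mlp(X, and_)
```

The analogy is loose (the brain’s learning machinery is far richer than gradient descent on a fixed net), but it captures the hypothesis’s shape: the “algorithm” the system ends up implementing is determined by the training data, not by task-specific wiring.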
Conclusion: Ray Kurzweil has been predicting for decades that AGI will be built by reverse engineering the brain, and this particular prediction is not especially unique—this has been a popular position for quite a while. My own investigation of neuroscience and machine learning led me to a similar conclusion some time ago.
The recent progress in deep learning, combined with the emerging modern understanding of the brain, provides further evidence that AGI could arrive around the time when we can build and train ANNs with similar computational power as measured very roughly in terms of neuron/synapse counts. In general the evidence from the last four years or so supports Hanson’s viewpoint from the Foom debate. More specifically, his general conclusion:
Future superintelligences will exist, but their vast and broad mental capacities will come mainly from vast mental content and computational resources. By comparison, their general architectural innovations will be minor additions.
The ULH supports this conclusion. Current ANN engines can already train and run models with around 10 million neurons and 10 billion (compressed/shared) synapses on a single GPU, which suggests that the goal could soon be within the reach of a large organization. Furthermore, Moore’s Law for GPUs still has some steam left, and software advances are currently improving simulation performance at a faster rate than hardware. These trends imply that Anthropomorphic/Neuromorphic AGI could be surprisingly close, and may appear suddenly. What kind of leverage can we exert on a short timescale?
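The scale gap implied by these numbers can be made explicit with a back-of-envelope calculation. The synapse estimates for the human brain below are rough published figures, not from the article (estimates range from roughly 10^14 to 10^15), and the 2-year doubling time is an assumption about hardware trends; the point is only the order of magnitude of the gap, not a forecast.

```python
import math

# From the article: a single 2015-era GPU trains models with
# ~1e7 neurons and ~1e10 compressed/shared synapses.
gpu_synapses = 1e10

# Rough low-end estimate for the human brain (assumption, not from
# the article; published figures span ~1e14 to ~1e15 synapses).
human_synapses = 1e14

gap = human_synapses / gpu_synapses       # ~1e4x shortfall in raw synapse count
doublings = math.log2(gap)                # doublings of capacity needed
years_hw_only = doublings * 2.0           # assuming a 2-year doubling time

print(f"gap: {gap:.0e}x, {doublings:.1f} doublings, "
      f"~{years_hw_only:.0f} years on hardware alone")
```

On these assumptions the raw hardware gap closes in a couple of decades; the article’s point is that concurrent software gains (weight sharing, better training algorithms) compress that timeline further, which is why neuromorphic AGI “may appear suddenly.”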