85 minutes | 9 days ago
BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
It’s generally agreed that machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or will it just slow them down? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering the past and present. This discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe. Also, check out Sam’s previous appearance on the podcast.

Chris’s lab: Human Information Processing lab.
Sam’s lab: Computational Cognitive Neuroscience Lab.
Twitter: @gershbrain; @summerfieldlab

Papers we discuss, mention, or that are related:
If deep learning is the answer, then what is the question?
Neuroscience-Inspired Artificial Intelligence.
Building Machines that Learn and Think Like People.

Timestamps:
0:00 – Intro
5:00 – Good ol’ days
13:50 – AI for neuro, neuro for AI
24:25 – Intellectual diversity in AI
28:40 – Role of philosophy
30:20 – Operationalization and benchmarks
36:07 – Prediction vs. understanding
42:48 – Role of humans in the loop
46:20 – Value alignment
51:08 – Andrew Saxe question
53:16 – Explainable AI
58:55 – Generalization
1:01:09 – What has AI revealed about us?
1:09:38 – Neuro for AI
1:20:30 – Concluding remarks
79 minutes | 19 days ago
BI 094 Alison Gopnik: Child-Inspired AI
Alison and I discuss her work to accelerate learning and thus improve AI by studying how children learn, as Alan Turing suggested in his famous 1950 paper. Children learn via imitation, by building abstract causal models, and through active learning with a high exploration/exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more.

Alison’s Website.
Cognitive Development and Learning Lab.
Twitter: @AlisonGopnik.

Related papers:
Childhood as a solution to explore-exploit tensions.
The Aeon article about grandparents, children, and evolution: Vulnerable Yet Vital.

Books:
The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children.
The Scientist in the Crib: What Early Learning Tells Us About the Mind.
The Philosophical Baby: What Children’s Minds Tell Us About Truth, Love, and the Meaning of Life.

Take-home points:
Children learn by imitation, and not just unthinking imitation. They pay attention to and evaluate the intentions of others and judge whether a person seems to be a reliable source of information. That is, they learn by sophisticated, socially constrained imitation.
Children build abstract causal models of the world. This allows them to simulate potential outcomes and test their actions against those simulations, accelerating learning.
Children keep their foot on the exploration pedal, actively learning by exploring a wide spectrum of actions to determine what works. As we age, our exploratory cognition decreases and we increasingly exploit what we’ve already learned (a toy sketch of this explore-exploit schedule follows these notes).

Timestamps:
0:00 – Intro
4:40 – State of the field
13:30 – Importance of learning
20:12 – Turing’s suggestion
22:49 – Patience for one’s own ideas
28:53 – Learning via imitation
31:57 – Learning abstract causal models
41:42 – Life history
43:22 – Learning via exploration
56:19 – Explore-exploit dichotomy
58:32 – Synaptic pruning
1:00:19 – Breakthrough research in careers
1:04:31 – Role of elders
1:09:08 – Child consciousness
1:11:41 – Psychedelics as child-like brain
1:16:00 – Build consciousness into AI?
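A minimal sketch of the explore-exploit idea from the take-home points: an epsilon-greedy bandit learner whose exploration rate starts high ("childhood") and decays with age. The bandit, the decay schedule, and all parameters are illustrative assumptions, not anything from Alison's work.

```python
# Toy lifespan explore/exploit schedule on a 3-armed bandit (all values assumed).
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # unknown payoff of each action
estimates = np.zeros(3)
counts = np.zeros(3)

def epsilon(t, lifespan=1000, start=0.9, end=0.05):
    """High exploration early in 'life', mostly exploitation later."""
    return start + (end - start) * min(t / lifespan, 1.0)

for t in range(1000):
    if rng.random() < epsilon(t):
        a = rng.integers(3)               # explore: try a random action
    else:
        a = int(np.argmax(estimates))     # exploit: use what we've learned
    r = rng.normal(true_means[a], 0.1)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]   # running average

print(f"late-life preferred action: {np.argmax(estimates)}")
```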
67 minutes | a month ago
BI 093 Dileep George: Inference in Brain Microcircuits
Dileep and I discuss his theoretical account of how the thalamus and cortex work together to implement visual inference. We talked previously about his Recursive Cortical Network (RCN) approach to visual inference, a probabilistic graphical model that can solve hard problems like CAPTCHAs, and more recently about using his RCNs with cloned units to account for cognitive maps related to the hippocampus. On this episode, we walk through how RCNs can map onto thalamo-cortical circuits so a given cortical column can signal whether it believes some concept or feature is present in the world, based on bottom-up incoming sensory evidence, top-down attention, and lateral related features (a toy sketch of this evidence combination follows these notes). We also briefly compare this bio-RCN version with Randy O’Reilly’s Deep Predictive Learning account of thalamo-cortical circuitry.

Vicarious website – Dileep’s AGI robotics company.
Twitter: @dileeplearning

The papers we discuss or mention:
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.
From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence.
Probabilistic graphical models.
Hierarchical temporal memory.

Time Stamps:
0:00 – Intro
5:18 – Levels of abstraction
7:54 – AGI vs. AHI vs. AUI
12:18 – Ideas and failures in startups
16:51 – Thalamic cortical circuitry computation
22:07 – Recursive cortical networks
23:34 – bio-RCN
27:48 – Cortical column as binary random variable
33:37 – Clonal neuron roles
39:23 – Processing cascade
41:10 – Thalamus
47:18 – Attention as explaining away
50:51 – Comparison with O’Reilly’s predictive coding framework
55:39 – Subjective contour effect
1:01:20 – Necker cube
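A toy sketch (my own illustration, not Dileep's model) of the idea that a cortical column acts like a binary random variable: its belief that a feature is present combines bottom-up evidence, top-down attention, and lateral support. Treating each source as a log-likelihood ratio and summing them is a standard naive-Bayes-style approximation; all numbers below are made up.

```python
# Toy "cortical column as binary random variable": combine three evidence
# sources as log-likelihood ratios and squash into a posterior probability.
import math

def column_belief(bottom_up_llr, top_down_llr, lateral_llr, prior=0.5):
    """Posterior probability that the column's feature is present."""
    log_odds = (math.log(prior / (1 - prior))
                + bottom_up_llr + top_down_llr + lateral_llr)
    return 1 / (1 + math.exp(-log_odds))

# Strong sensory evidence, attention directed elsewhere, mild lateral support:
print(column_belief(bottom_up_llr=2.0, top_down_llr=-1.0, lateral_llr=0.5))
```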
102 minutes | a month ago
BI 092 Russ Poldrack: Cognitive Ontologies
Russ and I discuss cognitive ontologies – the “parts” of the mind and their relations – as an ongoing dilemma of how to map onto each other what we know about brains and what we know about minds. We talk about whether we have the right ontology now, how he uses both top-down and data-driven approaches to analyze and refine current ontologies, and how all this has affected his own thinking about minds. We also discuss some of the current meta-science issues and challenges in neuroscience and AI, and Russ answers guest questions from Kendrick Kay and David Poeppel.

Russ’s website.
Poldrack Lab.
Stanford Center For Reproducible Neuroscience.
Twitter: @russpoldrack.

Book:
The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts.

The papers we discuss or mention:
Atlases of cognition with large-scale human brain mapping.
Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?
From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.
Uncovering the structure of self-regulation through data-driven ontology discovery.

Talks:
Reproducibility: NeuroHackademy: Russell Poldrack – Reproducibility in fMRI: What is the problem?
Cognitive Ontology: Cognitive Ontologies, from Top to Bottom.
A good series of talks about cognitive ontologies: Online Seminar Series: Problem of Cognitive Ontology.

Some take-home points:
Our folk psychological cognitive ontology hasn’t changed much since early Greek philosophy, and especially since William James wrote about attention, consciousness, and so on.
Using encoding models, we can predict brain responses pretty well based on what task a subject is performing or what “cognitive function” a subject is engaging, at least to a coarse approximation (a minimal encoding-model sketch follows these notes).
A data-driven approach has the potential to help determine mental structure, but important human decisions must still be made regarding how exactly to divide up the various “parts” of the mind.

Time points:
0:00 – Introduction
5:59 – Meta-science issues
19:00 – Kendrick Kay question
23:00 – State of the field
30:06 – fMRI for understanding minds
35:13 – Computational mind
42:10 – Cognitive ontology
45:17 – Cognitive Atlas
52:05 – David Poeppel question
57:00 – Does ontology matter?
59:18 – Data-driven ontology
1:12:29 – Dynamical systems approach
1:16:25 – György Buzsáki’s inside-out approach
1:22:26 – Ontology for AI
1:27:39 – Deep learning hype
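A minimal encoding-model sketch, with synthetic data standing in for fMRI: predict each voxel's response from a task/feature design matrix and score prediction on held-out trials. Ridge regression is a common choice here; the data, sizes, and split are assumptions for illustration.

```python
# Toy voxelwise encoding model: task features -> voxel responses (synthetic).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_features, n_voxels = 200, 8, 50
X = rng.integers(0, 2, size=(n_trials, n_features)).astype(float)  # task features
true_W = rng.normal(size=(n_features, n_voxels))
Y = X @ true_W + rng.normal(scale=0.5, size=(n_trials, n_voxels))  # "voxel" data

train, test = slice(0, 150), slice(150, None)
model = Ridge(alpha=1.0).fit(X[train], Y[train])
print(f"held-out R^2 across voxels: {model.score(X[test], Y[test]):.2f}")
```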
88 minutes | 2 months ago
BI 091 Carsen Stringer: Understanding 40,000 Neurons
Carsen and I discuss how she uses two-photon calcium imaging data from over 10,000 neurons to understand information processing in such large neural populations. We talk about the tools she makes and uses to analyze the data, and the type of high-dimensional neural activity structure they found, which seems to allow efficient and robust information processing (a sketch of the eigenspectrum analysis behind this finding follows these notes). We also talk about how these findings may help build better deep learning networks, and Carsen’s thoughts on how to improve diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith.

Stringer Lab.
Twitter: @computingnature.

The papers we discuss or mention:
High-dimensional geometry of population responses in visual cortex.
Spontaneous behaviors drive multidimensional, brain-wide population activity.

Timestamps:
0:00 – Intro
5:51 – Recording > 10k neurons
8:51 – 2-photon calcium imaging
14:56 – Balancing scientific questions and tools
21:16 – Unsupervised learning tools and rastermap
26:14 – Manifolds
32:13 – Matt Smith question
37:06 – Dimensionality of neural activity
58:51 – Future plans
1:00:30 – What can AI learn from this?
1:13:26 – Diversity, inclusivity, equality
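A sketch of the kind of analysis discussed here: compute the eigenspectrum (PCA variances) of population activity and inspect how fast it decays. Stringer et al. report a roughly 1/n power-law decay in visual cortex; the data below are synthetic stand-ins with that spectrum built in, so the code only illustrates the workflow.

```python
# Eigenspectrum of synthetic "population activity" with a built-in 1/n spectrum.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_timepoints = 1000, 5000
variances = 1.0 / np.arange(1, n_neurons + 1)                     # power law
latents = rng.normal(size=(n_neurons, n_timepoints)) * np.sqrt(variances)[:, None]
Q, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
activity = Q @ latents                                            # mix into "neurons"

activity -= activity.mean(axis=1, keepdims=True)
eigvals = np.sort(np.linalg.eigvalsh(activity @ activity.T / n_timepoints))[::-1]

# Fit the decay exponent on a log-log scale (expect roughly -1 here):
n = np.arange(1, 201)
slope = np.polyfit(np.log(n), np.log(eigvals[:200]), 1)[0]
print(f"estimated power-law exponent: {slope:.2f}")
```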
99 minutes | 2 months ago
BI 090 Chris Eliasmith: Building the Human Brain
Chris and I discuss his large-scale model of the human brain, Spaun (Semantic Pointer Architecture Unified Network), as detailed in his book How to Build a Brain. We talk about his philosophical approach, how Spaun compares to Randy O’Reilly’s Leabra networks, and Applied Brain Research, the company Chris co-founded, and I have guest questions from Brad Aimone, Steve Potter, and Randy O’Reilly.

Chris’s website.
Applied Brain Research.
The book: How to Build a Brain.
Nengo (you can run Spaun).
Paper summary of Spaun: A large-scale model of the functioning brain.

Some takeaways:
Spaun is an embodied, fully functional cognitive architecture with one eye for task instructions and an arm for responses.
Chris uses elements from symbolic, connectionist, and dynamical systems approaches in cognitive science.
The neural engineering framework (NEF) is how functions get instantiated in spiking neural networks (a minimal Nengo sketch follows these notes).
The semantic pointer architecture (SPA) is how representations are stored and transformed – i.e., the symbolic-like cognitive processing.

Time Points:
0:00 – Intro
2:29 – Sense of awe
6:20 – Large-scale models
9:24 – Descriptive pragmatism
15:43 – Asking better questions
22:48 – Brad Aimone question: Neural engineering framework
29:07 – Engineering to build vs. understand
32:12 – Why is the AI world not interested in brains/minds?
37:09 – Steve Potter neuromorphics question
44:51 – Spaun
49:33 – Semantic Pointer Architecture
56:04 – Representations
58:21 – Randy O’Reilly question 1
1:07:33 – Randy O’Reilly question 2
1:10:31 – Spaun vs. Leabra
1:32:43 – How would Chris start over?
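A minimal Nengo sketch showing the NEF idea in miniature (far simpler than Spaun): you specify a function, and Nengo solves for connection weights so spiking ensembles compute it. This follows Nengo's basic usage patterns; the specific signal and sizes are arbitrary choices for illustration.

```python
# Minimal NEF example in Nengo: two spiking ensembles compute x**2 of a sine input.
import numpy as np
import nengo

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents the signal
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # represents the square
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # NEF-decoded function
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe][-5:])  # decoded estimate of sin(2*pi*t)**2
```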
87 minutes | 3 months ago
BI 089 Matt Smith: Drifting Cognition
Matt and I discuss how cognition and behavior drift over the course of minutes and hours, and how global brain activity drifts with it. How does the brain continue to produce steady perception and action in the midst of such drift? We also talk about how to think about variability in neural activity: how much of it is noise, and how much of it is hidden but important activity? Finally, we discuss the effect of recording more and more neurons simultaneously and collecting bigger and bigger datasets, plus guest questions from Adam Snyder and Patrick Mayo.

Smith Lab.
Twitter: @SmithLabNeuro.
Related: Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex.
Artwork by Melissa Neely.

Take-home points:
The “noise” in the variability of neural activity is likely just activity devoted to processing other things.
Recording lots of neurons simultaneously helps resolve the question of what’s noise and how much information is in a population of neurons.
There’s a neural signature of the behavioral “slow drift” of our internal cognitive state (a rough sketch of how such a drift axis might be extracted follows these notes).
The neural signature is global, and it’s an open question how the brain compensates to produce steady perception and action.

Timestamps:
0:00 – Intro
4:35 – Adam Snyder question
15:26 – Multi-electrode recordings
17:48 – What is noise in the brain?
23:55 – How many neurons is enough?
27:43 – Patrick Mayo question
33:17 – Slow drift
54:10 – Impulsivity
57:32 – How does drift happen?
59:49 – Relation to AI
1:06:58 – What AI and neuro can teach each other
1:10:02 – Ecologically valid behavior
1:14:39 – Brain mechanisms vs. mind
1:17:36 – Levels of description
1:21:14 – Hard things to make in AI
1:22:48 – Best scientific moment
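A rough sketch (my illustration on synthetic data, not the lab's pipeline) of one way a "slow drift" axis might be extracted: smooth each neuron's activity heavily over the session, then take the top principal component of the smoothed population activity as the drift direction.

```python
# Toy slow-drift extraction: smooth population activity, take the first PC.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_bins = 80, 600                                # long session, big bins
drift = np.cumsum(rng.normal(scale=0.05, size=n_bins))     # slow random walk
loading = rng.normal(size=n_neurons)
activity = np.outer(loading, drift) + rng.normal(size=(n_neurons, n_bins))

kernel = np.ones(50) / 50                                  # heavy temporal smoothing
smoothed = np.array([np.convolve(x, kernel, mode="same") for x in activity])
smoothed -= smoothed.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(smoothed, full_matrices=False)
drift_axis, drift_timecourse = U[:, 0], Vt[0]              # first PC = drift estimate

corr = np.corrcoef(drift_timecourse, drift)[0, 1]
print(f"|correlation| with true drift: {abs(corr):.2f}")
```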
99 minutes | 3 months ago
BI 088 Randy O’Reilly: Simulating the Human Brain
Randy and I discuss his Leabra cognitive architecture, which aims to simulate the human brain, plus his current theory about how a loop between cortical regions and the thalamus could implement predictive learning and thus explain how we learn with so few examples. We also discuss what Randy thinks is the next big thing neuroscience can contribute to AI (thanks to a guest question from Anna Schapiro), and much more.

Computational Cognitive Neuroscience Laboratory.

The papers we discuss or mention:
The Leabra Cognitive Architecture: How to Play 20 Principles with Nature and Win!
Deep Predictive Learning in Neocortex and Pulvinar.
Unraveling the Mysteries of Motivation.
His YouTube series detailing the theory and workings of Leabra: Computational Cognitive Neuroscience.
The free textbook: Computational Cognitive Neuroscience.

A few take-home points:
Leabra has been a slow, incremental project, inspired in part by Allen Newell’s suggested approach.
Randy began by developing a learning algorithm that incorporated both kinds of biological learning (error-driven and associative).
Leabra’s core is three brain areas – frontal cortex, parietal cortex, and hippocampus – and it has grown from there.
There’s a constant balance between biological realism and computational feasibility.
It’s important that a cognitive architecture address multiple levels – micro-scale, macro-scale, mechanisms, functions, and so on.
Deep predictive learning is a possible brain mechanism whereby predictions from higher-layer cortex precede input from lower-layer cortex in the thalamus, where an error is computed and used to drive learning (a toy caricature of this error computation follows these notes).
Randy believes our metacognitive ability to know what we do and don’t know is a key next function to build into AI.

Timestamps:
0:00 – Intro
3:54 – Skip Intro
6:20 – Being in awe
18:57 – How current AI can inform neuro
21:56 – Anna Schapiro question – how current neuro can inform AI
29:20 – Learned vs. innate cognition
33:43 – Leabra
38:33 – Developing Leabra
40:30 – Macroscale
42:33 – Thalamus as microscale
43:22 – Thalamocortical circuitry
47:25 – Deep predictive learning
56:18 – Deep predictive learning vs. backprop
1:01:56 – 10 Hz learning cycle
1:04:58 – Better theory vs. more data
1:08:59 – Leabra vs. Spaun
1:13:59 – Biological realism
1:21:54 – Bottom-up inspiration
1:27:26 – Biggest mistake in Leabra
1:32:14 – AI consciousness
1:34:45 – How would Randy begin again?
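A toy caricature (my simplification, not the actual Leabra/DPL algorithm) of the predictive-learning idea in the take-home points: a higher area predicts the next lower-area input, the "pulvinar" compares prediction to the actual input, and the resulting error trains the predictor with a delta rule.

```python
# Toy deep-predictive-learning loop: prediction error drives weight updates.
import numpy as np

rng = np.random.default_rng(4)
n_lower, n_higher = 20, 10
W = rng.normal(scale=0.1, size=(n_lower, n_higher))   # top-down prediction weights
mixing = rng.normal(size=(n_lower, n_higher))         # "world": lower follows higher

lr = 0.05
for step in range(2000):
    higher = rng.normal(size=n_higher)       # higher-area activity
    lower_next = mixing @ higher             # actual bottom-up input at t+1
    prediction = W @ higher                  # top-down prediction sent ahead
    error = lower_next - prediction          # computed "in the pulvinar"
    W += lr * np.outer(error, higher)        # error-driven learning

print(f"remaining prediction error: {np.abs(error).mean():.3f}")
```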
83 minutes | 3 months ago
BI 087 Dileep George: Cloning for Cognitive Maps
When a waiter hands me the bill, how do I know whether to pay it myself or let my date pay? On this episode, I get a progress update from Dileep on his company, Vicarious, since his last episode. We also talk broadly about his experience running Vicarious to develop AGI and robotics. Then we turn to his latest brain-inspired AI efforts, using cloned structured probabilistic graphical models to develop an account of how the hippocampus builds a model of the world and represents our cognitive maps in different contexts, so we can simulate possible outcomes to choose how to act (a toy illustration of the cloning idea follows these notes). Special guest questions from Brad Love (episode 70: How We Learn Concepts).

Vicarious website – Dileep’s AGI robotics company.
Twitter: @dileeplearning.

Papers we discuss:
Learning cognitive maps as structured graphs for vicarious evaluation.
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.
Probabilistic graphical models.
Hierarchical temporal memory.

Time stamps:
0:00 – Intro
3:00 – Skip Intro
4:00 – Previous Dileep episode
10:22 – Is brain-inspired AI over-hyped?
14:38 – Competition in the robotics field
15:53 – Vicarious robotics
22:12 – Choosing what product to make
28:13 – Running a startup
30:52 – Old brain vs. new brain
37:53 – Learning cognitive maps as structured graphs
41:59 – Graphical models
47:10 – Cloning and merging, hippocampus
53:36 – Brad Love question 1
1:00:39 – Brad Love question 2
1:02:41 – Task examples
1:11:56 – What does hippocampus do?
1:14:14 – Intro to thalamic cortical microcircuit
1:15:21 – What AI folks think of brains
1:16:57 – Which levels inform which levels
1:20:02 – Advice for an AI startup
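A toy illustration (mine, far simpler than the paper's cloned graphical models) of the cloning idea behind the opening question: several hidden "clone" states emit the same observation, so context — which clone you're in — disambiguates what to do next. All state and action names are made up.

```python
# Toy cloned states: one observation ("the bill arrives"), two context clones.
clones_of = {"bill_arrives": ["bill_on_date", "bill_at_business_lunch"]}

transitions = {
    # context determines which clone the shared observation maps to,
    # and the clone determines the sensible next action
    ("on_date", "bill_arrives"): "bill_on_date",
    ("business_lunch", "bill_arrives"): "bill_at_business_lunch",
}
policy = {"bill_on_date": "offer_to_pay", "bill_at_business_lunch": "expense_it"}

def act(context, observation):
    clone = transitions[(context, observation)]
    return policy[clone]

print("clones of 'bill_arrives':", clones_of["bill_arrives"])
print(act("on_date", "bill_arrives"))         # offer_to_pay
print(act("business_lunch", "bill_arrives"))  # expense_it
```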
96 minutes | 4 months ago
BI 086 Ken Stanley: Open-Endedness
Ken and I discuss open-endedness: the pursuit of ambitious goals by seeking novelty and interesting products instead of advancing directly toward defined objectives. We talk about evolution as a prime example of an open-ended system that has produced astounding organisms, Ken relates how open-endedness could help advance artificial intelligence and neuroscience, we discuss a range of topics related to the general concept of open-endedness, and Ken takes a couple of questions from Stefan Leijnen and Melanie Mitchell.

Related:
Ken’s website.
Twitter: @kenneth0stanley.
The book: Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth Stanley and Joel Lehman.

Papers:
Evolving Neural Networks Through Augmenting Topologies (2002).
Minimal Criterion Coevolution: A New Approach to Open-Ended Search.

Some key take-aways:
Many of the best inventions were not the result of trying to achieve a specific objective.
Open-endedness is the pursuit of ambitious advances without a clearly defined objective (a compact novelty-search sketch follows these notes).
Evolution is a quintessential example of an open-ended process: it produces a vast array of complex beings by searching the space of possible organisms, constrained by the environment, survival, and reproduction.
Perhaps the key to developing artificial general intelligence is to follow an open-ended path rather than pursuing objectives (solving the same old benchmark tasks, etc.).

Timestamps:
0:00 – Intro
3:46 – Skip Intro
4:30 – Evolution as an open-ended process
8:25 – Why Greatness Cannot Be Planned
20:46 – Open-endedness in AI
29:35 – Constraints vs. objectives
36:26 – The adjacent possible
41:22 – Serendipity
44:33 – Stefan Leijnen question
53:11 – Melanie Mitchell question
1:00:32 – Efficiency
1:02:13 – Gentle Earth
1:05:25 – Learning vs. evolution
1:10:53 – AGI
1:14:06 – Neuroscience, AI, and open-endedness
1:26:06 – Open AI
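A compact novelty-search sketch (the standard recipe associated with Ken and Joel Lehman's work, not their exact code): instead of scoring candidates against an objective, score them by how far their behavior lies from the k nearest behaviors seen so far, and keep an archive of novel individuals. The behavior characterization and all parameters here are toy assumptions.

```python
# Minimal novelty search: selection pressure toward behavioral novelty, no objective.
import numpy as np

rng = np.random.default_rng(5)

def behavior(genome):
    # Toy behavior characterization: a 2-D descriptor of the genome.
    return np.array([np.sin(genome).sum(), np.cos(genome).sum()])

def novelty(b, archive, k=5):
    if not archive:
        return np.inf
    dists = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(dists[:k]))

archive, population = [], [rng.normal(size=8) for _ in range(20)]
for gen in range(50):
    scored = [(novelty(behavior(g), archive), g) for g in population]
    scored.sort(key=lambda s: s[0], reverse=True)        # most novel first
    for score, g in scored[:3]:                          # archive the novel ones
        archive.append(behavior(g))
    parents = [g for _, g in scored[:10]]
    population = [p + rng.normal(scale=0.1, size=8) for p in parents for _ in (0, 1)]

print(f"archive size after 50 generations: {len(archive)}")
```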
104 minutes | 4 months ago
BI 085 Ida Momennejad: Learning Representations
Ida and I discuss the current landscape of reinforcement learning in both natural and artificial intelligence, and how the old story of two RL systems in brains – model-free and model-based – is giving way to a more nuanced story in which the two systems constantly interact, with additional RL strategies between model-free and model-based driving the vast repertoire of our habits and goal-directed behaviors. We discuss Ida’s work on one of those “in-between” strategies, the successor representation, which maps onto brain activity and accounts for behavior (a small sketch of the successor representation follows these notes). We also discuss her interesting background, how it affects her outlook and research pursuits, and the role philosophy has played and continues to play in her thought processes.

Related links:
Ida’s website.
Twitter: @criticalneuro.
A nice review of what we discuss: Learning Structures: Predictive Representations, Replay, and Generalization.

Time stamps:
0:00 – Intro
4:50 – Skip intro
9:58 – Core way of thinking
19:58 – Disillusionment
27:22 – Role of philosophy
34:51 – Optimal individual learning strategy
39:28 – Microsoft job
44:48 – Field of reinforcement learning
51:18 – Learning vs. innate priors
59:47 – Incorporating other cognition into RL
1:08:24 – Evolution
1:12:46 – Model-free and model-based RL
1:19:02 – Successor representation
1:26:48 – Are we running all algorithms all the time?
1:28:38 – Heuristics and intuition
1:33:48 – Levels of analysis
1:37:28 – Consciousness
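A small sketch of the successor representation (SR) discussed here: M[s, s'] estimates the discounted expected future occupancy of s' starting from s, learned with a standard TD rule as the agent moves. The environment (a deterministic ring world) and parameters are toy assumptions.

```python
# TD learning of the successor representation in a 5-state ring world.
import numpy as np

n_states, gamma, lr = 5, 0.9, 0.1
M = np.eye(n_states)                       # successor matrix, initialized to identity

def step(s):
    return (s + 1) % n_states              # deterministic ring world

s = 0
for _ in range(5000):
    s_next = step(s)
    onehot = np.eye(n_states)[s]
    # TD update: current occupancy + discounted successor of next state
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] += lr * td_error
    s = s_next

# With M in hand, state values are just a dot product with the reward vector:
reward = np.array([0, 0, 0, 0, 1.0])
print("state values:", (M @ reward).round(2))
```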
116 minutes | 4 months ago
BI 084 György Buzsáki and David Poeppel
David, Gyuri, and I discuss the issues they argue for in their back-and-forth commentaries about the importance of neuroscience and psychology – or implementation-level and computational-level – for advancing our understanding of brains and minds, and the names we give to the things we study. Gyuri believes it’s time we use what we know and discover about brain mechanisms to better describe the psychological concepts we refer to as explanations for minds; David believes the psychological concepts are constantly being refined and are just as valid as objects of study for understanding minds. They both agree these are important and enjoyable topics to debate. Also, special guest questions from Paul Cisek and John Krakauer.

Related:
Buzsáki lab; Poeppel lab.
Twitter: @davidpoeppel.

The papers we discuss or mention:
Calling Names by Christophe Bernard.
The Brain–Cognitive Behavior Problem: A Retrospective by György Buzsáki.
Against the Epistemological Primacy of the Hardware: The Brain from Inside Out, Turned Upside Down by David Poeppel.

Books:
The Brain from Inside Out by György Buzsáki.
The Cognitive Neurosciences (edited by David Poeppel et al.).

Timeline:
0:00 – Intro
5:31 – Skip intro
8:42 – Gyuri and David summaries
25:45 – Guest questions
36:25 – Gyuri’s new language
49:41 – Language and oscillations
53:52 – Do we know what cognitive functions we’re looking for?
58:25 – Psychiatry
1:00:25 – Steve Grossberg approach
1:02:12 – Neuroethology
1:09:08 – AI as tabula rasa
1:17:40 – What’s at stake?
1:36:20 – Will the space between neuroscience and psychology disappear?
73 minutes | 5 months ago
BI 083 Jane Wang: Evolving Altruism in AI
Jane and I discuss the relationship between AI and neuroscience (cognitive science, etc.) from her perspective at DeepMind, after a career researching natural intelligence. We also talk about her meta-reinforcement learning work that connects deep reinforcement learning with known brain circuitry and processes (a minimal sketch of the meta-RL setup follows these notes), and finally we talk about her recent work using evolutionary strategies to develop altruism and cooperation among the agents in a multi-agent reinforcement learning environment.

Related:
Jane’s website.
Twitter: @janexwang.

The papers we discuss or mention:
Learning to reinforcement learn.
Blog post with a link to the paper: Prefrontal cortex as a meta-reinforcement learning system.
Deep Reinforcement Learning and its Neuroscientific Implications.
Evolving Intrinsic Motivations for Altruistic Behavior.

Books she recommended:
Human Compatible: AI and the Problem of Control, by Stuart Russell.
Algorithms to Live By, by Brian Christian and Tom Griffiths.

Timeline:
0:00 – Intro
3:36 – Skip Intro
4:45 – Transition to DeepMind
19:56 – Changing perspectives on neuroscience
24:49 – Is neuroscience useful for AI?
33:11 – Is deep learning hitting a wall?
35:57 – Meta-reinforcement learning
52:00 – Altruism in multi-agent RL
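A minimal rendering (mine, not DeepMind's code) of the core architectural move in "Learning to reinforcement learn": the recurrent agent sees not just the observation but also its previous action and reward, so across many training tasks the LSTM's dynamics can come to implement a fast, inner RL algorithm. Sizes and the single forward step below are illustrative assumptions; training across tasks is omitted.

```python
# One forward step of a meta-RL agent: LSTM input = [obs, prev action, prev reward].
import torch
import torch.nn as nn

n_obs, n_actions, hidden = 4, 2, 48
core = nn.LSTMCell(n_obs + n_actions + 1, hidden)    # +1 for previous reward
policy_head = nn.Linear(hidden, n_actions)

h = c = torch.zeros(1, hidden)
prev_action = torch.zeros(1, n_actions)              # one-hot of last action
prev_reward = torch.zeros(1, 1)

obs = torch.randn(1, n_obs)
x = torch.cat([obs, prev_action, prev_reward], dim=1)
h, c = core(x, (h, c))
logits = policy_head(h)
action = torch.distributions.Categorical(logits=logits).sample()
print("sampled action:", action.item())
```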
136 minutes | 5 months ago
BI 082 Steve Grossberg: Adaptive Resonance Theory
Steve and I discuss his long and productive career as a theoretical neuroscientist. We cover his tried-and-true method of taking a large body of psychological behavioral findings, determining how they fit together and what’s paradoxical about them, developing design principles, theories, and models from that body of data, and using experimental neuroscience to inform and confirm his model predictions. We talk about his Adaptive Resonance Theory (ART), which describes how our brains are self-organizing, adaptive, and deal with changing environments (a bare-bones ART-style sketch follows these notes). We also talk about his complementary computing paradigm, which describes how two systems can complement each other to create emergent properties neither system can create on its own, how the resonant states in ART support consciousness, his place in the history of both neuroscience and AI, and quite a bit more.

Related:
Steve’s BU website.

Some papers we discuss or mention (much more on his website):
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world.
Towards solving the Hard Problem of Consciousness: The varieties of brain resonances and the conscious experiences that they support.
A Path Toward Explainable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action.

Time stamps:
0:00 – Intro
5:48 – Skip Intro
9:42 – Beginnings
18:40 – Modeling method
44:05 – Physics vs. neuroscience
54:50 – Historical credit for Hopfield network
1:03:40 – Steve’s upcoming book
1:08:24 – Being shy
1:11:21 – Stability-plasticity dilemma
1:14:10 – Adaptive resonance theory
1:18:25 – ART matching rule
1:21:35 – Consciousness as resonance
1:29:15 – Complementary computing
1:38:58 – Vigilance to re-orient
1:54:58 – Deep learning vs. ART
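A bare-bones ART-1-style sketch (a textbook simplification, not Steve's full theory): an input resonates with the best-matching category if the match exceeds a vigilance threshold; otherwise the category is reset and the search continues, possibly recruiting a new category. The vigilance value and patterns below are arbitrary.

```python
# Simplified ART-1 categorization with a vigilance-gated matching rule.
import numpy as np

vigilance = 0.7
categories = []                       # learned binary prototypes

def present(x):
    order = sorted(range(len(categories)),
                   key=lambda j: (categories[j] & x).sum(), reverse=True)
    for j in order:
        match = (categories[j] & x).sum() / x.sum()   # ART matching rule
        if match >= vigilance:                        # resonance: learn
            categories[j] &= x                        # prototype intersects input
            return j
        # else: reset this category and continue the search
    categories.append(x.copy())                       # no resonance: new category
    return len(categories) - 1

for pattern in [np.array([1, 1, 1, 0, 0]),
                np.array([1, 1, 0, 0, 0]),
                np.array([0, 0, 1, 1, 1])]:
    print("pattern ->", present(pattern))
```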
82 minutes | 5 months ago
BI 081 Pieter Roelfsema: Brain-propagation
Pieter and I discuss his ongoing quest to figure out how the brain implements learning that solves the credit assignment problem, the way backpropagation does for artificial neural networks (a schematic sketch of the kind of learning rule involved follows these notes). We also talk about his work to understand how we perceive individual objects in a crowded scene, his neurophysiological recordings in support of the global neuronal workspace hypothesis of consciousness, and the visual prosthetic device he’s developing to cure blindness by directly stimulating early visual cortex.

Related:
Pieter’s lab website.
Twitter: @Pieters_Tweet.
His startup to cure blindness: Phosphoenix.
Talk: Seeing and thinking with your visual brain.

The papers we discuss or mention:
Control of synaptic plasticity in deep cortical networks.
A Biologically Plausible Learning Rule for Deep Learning in the Brain.
Conscious Processing and the Global Neuronal Workspace Hypothesis.

Pieter’s neuro-origin book inspiration (like so many others): Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter.
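A schematic sketch of the flavor of learning rule discussed here — my paraphrase of the attention-gated, reward-modulated idea, not Pieter's published algorithm: each synapse updates from three locally available factors — presynaptic activity, a feedback/attention signal tagging the synapse, and a globally broadcast reward prediction error. All sizes and values are assumed.

```python
# Toy three-factor update: pre activity x feedback gating x global RPE.
import numpy as np

rng = np.random.default_rng(7)
n_in, n_hidden = 10, 6
W = rng.normal(scale=0.1, size=(n_hidden, n_in))
lr = 0.01

def update(W, pre, feedback_gate, reward, predicted_reward):
    delta = reward - predicted_reward              # global RPE (neuromodulator-like)
    return W + lr * delta * np.outer(feedback_gate, pre)

pre = rng.random(n_in)              # presynaptic activity
gate = rng.random(n_hidden)         # attentional feedback reaching each unit
W = update(W, pre, gate, reward=1.0, predicted_reward=0.4)
print("mean weight change:", (lr * 0.6 * np.outer(gate, pre)).mean().round(4))
```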
91 minutes | 6 months ago
BI 080 Daeyeol Lee: Birth of Intelligence
Daeyeol and I discuss his book Birth of Intelligence: From RNA to Artificial Intelligence, which argues that intelligence is a function of, and inseparable from, life, bound by self-replication and evolution. The book covers a ton of neuroscience related to decision making and learning, though we focused on a few theoretical frameworks and ideas, like division of labor and principal-agent relationships, to understand how our brains and minds are related to our genes, how AI is related to humans (for now), metacognition, consciousness, and a ton more.

Related:
Lee Lab for Learning and Decision Making.
Twitter: @daeyeol_lee.
Daeyeol’s side passion, creating music.
His book: Birth of Intelligence: From RNA to Artificial Intelligence.
79 minutes | 6 months ago
BI 079 Romain Brette: The Coding Brain Metaphor
Romain and I discuss his theoretical and philosophical work examining how neuroscientists rampantly misuse the word “code” when making claims about information processing in brains. We talk about the coding metaphor, various notions of information, the different roles and facets of mental representation, perceptual invariance, subjective physics, process versus substance metaphysics, and the experience of writing a Behavioral and Brain Sciences article (spoiler: it’s a demanding yet rewarding experience).

Romain’s website.
Twitter: @RomainBrette.

The papers we discuss or mention:
Philosophy of the spike: rate-based vs. spike-based theories of the brain.
Is coding a relevant metaphor for the brain? (bioRxiv link).
Subjective physics.

Related works:
The Ecological Approach to Visual Perception by James Gibson.
Why Red Doesn’t Sound Like a Bell by Kevin O’Regan.
75 minutes | 6 months ago
BI 078 David and John Krakauer: Part 2
In this second part of our conversation, David, John, and I continue to discuss the role of complexity science in the study of intelligence, brains, and minds. We also get into functionalism and multiple realizability, dynamical systems explanations, the role of time in thinking, and more. Be sure to listen to the first part, which lays the foundation for what we discuss in this episode.

Notes:
David’s page at the Santa Fe Institute.
John’s BLAM lab website.
Follow SFI on Twitter: @sfiscience.
BLAM on Twitter: @blamlab.

Related Krakauer stuff:
At the limits of thought – an Aeon article by David.
Complex Time: Cognitive Regime Shift II – When/Why/How the Brain Breaks – a video conversation with both John and David.
Complexity Podcast.

Books mentioned:
Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer.
Understanding Scientific Understanding by Henk de Regt.
The Idea of the Brain by Matthew Cobb.
New Dark Age: Technology and the End of the Future by James Bridle.
The River of Consciousness by Oliver Sacks.
93 minutes | 6 months ago
BI 077 David and John Krakauer: Part 1
David, John, and I discuss the role of complexity science in the study of intelligence. In this first part, we talk about complexity itself, its role in neuroscience, emergence and levels of explanation, understanding, epistemology and ontology, and really quite a bit more.

Notes:
David’s page at the Santa Fe Institute.
John’s BLAM lab website.
Follow SFI on Twitter: @sfiscience.
BLAM on Twitter: @blamlab.

Related Krakauer stuff:
At the limits of thought – an Aeon article by David.
Complex Time: Cognitive Regime Shift II – When/Why/How the Brain Breaks – a video conversation with both John and David.
Complexity Podcast.

Books mentioned:
Worlds Hidden in Plain Sight: The Evolving Idea of Complexity at the Santa Fe Institute, ed. David Krakauer.
Understanding Scientific Understanding by Henk de Regt.
The Idea of the Brain by Matthew Cobb.
New Dark Age: Technology and the End of the Future by James Bridle.
The River of Consciousness by Oliver Sacks.
106 minutes | 7 months ago
BI 076 Olaf Sporns: Network Neuroscience
Olaf and I discuss the explosion of network neuroscience, which uses network science tools to map the structure (connectome) and activity of the brain at various spatial and temporal scales. We talk about the possibility of bridging physical and functional connectivity via communication dynamics, the relation between network science and artificial neural networks, and plenty more (a small sketch of the basic network-analysis workflow follows these notes).

Notes:
Computational Cognitive Neuroscience Laboratory.
Twitter: @spornslab.
His excellent book: Networks of the Brain.

Related papers:
Network Neuroscience.
The economy of brain network organization.
Communication dynamics in complex brain networks.
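A small sketch of the network-neuroscience workflow described above, on synthetic data (in practice the matrix would come from tractography or functional correlations): build a graph from a connectivity matrix and compute standard measures of structure and communication.

```python
# Toy connectome analysis with networkx: clustering, hubs, path length.
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
n = 30
A = rng.random((n, n)) < 0.15          # toy binary "connectome"
A = np.triu(A, 1); A = A + A.T         # symmetric, no self-connections

G = nx.from_numpy_array(A.astype(int))
print("mean clustering:", round(nx.average_clustering(G), 3))
hubs = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:3]
print("candidate hub regions:", [node for node, _ in hubs])
if nx.is_connected(G):
    print("characteristic path length:",
          round(nx.average_shortest_path_length(G), 3))
```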