111 minutes | 4 days ago
BI 104 John Kounios and David Rosen: Creativity, Expertise, Insight
What is creativity? How do we measure it? How do our brains implement it, and how might AI? Those are some of the questions John, David, and I discuss. The neuroscience of creativity is young, still in its “wild west” days. We talk about a few creativity studies they’ve performed that distinguish different creative processes with respect to different levels of expertise (in this case, in jazz improvisation), and the underlying brain circuits and activity, including using transcranial direct current stimulation to alter the creative process. Related to creativity, we also discuss the phenomenon and neuroscience of insight (the topic of John’s book, The Eureka Factor), unconscious automatic type 1 processes versus conscious deliberate type 2 processes, states of flow, creative process versus creative products, and a lot more.

John Kounios.
Secret Chord Laboratories (David’s company).
Twitter: @JohnKounios; @NeuroBassDave.
John’s book (with Mark Beeman) on insight and creativity: The Eureka Factor: Aha Moments, Creative Insight, and the Brain.

The papers we discuss or mention:
All You Need to Do Is Ask? The Exhortation to Be Creative Improves Creative Performance More for Nonexpert Than Expert Jazz Musicians
Anodal tDCS to Right Dorsolateral Prefrontal Cortex Facilitates Performance for Novice Jazz Improvisers but Hinders Experts
Dual-process contributions to creativity in jazz improvisations: An SPM-EEG study.

Timestamps:
0:00 – Intro
16:20 – Where are we broadly in the science of creativity?
18:23 – Origins of creativity research
22:14 – Divergent and convergent thought
26:31 – Secret Chord Labs
32:40 – Familiar surprise
38:55 – The Eureka Factor
42:27 – Dual process model
52:54 – Creativity and jazz expertise
55:53 – “Be creative” behavioral study
59:17 – Stimulating the creative brain
1:02:04 – Brain circuits underlying creativity
1:14:36 – What does this tell us about creativity?
1:16:48 – Intelligence vs. creativity
1:18:25 – Switching between creative modes
1:25:57 – Flow states and insight
1:34:29 – Creativity and insight in AI
1:43:26 – Creative products vs. process
87 minutes | 16 days ago
BI 103 Randal Koene and Ken Hayworth: The Road to Mind Uploading
Randal, Ken, and I discuss a host of topics around the future goal of uploading our minds into non-brain systems, to continue our mental lives and expand our range of experiences. The basic requirement for such a substrate-independent mind is to implement whole brain emulation. We discuss two basic approaches to whole brain emulation. The “scan and copy” approach proposes we somehow scan the entire structure of our brains (at whatever scale is necessary) and store that scan until some future date when we have figured out how to use that information to build a substrate that can house a mind. The “gradual replacement” approach proposes we slowly replace parts of the brain with functioning alternative machines, eventually replacing the entire brain with non-biological material while retaining a functioning mind. Randal and Ken are neuroscientists who understand the magnitude and challenges of a massive project like mind uploading, who also understand what we can do right now, with current technology, to advance toward that lofty goal, and who are thoughtful about what steps we need to take to enable further advancements.

Randal A. Koene
Twitter: @randalkoene
Carboncopies Foundation.
Randal’s website.

Ken Hayworth
Twitter: @KennethHayworth
Brain Preservation Foundation.
Youtube videos.

Timestamps:
0:00 – Intro
6:14 – What Ken wants
11:22 – What Randal wants
22:29 – Brain preservation
27:18 – Aldehyde stabilized cryopreservation
31:51 – Scan and copy vs. gradual replacement
38:25 – Building a roadmap
49:45 – Limits of current experimental paradigms
53:51 – Our evolved brains
1:06:58 – Counterarguments
1:10:31 – Animal models for whole brain emulation
1:15:01 – Understanding vs. emulating brains
1:22:37 – Current challenges
92 minutes | a month ago
BI 102 Mark Humphries: What Is It Like To Be A Spike?
Mark and I discuss his book, The Spike: An Epic Journey Through the Brain in 2.1 Seconds. It chronicles how a series of action potentials fires through the brain during a couple seconds of someone’s life. Starting with light hitting the retina as a person looks at a cookie, Mark describes how that light gets translated into spikes, how those spikes get processed in our visual system and eventually transformed into motor commands to grab that cookie. Along the way, he describes some of the big ideas throughout the history of studying brains (like the mechanisms to explain how neurons seem to fire so randomly; a toy sketch of one such mechanism follows the timestamps), the big mysteries we currently face (like why do so many neurons do so little?), and some of the main theories to explain those mysteries (we’re prediction machines!). A fun read and discussion. This is Mark’s second time on the podcast – he was on episode 4 in the early days, talking in more depth about some of the work we discuss in this episode!

The Humphries Lab.
Twitter: @markdhumphries
Book: The Spike: An Epic Journey Through the Brain in 2.1 Seconds.

Related papers:
A spiral attractor network drives rhythmic locomotion.

Timestamps:
0:00 – Intro
3:25 – Writing a book
15:37 – Mark’s main interest
19:41 – Future explanation of brain/mind
27:00 – Stochasticity and excitation/inhibition balance
36:56 – Dendritic computation for network dynamics
39:10 – Do details matter for AI?
44:06 – Spike failure
51:12 – Dark neurons
1:07:57 – Intrinsic spontaneous activity
1:16:16 – Best scientific moment
1:23:58 – Failure
1:28:45 – Advice
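A classic explanation for the seemingly random firing Mark writes about (see 27:00) is the balance of excitation and inhibition: when large excitatory and inhibitory inputs nearly cancel, spiking is driven by fluctuations and looks Poisson-like. Here is a minimal sketch of that idea; this is my illustration rather than code from the book, and every parameter value is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Leaky integrate-and-fire neuron bombarded by nearly balanced excitatory
# and inhibitory Poisson input. The mean input roughly cancels, so spikes
# are driven by fluctuations and the output train is irregular.
dt = 0.1e-3                      # time step (s)
T = 5.0                          # simulation length (s)
tau = 20e-3                      # membrane time constant (s)
v_rest, v_thresh, v_reset = -70.0, -55.0, -70.0   # mV

rate_e, rate_i = 8400.0, 8000.0  # summed input rates (Hz), slight E excess
w_e, w_i = 0.5, -0.5             # mV jump per input spike

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    n_e = rng.poisson(rate_e * dt)            # excitatory input spikes this step
    n_i = rng.poisson(rate_i * dt)            # inhibitory input spikes this step
    v += dt / tau * (v_rest - v) + n_e * w_e + n_i * w_i
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

isis = np.diff(spike_times)
if len(isis) > 1:
    # A CV near 1 indicates Poisson-like irregularity; a steadily driven
    # neuron without inhibition would fire like a metronome (CV near 0).
    print(f"{len(spike_times)} spikes, ISI CV = {isis.std() / isis.mean():.2f}")
```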
105 minutes | a month ago
BI 101 Steve Potter: Motivating Brains In and Out of Dishes
Steve and I discuss his book, How to Motivate Your Students to Love Learning, which is both a memoir and a guide for teachers and students to optimize the learning experience for intrinsic motivation. Steve taught neuroscience and engineering courses while running his own lab studying the activity of live cultured neural populations (which we discuss at length in his previous episode). He relentlessly tested and tweaked his teaching methods, including constant feedback from the students, to optimize their learning experiences. He settled on real-world, project-based learning approaches, like writing Wikipedia articles and helping groups of students design and carry out their own experiments. We discuss that, plus the science behind learning, principles important for motivating students and maintaining that motivation, and many of the other valuable insights he shares in the book. In the first half of the episode we discuss diverse neuroscience and AI topics, like brain organoids, mind-uploading, synaptic plasticity, and more. Then we discuss many of the stories and lessons from his book, which I recommend for teachers, mentors, and life-long students who want to ensure they’re optimizing their own learning.

Potter Lab.
Twitter: @stevempotter.
The Book: How to Motivate Your Students to Love Learning.
The glial cell activity movie.

Timestamps:
0:00 – Intro
6:38 – Brain organoids
18:48 – Glial cell plasticity
24:50 – Whole brain emulation
35:28 – Industry vs. academia
45:32 – Intro to book: How To Motivate Your Students To Love Learning
48:29 – Steve’s childhood influences
57:21 – Developing one’s own intrinsic motivation
1:02:30 – Real-world assignments
1:08:00 – Keys to motivation
1:11:50 – Peer pressure
1:21:16 – Autonomy
1:25:38 – Wikipedia real-world assignment
1:33:12 – Relation to running a lab
50 minutes | a month ago
BI 100.6 Special: Do We Have the Right Vocabulary and Concepts?
We made it to the last bit of our 100th episode celebration. These have been super fun for me, and I hope you’ve enjoyed the collections as well. If you’re wondering where the missing 5th part is, I reserved it exclusively for Brain Inspired’s magnificent Patreon supporters (thanks guys!!!!). The final question I sent to previous guests: Do we already have the right vocabulary and concepts to explain how brains and minds are related? Why or why not?

Timestamps:
0:00 – Intro
5:04 – Andrew Saxe
7:04 – Thomas Naselaris
7:46 – John Krakauer
9:03 – Federico Turkheimer
11:57 – Steve Potter
13:31 – David Krakauer
17:22 – Dean Buonomano
20:28 – Konrad Kording
22:00 – Uri Hasson
23:15 – Rodrigo Quian Quiroga
24:41 – Jim DiCarlo
25:26 – Marcel van Gerven
28:02 – Mazviita Chirimuuta
29:27 – Brad Love
31:23 – Patrick Mayo
32:30 – György Buzsáki
37:07 – Pieter Roelfsema
37:26 – David Poeppel
40:22 – Paul Cisek
44:52 – Talia Konkle
47:03 – Steve Grossberg
64 minutes | 2 months ago
BI 100.4 Special: What Ideas Are Holding Us Back?
In the 4th installment of our 100th episode celebration, previous guests responded to the question: What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why? As usual, the responses are varied and wonderful!

Timestamps:
0:00 – Intro
6:41 – Pieter Roelfsema
7:52 – Grace Lindsay
10:23 – Marcel van Gerven
11:38 – Andrew Saxe
14:05 – Jane Wang
16:50 – Thomas Naselaris
18:14 – Steve Potter
19:18 – Kendrick Kay
22:17 – Blake Richards
27:52 – Jay McClelland
30:13 – Jim DiCarlo
31:17 – Talia Konkle
33:27 – Uri Hasson
35:37 – Wolfgang Maass
38:48 – Paul Cisek
40:41 – Patrick Mayo
41:51 – Konrad Kording
43:22 – David Poeppel
44:22 – Brad Love
46:47 – Rodrigo Quian Quiroga
47:36 – Steve Grossberg
48:47 – Mark Humphries
52:35 – John Krakauer
55:13 – György Buzsáki
59:50 – Stefan Leijnen
1:02:18 – Nathaniel Daw
69 minutes | 2 months ago
BI 100.3 Can We Scale Up to AGI with Current Tech?
Part 3 in our 100th episode celebration. Previous guests answered the question: Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3): Do you think the current trend of scaling compute can lead to human-level AGI? If not, what’s missing? It likely won’t surprise you that the vast majority answer “No.” It also likely won’t surprise you that opinions differ on what’s missing.

Timestamps:
0:00 – Intro
3:56 – Wolfgang Maass
5:34 – Paul Humphreys
9:16 – Chris Eliasmith
12:52 – Andrew Saxe
16:25 – Mazviita Chirimuuta
18:11 – Steve Potter
19:21 – Blake Richards
22:33 – Paul Cisek
26:24 – Brad Love
29:12 – Jay McClelland
34:20 – Megan Peters
37:00 – Dean Buonomano
39:48 – Talia Konkle
40:36 – Steve Grossberg
42:40 – Nathaniel Daw
44:02 – Marcel van Gerven
45:28 – Kanaka Rajan
48:25 – John Krakauer
51:05 – Rodrigo Quian Quiroga
53:03 – Grace Lindsay
55:13 – Konrad Kording
57:30 – Jeff Hawkins
1:02:12 – Uri Hasson
1:04:08 – Jess Hamrick
1:06:20 – Thomas Naselaris
85 minutes | 2 months ago
BI 100.2 What Are the Biggest Challenges and Disagreements?
In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.

Timestamps:
0:00 – Intro
7:10 – Rodrigo Quian Quiroga
8:33 – Mazviita Chirimuuta
9:15 – Chris Eliasmith
12:50 – Jim DiCarlo
13:23 – Paul Cisek
16:42 – Nathaniel Daw
17:58 – Jessica Hamrick
19:07 – Russ Poldrack
20:47 – Pieter Roelfsema
22:21 – Konrad Kording
25:16 – Matt Smith
27:55 – Rafal Bogacz
29:17 – John Krakauer
30:47 – Marcel van Gerven
31:49 – György Buzsáki
35:38 – Thomas Naselaris
36:55 – Steve Grossberg
48:32 – David Poeppel
49:24 – Patrick Mayo
50:31 – Stefan Leijnen
54:24 – David Krakauer
58:13 – Wolfgang Maass
59:13 – Uri Hasson
59:50 – Steve Potter
1:01:50 – Talia Konkle
1:04:30 – Matt Botvinick
1:06:36 – Brad Love
1:09:46 – John Brennan
1:19:31 – Grace Lindsay
1:22:28 – Andrew Saxe
43 minutes | 2 months ago
BI 100.1 Special: What Has Improved Your Career or Well-being?
Brain Inspired turns 100 (episodes) today! To celebrate, my Patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I’ve collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, “In the last five years, what new belief, behavior, or habit has most improved your career or well-being?” See below for links to each previous guest. And away we go…

Timestamps:
0:00 – Intro
6:13 – David Krakauer
8:50 – David Poeppel
9:32 – Jay McClelland
11:03 – Patrick Mayo
11:45 – Marcel van Gerven
12:11 – Blake Richards
12:25 – John Krakauer
14:22 – Nicole Rust
15:26 – Megan Peters
17:03 – Andrew Saxe
18:11 – Federico Turkheimer
20:03 – Rodrigo Quian Quiroga
22:03 – Thomas Naselaris
23:09 – Steve Potter
24:37 – Brad Love
27:18 – Steve Grossberg
29:04 – Talia Konkle
29:58 – Paul Cisek
32:28 – Kanaka Rajan
34:33 – Grace Lindsay
35:40 – Konrad Kording
36:30 – Mark Humphries
107 minutes | 2 months ago
BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness
Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher order theories (HOTs) of consciousness, related to metacognition. So we discuss HOTs in particular and their relation to other approaches/theories, the idea of approaching consciousness as a computational problem to be tackled with computational modeling, the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness, two of the models they’re working on now to account for various properties of conscious experience, and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters.

Hakwan’s lab: Consciousness and Metacognition Lab.
Steve’s lab: The MetaLab.
Twitter: @hakwanlau; @smfleming.
Hakwan’s brief Aeon article: Is consciousness a battle between your beliefs and perceptions?

Related papers:
An Informal Internet Survey on the Current State of Consciousness Science.
Opportunities and challenges for a maturing science of consciousness.
What is consciousness, and could machines have it?
Understanding the higher-order approach to consciousness.
Awareness as inference in a higher-order state space. (Steve’s Bayesian predictive generative model)
Consciousness, Metacognition, & Perceptual Reality Monitoring. (Hakwan’s reality-monitoring model a la generative adversarial networks)

Timestamps:
0:00 – Intro
7:25 – Steve’s upcoming book
8:40 – Challenges to study consciousness
15:50 – Gurus and backscratchers
23:58 – Will the problem of consciousness disappear?
27:52 – Will an explanation feel intuitive?
29:54 – What do you want to be true?
38:35 – Lucid dreaming
40:55 – Higher order theories
50:13 – Reality monitoring model of consciousness
1:00:15 – Higher order state space model of consciousness
1:05:50 – Comparing their models
1:10:47 – Machine consciousness
1:15:30 – Nature of first order representations
1:18:20 – Consciousness prior (Yoshua Bengio)
1:20:20 – Function of consciousness
1:31:57 – Legacy
1:40:55 – Current projects
93 minutes | 3 months ago
BI 098 Brian Christian: The Alignment Problem
Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to AI that compromises our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:

The history of machine learning and how we got to this point;
Some methods researchers are creating to understand what’s being represented in neural nets and how they generate their output;
Some modern proposed solutions to the alignment problem, like programming the machines to learn our preferences so they can help achieve those preferences – an idea called inverse reinforcement learning (a toy sketch of the idea follows the timestamps);
The thorny issue of accurately knowing our own values – if we get those wrong, will machines also get them wrong?

Links:
Brian’s website.
Twitter: @brianchristian.
The Alignment Problem: Machine Learning and Human Values.

Related papers:
Norbert Wiener from 1960: Some Moral and Technical Consequences of Automation.

Timestamps:
4:22 – Increased work on AI ethics
8:59 – The Alignment Problem overview
12:36 – Stories as important for intelligence
16:50 – What is the alignment problem
17:37 – Who works on the alignment problem?
25:22 – AI ethics degree?
29:03 – Human values
31:33 – AI alignment and evolution
37:10 – Knowing our own values?
46:27 – What have we learned about ourselves?
58:51 – Interestingness
1:00:53 – Inverse RL for value alignment
1:04:50 – Current progress
1:10:08 – Developmental psychology
1:17:36 – Models as the danger
1:25:08 – How worried are the experts?
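To make the inverse reinforcement learning idea concrete, here is a minimal sketch (my illustration, not code from the book): assume reward is linear in state features and nudge the reward weights until the learner’s feature expectations match the expert’s, in the spirit of Abbeel and Ng’s apprenticeship learning. The one-step bandit-style setup, the softmax learner, and every number are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_features = 5, 3
phi = rng.random((n_states, n_features))      # feature vector per state

# "Expert" demonstrations: made-up visitation frequencies over states.
expert_visits = np.array([0.05, 0.05, 0.1, 0.2, 0.6])
mu_expert = expert_visits @ phi               # expert feature expectations

w = np.zeros(n_features)                      # reward weights to infer
for _ in range(200):
    # The learner acts with a softmax policy over its current reward estimates.
    r = phi @ w
    policy = np.exp(r - r.max())
    policy /= policy.sum()
    mu_learner = policy @ phi
    # Nudge reward weights so the learner's feature expectations
    # move toward the expert's (gradient of the matching objective).
    w += 0.5 * (mu_expert - mu_learner)

# States the expert visited most should end up with the highest inferred reward.
print("inferred reward per state:", np.round(phi @ w, 2))
```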
84 minutes | 3 months ago
BI 097 Omri Barak and David Sussillo: Dynamics and Structure
Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking (a toy sketch of the paper’s fixed-point analysis follows the timestamps). Some of the other topics we discuss:

The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space;
Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
The difference between classical approaches to modeling brains and the machine learning approach;
The concept of universality – that the variety of artificial RNNs and natural RNNs (brains) adhere to some similar dynamical structure despite differences in the computations they perform;
How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state).

David was on episode 5, for a more introductory episode on dynamics, RNNs, and brains.

Barak Lab
Twitter: @SussilloDavid

The papers we discuss or mention:
Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
Computation Through Neural Population Dynamics.
Implementing Inductive bias for different navigation tasks through diverse RNN attractors.
Dynamics of random recurrent networks with correlated low-rank structure.
Quality of internal representation shapes learning performance in feedback neural networks.
Feigenbaum’s universality constant original paper: Feigenbaum, M. J. (1976) “Universality in complex discrete dynamics”, Los Alamos Theoretical Division Annual Report 1975-1976.

Talks:
Universality and individuality in neural dynamics across large populations of recurrent networks.
World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021.

Timestamps:
0:00 – Intro
5:41 – Best scientific moment
9:37 – Why do you do what you do?
13:21 – Computation via dynamics
19:12 – Evolution of thinking about RNNs and brains
26:22 – RNNs vs. minds
31:43 – Classical computational modeling vs. machine learning modeling approach
35:46 – What are models good for?
43:08 – Ecological task validity with respect to using RNNs as models
46:27 – Optimization vs. learning
49:11 – Universality
1:00:47 – Solutions dictated by tasks
1:04:51 – Multiple solutions to the same task
1:11:43 – Direct fit (Uri Hasson)
1:19:09 – Thinking about the bigger picture
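The core recipe of Opening the Black Box can be stated compactly: treat the trained RNN as a dynamical system and numerically search for points where the state barely moves, i.e., minimize the “speed” q(x) = 1/2 ||F(x) - x||^2. Below is a minimal sketch of that search; the random weight matrix is a stand-in for a trained network, and the learning rate, sizes, and step count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# Random recurrent weights as a stand-in for a trained RNN's weights.
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))

def F(x):
    """One update of a vanilla rate RNN with no input: x <- tanh(W x)."""
    return np.tanh(W @ x)

def find_fixed_point(x0, lr=0.05, steps=5000):
    """Gradient descent on q(x) = 1/2 ||F(x) - x||^2.

    Points where q is ~0 are fixed points of the dynamics; linearizing
    F around them exposes the network's low-dimensional skeleton.
    """
    x = x0.copy()
    for _ in range(steps):
        fx = F(x)
        delta = fx - x
        # dq/dx = (J_F - I)^T (F(x) - x), with J_F = diag(1 - F(x)^2) W.
        J = (1 - fx**2)[:, None] * W
        x = x - lr * (J - np.eye(N)).T @ delta
    return x, 0.5 * np.sum((F(x) - x) ** 2)

x_star, q = find_fixed_point(rng.normal(0, 0.5, N))
print(f"q at candidate fixed point: {q:.2e}")  # near zero means a true fixed point
```

In practice this search is run from many initial states sampled along the network’s trajectories, and the collection of fixed and slow points is what gets compared across networks and to neural recordings.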
94 minutes | 3 months ago
BI 096 Keisuke Fukuda and Josh Cosman: Forking Paths
K, Josh, and I were postdocs together in Jeff Schall’s and Geoff Woodman’s labs. K and Josh had backgrounds in psychology and were getting their first experience with neurophysiology, recording single neuron activity in awake behaving primates. This episode is a discussion surrounding their reflections and perspectives on neuroscience and psychology, given their backgrounds and experience (we reference episode 84 with György Buzsáki and David Poeppel). We also talk about their divergent paths – K stayed in academia and runs an EEG lab studying human decision-making and memory, and Josh left academia and has worked for three different pharmaceutical and tech companies. So this episode doesn’t get into gritty science questions, but is a light discussion about the state of neuroscience, psychology, and AI, and reflections on academia and industry, life in lab, and plenty more.

The Fukuda Lab.
Josh’s website.
Twitter: @KeisukeFukuda4

Timestamps:
0:00 – Intro
4:30 – K intro
5:30 – Josh intro
10:16 – Academia vs. industry
16:01 – Concern with legacy
19:57 – Best scientific moment
24:15 – Experiencing neuroscience as a psychologist
27:20 – Neuroscience as a tool
30:38 – Brain/mind divide
33:27 – Shallow vs. deep knowledge in academia and industry
36:05 – Autonomy in industry
42:20 – Is this a turning point in neuroscience?
46:54 – Deep learning revolution
49:34 – Deep nets to understand brains
54:54 – Psychology vs. neuroscience
1:06:42 – Is language sufficient?
1:11:33 – Human-level AI
1:13:53 – How will history view our era of neuroscience?
1:23:28 – What would you have done differently?
1:26:46 – Something you wish you knew
85 minutes | 4 months ago
BI 095 Chris Summerfield and Sam Gershman: Neuro for AI?
It’s generally agreed that machine learning and AI provide neuroscience with tools for analysis and theoretical principles to test in brains, but there is less agreement about what neuroscience can provide AI. Should computer scientists and engineers care about how brains compute, or will it just slow them down, for example? Chris, Sam, and I discuss how neuroscience might contribute to AI moving forward, considering the past and present. This discussion also leads into related topics, like the role of prediction versus understanding, AGI, explainable AI, value alignment, the fundamental conundrum that humans specify the ultimate values of the tasks AI will solve, and more. Plus, a question from previous guest Andrew Saxe.

Chris’s lab: Human Information Processing lab.
Sam’s lab: Computational Cognitive Neuroscience Lab.
Twitter: @gershbrain; @summerfieldlab

Papers we discuss, mention, or that are related:
If deep learning is the answer, then what is the question?
Neuroscience-Inspired Artificial Intelligence.
Building Machines that Learn and Think Like People.

Timestamps:
0:00 – Intro
5:00 – Good ol’ days
13:50 – AI for neuro, neuro for AI
24:25 – Intellectual diversity in AI
28:40 – Role of philosophy
30:20 – Operationalization and benchmarks
36:07 – Prediction vs. understanding
42:48 – Role of humans in the loop
46:20 – Value alignment
51:08 – Andrew Saxe question
53:16 – Explainable AI
58:55 – Generalization
1:01:09 – What has AI revealed about us?
1:09:38 – Neuro for AI
1:20:30 – Concluding remarks
79 minutes | 4 months ago
BI 094 Alison Gopnik: Child-Inspired AI
Alison and I discuss her work to accelerate learning, and thus improve AI, by studying how children learn, as Alan Turing suggested in his famous 1950 paper. Children learn via imitation, by building abstract causal models, and through active learning with a high exploration/exploitation ratio. We also discuss child consciousness, psychedelics, the concept of life history, the role of grandparents and elders, and lots more.

Alison’s Website.
Cognitive Development and Learning Lab.
Twitter: @AlisonGopnik.

Related papers:
Childhood as a solution to explore-exploit tensions.
The Aeon article about grandparents, children, and evolution: Vulnerable Yet Vital.

Books:
The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children.
The Scientist in the Crib: What Early Learning Tells Us About the Mind.
The Philosophical Baby: What Children’s Minds Tell Us About Truth, Love, and the Meaning of Life.

Take-home points:
Children learn by imitation, and not just unthinking imitation. They pay attention to and evaluate the intentions of others and judge whether a person seems to be a reliable source of information. That is, they learn by sophisticated socially-constrained imitation.
Children build abstract causal models of the world. This allows them to simulate potential outcomes and test their actions against those simulations, accelerating learning.
Children keep their foot on the exploration pedal, actively learning by exploring a wide spectrum of actions to determine what works. As we age, our exploratory cognition decreases, and we begin to exploit more what we’ve learned. (A toy sketch of this explore/exploit trade-off follows the timestamps.)

Timestamps:
0:00 – Intro
4:40 – State of the field
13:30 – Importance of learning
20:12 – Turing’s suggestion
22:49 – Patience for one’s own ideas
28:53 – Learning via imitation
31:57 – Learning abstract causal models
41:42 – Life history
43:22 – Learning via exploration
56:19 – Explore-exploit dichotomy
58:32 – Synaptic pruning
1:00:19 – Breakthrough research in careers
1:04:31 – Role of elders
1:09:08 – Child consciousness
1:11:41 – Psychedelics as child-like brain
1:16:00 – Build consciousness into AI?
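The explore/exploit trade-off in the last take-home point is easy to see in a multi-armed bandit. Below is a minimal sketch (my illustration, not from the episode or papers): an epsilon-greedy agent whose exploration rate decays over time, loosely mirroring high childhood exploration giving way to adult exploitation. All values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])      # payoff probabilities, unknown to the agent
estimates = np.zeros(3)                        # running estimate per arm
counts = np.zeros(3)                           # pulls per arm

for t in range(1, 2001):
    # "Childhood": exploration rate starts at 1.0 and decays with "age".
    epsilon = max(0.05, 1.0 - t / 1000)
    if rng.random() < epsilon:
        arm = int(rng.integers(3))             # explore: try anything
    else:
        arm = int(np.argmax(estimates))        # exploit: best known option
    reward = rng.random() < true_rewards[arm]  # Bernoulli payoff
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print("reward estimates:", np.round(estimates, 2))
print("pulls per arm:", counts.astype(int))
```

Early broad sampling is what lets the agent discover the best arm before it settles into exploiting it; an agent that exploits from the start often locks onto a mediocre arm.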
67 minutes | 4 months ago
BI 093 Dileep George: Inference in Brain Microcircuits
Dileep and I discuss his theoretical account of how the thalamus and cortex work together to implement visual inference. We talked previously about his Recursive Cortical Network (RCN) approach to visual inference, which is a probabilistic graphical model that can solve hard problems like CAPTCHAs, and more recently we talked about using his RCNs with cloned units to account for cognitive maps related to the hippocampus. On this episode, we walk through how RCNs can map onto thalamo-cortical circuits so a given cortical column can signal whether it believes some concept or feature is present in the world, based on bottom-up incoming sensory evidence, top-down attention, and lateral related features (a toy sketch of the “explaining away” pattern that underlies this kind of inference follows the timestamps). We also briefly compare this bio-RCN version with Randy O’Reilly’s Deep Predictive Learning account of thalamo-cortical circuitry.

Vicarious website – Dileep’s AGI robotics company.
Twitter: @dileeplearning

The papers we discuss or mention:
A detailed mathematical theory of thalamic and cortical microcircuits based on inference in a generative vision model.
From CAPTCHA to Commonsense: How Brain Can Teach Us About Artificial Intelligence.
Probabilistic graphical models.
Hierarchical temporal memory.

Timestamps:
0:00 – Intro
5:18 – Levels of abstraction
7:54 – AGI vs. AHI vs. AUI
12:18 – Ideas and failures in startups
16:51 – Thalamic cortical circuitry computation
22:07 – Recursive cortical networks
23:34 – bio-RCN
27:48 – Cortical column as binary random variable
33:37 – Clonal neuron roles
39:23 – Processing cascade
41:10 – Thalamus
47:18 – Attention as explaining away
50:51 – Comparison with O’Reilly’s predictive coding framework
55:39 – Subjective contour effect
1:01:20 – Necker cube
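“Attention as explaining away” (47:18) rests on a standard pattern in probabilistic graphical models: two independent causes compete to explain the same evidence, so confirming one cause lowers the posterior on the other. Here is a minimal sketch with a two-cause noisy-OR model; the numbers are arbitrary, and this is the generic inference pattern rather than Dileep’s bio-RCN.

```python
# Explaining away in a two-cause noisy-OR model, by exact enumeration.
p_a, p_b = 0.1, 0.1      # prior probability each cause is active
leak = 0.01              # evidence can appear with no cause active
p_given = 0.9            # probability evidence appears if a cause is active

def p_evidence(a, b):
    """Noisy-OR: evidence fails only if every active cause (and the leak) fails."""
    fail = 1 - leak
    if a:
        fail *= 1 - p_given
    if b:
        fail *= 1 - p_given
    return 1 - fail

# Joint P(a, b, evidence=1) over the four cause configurations.
joint = {(a, b): (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b) * p_evidence(a, b)
         for a in (0, 1) for b in (0, 1)}

z = sum(joint.values())
p_a_given_e = (joint[(1, 0)] + joint[(1, 1)]) / z
p_a_given_e_and_b = joint[(1, 1)] / (joint[(0, 1)] + joint[(1, 1)])

print(f"P(A | evidence)           = {p_a_given_e:.2f}")   # ~0.50
print(f"P(A | evidence, B active) = {p_a_given_e_and_b:.2f}  (explained away)")  # ~0.11
```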
102 minutes | 5 months ago
BI 092 Russ Poldrack: Cognitive Ontologies
Russ and I discuss cognitive ontologies – the “parts” of the mind and their relations – as an ongoing dilemma of how to map onto each other what we know about brains and what we know about minds. We talk about whether we have the right ontology now, how he uses both top-down and data-driven approaches to analyze and refine current ontologies, and how all this has affected his own thinking about minds. We also discuss some of the current meta-science issues and challenges in neuroscience and AI, and Russ answers guest questions from Kendrick Kay and David Poeppel.

Russ’s website.
Poldrack Lab.
Stanford Center for Reproducible Neuroscience.
Twitter: @russpoldrack.
Book: The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts.

The papers we discuss or mention:
Atlases of cognition with large-scale human brain mapping.
Mapping Mental Function to Brain Structure: How Can Cognitive Neuroimaging Succeed?
From Brain Maps to Cognitive Ontologies: Informatics and the Search for Mental Structure.
Uncovering the structure of self-regulation through data-driven ontology discovery.

Talks:
Reproducibility: NeuroHackademy: Russell Poldrack – Reproducibility in fMRI: What is the problem?
Cognitive ontology: Cognitive Ontologies, from Top to Bottom.
A good series of talks about cognitive ontologies: Online Seminar Series: Problem of Cognitive Ontology.

Some take-home points:
Our folk psychological cognitive ontology hasn’t changed much since early Greek philosophy, and especially since William James wrote about attention, consciousness, and so on.
Using encoding models, we can predict brain responses pretty well based on what task a subject is performing or what “cognitive function” a subject is engaging, at least to a coarse approximation. (A toy sketch of the encoding-model recipe follows the timestamps.)
Using a data-driven approach has potential to help determine mental structure, but important human decisions must still be made regarding how exactly to divide up the various “parts” of the mind.

Timestamps:
0:00 – Introduction
5:59 – Meta-science issues
19:00 – Kendrick Kay question
23:00 – State of the field
30:06 – fMRI for understanding minds
35:13 – Computational mind
42:10 – Cognitive ontology
45:17 – Cognitive Atlas
52:05 – David Poeppel question
57:00 – Does ontology matter?
59:18 – Data-driven ontology
1:12:29 – Dynamical systems approach
1:16:25 – György Buzsáki’s inside-out approach
1:22:26 – Ontology for AI
1:27:39 – Deep learning hype
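The encoding-model recipe from the take-home points, in a minimal sketch: fit a linear mapping from task regressors to per-voxel responses on training trials, then score it by how well it predicts held-out responses. The data here are synthetic, and the sizes, ridge penalty, and train/test split are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tasks, n_voxels = 200, 8, 50

X = rng.integers(0, 2, (n_trials, n_tasks)).astype(float)  # task design matrix
B_true = rng.normal(0, 1, (n_tasks, n_voxels))             # voxel "tuning" to tasks
Y = X @ B_true + rng.normal(0, 1.0, (n_trials, n_voxels))  # noisy responses

train, test = slice(0, 150), slice(150, 200)
lam = 1.0  # ridge penalty
# Closed-form ridge regression: B = (X'X + lam I)^-1 X'Y.
B_hat = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_tasks),
                        X[train].T @ Y[train])

# Score the model by held-out prediction accuracy, voxel by voxel.
Y_pred = X[test] @ B_hat
r = [np.corrcoef(Y_pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out prediction r = {np.median(r):.2f}")
```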
88 minutes | 5 months ago
BI 091 Carsen Stringer: Understanding 40,000 Neurons
Carsen and I discuss how she uses 2-photon calcium imaging data from over 10,000 neurons to understand the information processing of such large-scale neural population activity. We talk about the tools she makes and uses to analyze the data, and the type of high-dimensional neural activity structure they found, which seems to allow efficient and robust information processing (a toy sketch of the eigenspectrum analysis behind that finding follows the timestamps). We also talk about how these findings may help build better deep learning networks, and Carsen’s thoughts on how to improve diversity, inclusivity, and equality in neuroscience research labs. Guest question from Matt Smith.

Stringer Lab.
Twitter: @computingnature.

The papers we discuss or mention:
High-dimensional geometry of population responses in visual cortex.
Spontaneous behaviors drive multidimensional, brain-wide population activity.

Timestamps:
0:00 – Intro
5:51 – Recording > 10k neurons
8:51 – 2-photon calcium imaging
14:56 – Balancing scientific questions and tools
21:16 – Unsupervised learning tools and rastermap
26:14 – Manifolds
32:13 – Matt Smith question
37:06 – Dimensionality of neural activity
58:51 – Future plans
1:00:30 – What can AI learn from this?
1:13:26 – Diversity, inclusivity, equality
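The “high-dimensional geometry” finding concerns the eigenspectrum of population activity: in the paper, the variance of the n-th principal component falls off roughly as a power law, n^(-alpha), with alpha near 1. Here is a minimal sketch of that analysis on synthetic data with a power law built in; all sizes and the fit range are arbitrary, and real data require cross-validated spectrum estimates that this sketch skips.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 500, 2000

# Synthetic population activity whose covariance eigenspectrum follows n^-1:
# scale independent latents by sqrt(1/rank), then rotate into "neuron" space.
ranks = np.arange(1, n_neurons + 1)
latents = rng.normal(size=(n_stimuli, n_neurons)) * np.sqrt(1.0 / ranks)
basis, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_neurons)))
activity = latents @ basis.T

# PCA eigenspectrum: variance explained by each principal component.
activity -= activity.mean(axis=0)
cov = activity.T @ activity / (n_stimuli - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]      # descending order

# Estimate alpha as the (negated) slope in log-log space over a middle range.
lo, hi = 10, 100
slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(eigvals[lo:hi]), 1)
print(f"estimated power-law exponent alpha = {-slope:.2f} (built in: 1.0)")
```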
99 minutes | 6 months ago
BI 090 Chris Eliasmith: Building the Human Brain
Chris and I discuss Spaun, his large-scale model of the human brain (Semantic Pointer Architecture Unified Network), as detailed in his book How to Build a Brain. We talk about his philosophical approach, how Spaun compares to Randy O’Reilly’s Leabra networks, and Applied Brain Research, the company Chris co-founded, and I ask guest questions from Brad Aimone, Steve Potter, and Randy O’Reilly.

Chris’s website.
Applied Brain Research.
The book: How to Build a Brain.
Nengo (you can run Spaun).
Paper summary of Spaun: A large-scale model of the functioning brain.

Some takeaways:
Spaun is an embodied, fully functional cognitive architecture with one eye for task instructions and an arm for responses.
Chris uses elements from symbolic, connectionist, and dynamical systems approaches in cognitive science.
The neural engineering framework (NEF) is how functions get instantiated in spiking neural networks (a toy Nengo example follows the timestamps).
The semantic pointer architecture (SPA) is how representations are stored and transformed – i.e., the symbolic-like cognitive processing.

Timestamps:
0:00 – Intro
2:29 – Sense of awe
6:20 – Large-scale models
9:24 – Descriptive pragmatism
15:43 – Asking better questions
22:48 – Brad Aimone question: Neural engineering framework
29:07 – Engineering to build vs. understand
32:12 – Why is the AI world not interested in brains/minds?
37:09 – Steve Potter neuromorphics question
44:51 – Spaun
49:33 – Semantic Pointer Architecture
56:04 – Representations
58:21 – Randy O’Reilly question 1
1:07:33 – Randy O’Reilly question 2
1:10:31 – Spaun vs. Leabra
1:32:43 – How would Chris start over?
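Since Nengo (linked above) is openly available, the NEF takeaway can be seen in a few lines: the function a connection computes (here x -> x^2) is baked into decoded connection weights between populations of spiking neurons. A minimal example of the framework, not Spaun itself:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # scalar input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking population representing x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)      # spiking population representing x^2
    nengo.Connection(stim, a)
    # NEF: the connection's decoders are solved so the population of spiking
    # neurons in `a` drives `b` with (an approximation of) x squared.
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)                 # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print("decoded x^2 (last 5 samples):", sim.data[probe][-5:].ravel())
```

The decoded output should trace sin(2*pi*t) squared, noisily; Spaun composes thousands of such ensembles, with the SPA handling the symbol-like representations on top.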
87 minutes | 6 months ago
BI 089 Matt Smith: Drifting Cognition
Matt and I discuss how cognition and behavior drift over the course of minutes and hours, and how global brain activity drifts with it. How does the brain continue to produce steady perception and action in the midst of such drift? We also talk about how to think about variability in neural activity: how much of it is noise, and how much of it is hidden important activity? Finally, we discuss the effect of recording more and more neurons simultaneously, collecting bigger and bigger datasets, plus guest questions from Adam Snyder and Patrick Mayo.

Smith Lab.
Twitter: @SmithLabNeuro.

Related:
Slow drift of neural activity as a signature of impulsivity in macaque visual and prefrontal cortex.
Artwork by Melissa Neely.

Take-home points:
The “noise” in the variability of neural activity is likely just activity devoted to processing other things.
Recording lots of neurons simultaneously helps resolve the question of what’s noise and how much information is in a population of neurons.
There’s a neural signature of the behavioral “slow drift” of our internal cognitive state (a toy sketch of one way to estimate such a drift axis follows the timestamps).
The neural signature is global, and it’s an open question how the brain compensates to produce steady perception and action.

Timestamps:
0:00 – Intro
4:35 – Adam Snyder question
15:26 – Multi-electrode recordings
17:48 – What is noise in the brain?
23:55 – How many neurons is enough?
27:43 – Patrick Mayo question
33:17 – Slow drift
54:10 – Impulsivity
57:32 – How does drift happen?
59:49 – Relation to AI
1:06:58 – What AI and neuro can teach each other
1:10:02 – Ecologically valid behavior
1:14:39 – Brain mechanisms vs. mind
1:17:36 – Levels of description
1:21:14 – Hard things to make in AI
1:22:48 – Best scientific moment
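One simple way to estimate a slow-drift axis, loosely following the logic of the paper above (the details below are my simplification, run on synthetic data): average population activity in large trial bins across the session, then take the first principal component of the binned means as the drift direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 1000, 40

# Synthetic session: trial-by-trial "noise" plus a slow session-long signal
# that moves the population along a fixed drift axis.
drift_axis = rng.normal(size=n_neurons)
drift_axis /= np.linalg.norm(drift_axis)
slow = np.sin(np.linspace(0, 2 * np.pi, n_trials))
responses = rng.normal(size=(n_trials, n_neurons)) + 1.5 * np.outer(slow, drift_axis)

# Bin trials into large chunks (here 20 bins of 50 trials) and average,
# which washes out fast trial-to-trial variability but keeps the slow signal.
bins = responses.reshape(20, 50, n_neurons).mean(axis=1)
bins -= bins.mean(axis=0)

# First principal component of the binned means = estimated slow-drift axis.
_, _, vt = np.linalg.svd(bins, full_matrices=False)
est_axis = vt[0]
print(f"alignment with true drift axis: {abs(est_axis @ drift_axis):.2f}")  # near 1
```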