The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
42 minutes | Dec 6, 2021
re:Invent Roundup 2021 with Bratin Saha - #542
Today we’re joined by Bratin Saha, vice president and general manager at Amazon. In our conversation with Bratin, we discuss quite a few of the recent ML-focused announcements coming out of last week’s re:Invent conference, including new products like Canvas and Studio Lab, as well as upgrades to existing services like Ground Truth Plus. We explore what no-code environments like the aforementioned Canvas mean for the democratization of ML tooling, and some of the key challenges to delivering it as a consumable product. We also discuss industrialization as a subset of MLOps, how customer patterns inform the creation of these tools, and much more! The complete show notes for this episode can be found at twimlai.com/go/542.
45 minutes | Dec 2, 2021
Multi-modal Deep Learning for Complex Document Understanding with Doug Burdick - #541
Today we’re joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug’s colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she’s working on. One of those problems is making documents machine consumable, especially with the traditionally archival file type, the PDF. That’s where Doug and his team come in. In our conversation, we discuss the multimodal approach they’ve taken to identify, interpret, contextualize, and extract things like tables from a document, the challenges they’ve faced when dealing with tables, and how they evaluate the performance of models on tables. We also explore how he’s handled generalizing across different formats, how extensive fine-tuning has to be in order to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group. The complete show notes for this episode can be found at twimlai.com/go/541.
49 minutes | Nov 29, 2021
Predictive Maintenance Using Deep Learning and Reliability Engineering with Shayan Mortazavi - #540
Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC & AI Summit, titled A Novel Framework for Predictive Maintenance Using DL and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach rather than traditional anomaly detection to determine when an anomaly is occurring, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more! The complete show notes for this episode can be found at twimlai.com/go/540.
50 minutes | Nov 24, 2021
Building a Deep Tech Startup in NLP with Nasrin Mostafazadeh - #539
Today we’re joined by friend-of-the-show Nasrin Mostafazadeh, co-founder of Verneek. Though Verneek is still in stealth, Nasrin was gracious enough to share a bit about the company, including their goal of enabling anyone to make data-informed decisions without the need for a technical background, through the use of innovative human-machine interfaces. In our conversation, we explore the state of AI research in the domains relevant to the problem they’re trying to solve and how they use those insights to inform and prioritize their research agenda. We also discuss what advice Nasrin would give to someone thinking about starting a deep tech startup or going from research to product development. The complete show notes for today’s show can be found at twimlai.com/go/539.
41 minutes | Nov 22, 2021
Models for Human-Robot Collaboration with Julie Shah - #538
Today we’re joined by Julie Shah, a professor at the Massachusetts Institute of Technology (MIT). Julie’s work lies at the intersection of aeronautics, astronautics, and robotics, with a specific focus on collaborative and interactive robotics. In our conversation, we explore how robots could achieve the ability to predict what their human collaborators are thinking, what the process of building knowledge into these systems looks like, and her big-picture idea of developing a field robot that doesn’t “require a human to be a robot” to work with it. We also discuss work Julie has done on cross-training between humans and robots with the focus on getting them to co-learn how to work together, as well as future projects that she’s excited about. The complete show notes for this episode can be found at twimlai.com/go/538.
57 minutes | Nov 18, 2021
Four Key Tools for Robust Enterprise NLP with Yunyao Li - #537
Today we’re joined by Yunyao Li, a senior research manager at IBM Research. Yunyao is in a somewhat unique position at IBM, addressing the challenges of enterprise NLP in a traditional research environment while also having customer engagement responsibilities. In our conversation with Yunyao, we explore the challenges associated with productizing NLP in the enterprise, and whether she focuses on solving these problems independent of one another or through a more unified approach. We then ground the conversation with real-world examples of these enterprise challenges, including enabling document-level discovery at scale using combinations of techniques like deep neural networks and supervised and/or unsupervised learning, and entity extraction and semantic parsing to identify text. Finally, we talk through data augmentation in the context of NLP, and how we enable humans in the loop to generate high-quality data. The complete show notes for this episode can be found at twimlai.com/go/537.
62 minutes | Nov 15, 2021
Machine Learning at GSK with Kim Branson - #536
Today we’re joined by Kim Branson, the SVP and global head of artificial intelligence and machine learning at GSK. We cover a lot of ground in our conversation, starting with a breakdown of GSK’s core pharmaceutical business and how ML/AI fits into that equation, then moving to use cases built on genetics data, including sequential learning for drug discovery. We also explore the 500 billion node knowledge graph Kim’s team built to mine scientific literature, and their “AI Hub”, the ML/AI infrastructure team that handles all tooling and engineering problems within their organization. Finally, we explore their recent cancer research collaboration with King’s College, which is tasked with understanding the individualized needs of high- and low-risk cancer patients using ML/AI amongst other technologies. The complete show notes for this episode can be found at twimlai.com/go/536.
59 minutes | Nov 11, 2021
The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha - #535
Today we’re joined by David Ha, a research scientist at Google. In nature, there are many examples of “bottlenecks”, or constraints, that have shaped our development as a species. Building upon this idea, David posits that these same evolutionary bottlenecks could work when training neural network models as well. In our conversation with David, we cover a TON of ground, including the aforementioned biological inspiration for his work, then digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced training agents inside of generative models has become, and quite a few papers, including Neuroevolution of Self-Interpretable Agents, World Models and Attention for Reinforcement Learning, and The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning. This interview is Nerd Alert certified, so get your notes ready! P.S. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work! The complete show notes for this episode can be found at twimlai.com/go/535.
42 minutes | Nov 8, 2021
Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark - #534
Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “Physiognomic Artificial Intelligence”, in which he critiques studies that attempt to use faces, facial expressions, and features to make determinations about people, a practice fundamental to facial recognition, and one that Luke believes is inherently racist at its core. Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) announcing that Facebook will be shutting down their facial recognition system, and why it’s not necessarily the game-changing announcement it seemed on its… face. The complete show notes for this episode can be found at twimlai.com/go/534.
43 minutes | Nov 4, 2021
Building Blocks of Machine Learning at LEGO with Francesc Joan Riera - #533
Today we’re joined by Francesc Joan Riera, an applied machine learning engineer at The LEGO Group. In our conversation, we explore the ML infrastructure at LEGO, specifically around two use cases: content moderation and user engagement. While content moderation is not a new or novel task, their apps and products are marketed towards children, and their need for heightened levels of moderation makes it very interesting. We discuss whether the moderation system is built specifically to weed out bad actors or passive behaviors, whether their system has a human-in-the-loop component, why they built a feature store as opposed to a traditional database, and the challenges they faced along that journey. We also talk through the range of skill sets on their team, the use of MLflow for experimentation, the adoption of AWS for serverless, and so much more! The complete show notes for this episode can be found at twimlai.com/go/533.
40 minutes | Nov 1, 2021
Exploring the FastAI Tooling Ecosystem with Hamel Husain - #532
Today we’re joined by Hamel Husain, Staff Machine Learning Engineer at GitHub. Over the last few years, Hamel has had the opportunity to work on some of the most popular open source projects in the ML world, including fast.ai, nbdev, fastpages, and fastcore, just to name a few. In our conversation with Hamel, we discuss his journey into Silicon Valley, how he discovered that ML tooling and infrastructure weren’t quite as advanced as he’d assumed, and how that led him to help build some of the foundational pieces of Airbnb’s Bighead platform. We also spend time exploring Hamel’s time working with Jeremy Howard and the team creating fast.ai, how nbdev came about, and how it plans to change the way practitioners interact with traditional Jupyter notebooks. Finally, we talk through a few more tools in the fast.ai ecosystem, fastpages and fastcore, how these tools interact with GitHub Actions, and the up-and-coming ML tools that Hamel is excited about. The complete show notes for this episode can be found at twimlai.com/go/532.
37 minutes | Oct 28, 2021
Multi-task Learning for Melanoma Detection with Julianna Ianni - #531
In today’s episode, we are joined by Julianna Ianni, vice president of AI research & development at Proscia. In our conversation, Julianna shares her and her team’s research focused on developing applications that make pathologists’ lives easier by enabling specimens to be diagnosed quickly and accurately using deep learning and AI. We also explore their paper “A Pathology Deep Learning System Capable of Triage of Melanoma Specimens Utilizing Dermatopathologist Consensus as Ground Truth”, talking through how ML aids pathologists in diagnosing melanoma by building a multitask classifier to distinguish between low-risk and high-risk cases. Finally, we discuss the challenges involved in designing a model that would help identify and classify melanoma, the results they’ve achieved, and what the future of this work could look like. The complete show notes for this episode can be found at twimlai.com/go/531.
44 minutes | Oct 26, 2021
House Hunters: Machine Learning at Redfin with Akshat Kaul - #530
Today we’re joined by Akshat Kaul, the head of data science and machine learning at Redfin. We’re all familiar with Redfin, but did you know that redfin.com is the largest real estate brokerage site in the US? In our conversation with Akshat, we discuss the history of ML at Redfin and a few of the key use cases that ML is currently being applied to, including recommendations, price estimates, and their “hot homes” feature. We explore their recent foray into building their own internal platform, which they’ve coined “Redeye”, how they’ve built Redeye to support modeling across the business, and how Akshat thinks about the role of the cloud when building and delivering their platform. Finally, we discuss the impact the pandemic has had on ML at the company, and Akshat’s vision for the future of their platform and machine learning at the company more broadly. The complete show notes for this episode can be found at twimlai.com/go/530.
47 minutes | Oct 21, 2021
Attacking Malware with Adversarial Machine Learning, w/ Edward Raff - #529
Today we’re joined by Edward Raff, chief scientist and head of the machine learning research group at Booz Allen Hamilton. Edward’s work sits at the intersection of machine learning and cybersecurity, with a particular interest in malware analysis and detection. In our conversation, we look at the evolution of adversarial ML over the last few years before digging into Edward’s recently released paper, Adversarial Transfer Attacks With Unknown Data and Class Overlap. In this paper, Edward and his team explore the use of adversarial transfer attacks and how they’re able to lower their success rate by simulating class disparity. Finally, we talk through quite a few future directions for adversarial attacks, including his interest in graph neural networks.The complete show notes for this episode can be found at twimlai.com/go/529.
38 minutes | Oct 18, 2021
Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino - #528
Today we’re joined by Andrea Banino, a research scientist at DeepMind. In our conversation with Andrea, we explore his interest in artificial general intelligence by way of episodic memory, the relationship between memory and intelligence, the challenges of applying memory in the context of neural networks, and how to overcome problems of generalization. We also discuss his work on the PonderNet, a neural network that “budgets” its computational investment in solving a problem, according to the inherent complexity of the problem, the impetus and goals of this research, and how PonderNet connects to his memory research. The complete show notes for this episode can be found at twimlai.com/go/528.
43 minutes | Oct 14, 2021
Advancing Deep Reinforcement Learning with NetHack, w/ Tim Rocktäschel - #527
Take our survey at twimlai.com/survey21! Today we’re joined by Tim Rocktäschel, a research scientist at Facebook AI Research and an associate professor at University College London (UCL). Tim’s work focuses on training RL agents in simulated environments, with the goal of these agents being able to generalize to novel situations. Typically, this is done in environments like OpenAI Gym, MuJoCo, or even Atari games, but these all come with constraints. In Tim’s approach, he utilizes a game called NetHack, which is much richer and more complex than the aforementioned environments. In our conversation with Tim, we explore the ins and outs of using NetHack as a training environment, including how much control a user has when generating each individual game and the challenges he’s faced when deploying the agents. We also discuss his work on MiniHack, an environment creation framework and suite of tasks based on NetHack, and future directions for this research. The complete show notes for this episode can be found at twimlai.com/go/527.
41 minutes | Oct 11, 2021
Building Technical Communities at Stack Overflow with Prashanth Chandrasekar - #526
In this special episode of the show, we’re excited to bring you our conversation with Prashanth Chandrasekar, CEO of Stack Overflow. This interview was recorded as a part of the annual Prosus AI Marketplace event. In our discussion with Prashanth, we explore the impact the pandemic has had on Stack Overflow, how they think about community and enable collaboration among over 100 million monthly users from around the world, and some of the challenges they’ve dealt with when managing a community of this scale. We also examine where Stack Overflow is in their AI journey, use cases illustrating how they’re currently utilizing ML, what their role is in the future of AI-based code generation, what other trends they’ve picked up on over the last few years, and how they’re using those insights to forge the path forward. The complete show notes for this episode can be found at twimlai.com/go/526.
39 minutes | Oct 7, 2021
Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525
Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking, details the use of deep learning to augment an algorithm to address mismatches in models, allowing for more efficient training and making models more interpretable and predictable. The second paper, WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels, explores the use of RF signals to infer what the environment looks like, allowing for estimation of a person’s movement. We also discuss the ability of machine learning and AI to help enable 5G and make it more efficient for these applications, as well as the scenarios in which ML would allow for more effective delivery of connected services, and look towards what might be possible in the near future. The complete show notes for this episode can be found at twimlai.com/go/525.
47 minutes | Oct 4, 2021
Modeling Human Cognition with RNNs and Curriculum Learning, w/ Kanaka Rajan - #524
Today we’re joined by Kanaka Rajan, an assistant professor at the Icahn School of Medicine at Mount Sinai. Kanaka, who is a recent recipient of the NSF CAREER Award, bridges the gap between the worlds of biology and artificial intelligence with her work in computer science. In our conversation, we explore how she builds “lego models” of the brain that mimic biological brain functions, then reverse engineers those models to answer the question “do these follow the same operating principles that the biological brain uses?” We also discuss the relationship between memory and dynamically evolving system states, how close we are to understanding how memory actually works, how she uses RNNs for modeling these processes, and what training and data collection look like. Finally, we touch on her use of curriculum learning (where the task you want a system to learn gradually increases in complexity), and of course, we look ahead at future directions for Kanaka’s research. The complete show notes for this episode can be found at twimlai.com/go/524.
41 minutes | Sep 30, 2021
Do You Dare Run Your ML Experiments in Production? with Ville Tuulos - #523
Today we’re joined by a friend of the show and return guest Ville Tuulos, CEO and co-founder of Outerbounds. In our previous conversations with Ville, we explored his experience building and deploying the open-source framework Metaflow while working at Netflix. Since our last chat, Ville has embarked on a few new journeys, including writing the upcoming book Effective Data Science Infrastructure and commercializing Metaflow, both of which we dig into quite a bit in this conversation. We reintroduce the problem that Metaflow was built to solve and discuss some of the unique use cases that Ville has seen since its release, the relationship between Metaflow and Kubernetes, and the maturity of services like batch and lambdas allowing a complete production ML system to be delivered. Finally, we discuss the degree to which Ville is focusing Outerbounds’ efforts on building tools for the MLOps community, and what the future looks like for him and Metaflow. The complete show notes for this episode can be found at twimlai.com/go/523.