Response-ability.Tech

43 Episodes

36 minutes | Nov 28, 2022
What data scientists can learn from feminist social scientists in India. With Radhika Radhakrishnan.
In this episode, we're in conversation with feminist scholar and activist Radhika Radhakrishnan. Radhika is a PhD student at the Massachusetts Institute of Technology (MIT) in the HASTS (History, Anthropology, Science, Technology & Society) programme, which uses methods from history and anthropology to study how science and technology shape – and are shaped by – the world we live in. Trained in Gender Studies and Computer Science engineering in India, Radhika has worked for over five years with civil society organisations, using feminist, qualitative research methodologies to study the intersections of gender justice and digital technologies. Her research focuses on understanding the challenges that gender-minoritized communities in India face with emerging digital technologies, and on finding entry points to intervene meaningfully. Her scholarship has spanned the domains of Artificial Intelligence, data governance pertaining to surveillance technologies and health data, and feminist Internets, among others.
Radhika shares with us what she'll be researching for her PhD and why she moved from computer science to social science. In 2021 Radhika's paper, "Experiments with Social Good: Feminist Critiques of Artificial Intelligence in Healthcare in India", was published in the journal Catalyst, and we explore her findings, as well as why she was drawn to artificial intelligence in healthcare. We also discuss her experiences of studying up (see Nader 1972) as a female researcher and some of the strategies she used to overcome the challenges this posed.
Lastly, Radhika recommends Annihilation of Caste by B. R. Ambedkar, and explains why it's important that we openly discuss caste. (Check out this article in WIRED about caste in Silicon Valley.)
Follow Radhika on Twitter @so_radhikal, and connect with her on LinkedIn. Check out her website, and read her blog on Medium.
31 minutes | May 23, 2022
Why Human Rights Law is AI Ethics With Teeth. With Susie Alegre.
Our guest today is Susie Alegre. Susie is an international human rights lawyer and author. We're in conversation about her book, Freedom to Think: The Long Struggle to Liberate Our Minds (Atlantic Books, 2022). Susie talks about freedom of thought in the context of our digital age, human rights, surveillance capitalism, emotional AI, and AI ethics.
Susie explains why she wrote the book and why she thinks our freedom of thought matters for our human rights in the digital age. We explore what freedom of thought is ("some people talk about it as mental privacy"), the difference between an absolute right and a qualified right, and why absolute rights are protected differently. Susie shares some historical examples, including witch trials and the work of Ewen Cameron, a Scottish psychiatrist working in Canada who experimented on ordinary people without their consent to explore ways to control the human mind. Facial recognition technology is a modern attempt to get inside our heads and predict such things as our sexual orientation. Susie explains why researchers shouldn't be experimenting with facial recognition or emotional AI: you're "effectively opening Pandora's box".
Susie explains the difference between targeted advertising and surveillance advertising, which uses data captured about our inner lives, sold and auctioned on an open market, to manipulate us as individuals. Over the past few years there's been a great deal of focus on ethics, and Susie suggests we need to move away from the discussion of ethics "back to the law, specifically human rights law". She explains that human rights law is being constantly eroded, and says "one way of reducing the currency of human rights law is refocusing on ethics". Ethics are simply a "good marketing tool" used by companies. The inferences being made about us, the data profiling, and the manipulation mean it's practically impossible to avoid leaving traces of ourselves; it's beyond our personal control, and privacy settings don't help.
In her book Susie suggests that by looking at digital rights (data and privacy protection) in terms of freedom of thought, "the solutions become simpler and more radical". It's a point that Mary Fitzgerald, in her review of Susie's book in the Financial Times, called a "unique contribution" to the debates about freedoms in the digital age, suggesting that "reframing data privacy as our right to inner freedom of thought" might capture "the popular imagination" in a way that other initiatives like GDPR have failed to do. Susie explains for us how this approach would work.
Follow Susie on Twitter @susie_alegre, and check out her website susiealegre.com. Read the full transcript. Read the conversation as a web article. Watch the interview on our YouTube channel.
30 minutes | Apr 20, 2022
Anthropology and Artificial Intelligence. With Veronica Barassi
Our guest today is Professor Veronica Barassi. Veronica is an anthropologist and author of Child Data Citizen (MIT Press, 2020). Veronica campaigns and writes about the impact of data technologies and artificial intelligence on human rights and democracy. As a mother, Veronica became increasingly concerned about the data being collected on her two children by digital platforms. Her research resulted in the book as well as a TED talk, What tech companies know about your kids, which has had over 2 million views. Since the publication of her book, she says, there's been a huge acceleration in the datafication of children, partly due to the pandemic, and an increase in the ways in which AI technologies are being used to profile people.
Veronica explores what she believes anthropology uniquely brings to the study of data technologies and AI. She asks (and answers), "why would an anthropological approach be different from say, for instance, Virginia Eubanks, who uses ethnographic methodologies and has a real context-specific understanding of what's happening on the ground."
Turning to anthropology's (late) engagement with AI, data, and algorithms, she says it used to be a niche area of research. But "we've actually seen a reality check for anthropologists because these technologies are…involved in deeply problematic and hierarchical processes of meaning-construction and power-making that there's no way that anthropologists could shy away from this". One of the best books "that really makes us see things for what they are" in this current time we're living in, she says, is David Graeber's The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. Graeber "talks about how bureaucracy is actually there to construct social truth, but this type of bureaucratic work has been now replaced by algorithms and artificial intelligence", a connection she tries to make in her article, David Graeber, Bureaucratic Violence and the Critique of Surveillance Capitalism.
We discuss how anthropologists can make their work both academically rigorous and accessible to the public, and she talks about her own experience of doing the TED talk and how she felt a responsibility to bring the topic of child datafication to a wider audience through campaigning and raising awareness. Veronica provokes anthropology scholars with a call to action, given that one of her "major critiques of anthropology…is the fact that anthropologists often shy away from engaging theoretically with disciplines that do not share their approach". And what does it mean when we say research is "not anthropological enough"?
Lastly, Veronica suggests that, given machines must be taught basic concepts, like what a child is ("as anthropologists, we know that these concepts are so complex, so culturally specific, so biased"), what anthropology can do is "highlight the way in which these technologies are always inevitably going to get and be biased". She ends on a note of excitement: "We're going to see such great research emerging in the next few years. I'm actually looking forward to that".
Follow Veronica on Twitter @veronicabarassi. Read an edited version of our conversation, together with a reading list.
28 minutes | Feb 9, 2022
Understanding Data and Privacy as a UX Researcher. With Laura Musgrave
Our guest today is Laura Musgrave. Laura is a digital anthropology and user experience (UX) researcher who was named one of 100 Brilliant Women in AI Ethics™ for 2022. Her research specialism is artificial intelligence, particularly data and privacy. Laura gave a short talk at the inaugural conference in 2019 on privacy and convenience in the use of AI smart speakers, and at the 2021 event she chaired the panel, Data: Privacy and Responsibility.
We start our conversation by exploring Laura's interest in data and privacy, and smart assistants in particular. During her research on smart speaker use in homes, she's noticed a shift in people's attitudes and a growing public awareness around privacy, technology, and the use of AI. This shift, she feels, has been aided by documentaries like The Social Dilemma (despite well-founded criticisms such as this article by Ivana Bartoletti in the Huffington Post) and Coded Bias. Laura talks about where the responsibility for privacy lies — with the technology companies, with the users, with the regulators — and how, as a user researcher, she has a part to play in helping people understand what's happening with their data.
I ask Laura what drew her to anthropology and how she thinks the research methods and lens of anthropology can be used to design responsible AI. She says, "The user researchers that really stood out to me very early on in my career were the anthropologists and ethnographers" because "the way that they looked at things…really showed a deep understanding of human behaviour". It "set the bar" for her, she explains, and she wanted to know: "How do I do what they do". Laura shares the book she'd recommend to user researchers who, like her, are starting out on their ethnographic journey, a book which helped her "make sense of how ethnography fitted into my everyday work".
Because Laura's been named one of the 100 Brilliant Women in AI Ethics™ for 2022, I ask her what the AI ethics landscape, with respect to data and privacy, looks like for 2022. As she explains, "in some senses it is much the same as last year but it's also a constantly developing space and there are constantly new initiatives", before sharing some of the key themes she thinks we are likely to see in 2022.
Lastly, Laura recommends two books, both published by Meatspace Press: Fake AI, and Data Justice and Covid-19: Global Perspectives. (The former we picked for our 2021 Recommended Reads and the latter for our 2020 Recommended Reads.)
You can connect with Laura on LinkedIn and on Twitter @lmusgrave. Read an edited version of our conversation, available online and as a downloadable PDF.
27 minutes | Jan 12, 2022
Social Science-Led User Research in Tech. With Rosie Webster
Our guest today is Dr Rosie Webster. Rosie has a PhD and an MSc in health psychology. She's currently Science Lead for Zinc's venture builder programme. Prior to Zinc, Rosie worked as a UX researcher at digital health company Zava, and was Lead User Researcher at Babylon Health. While at Babylon, Rosie established the foundations of an effective behavioural science practice, which is partly what we're here to talk about today.
Rosie explains that if businesses are interested in delivering impact and making a difference, then social science can be key. She says that research, like design, is often underestimated and under-utilised in tech. Our power, she says, lies in understanding the problem and what the right thing to build is. This is a truly user-centred approach that requires trusting the process and being willing to scrap an idea when the research points in a different direction. Often people don't know what social science is, says Rosie, and equate it with academic research, with the corresponding but erroneous perception that it's slow, when in fact it provides answers much more quickly.
Rosie explains how she established the beginnings of a behavioural science practice at Babylon Health, with the support of two managers who understood its value and importance. She shares why she wanted to 'democratise' behavioural research, the benefits of that approach, and how she 'marketed and sold' behavioural science within the company.
User research should make more use of the existing academic literature, "building on the shoulders of giants", as Rosie calls it, "supercharging" primary research and using evidence to understand what the solution might be. It's an approach she says results in understanding people deeply, while increasing impact and reducing risk, without slowing down the fast-paced product development environment.
As our conversation draws to an end, Rosie has a final piece of advice for businesses that are genuinely open to achieving impactful outcomes, and recommends two books for people looking to bring behavioural science into their work: Engaged by Amy Bucher, and Designing for Behavior Change by Stephen Wendel.
Follow Rosie on Twitter @DrRosieW, and connect with her on LinkedIn. Read an edited version of our conversation, available online and as a downloadable PDF.
48 minutes | Dec 8, 2021
Engineering Cultures and Internet Infrastructure Politics. With Corinne Cath-Speth
My guest today is Dr Corinne Cath-Speth. Corinne is a cultural anthropologist whose research focuses on Internet infrastructure politics, engineering cultures, and technology policy and governance. Corinne recently completed their PhD, titled Changing Minds & Machines, at the Oxford Internet Institute (OII): an ethnographic study of internet governance, the culture(s) and politics of internet infrastructure, standardization, and civil society. Drawing on their research, Corinne gave a talk as part of an event series hosted by the Oxford Internet Institute which explored the opaque companies and technologists who exercise significant but rarely questioned power over the Internet. As Corinne said during their talk, this mostly unknown aspect of the Internet is "as important as platform accountability". I invited Corinne onto the show to tell us more.
Using the Fastly outage of June 2021, Corinne explains who and what these largely invisible, powerful Internet infrastructure companies are and how an outage can have a "large impact on the entirety of our online ecosystem". The incident shows "how power is enacted through the functioning and maintenance of Internet infrastructure design." Corinne goes on to say that "just because the Internet infrastructure is largely invisible to users doesn't mean that it's apolitical [in the case of Cloudflare and 8chan in particular] and it doesn't mean that these companies can claim neutrality".
Corinne talks about their PhD dissertation and says, "I was really interested in understanding how the engineering cultures of infrastructure organizations influence what but also whose values end up steering technical discussions". Their fieldwork was conducted in an organization called the Internet Engineering Task Force (IETF). (Corinne brilliantly summarised their PhD in a series of tweets.) Corinne explains what drew them to research this particular topic and notes that "it is so important to get at the personal drivers of our research and being really upfront and explicit about how those are a key part of our research practice and the kind of decisions that we end up making."
Corinne shares why they believe cultural anthropology is relevant "to questions of Internet infrastructure politics and power", saying "I believe that anthropology really can provide new, novel perspectives on current Internet infrastructure dilemmas, including those related to the connections between cultures and code."
While there's rightly concern about platform accountability and the power of tech companies, what many people don't realise is that companies like Meta and Amazon are also infrastructure companies. We need to ask ourselves, says Corinne, "how comfortable we are with the fact that a handful of companies are starting to influence huge parts of the entire Internet". Corinne "really wants to encourage people" to study aspects of the Internet "because the last thing we want" is for a small number of companies to have "a say over many parts of our lives….And us not understanding how it happened". Lastly, Corinne says, "what we need is a balanced and well-resourced counter-power to the influence of corporate actors that are steering the future of the Internet".
Further reading: Corinne has kindly supplied a list of the resources and reading they mentioned in the podcast.
50 minutes | Nov 10, 2021
Recommender Systems and Inequality in the Creator Economy. With Matt Artz
Our guest today is Matt Artz. Matt is a business and design anthropologist, consultant, author, speaker, and creator, making podcasts, music, and visual art. Many people will know Matt through his Anthropology in Business and Anthro to UX podcasts. We talk about his interdisciplinary educational background — he has degrees in Computer Information Systems, Biotechnology, Finance and Management Information Systems, and Applied Anthropology — and Matt explains what drew him along this path.
He shares his recent realisation that he identifies primarily as a technologist ("I am still at heart a technologist. I love technology. I love playing with technology") and his conflict around the "harm that comes out of some AI, but I'm also really interested in it and to some degree kind of helping to fuel the rise of it."
This leads us to discuss — in the context of recommender systems and Google more broadly — how we are forced to identify on the internet as one thing or another: an anthropologist, a technologist, or a creator, but not all three. As Matt explains, "finding an ideal way to brand yourself on the Internet is actually very critical...it's a real challenge".
We turn next to recommender systems and his interest in how capital and algorithmic bias contribute to inequality in the creator economy, drawing on his art market research as Head of Product & Experience for Artmatcher, a mobile app that aims to address access and inclusion issues in the art market. The work being done on Artmatcher may lead to innovations in the way the approximately 50 million people worldwide in the creator economy get noticed in our "technologically-mediated world", as well as in other multi-sided markets (e.g. Uber, Airbnb) where there are multiple players. It's a model he hopes will ensure that people's "hard work really contributes to their own success".
Design anthropology is one approach to solving this challenge, Matt suggests, because it is "very interventionist, very much focused on what are we going to do to enact some kind of positive change". As Matt says, "even if this [model] doesn't work, I do feel there's some value in just having the conversation about how can we value human behaviour and reward people for productive effort and how can we factor that back into the broader conversation of responsible tech or responsible AI?".
He recommends two books: Design Anthropology: Theory and Practice, edited by Wendy Gunn, Ton Otto, and Rachel Charlotte Smith, and Media, Anthropology and Public Engagement, edited by Sarah Pink and Simone Abram.
Lastly, Matt leaves us with a hopeful note about what we can do in the face of "really hard challenges" such as climate change.
You can find Matt on his website, follow him on Twitter @MattArtzAnthro, and connect with him on LinkedIn.
44 minutes | Oct 6, 2021
Communicating the Social Impacts of AI. With Nat Kendall-Taylor
Our guest today is Dr Nat Kendall-Taylor. Nat received his PhD in Anthropology at UCLA and in 2008 joined the FrameWorks Institute, a non-profit research organisation in Washington, D.C., where he is now the CEO. FrameWorks uses rigorous social science methods to study how people understand complex social issues such as climate change, justice reform, and the impact of poverty on early childhood development, and develops evidence-based techniques that help researchers, advocates, and practitioners explain them more effectively.
Nat explains what drew him from pre-med to anthropology. He did his PhD at UCLA because of the Anthropology department's "unapologetic focus on applied anthropology". His fieldwork in Kenya on children with seizure disorders explored the question of why so few sought biomedical treatment. His experience there, working with public health officials and others, demonstrated the value of understanding culture, the importance of multi-modal transdisciplinary perspectives, and the often "counterintuitive and frequently frustrating nature of communications when you're trying to do this kind of cross-cultural work".
For the past 18 months, FrameWorks has worked on how to frame and communicate the social impacts of artificial intelligence. The project came to FrameWorks through its long-term collaboration with the MacArthur Foundation, when it became clear that some of the Foundation's grantees "had been having a lot of difficulty advancing their ideas" about algorithmic justice to the general public. The project has explored "the cultural models, the deep patterns of reasoning that either make it hard for people to appreciate the social implications" of AI, as well as how to allow people to "engage with the issue in helpful and meaningful ways". The report will be publicly available on the FrameWorks website.
As Nat explains, if the public "doesn't understand what the thing is [artificial intelligence] that you are claiming has pernicious negative impacts on certain groups of people, then it becomes very hard to have a meaningful conversation about what those are, who is affected". This is compounded when "people don't really have a sense what structural or systemic racism means outside of a few issues, how that might work and what the outcomes of that might be."
Nat says their work "suggests that it is a responsibility, it's an obligation, for those who understand how these things work to bring the public along, and to deepen people's understanding of how [for example] using algorithms to make resourcing decisions...can be seriously problematic".
Nat recommends three books (Metaphors We Live By, Finding Culture in Talk, and Cultural Models in Language and Thought) and ends with a call for more anthropologists to work outside the academy, where they can also do impactful work.
Read an edited excerpt [PDF] of this interview. You can follow Nat on Twitter at @natkendallt and connect with him on LinkedIn. FrameWorks are on Twitter @FrameWorksInst.
Update: FrameWorks published "Communicating About the Social Implications of AI: A FrameWorks Strategic Brief".
46 minutes | Sep 7, 2021
The Ethics of Venture Capital Investors. With Johannes Lenhard
Our guest today is Dr Johannes Lenhard. Johannes received his PhD in Anthropology at Cambridge University and in 2017 began a post-doctoral research project on the ethics of venture capital investors at the Max Planck Cambridge Centre for the Study of Ethics, the Economy and Social Change.
Johannes spoke at the 2021 Response-ability Summit. He shares what drew him to studying venture capitalists and how he does ethnography in this very closed, elite world across various field sites, including Silicon Valley and London. Johannes explains that "not a single book" has been written about venture capitalists by someone who isn't one. As he says, "only an engaged anthropology" can enable someone to be both insider and outsider in this rarefied world.
Johannes explains the impact of the lack of diversity in venture capital: not only do VCs hire people who look like them (white, male), but they "also reproduce themselves into who runs these tech companies". The issue of venture funding is explored by Johannes and Erika Brodnock in their book, Better Venture, which will be published later in 2021.
Johannes also briefly discusses the Environmental, Social and Corporate Governance (ESG) metrics that are starting to affect VCs, and the "aggregate confusion" identified by an MIT paper.
Johannes believes more scrutiny of venture capital investors is needed, saying "they are the ones deciding the big tech companies in the next 10-15 years....scrutinizing them now has an impact on everything in the future. They are the kingmakers, and we've been solely focussing on the kings, the Mark Zuckerbergs and the Jeff Bezos of this world". Scrutiny, he explains, will benefit both society and the VCs themselves.
Drawing on his Medium post, "The Ultimate Primer on Venture Capital and Silicon Valley", Johannes shares his top reading picks for anyone eager to learn more: Doing Capitalism in the Innovation Economy by William Janeway; The Code by Margaret O'Mara; VC: An American History by Tom Nicholas; and a paper, "How Do Venture Capitalists Make Decisions?". And lastly, Johannes explains why more academics "of any kind" are needed to study the world of venture capital investors.
You can follow Johannes on Twitter at @JFLenhard and connect with him on LinkedIn.
Academics and articles also mentioned in our conversation: Saskia Sassen; James Laidlaw; Johannes Lenhard, "Can Tech Ever Be Good?", Public Books, September 2020.
39 minutes | Jun 30, 2021
Humanising Cybersecurity Through Anthropology. With Lianne Potter
Our guest today is Lianne Potter. Lianne is an anthropologist, self-taught software developer, cybersecurity evangelist, and entrepreneur. Lianne works at Covea Insurance as their Information Security Transformation Manager, where she advocates for innovation in the cybersecurity field. Lianne's talk at the 2021 Response-ability Summit was titled, "Reciprocity: Why The Cyber Security Industry Needs to Hire More Anthropologists".
In this episode Lianne is in conversation with Isabelle Cotton, a digital anthropologist and social researcher, who was curious to interview Lianne for us. As Isabelle explains, "I was interested to talk to Lianne, who uses anthropology to humanise cybercrime. I find her acute awareness of the digital divide in all of the work she does particularly powerful. She has managed to carve out a space for anthropology in an industry that favours faceless data and numbers".
During their conversation Lianne explains why she's so passionate about the digital divide and why she believes a people-based, behavioural approach to cybersecurity is so important. Lianne also explains why the technical terms used in the industry can be off-putting to many general users and why she believes storytelling is a way to raise awareness and increase engagement. Isabelle and Lianne also explore biometric security, two-factor authentication, and the 'culture' of hacking. Lastly, Lianne shares some advice for anthropologists looking to get into cybersecurity and tech more generally.
Follow Lianne on Twitter at @Tech_Soapbox and connect with her on LinkedIn. Connect with Isabelle on LinkedIn and check out her website.
36 minutes | Jun 16, 2021
Bringing an Anthropological Lens to Covid-19. With Gitika Saksena
My guest today is Gitika Saksena. Gitika is a Director at LagomWorks, a research and innovation consulting firm she founded in 2018. Before that she was a Vice President at Accenture Technology in India, where she led the strategy and design for various talent initiatives. Gitika gave a talk at the 2021 Response-ability Summit in May.
Gitika has degrees in Economics and Business Management, as well as a second Master's degree in Social Anthropology from SOAS University of London. During our conversation, Gitika explains what drew her to anthropology and to study full-time at SOAS. She reflects on how her experience at Accenture helped her on her new path, and shares some advice for other anthropologists looking to set up their own consultancies. She explains, for instance, that "clients won't engage with you for anthropology in and of itself. They demand, and will demand, very tangible outcomes and to be challenged and offered fresh perspectives."
We explore the Covid-19 research that she and her colleague, Abhishek Mohanty, have conducted in the UK and India on masks, well-being, and privacy with respect to contact-tracing apps. They presented this research at the RAI Film Festival 2021, ASA 2021, and the 2021 Response-ability Summit respectively. Lastly, Gitika explains the value she believes anthropology brings to understanding the unprecedented shifts the world is undergoing, saying it's essential that anthropologists "bring our conceptual rigour to understand these shifts".
You can find Gitika on Twitter at @GitikaSaksena. Follow LagomWorks on LinkedIn and check out their website, where you can sign up to their newsletter.
36 minutes | May 17, 2021
Dignity-Centred Technology: Enabling Human Flourishing. With Lorenn Ruster and Thea Snow
My guests today are Lorenn Ruster and Thea Snow. Lorenn recently completed her Masters at the School of Cybernetics at the Australian National University, and Thea is the Director of the Centre for Public Impact for Australia and New Zealand. Lorenn and Thea are speaking at the 2021 Response-ability Summit on May 20-21. Their talk is titled, "Dignity-centred technology — moving beyond protecting harms to enabling human flourishing".
Thea and Lorenn explain how they came to work together, and their respective backgrounds. Lorenn shares her experience as a Masters student at the 3A Institute, which was established by the anthropologist Genevieve Bell, and the Institute's aim: to "build the skills and knowledge needed to help shape the future safely, sustainably and responsibly". Thea describes the Centre for Public Impact's work to re-imagine government, and together Lorenn and Thea share their dignity ecosystem model, a different way into a conversation about ethics in artificial intelligence. We discuss how their model might be used by technology companies as well as governments, given that it focuses on proactively enabling human flourishing rather than simply minimising harm. You can download their report, Exploring the role of dignity in government AI ethics instruments, from the CPI website.
Find Lorenn on Twitter at @LorennRuster and on Medium. Find Thea on Twitter at @theasnow and on Medium. Follow the CPI at @CPI_foundation.
31 minutes | May 11, 2021
Creating Emergent Socio-Digital Futures. With Susan Halford
My guest today is Professor Susan Halford, co-Director of the Bristol Digital Futures Institute at the University of Bristol and our academic keynote at the 2021 Summit. The Bristol Digital Futures Institute (BDFI) is a University Research Institute that pioneers transformative approaches to digital innovation. It brings together researchers from across the disciplines and works with partners in industry, government, and civil society, developing an in-depth, systematic understanding of sociotechnical futures to drive the creation of digital technologies for inclusive, prosperous, and sustainable societies.
During our conversation Susan explores the word 'futures' in the sense of recognising that futures are not fixed, rather than in the sense of prediction, and explains the work and research that the Institute does. She also discusses the difference between the terms 'sociotechnical' and 'socio-digital', and why it's important that social scientists and technologists know enough about each other's fields so we can collaborate effectively. Lastly, Susan talks about 'response-ability' and briefly explores some of the ideas from Donna Haraway's book, Staying With The Trouble, that she finds incredibly provocative.
You can find Susan on Twitter at @susanjhalford and the BDFI at @DigiFutures.
35 minutes | May 5, 2021
How Spotify and Google are Using Social Science to Innovate. With Tom Hoy
My guest today is Tom Hoy. Tom is one of the founding Partners at Stripe Partners, the London-based innovation consultancy. Alongside co-founders Tom Rowley and Simon Roberts, Tom has built Stripe Partners from a kitchen table to a thriving business, advising clients including Spotify, Facebook, Google, and Intel. Tom's particular interests lie in designing new ways to work collaboratively with clients to maximise the impact of Stripe Partners's work, and in helping them see the value social science has in unlocking their most complex business challenges. His work has been featured in publications including the Financial Times and the Guardian.
Stripe Partners are our 2021 Silver Partner, and Senior Research Consultant Anna Leggett will be sharing a research project at this year's Summit that explored opportunities to connect with marginalised communities during the pandemic.
During our conversation Tom shares how Stripe Partners began and some of the reasons for their success as an innovation consultancy. He explains their main areas of practice and tells us about two of their projects, working with Spotify and Google. We also explore how Stripe Partners has adapted its methodologies during the pandemic, changing the way it does ethnographic research, and what has been gained and lost by doing research solely online.
Lastly Tom recommends three worthwhile reads: Recommendation Engines by Michael Schrage, Valuing the Unique: The Economics of Singularities by Lucien Karpik, and Leave the World Behind by Rumaan Alam.
You can find Tom on Twitter at @thoy and Stripe Partners at @stripepartners.
39 minutes | Apr 21, 2021
Building Trust with Algorithmic Audits. With Gemma Galdon-Clavell
Our guest today is Dr Gemma Galdon-Clavell. Gemma is the Founder and CEO of Eticas Consulting. Her multidisciplinary background in the social, ethical, and legal impact of data-intensive technology has enabled her and her team to design and implement practical solutions to data protection, ethics, explainability, and bias challenges in AI. Gemma, together with her colleague Emma Lopez, is talking at the 2021 Response-ability Summit, where they will be sharing their bottom-up approach to algorithmic auditing.
During our conversation Gemma shares how she moved from an interest in public spaces to a PhD on surveillance, security, and urban policy in 2012, and then to founding Eticas. Gemma explains why Eticas is focused on digital ethics and trustworthy AI, and why she thinks enforceable regulation is a good thing, not least because it means people can trust the tech.
Gemma explains the three main phases that comprise their Algorithmic Audit Framework technology. She also talks about why we need more people who understand society working in this space — as she says, the future of humanity depends on it. And she has suggestions for young women, particularly those studying the social sciences, who want to make a positive contribution to emerging technologies. Lastly, Gemma shares some recommended reads and further resources.
Follow Gemma on Twitter @gemmagaldon. To find out more about their algorithmic auditing work, visit Eticas Consulting; for information about their public impact work, see Eticas Foundation.
51 minutes | Apr 7, 2021
The Future of Privacy Tech. With Gilbert Hill
In this episode we're in conversation with Gilbert Hill. Gilbert is a privacy technologist, and he's talking at the 2021 Summit in May. Most recently Gilbert was CEO and Advisor to Tapmydata, a start-up building consumer-grade tools for people to exercise their data rights, with blockchain keeping score. Before becoming CEO of Tapmydata, Gilbert founded Optanon and, as the MD, grew it to become the market leader in the provision of website auditing and cookie compliance solutions in the UK and EU. Gilbert is a Fellow and Senior Tutor on Privacy and Ethics at the Institute of Data and Marketing.
During our conversation Gilbert explains how, after graduating from Cambridge University with a degree in anthropology and archaeology, he became a privacy technologist. We discuss how he conceives of privacy, and we talk about Tapmydata and how it enables consumers to exercise their data rights — contrary to popular opinion at the time that people didn't care about their data — and the advantage for companies who hold it.
Gilbert talks about the growing movement to re-emancipate citizens in terms of their data and its value, and the concept of data unions, which is enshrined in the EU's Digital Markets Act. We also discuss the role that blockchain and crypto have to play in data privacy. Lastly, Gilbert shares some of his recommended reads and why he's looking forward to the summit.
Follow Gilbert on Twitter @GilbertHill and read his writing at gilberthill.medium.com.
Mentioned in our conversation:
Covid-19 and the cult of privacy by Daniel Miller
An Artificial Revolution: On Power, Politics and AI by Ivana Bartoletti
The End of Trust (McSweeney's 54) - features an interview with Ed Snowden explaining blockchain to his lawyer
Privacy is Power: Why and How You Should Take Back Control of Your Data by Carissa Véliz
The Cryptocurrency Revolution: Finance in the Age of Bitcoin, Blockchains and Tokens by Rhian Lewis
And lastly, enjoy comedian Stevie Martin's funny video, a biting commentary on the "accept all" cookie option.
43 minutes | Mar 24, 2021
The Power and Politics of Algorithmic Life. With Taina Bucher
In this episode we talk with Taina Bucher, associate professor in screen cultures at the Department of Media and Communication, University of Oslo. Taina is the author of IF...THEN: Algorithmic Power and Politics, published by Oxford University Press in 2018.
Taina explains why, as a media scholar, she became interested in algorithms and software, and we discuss her book and her proposal that we must approach algorithms by asking not what an algorithm is, but instead when and how algorithms are. We discuss black boxes, a metaphor Taina finds problematic, and she uses the Facebook 'trending topics' controversy of 2016 as an example of the lack of nuance in discussions that attribute agency to either algorithms or humans. Taina explores how algorithms materialise in the institutional setting of news media, in the context of the recent law passed by the Australian government aimed at making Google and Facebook pay for news content on their platforms.
We also briefly talk about Taina's book, Facebook, published in May 2021 by Polity Press.
Lastly, Taina recommends three books with respect to the questions she addressed in our conversation: Cloud Ethics by Louise Amoore, You Are Here by Whitney Phillips and Ryan M Milner, and Metrics at Work by Angèle Christin.
Follow Taina on Twitter (@tainab) and find out more about her at tainabucher.com.
57 minutes | Mar 10, 2021
An Engineering Anthropologist. With Astrid Countee
My guest today is Astrid Countee. Astrid is an anthropologist and technologist based in Houston, Texas, and co-founder of Missing Link Studios. In 2016, Astrid wrote an article for Ethnography Matters on why tech companies need to hire software developers with ethnographic skills, and it's this article I explore with her during our conversation.
Astrid shares her journey from dreaming of being a surgeon to studying forensic science and then medical anthropology before becoming a software engineer. Astrid explains the skills she thinks anthropologists and ethnographers bring to a development team, and we talk about the benefits of technologists who have social understanding. Astrid shares some advice and tips for social scientists and researchers working alongside technologists and how we can work together.
We also discuss whether anthropologists need to learn to code; she likens it to learning a language, which anthropologists doing fieldwork often have to do. Lastly, Astrid discusses how anthropologists might be seen as more than the people who 'make technology usable', and why anthropologists should play a bigger part in tackling the wicked problems of this century.
You can find Astrid on Twitter or LinkedIn.
28 minutes | Feb 24, 2021
Making Data and AI Work for People and Society. With Reema Patel
In this episode we talk to Reema Patel, Head of Public Engagement at the Ada Lovelace Institute. The Ada Lovelace Institute is an independent research institute, established in 2018, whose mission is to ensure data and AI work for people and society. Reema leads the organisation's public attitudes and public deliberation research.
During our conversation, Reema shares her journey from Cambridge University, where she studied philosophy, to becoming one of the founding team members of the Ada Lovelace Institute. Reema explains why the Institute was established, its mission and purpose, and takes us through the four main themes on which it is focused: algorithm accountability; data for the public good; justice and equalities; and Covid-19 technologies. Reema also touches on the Institute's recent work on the risks and benefits of digital Covid-19 vaccine certification schemes.
We discuss the impact the Ada Lovelace Institute's work is having, determined as it is not to be a "talking shop". Lastly, Reema tells us about JUST AI, a humanities-led network established in 2020 that is committed to understanding the social and ethical value of data and AI.
Follow Reema on Twitter and connect with her on LinkedIn. Follow the Ada Lovelace Institute on Twitter, check out their blog, and sign up to their informative fortnightly newsletter.
43 minutes | Feb 10, 2021
The Office, Media, and Embodied Computing. With Simon Roberts
In this episode we talk to Dr Simon Roberts, business anthropologist and Partner at Stripe Partners, a strategy and innovation consultancy based in London. He's also the author of The Power of Not Thinking. Simon was a keynote at our inaugural summit, and Stripe Partners sponsored both the 2019 and 2020 events.
During our conversation, Simon shares how he started out as a business anthropologist. We talk about his 2018 article, The UX-ification of Research, in which he decried the fact that research is being squeezed into a new temporal rhythm — being thoughtful is out, speed is in — and how optimistic he feels now, at a time of economic crisis when budgets are being squeezed. We also talk about the office, now that working from home is a reality for most of us for the foreseeable future, and how we can best utilise offices to do what they do best. We discuss his article, The Age of the Ear, in which he calls for a deep understanding of how people experience the aural dimensions of life, which leads us to think about embodiment, the subject of his book, and embodied computing more generally. Lastly, Simon shares a couple of his favourite reads from 2020. We hope you enjoy the show.
Mentioned in our conversation:
'The Big Shift': Internal Facebook Memo Tells Employees to Do Better on Privacy
We will miss the office if it dies. Lucy Kellaway, Financial Times, May 15 2020.
The rise and fall of the office. Henry Mance, Financial Times, May 15 2020.
James Rebanks: nature is my office, come rain or shine. Financial Times, December 29 2020.
If Then: How the Simulmatics Corporation Invented the Future by Jill Lepore; Caste by Isabel Wilkerson; These Truths: A History of the United States by Jill Lepore; India After Gandhi: The History of the World's Largest Democracy by Ramachandra Guha; Uncharted: How to Map the Future by Margaret Heffernan; and Shuggie Bain by Douglas Stuart.