51 minutes | Jun 14, 2021

Ep. 980: Toby Ord Interview with Michael Covel on Trend Following Radio

Is protecting humanity’s future the central challenge of our time?

My guest Toby Ord makes the case.

If all goes well, human history is just beginning. Our species could survive for billions of years – enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes — those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late.

Drawing on over a decade of research, Ord explores the cutting-edge science behind the risks we face. He puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time. And he points the way forward, to the actions and strategies that can safeguard humanity.

In his book The Precipice, Ord offers a startling reassessment of human history, the future we are failing to protect, and the steps we must take to ensure that our generation is not the last.

Bio: Toby Ord is a philosopher at Oxford University, working on the big picture questions facing humanity. His current research is on risks that threaten human extinction or the permanent collapse of civilization, and on how to safeguard humanity through these dangers, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, and the UK Prime Minister’s Office.

In this episode of Trend Following Radio:

Existential Risk Definition
Nuclear War
Climate Change and Global Population
We Are In This Together
How Long Will The Earth Remain Habitable?
Modern Technology
Different Types of Risks
Existential Risk From Artificial Intelligence
Unaligned AI
The Future of Humanity
