
Episode Info

Rebroadcast: this episode was originally released in November 2017.

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can't, and why, and he has developed methods that allow all of us to get better at predicting the future.

After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they'd never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015.

Links to learn more, summary and full transcript

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.

Today he's working to develop the best forecasting process ever by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up for and participate in.

We start by describing his key findings, then push to the edge of what is known about how to foresee the unforeseeable:

* Should people who want to be right just adopt the views of experts rather than apply their own judgement?
* Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
* Should I keep my political views secret, so it will be easier to change them later?
* How can listeners contribute to his latest cutting-edge research?
* What do we know about our accuracy at predicting low-probability high-impact disasters?
* Does his research provide an intellectual basis for populist political movements?
* Was the Iraq War caused by bad politics, or bad intelligence methods?
* What can we learn about forecasting from the 2016 election?
* Can experience help people avoid overconfidence and underconfidence?
* When does an AI easily beat human judgement?
* Could more accurate forecasting methods make the world more dangerous?
* How much does demographic diversity line up with cognitive diversity?
* What are the odds we’ll go to war with China?
* Should we let prediction tournaments run most of the government?

Read our problem profile on improving institutional decision-making here

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
