The Inside View
104 minutes | Jun 8, 2021
The Inside View #3–Evan Hubinger—Takeoff speeds, Risks from learned optimization & Interpretability
transcript: https://www.alignmentforum.org/posts/NFfZsWrzALPdw54NL/the-inside-view-3-evan-hubinger-homogeneity-in-takeoff
youtube: https://youtu.be/uQN0wqzy164
89 minutes | May 4, 2021
The Inside View #2–Connor Leahy
In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI, the obstacles in plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors, adversarial attacks such as Pascal's Mugging, and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI, multipolar scenarios, and reasons to work on technical AI Alignment research.
https://youtu.be/HrV19SjKUss?t=4785
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
https://www.eleuther.ai/
https://discord.gg/j65dEVp5
26 minutes | Apr 25, 2021
The Inside View #1—Does the world really need another podcast?
In this first episode I'm the one being interviewed. Questions:
- Does the world really need another podcast?
- Why call your podcast superintelligence?
- What is the inside view? The outside view?
- What could be the impact of podcast conversations?
- Why would a public discussion on superintelligence be different?
- What are the main reasons we listen to podcasts at all?
- Explaining GPT-3 and how we could scale to GPT-4
- Could GPT-N write a PhD thesis?
- What would a superintelligence need on top of text prediction?
- Can we just accelerate human-level common sense to get superintelligence?
© Stitcher 2021