The Nonlinear Library
11 minutes | May 4, 2022
EA - My Reflections Facilitating for EA Virtual Programs by ElikaSomani
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Reflections Facilitating for EA Virtual Programs, published by ElikaSomani on May 2, 2022 on The Effective Altruism Forum. TL;DR: We need to put more resources and support into Virtual Programs, and perhaps restructure the fellowships, if we are to reach CEA’s organizing goals of expanding community diversity (geographically in particular) and increasing the rate at which Virtual Programs’ introductory participants become high-impact EAs. I have some thoughts on how we can do this! Setting the Stage - A Brief Overview of Virtual Programs (VP) and Myself I have been facilitating Virtual Programs for ~5 months. So far I have facilitated four cohorts (2 introductory, 2 in-depth) and participated in 1 introductory and 1 in-depth program. I'm planning to keep doing 2 groups a round. I have also helped with some EA DC and Twin Cities community-building, ran a fellowship amongst friends, and volunteered on some specific cause area projects. I am also starting some specific community-building projects related to community health and making EA a more welcoming, person-first space. There’s not much to share or reflect on since this is so new, but the projects definitely stem from the need for better VP community building that I’ve noticed in my facilitating. Virtual Programs (VP) is run by the Centre for Effective Altruism and essentially provides EA seminar programs for anyone around the world. I would say its main focus is running introductory, in-depth, and The Precipice fellowships/reading groups. These fellowships are ‘run’ by facilitators - whose volunteer role is to facilitate the fellowship meetings. VP also hosts many other one-off virtual events for the community. The Problem: My Experience with VP: I took both fellowships through VP and, to be honest, did not have the most positive experience. It was not related to the material; rather, I did not feel any sense of community with my cohort or with the broader EA community. I should note that my perception of EA (which was a misconception) delayed me from taking this fellowship in the first place (by about two years). Thus, the language we use and how we spread EA to new members and via word of mouth is important to get right. Since then – due to having friends/family that encouraged and welcomed me into EA outside of the fellowship – I have become highly engaged in EA. I have attended 2 conferences (and plan to attend many more), helped with community building and various cause-area volunteer projects, and even switched my career from public health to biosecurity (a work in progress) based on longtermism’s neglectedness. I have reason to believe this isn’t an experience unique to me. Out of my intro and in-depth fellowship group, I am aware of only one other person who is still active in the EA community. Coincidence or not, she happens to also have a family member who was already active in the EA community. From my (limited) observation, the fellowships (intro and in-depth) are the main pathway for those not involved in university/city groups to become highly involved in EA. However, I don’t think the current structure of the fellowships run by VP is optimized to produce highly-engaged EAs. I also would not be surprised if VP’s retention rate (what % of those who take the fellowships stay involved in EA) isn’t as high as in-person university/city-group organized fellowships. 
I’d be curious to see the data. To clarify, this is definitely not the organizer's fault! Community building is hard and getting people to act on EA ideas (not just be interested in them) is extremely hard. I believe doing it virtually is even harder. This is a secondary issue that I haven’t fully fleshed out yet, but I think we need to do more to support facilitating fellowships for non-English speakers and people not living in the US/Europe/the UK. In ...
11 minutes | May 4, 2022
EA - Posting More Better: Social Media Rules of Thumb by Nathan Young
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Posting More Better: Social Media Rules of Thumb, published by Nathan Young on May 3, 2022 on The Effective Altruism Forum. Thank you to those who commented. This does not imply they agree with this post: Rob Bensinger, John Bridge, Garrison Lovely, Fin Moorhouse, Bruce Tsai, Rachel Edwards, Linda Linsefors, Joseph Lemien, Ines, Charles Dhillon, Neel Nanda, Linchuan Zhang, Chana Messinger, Kirsten Horton, Vaidehi Agarwalla, Frances Lorenz, Dan Elton, Shakeel Hashim. Please DM me if I've forgotten you, I'm very very forgetful. Tl;dr: The downside risk of small accounts is low. The upside of using social media is high. If you like social media you should post more. Context: I use twitter a lot. Often I think, “Am I having a positive impact?”. I think I am. This post is an attempt to lower the barriers to entry to social media and offer you the chance to critique my thinking. My sense (and yours, though small sample) is that EA is getting more attention recently. Things feel higher upside and higher downside. So here are some rules of thumb I use when thinking about social media. I intend to write a second post around EA, comms and reputation, but if someone wants to beat me to it, feel free. While researching this, I learned the Community Health team are good point people to email for any developing community reputation issues: email@example.com. There is also this article on how to talk to journalists. Doing more good on social media: some rules of thumb. If you only read one bit of this article: My life is my own. I find deference tempting, but my social media choices are my own. I want to start from a position of agency. I don't need anyone's permission to post. Do good better. My social media presence is a resource just like everything else. How can I use it to maximally increase the outcomes I want/behave virtuously? There isn’t a cutoff between my money, my time and my social media use (you might have different boundaries here). You can talk to people you’ve always wanted to through social media. Because of twitter, I have made several close friends, gotten 2 job offers, and interviewed Bryan Caplan the other day. If you’re a small presence on social media, perhaps take a few more risks. I generally think people should post their actual views more, not less. There is a huge space of ideas and I think generally we under-discuss, rather than the opposite. If you want a place to do this, I suggest discussing your cause area; it's a good place to explore boundaries. I would not recommend starting out with radical takes on sexual ethics. If I make an error, I admit it and say why I was wrong. If I posted something completely out of order, I often just delete it. I sometimes take flak from rationalists for this, but twitter doesn’t allow EA forum-style crossing out. I don’t want to be attached to a viral post I no longer endorse. Even correcting doesn’t help if people don’t see the correction. I post about what gives me energy. I find social media a good thing between tasks. What do you enjoy talking or writing about and how can you use that for good? Reply well. Social media is about making connections with other people. Learn how to respond to someone in a friendly and conversation-starting way. In some way we are all "reply guys" for someone up the chain. I want Matt Yglesias to respond to my ideas. 
Matt Yglesias wants Joe Biden to respond to his. Twitter especially just condenses this social graph. Correcting strangers is overrated. Changing minds is hard and it often requires a relationship. If you are new to twitter, please do not start by arguing with everyone (unless it’s rationalists, they love that). If you are in Global Poverty or Animal Welfare, you are particularly missing from social media. Perhaps it’s just me, but EA twitter feels like it has a strong...
4 minutes | May 3, 2022
LW - Less Wrong Community Weekend 2022, open for application! by UnplannedCauliflower
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Less Wrong Community Weekend 2022, open for application!, published by UnplannedCauliflower on May 1, 2022 on LessWrong. Less Wrong Community Weekend 2022, open for application! When: Friday 26th August - Monday 29th August 2022 Where: jh-wannsee.de (Berlin) The tickets: Regular ticket: 150€ Supporter ticket: 200/300/400€ Angel ticket: 75€ This year’s awesome keynote speaker will be Duncan Sabien, whose talk is “The moments that matter”. Duncan is the former director of curriculum at CFAR, the primary preparer of the CFAR handbook, and a regular producer of consistently interesting and thought-provoking essays such as In Defense of Punch Bug and Lies, Damn Lies, and Fabricated Options. From Friday August 26th through Monday August 29th, aspiring rationalists from all around Europe and beyond will gather for four days at the lovely Lake Wannsee near Berlin to socialize, run workshops, talk, and enjoy our shared forms of nerdiness. What the event is like: On Friday afternoon we put up four wall-sized daily planners, and by Saturday morning the attendees fill them up with 50+ workshops, talks and activities of their own devising, such as Icebreaker games, Rationality techniques, EA community building discussions, Comfort zone exploration workshops, Polyamory and relationships workshops, morning meditation sessions in the Winter garden, and many more. This is our 7th year, and we feel that the atmosphere and sense of community at these weekends is something really special. If that sounds like something you would enjoy, and you have some exciting ideas and skills to contribute, come along and get involved. And of course, if you want to spend some time relaxing and recharging on your own, you can hike in the forests, sunbathe or stroll lazily along the banks of Lake Wannsee whenever you like. The venue for the event, Wannsee youth hostel, is nestled amongst the beautiful lakes and forests of South West Berlin and provides shared accommodation, a canteen, a selection of large and small seminar rooms, and plenty of comfortable spaces inside and out for us to use as we please. Application Process: There are usually about 20% more people who would like to come than we have spots for, so priority will be given to those who seem particularly interested, or who plan to make interesting contributions by running a talk, workshop or activity. However, everyone is welcome to apply. So apply now and tell us why it would be awesome to have you as part of the group! The primary ticket will be 150 euros, but if you want to be an unusually and exceptionally cool person, we also have supporter tickets that allow us to help people who need financial help, and improve this and future community events. We are also hoping that some of you will volunteer as an ‘on-site angel’ for a half-priced ticket: angels help us set everything up, keep things tidy, make sure the snacks table is kept filled, and handle similar tasks. They will only be expected to work before, during, or after the event, and joining sessions will still often be feasible. If this sounds interesting to you, send us an email (firstname.lastname@example.org) and we’ll schedule a call to talk with you. We do have funds to help people who would struggle to afford the ticket. 
If you need help, choose a regular ticket when applying and send us an email explaining your situation after your application has been accepted. Also tell your friends about this event if you think they will be interested! When: Friday 26th August - Monday 29th August 2022 Where: jh-wannsee.de The tickets: Regular ticket: 150€ Supporter ticket: 200/300/400€ Angel ticket: 75€ http://tiny.cc/lwcw22_apply If you have any questions, please email us at email@example.com. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please vi...
18 minutes | May 3, 2022
EA - Should You Have Children Despite Climate Change? by Jeroen W
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should You Have Children Despite Climate Change?, published by Jeroen W on May 3, 2022 on The Effective Altruism Forum. Background information This is the script of what would've been A Happier World's video on climate change if Kurzgesagt hadn't published theirs. The main ideas we wanted to convey were: Climate change is important but the doomerism is overstated The best actions you can take are participating in political activism and donating to effective charities The question of whether or not to have children despite climate change was just our way of framing the video to promote these two ideas. And since (1) has been covered by Kurzgesagt and (2) in our response video, we decided it doesn't make that much sense anymore to create a video on the question of having children. Instead, publishing this script on the EA Forum seemed like a low-effort option with potential for positive impact. Note that there could still be a few mistakes, since we never entirely finalized it. Should You Have Children Despite Climate Change? Intro In October 2020, the then 33-year-old Swiss national Marc Fehr got a vasectomy. The reason? He wants to save mankind from extinction. Fewer offspring means fewer emissions, the argument goes. He’s not alone in this reasoning. Even Prince Harry has said he doesn’t want more than two children because of climate change. Miley Cyrus doesn’t want to bring children on this “piece-of-shit” planet. British frontwoman Blythe Pepino gave up on the idea of having a family after listening to a lecture by Extinction Rebellion, and then started a movement of women not procreating in response to the coming ‘climate breakdown and civilisation collapse’. And even David Attenborough has stated several times that we need fewer people on this planet in order to solve our environmental problems. A poll last year found that 39% of young people “feel uncertain” about having children because of climate change. The main reasons are that they’re scared of the future their offspring will face, and that having children could contribute to even more climate change. Climate change is definitely bad. There’s no doubt about it. Many animal species are at risk of extinction, and natural disasters significantly worsen the lives of a lot of people, especially in the Southern Hemisphere, but increasingly also affect populations in Western countries. But exactly how bad will our future be, and will having children really make things worse? What are the best actions we can take to help the climate? Will the future really be that bad? When climate activists or politicians talk about climate change, their rhetoric tends to be very drastic, sometimes even apocalyptic. Many claim we have less than a decade left to save the planet. Johannes Ackva: I think where this idea comes from is an IPCC report that says if we want to hit the 1.5 degree target, we need to reduce emissions by 50 percent over this decade. This is very different from saving the planet, which is very, very underspecified to begin with. But it's not that we either reach 1.5 degrees or the world ends. The IPCC is an internationally accepted authority on climate change, and its work is widely agreed upon by leading climate scientists as well as governments. We’re currently at 1.1 degrees Celsius of warming compared to the average temperature between 1850 and 1900. 
Maarten Boudry: Climate change is a problem that will get worse and worse. And we have agreed on some artificial thresholds that we don't want to cross, but that's partly arbitrary. It's not as if there's a specific moment in time when suddenly all hell will break loose. I think we should be aiming for as little warming as realistically possible. Two degrees is worse than one degree, three degrees is worse than two degrees, etc. But the idea that we o...
4 minutes | May 3, 2022
LW - Predicting for charity by Austin Chen
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predicting for charity, published by Austin Chen on May 2, 2022 on LessWrong. Excerpted from Above the Fold. Prediction markets succeed when they require people to bet something they value, like money. But past attempts at real-money prediction markets like Intrade have been shut down by the CFTC. At Manifold Markets, we currently allow users to trade on Manifold Dollars (aka M$ or “mana”), an in-game currency specific to our platform. But one of the most common criticisms we hear is: “Why should I care about trading fake currency?” It would be nice to find a middle ground where users can bet something of real value which doesn’t run afoul of financial regulations. Thinking about this, we were inspired by donor lotteries: if you can gamble with charitable donations, shouldn’t you be able to make bets with them? Thus, Manifold for Good was born. You can now donate your M$ winnings to charity! Through the month of May, every M$ 100 you contribute turns into USD $1 sent to your chosen charity - we’ll cover all processing fees! (The conversion is sketched at the end of this entry.) Why? Manifold for Good solves two problems with today’s prediction markets. First, it allows you to bet using something valuable to you (i.e. donations to your favorite charity), which increases the incentive to bet correctly, relative to just virtual points. Second, it respects existing financial regulations, which has proven difficult for prediction markets in the past. By providing an entertaining and impactful way to allocate money to charity, Manifold for Good can also increase the total amount of money donated. Just as donors participating in a charity bingo night are willing to pay extra for the value of entertainment, so too can Manifold’s markets provide a fun, motivating reason to participate in charitable activities. What’s Next? Think of Manifold for Good as an experiment! We’re seeing what the level of demand is for this kind of redemption for Manifold Dollars; let us know if you have any thoughts or suggestions. In the future, we’d like to grow the program to increase the number of available charities. We currently support 30+ charities; if you have a charity recommendation, let us know and we’ll pay an M$ 500+ bounty once we add it! We’d also love to offer donation matching to cause areas or charities — we think this would get users even more excited about forecasting and donating! If you would like to partner with us to fund an experiment like this, or be featured as a charity, please get in touch at firstname.lastname@example.org. Finally: one HUGE shoutout to Sam Harsimony and Sinclair Chen for leading the effort to build out Manifold for Good! Note: we are not affiliated with most of these charities, other than being fans of their work. As Manifold itself is a for-profit org, your M$ contributions will not be tax-deductible. Bonus: our codebase is now open source! At Manifold, we’ve always aimed to be transparent about the way we do things. Some examples: We post markets on our product and business decisions. Our analytics and seed round memo are public for the world to see. We have internal team discussions on our public Discord. I’m excited to say that we’ve taken the next step forward: open sourcing our entire codebase! Check out our GitHub repo here. Don’t forget to like and subscribe, er, leave a GitHub Star! 
This effort was spearheaded by Marshall Polaris, to whom the whole Manifold community owes a huge debt of gratitude. Marshall has been behind the scenes, doing the necessary work to prepare us for this big step forward: Ensuring our user data is secure against attackers Expanding our documentation to help new contributors get up and running Improving our processes to scale up with many new potential contributors I can’t wait to see what features you build, bugs you fix, or projects you start, now that our code is open to you! Austin Ku...
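To make the redemption rate in the post above concrete, here is a minimal sketch of the mana-to-donation conversion (the constant and function names are ours for illustration and are not part of Manifold's actual API):

```python
MANA_PER_USD = 100  # stated conversion: M$100 contributed -> USD $1 sent to charity

def donation_usd(mana: int) -> float:
    """Return the USD amount sent to the chosen charity for a given mana contribution."""
    return mana / MANA_PER_USD

# Example: redeeming M$1,500 of winnings during the May program
print(donation_usd(1500))  # 15.0, i.e. $15 to the chosen charity, fees covered by Manifold
```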
23 minutes | May 3, 2022
EA - Kidney stone pain as a potential cause area by Dan Elton
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Kidney stone pain as a potential cause area, published by Dan Elton on May 3, 2022 on The Effective Altruism Forum. This is a cross-post from my Substack (original post here). Note: At the Effective Altruism Global: San Francisco conference in 2017, Prof. Will MacAskill implored the audience to “keep EA weird”. As the EA movement grows, it’s important to keep EA’s original spirit of exploration alive. To help do that, I’m planning to write several articles on potential new — and weird — cause areas. Effective altruists want to figure out how to do the most good per dollar spent. “Good” is often cashed out in terms of deaths prevented or quality-adjusted life years saved (QALYs). QALYs attempt to adjust life-years for different states of health (a small worked example appears at the end of this entry). However, the very method of QALY calculation, which typically involves surveys asking about trade-offs, might have some blind spots. For instance, if a disease state is rare, then most likely the requisite survey data for it has never been collected. The surveys themselves may have blind spots too. Consider these points from Andrés Gómez Emilsson (emphasis mine): “Someone described the experience of having a kidney stone as ‘indistinguishable from being stabbed with a white-hot-glowing knife that's twisted into your insides non-stop for hours’. It’s likely that the reason why we do not hear about this is because (1) trauma often leads to suppressed memories, (2) people don’t like sharing their most vulnerable moments, and (3) memory is state-dependent (you cannot easily recall the pain of kidney stones .. you’ve lost a tether/handle/trigger for it, as it is an alien state-space on a wholly different scale of intensity than everyday life).” Andrés Gómez Emilsson As Daniel Kahneman describes in his book Thinking, Fast and Slow, the remembering self is different from the experiencing self. People have trouble describing and conceptualizing extreme events, either positive or negative. People also don’t like thinking about extreme negative events generally, whether they experienced them or others did. I personally sometimes notice my brain flinching away when thinking about kidney stone pain, even though I haven’t experienced it myself. In the first part of this post I’ll go over the evidence for extreme pain events. Then, I’ll focus on kidney stones. The main reason for focusing on kidney stone pain is that over the past two years I’ve worked off and on on automated deep-learning-based software for detecting and measuring kidney stones in CT scans (see my paper in Medical Physics). So I have some expertise on the subject. Currently I am working with a radiologist at Massachusetts General Hospital who is an expert on stone disease, Prof. Avinash Kambadakone. Background - suffering-focused ethics “In my opinion human suffering makes a direct moral appeal, namely, the appeal for help, while there is no similar call to increase the happiness of a man who is doing well anyway.” “Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all.”— Karl Popper, The Open Society and Its Enemies (1945) The idea that we should focus on eliminating suffering over increasing pleasure is intuitive to many people. 
See this recent Twitter poll from Robin Hanson: So, I don’t think I need to spend much time here convincing people that reducing suffering should take precedence over increasing happiness. Note what I have in mind here is what is called “weakly-negative utilitarianism”, which is quite different from pure negative utilitarianism, which focuses only on eliminating suffering. Readers interested in diving further into these topics should check out Lukas Gloor’s essay “The Case for Suffering-Focused Ethics”. Background - long-tailed distributions of pleasure and pain “...
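Since the post leans on QALYs, here is a minimal worked example of the standard QALY arithmetic mentioned above (the quality weights are invented for illustration; real weights come from the trade-off surveys the post discusses):

```python
def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life years: life-years weighted by a health-state weight in [0, 1]."""
    return years * quality_weight

# Hypothetical example: 10 years in full health vs. 10 years in a painful disease state.
print(qalys(10, 1.0))  # 10.0 QALYs
print(qalys(10, 0.6))  # 6.0 QALYs; for rare disease states, survey-derived weights may not exist
```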
8 minutes | May 3, 2022
EA - Is EA "just longtermism" now? by frances lorenz
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is EA "just longtermism" now?, published by frances lorenz on May 3, 2022 on The Effective Altruism Forum. Acknowledgements A ginormous thank you to Bruce Tsai who kindly combed through multiple drafts and shared tons of valuable feedback. Also a big thank you to Shakeel Hashim, Ines, and Nathan Young for their insightful notes and additions. Preface In this post, I address the question: is EA just longtermism? I then ask, among other questions, what factors contribute to this perception? What are the implications? 1. Introduction Recently, I’ve heard a few criticisms of EA that hinge on the following: “EA only cares about longtermism.” I’d like to explore this perspective a bit more and the questions that naturally follow, namely: How true is it? Where does it come from? Is it bad? Should it be true? 2. Is EA just longtermism? In 2021, around 60% of funds deployed by the Effective Altruism movement came from Open Philanthropy (1). Thus, we can use their grant data to try and explore EA funding priorities. The following graph (from Effective Altruism Data) shows Open Philanthropy’s total spending, by cause area, since 2012: Overall, Global Health & Development accounts for the majority of funds deployed. How has that changed in recent years, as AI Safety concerns grow? We can look at this uglier graph (bear with me) showing Open Philanthropy grants deployed from January 2021 to the present (data from the Open Philanthropy Grants Database; a rough sketch of this aggregation appears at the end of this entry): We see that Global Health & Development is still the leading fund-recipient; however, Risks from Advanced AI is now a closer second. We can also note that the third and fourth most funded areas, Criminal Justice Reform and Farm Animal Welfare, are not primarily driven by a goal to influence the long-term future. With this data, I feel pretty confident that EA is not just longtermism. However, it is also true (and well-known) that funding for longtermist issues, particularly AI Safety, has increased. This raises a few more questions: 2.1 Funding has indeed increased, but what exactly is contributing to the view that EA essentially is longtermism/AI Safety? (Note: this list is just an exploration and not meant to claim whether the below things are good or bad) William MacAskill’s upcoming book, What We Owe the Future, has generated considerable promotion and discussion. Following Toby Ord’s The Precipice, published in March 2020, I imagine this has contributed to the outside perception that EA is becoming synonymous with longtermism. The longtermist approach to philanthropy is different from mainstream, traditional philanthropy. When trying to describe a concept like Effective Altruism, sometimes the thing that most differentiates it is what stands out, consequently becoming its defining feature. Of the longtermist causes, AI Safety receives the most funding, and furthermore, has a unique ‘weirdness’ factor that generates interest and discussion. For example, some of the popular thought experiments used to explain Alignment concerns can feel unrealistic, or like something out of a sci-fi movie. I think this can serve to both: 1. draw in onlookers whose intuition is to scoff, 2. give AI-related discussions the advantage of being particularly interesting/compelling, leading to more attention. AI Alignment is an ill-defined problem with no clear solution and tons of uncertainties: What counts as AGI? 
What does it mean for an AI system to be fair or aligned? What are the best approaches to Alignment research? With so many fundamental questions unanswered, it’s easy to generate ample AI Safety discussion in highly visible places (e.g. forums, social media, etc.) to the point that it can appear to dominate EA discourse. AI Alignment is a growing concern within the EA movement, so it's been highlighted recently by EA-aligned orgs (for example, AI S...
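For readers who want to reproduce the funding breakdown described above, here is a rough sketch of the aggregation, assuming a CSV export from the Open Philanthropy Grants Database; the column names ("focus_area", "amount", "date") are hypothetical and should be checked against the real export:

```python
import pandas as pd

# Hypothetical schema: focus_area, amount, date. Verify against the actual export.
grants = pd.read_csv("open_philanthropy_grants.csv", parse_dates=["date"])

# Restrict to grants made from January 2021 onward, as in the post's second graph.
recent = grants[grants["date"] >= "2021-01-01"]

# Total spending by cause area, largest first.
by_cause = recent.groupby("focus_area")["amount"].sum().sort_values(ascending=False)
print(by_cause.head(10))
```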
19 minutes | May 3, 2022
EA - Information security considerations for AI and the long term future by Jeffrey Ladish
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Information security considerations for AI and the long term future, published by Jeffrey Ladish on May 2, 2022 on The Effective Altruism Forum. Summary This post is authored by Jeffrey Ladish, who works on the security team at Anthropic, and Lennart Heim, who works on AI Governance with GovAI (more about us at the end). The views in the post are our own and do not speak for Anthropic or GovAI. This post follows up on Claire Zabel and Luke Muehlhauser’s 2019 post, Information security careers for GCR reduction. We’d like to provide a brief overview of: How information security might impact the long term future Why we’d like the community to prioritize information security In a following post, we will explore: How you could orient your career toward working on security Tl;dr: New technologies under development, most notably artificial general intelligence (AGI), could pose an existential threat to humanity. We expect significant competitive pressure around the development of AGI, including a significant amount of interest from state actors. As such, there is a large risk that advanced threat actors will hack organizations — those that either develop AGI, provide critical supplies to AGI companies, or possess strategically relevant information — to gain a competitive edge in AGI development. Limiting the ability of advanced threat actors to compromise organizations working on AGI development and their suppliers could reduce existential risk by decreasing competitive pressures for AGI orgs and making it harder for incautious or uncooperative actors to develop AGI systems. What is the relevance of information security to the long term future? The bulk of existential risk likely stems from technologies humans can develop. Among candidate technologies, we think that AGI, and to a lesser extent biotechnology, are most likely to cause human extinction. Among technologies that pose an existential threat, AGI is unique in that it has the potential to permanently shift the risk landscape and enable a stable future without significant risks of extinction or other permanent disasters. While experts in the field have significant disagreements about how to navigate the path to powerful aligned AGI responsibly, they tend to agree that actors that seek to develop AGI should be extremely cautious in the development, testing, and deployment process, given that failures could result in catastrophic risks, including human extinction. NIST defines information security as “The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction.” We believe that safe paths to aligned AGI will require extraordinary information security effort for the following reasons: Insufficiently responsible or malicious actors are likely to target organizations developing AGI, software and hardware suppliers, and supporting organizations to gain a competitive advantage. Thus, protecting those systems will reduce the risk that powerful AGI systems are developed by incautious actors hacking other groups. Okay, but why is an extraordinary effort required? Plausible paths to AGI, especially if they look like existing AI systems, expose a huge amount of attack surface because they’re built using complex computing systems with very expansive software and hardware supply chains. 
Securing systems as complex as AGI systems is extremely difficult, and most attempts to do this have failed in the past, even when the stakes have been large, for example, the Manhattan Project. The difficulty of defending a system depends on the threat model, namely the resources an attacker brings to bear to target a system. Organizations developing AGI are likely to be targeted by advanced state actors who are amongst the most capable hackers. Even though this is a challengin...
6 minutes | May 3, 2022
EA - P(utopia) is more important than P(doom) and this could have important strategic implications by Joshua Clymer
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: P(utopia) is more important than P(doom) and this could have important strategic implications, published by Joshua Clymer on May 2, 2022 on The Effective Altruism Forum. Edit: I'm assuming total utilitarian values and the basic ideas behind longtermism without providing any justification for them. If you do not have these values or disagree with longtermism, these arguments probably won't apply to you. I am concerned that the longtermist community is too downside-focused, and this could result in people not thinking enough about increasing the probability of utilitarian utopia. Claim 1: the probability that superintelligent AGI has utilitarian values dominates EV calculations. There are roughly three ways I can see superintelligence going down: The superintelligence is misaligned and we all die. The superintelligence is aligned to non-utilitarian values (probably normal human values). Perhaps humans end up colonizing other solar systems. Or maybe they all decide to enter some kind of simulation – but they don’t use every joule and kilogram available to produce utility. The superintelligence is aligned to utilitarian values – i.e. every joule in every reachable galaxy is used to produce happy, meaningful lives. Imagine Dyson spheres surrounding every star to support countless brains in vats or simulated minds. Evaluating outcome 1: I'll ignore the moral relevance of the superintelligence itself since it is present in all three outcomes. If the superintelligence is unaligned, I don’t see any reason to expect it to do anything particularly utilitarian (or anti-utilitarian), so I will value this outcome at 0. Evaluating outcome 2: I'll be generous and say that humans will populate all reachable galaxies and there will be 10 billion happy humans per star. Evaluating outcome 3: I'll be pessimistic and assume that the hard problem of consciousness can’t be solved and the superintelligence must construct biological brains to ensure that real experiences are being produced. A brain consumes 20 joules per second and a star produces something like 10^26 joules per second, so a single star can support 10^25 brains. 10^25 brains would require a lot of matter, but energy can be converted into matter. There are roughly 10^16 joules in a kg and a brain weighs ~1 kg, so a star's output converts to about 10^10 kg of matter per second, which means it would take about 10^15 seconds ≈ 30 million years for a star to produce enough matter to support its full brain capacity, which is still no time at all in the scheme of things. So, outcome three produces something like 10^25 happy humans per star (or many more). So, the expected utility of the universe should look something like the expression sketched at the end of this entry. Utopia is where almost all of the value is. Claim 2: It is not obvious that reducing the probability of extinction is the best way to increase the probability of utopia. According to the previous section, almost all of the value in reducing x-risk comes from indirectly increasing the probability of utopia. Imagine a scale that represents the probabilities of the three outcomes with two handles: the doom handle, and the utopia handle. When you pull either handle, the rest of the chart scales proportionally. Some interventions that mostly just pull the doom handle: The majority of AI alignment research agendas: interpretability, inner-alignment, learning from human feedback, etc. Increasing awareness about the alignment problem. 
Some interventions that pull the utopia handle: Directing EA community-building efforts at people who could have influence over AGI. Most longtermist governance research. Investing in capabilities at companies where there is more EA influence (e.g. Anthropic). Trying to get EAs into positions of influence in AI companies or the government. Comparing scale: Note that the value of pulling the doom handle depends a lot on the ratio between the blue and yellow areas. x = p(normal values)/p(ut...
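A rough worked version of the per-star arithmetic and the expected-utility comparison described in Claim 1, using the author's own estimates (this is a reconstruction, not the post's original figure):

```latex
% Brains supportable per star, from the author's power estimates:
\[ N_{\text{brains}} \approx \frac{10^{26}\ \text{J/s per star}}{20\ \text{J/s per brain}} \approx 10^{25} \]
% Time for one star's output to produce the needed brain matter (1 kg/brain at ~10^16 J/kg):
\[ t \approx \frac{10^{25}\ \text{kg} \times 10^{16}\ \text{J/kg}}{10^{26}\ \text{J/s}} = 10^{15}\ \text{s} \approx 3 \times 10^{7}\ \text{years} \]
% Expected utility per star across the three outcomes:
\[ \mathbb{E}[U] \approx P(\text{misaligned}) \cdot 0 + P(\text{non-utilitarian values}) \cdot 10^{10} + P(\text{utopia}) \cdot 10^{25} \]
```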
9 minutes | May 2, 2022
EA - My Job: EA Office Manager by Jonathan Michel
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Job: EA Office Manager, published by Jonathan Michel on May 2, 2022 on The Effective Altruism Forum. Summary I think being an Office Manager at an EA office can be a really impactful job, and I would like to share my experience and give some advice. Part of the reason I’m writing this is that I’m the new Head of Property at the CEA Operations Team, but my previous job was being the office manager, and I’m hiring someone to replace me. If you’re interested, please apply here or get in touch with me if you have questions. How much should you trust me? I’m biased, but I’m trying to be honest and just share my experience. Why am I writing this? There has already been a bit of past discussion of ops on the forum. The reason I want to add to this pile of writing about operations is that I want to advocate for/give context on the specific role of an EA Office Manager — one whose importance people oftentimes underestimate. There has been some talk regarding EA Hubs and where we should start new ones. I think that EA Hubs can be very impactful and that a great office manager is a crucial component. An outstanding EA office which makes people more productive and happy needs an outstanding office manager. My Background Overall: I had a lot of experience with EA (I ran a local group for over four years, was well-read, and organised a bunch of events), and did a lot of volunteering through which I built operations skills. I did a lot of volunteer work for the German EA community, such as organizing fellowships, talks, and a retreat. I co-founded two NGOs (one focused on COVID relief and the other on cellular agriculture). I did a bunch of volunteering for GFI and ProVeg, and an internship at www.effektiv-spenden.org. Day in the life Last year, I started as the Office Manager of the Oxford EA office, Trajan House. Trajan House currently accommodates the Centre for Effective Altruism, the Future of Humanity Institute, the Global Priorities Institute, the Forethought Foundation, the Centre for the Governance of AI, the Global Challenges Project, Our World in Data, and a number of people working at other EA organisations (such as Rethink Priorities, HLI, LEEP, and OpenPhil). At the moment, around 80 EA professionals work at Trajan House, and this number is growing. If you want to get a better sense of Trajan House (including some photos), you can see the office guide here. Until very recently, the office team consisted of me (as the office manager) and employees of Oxford University working in the reception area and in facilities management. One month ago, two office assistants joined my team, so we now have two additional FTEs helping to run the office. As described above, I’m now transitioning out of the office manager role, but the below outlines my week in the position. How I spent my time: 40% - Expanding, changing and optimising the office set-up (including thinking about how we can further expand and improve the services we provide) 20% - Developing the culture and community aspects of the office (e.g. 
by planning events) 20% - Processing direct requests like “Can I get a MacBook charger, please?” or “Do we have spare copies of The Precipice?” 10% - Managing the Office Assistants and liaising with the Facilities Management team 5% - Processing requests from individuals or organisations for office space For a more tangible sense of what I did, here are some specific things I did in the last couple of months: Changing the acoustics (the “soundscape”) of our cafeteria (getting different quotes done, thinking about the interior design of the space, liaising with the contractors implementing it) Finding a new caterer (researching different companies, work-trialling them, negotiating a contract in cooperation with our lawyers, having regular check-ins to ensure quality and improve their s...
2 minutes | May 2, 2022
EA - New study on whether animal welfare reforms affect wider attitudes towards animals by Jamie Harris
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New study on whether animal welfare reforms affect wider attitudes towards animals, published by Jamie Harris on May 2, 2022 on The Effective Altruism Forum. A new study called “The Effects of Exposure to Information About Animal Welfare Reforms on Animal Farming Opposition: A Randomized Experiment” has been published in Anthrozoös. As context, some advocates worry that learning about farmed animal welfare reforms can increase complacency with factory farming. (More discussion here.) We wanted to put that to the test. The experiment found that, unlike information about current farming practices, which somewhat increased animal farming opposition, providing information about animal welfare reforms had no significant effect on animal farming opposition, but we found evidence of indirect effects via some of the mediators we tested. Written by me, Ali Ladak (Sentience Institute), and Maya Mathur (Assistant Professor, Stanford University). (For readers unfamiliar with the Cohen's d effect sizes quoted in the abstract below, a short definition is sketched at the end of this entry.) ABSTRACT There is limited research on the effects of animal welfare reforms, such as transitions from caged to cage-free eggs, on attitudes toward animal farming. This preregistered, randomized experiment (n = 1,520) found that participants provided with information about current animal farming practices had somewhat higher animal farming opposition (AFO) than participants provided with information about an unrelated topic (d = 0.17). However, participants provided with information about animal welfare reforms did not report significantly different AFO from either the current-farming (d = −0.07) or control groups (d = 0.10). Although these latter effects on AFO were small and nonsignificant, they appeared to be mediated by changes in perceived social attitudes toward farmed animals and optimism about further reforms to factory farming. Exploratory analysis found no evidence that hierarchical meat-eating justification or beliefs about how well-treated farmed animals currently are mediated the effect. Further research is needed to better understand why providing information about animal welfare reforms did not substantially increase AFO overall, whereas providing information about current practice did somewhat increase AFO. FULL PAPER Author Accepted Manuscript / Preprint: Anthrozoös (paywalled): Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
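Cohen's d is the standardized mean difference between two groups; here is a minimal sketch of the standard definition (not code from the study itself):

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference: (mean_a - mean_b) / pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# d = 0.17 (the current-farming condition) means the treatment group's mean opposition
# sat 0.17 pooled standard deviations above the control group's: a small effect.
```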
2 minutes | May 2, 2022
LW - What Would It Cost to Build a World-Class Dredging Vessel in America? by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Would It Cost to Build a World-Class Dredging Vessel in America?, published by Zvi on May 2, 2022 on LessWrong. I'm doing some research into questions surrounding the Foreign Dredge Act of 1906, and thought I'd experiment by throwing this out there. For context, the 31 biggest dredging vessels were not built in America, and thus cannot be used in America by law. We only have a small number of less capable vessels, and they often get redirected to short-term emergency tasks. This is preventing us from doing a bunch of very valuable things, like repairing or expanding ports, which end up taking much more time and money or not happening at all. This podcast is recommended. You can find a transcript here. They claim that there's no way America will be able to have such capacity for at least decades. I want to verify that (and also check whether any other claims here don't ring true). As an alternative to repealing the Dredge Act (which I'm exploring and planning to write about), another option would be to build world-class dredging vessels here in America, such that they could be used. Before assuming that this is impossible, and to have a straight answer: what would happen if someone with deep pockets tried to commission a world-class dredging ship that would qualify? Could it be done? Are there other impossible barriers to solve? How much would it cost, and how much more would that be than building it elsewhere? How long would it take? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
8 minutes | May 2, 2022
EA - Doing good easier: how to have passive impact by Kat Woods
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Doing good easier: how to have passive impact, published by Kat Woods on May 2, 2022 on The Effective Altruism Forum. What’s better - starting an effective charity yourself, or inspiring a friend to leave a low-impact job to start a similarly effective charity? Most EAs would say that the second is better: the charity gets founded, and you’re still free to do other things. Persuading others to do impactful work is an example of what I call passive impact. In this post, I explain what passive impact is, and why the greatest difference you make may not be through your day-to-day work, but through setting up passively-impactful projects that continue to positively affect the world even when you’ve moved on to other things. What is passive impact? When we talk about making money, we can talk about active income and passive income. Active income is money that is linked to work (for example, a salary). Passive income is money that is decoupled from work, money that a person earns with minimal effort. Landlords, for example, earn passive income from their properties: rent comes in monthly and the landlord doesn’t have to do much, beyond occasional maintenance. Similarly, when we talk about our positive impact, we can talk about active impact and passive impact. When most people think about their impact, they think about what they do. A student might send $100 to the world’s poorest people, who might use this money to buy a roof for their house or education for their kids. Or an AI researcher might spend 2 hours working on a problem in machine learning, to help us make superintelligent AI more likely to share our values. These people are having an active impact - making the world better through their actions. Their impact is active because, in order to have the same impact again, they’d have to repeat the action - make another donation, or spend more time working on the problem. Now consider the career advisors at 80,000 Hours. Imagine that, thanks to their advice, a young person decides to work for an effective animal advocacy charity rather than at her local cat shelter, thus saving hundreds of thousands of chickens from suffering on factory farms. The 80,000 Hours advisors can claim some of the credit for this impact - after all, without their advice, their advisee would have had a much less impactful career. But after the initial advising session, the coaches don’t need to keep meeting with their advisee - the advisee generates impact on her own. This is what I mean by passive impact: taking individual actions or setting up projects that keep on making the world better, without much further effort. The ultra-wealthy make most of their money through passive income. Bill Gates hasn’t worked at Microsoft since 2008, but it continues to make money for him. Similarly, many highly successful altruists are most impactful not through their day-to-day work, but through old projects that continue to generate positive impact, without further input. Why should you try to create passive impact? What are the benefits of passive impact? Here are a few: You can have a really big impact. Your active impact is limited by your time, energy, and money, but your passive impact is boundless because you can just keep on setting up impactful projects that run in parallel to each other. 
It’s satisfying. It’s really pleasing to be lounging on a beach somewhere and to hear that one of my projects has had a positive impact. It’s more efficient. When I set up the Nonlinear Library, people asked me why I didn’t get a human to read the posts, rather than a machine. But by automating the process, I’m saving loads of money and time. It will take a robot two weeks and $6,000 to record the entire Less Wrong backlog; if we’d hired a human to read all those posts, it would take many years and over a million d...
57 minutes | May 2, 2022
EA - My thoughts on nanotechnology strategy research as an EA cause area by Ben Snodin
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thoughts on nanotechnology strategy research as an EA cause area, published by Ben Snodin on May 2, 2022 on The Effective Altruism Forum. Two-sentence summary: Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1-2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2-3 people spending at least 50% of their time on this by 3 years from now). Context: This post reflects my current views as someone with a relevant PhD who has thought about this topic on and off for roughly the past 20 months (something like 9 months FTE). Note that some of the framings and definitions provided in this post are quite tentative, in the sense that I’m not at all sure that they will continue to seem like the most useful framings and definitions in the future. Some other parts of this post are also very tentative, and are hopefully appropriately flagged as such. Key points I define advanced nanotechnology as any highly advanced future technology, including atomically precise manufacturing (APM), that uses nanoscale machinery to finely image and control processes at the nanoscale, and is capable of mechanically assembling small molecules into a wide range of cheap, high-performance products at a very high rate (note that my definition of advanced nanotechnology is only loosely related to what people tend to mean by the term “nanotechnology”). (more) If developed, advanced nanotechnology could increase existential risk, for example by making destructive capabilities widely accessible, by allowing the development of weapons that pose a higher existential risk, or by accelerating AI development; or it could decrease existential risk, for example by causing the world’s most destructive weapons to be replaced by weapons that pose a lower existential risk. (more) Timelines for advanced nanotechnology are extremely uncertain and poorly characterised, but the chance it arrives by 2040 seems non-negligible (I’d guess 1-2%), even in the absence of transformative AI. (more) It seems likely that there’d be a long period of development with clear warning signs before advanced nanotechnology is developed, pushing against prioritising work in this area and pushing towards a focus on monitoring and foundational work. (more) There has been relatively little high-quality nanotechnology strategy work, and by default this seems unlikely to change much in the near future. (more) It seems possible to make progress in this area, for example by clarifying timelines, tracking potential warning signs of accelerating progress, and doing strategic planning. (more) Overall, I think that nanotechnology strategy research could be very valuable from a longtermist EA perspective. Currently, my extremely rough, unstable guess is that we should have 2-3 people spending at least 50% of their time on this by 3 years from now (against a background of perhaps 0-0.5 FTE over the past 5 years or so). 
(more) Note that it seems that we don’t want to accelerate progress towards advanced nanotechnology because of (i) the dramatic but highly uncertain net effects of the technology, including the possibility of very bad outcomes, (ii) the plausible difficulty of reversing an increase in the rate of progress, and (iii) the option of waiting to gain more information. (Though note that I still feel a bit confused about how harmful various forms of accelerating progress might be, and I’d like to think more carefully about this topic.) (more) Introduction This post has two main goals: To provide a resource that EA community...
4 minutes | May 2, 2022
LW - How to be skeptical about meditation/Buddhism by Viliam
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to be skeptical about meditation/Buddhism, published by Viliam on May 1, 2022 on LessWrong. Here is how I think we should approach the topic of meditation/Buddhism in the rationalist community. The short version is that a meaningful "yes" requires a credible possibility of "no", and the long version is that: If we post scientific studies showing that "meditation works", then we should either also post scientific studies showing that "meditation doesn't work" or explicitly mention their absence. Otherwise there is a possibility that simply by doing a lot of studies about any topic, 5% of them will confirm the hypothesis at "p<0.05" (a quick simulation of this effect is sketched at the end of this entry). In other words, is there a meta-review on meditation research? (Then we should ask Scott Alexander to review it.) There are many different claims made about the effects of meditation. I find it quite plausible that some of them may be true (e.g. "meditation helps you relax") and some others may be false (e.g. "meditation helps you remember your previous reincarnations"). So instead of talking about proving "meditation", we should talk about proving specific claims about meditation. Actually, we should first make a list of "claims usually made about meditation" and then evaluate each of them individually. Otherwise, if we mention the claims that are supported by evidence, but keep silent about those that are not, it creates a biased overall picture, and contributes to a halo effect. (It is easier to assume that X is supported by evidence if all you know is that A, B, C are supported; compared to a situation where you know that A, B, C are supported, but D, E, F are not.) The problem with anecdotal evidence about meditation is that we would get it even in a universe where meditation helps 1/3 of the population, does nothing for another 1/3, and actively harms the remaining 1/3. The people who get no or harmful results would simply stop doing it, the people who get useful results would continue... and one or two of them would happen to be high-status in the rationalist community. Generally, how do you distinguish between "meditation only works for some people, or only in some situations" and "you are doing meditation wrong / not enough"? What about the anecdotal evidence in the opposite direction, such as sex scandals of famous experts on meditation? (In the context of Buddhism, "sex scandals" are not just bad behavior, but specifically the kind of behavior that meditation is supposed to prevent. So I am not mentioning them here as a moral judgment, but as evidence that the claims about the effects of meditation are falsified by the very people who spent huge amounts of time meditating, presumably the right way.) If you find scientific support for some Buddhist dogma, consider the possibility that you could also find scientific support for its opposite, if you approached it with the same degree of charity. For example, if the teaching of "no self" makes you say "yes, mind is composed of agents which are not themselves minds", maybe a teaching of "all self" would make you say "yes, neurons are all over the human body, not just in the brain; also our mood is influenced by gut bacteria and sunshine and talking to other humans". 
Similarly, if the teaching of "impermanence" reminds you of changing moods, growing up, the effects of sickness, etc., maybe a teaching of "permanence" would remind you of the stability and heredity of the OCEAN traits. So maybe the actual lesson is not "Buddhism is correct about so many things" but "for a sufficiently general statement one can always find a charitable interpretation". Especially if you keep silent about those Buddhist teachings for which there is no charitable interpretation that would appeal to the rationalist community (such as the Buddha performing miracles, using the superpowers he gained as a result of meditation). Thanks for ...
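To make the survivorship argument above concrete, here is a minimal simulation sketch (mine, not Viliam's; the dropout probabilities are illustrative assumptions):

    # Toy model of the scenario above: meditation helps 1/3 of people, does
    # nothing for 1/3, and harms 1/3, yet testimonials look uniformly positive
    # because the non-benefiters quietly stop practicing.
    import random

    random.seed(0)
    N = 30_000
    outcomes = [random.choice(["helped", "nothing", "harmed"]) for _ in range(N)]

    # Assumed (hypothetical) probabilities of continuing to meditate:
    KEEP_PROB = {"helped": 0.9, "nothing": 0.1, "harmed": 0.05}
    practicing = [o for o in outcomes if random.random() < KEEP_PROB[o]]

    helped_pop = outcomes.count("helped") / N
    helped_prac = practicing.count("helped") / len(practicing)
    print(f"Helped in population: {helped_pop:.0%}")                  # ~33%
    print(f"Helped among current practitioners: {helped_prac:.0%}")  # ~86%
    # Anecdotes sampled from people still practicing overstate the benefit.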
26 minutes | May 2, 2022
LW - Nuclear Energy - Good but not the silver bullet we were hoping for by Marius Hobbhahn
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear Energy - Good but not the silver bullet we were hoping for, published by Marius Hobbhahn on April 30, 2022 on LessWrong. This article was written by Samuel Scheuer and Marius Hobbhahn (equal contribution). It was crossposted from Marius' personal blog. Nuclear power is a very polarized issue. Proponents think it is a clean solution to most of our energy problems and opponents think it is too dangerous or too expensive. In this post, we want to take a deeper look at the case for and against nuclear while trying to be as unbiased as possible. Executive summary: Disclaimer: Samuel is doing an MSc in Environmental Change and Management at Oxford but has no special focus on nuclear energy. Marius has no expertise in nuclear energy and just likes to dig deep into topics. There is a chance we might have misunderstood important details. You should think of the following as our best guess after spending a collective ~50 hours on it. Both of us were proponents of nuclear before doing this research but we are now considerably less convinced of its value. Answering questions about nuclear power is more complicated than we assumed. However, there are a couple of things we can say with high confidence. Nuclear power is a carbon-neutral, stable source of energy that gives you high independence from other countries and conflicts. Nuclear power is likely very safe and creates way less harm than e.g. a coal power plant of similar size. The nuclear waste problem is likely neither very big nor very expensive. On average, nuclear energy is more expensive than renewables (~2x) and we can expect this gap to widen in the future. On the other hand, there are a couple of other questions that carry more uncertainty. Some people argue that nuclear energy regulation is unreasonably stringent and thus prevents innovation and inflates prices. The majority of reports (pro- and anti-nuclear) don’t mention overregulation as a major problem. We think the narrative is plausible because of misaligned regulatory incentive structures, but we aren’t very confident. The “true cost” of nuclear energy is hard to determine because it depends on specific timelines (e.g. waste management) and discount factors. Therefore, all arguments around “nuclear is cheap/expensive” carry considerable uncertainty. Our best guess is that nuclear is and will stay on the more expensive side of energy production. There are many people who claim that “nuclear just needs more innovation/research”. However, governments have invested large amounts of money into nuclear energy research over the last 70 years, and progress has remained slow and produced underwhelming results. There is a chance we just haven’t found the breakthrough yet, but we think that is unlikely---especially when the alternative is renewable energy, which has shown insane speeds of progress (see below). We speculate that this stems from systemic factors, e.g. because nuclear doesn’t profit from economies of scale as much as renewables do. The role of nuclear in a world dominated by renewables is unclear. Nuclear takes a long time to ramp up and is only profitable with high uptime. Therefore, it is not very compatible with the high volatility of renewables---at least in its current form. Framing: In general, we find it useful to think of nuclear energy as a high-risk, high-reward investment.
There is a small chance we can get the price down significantly, in which case nuclear will be god-tier, and a large chance it will always be too expensive. Practical implications: It does NOT make any sense to close existing nuclear power plants unless you can guarantee that their energy will be replaced by 100% renewables and that storage+distribution has been sorted out. Nuclear is nearly always better than fossil fuels. It creates way fewer health problems and GHG emissions. However,...
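The authors' point about discount factors can be made concrete with a toy present-value sketch (mine, not from the post; the cost figure, horizon, and rates are illustrative assumptions):

    # Why the discount rate dominates "true cost" estimates for nuclear: the
    # same far-future waste-management bill shrinks by orders of magnitude as
    # the assumed rate rises.
    def present_value(cost: float, years: float, rate: float) -> float:
        """Discount a future cost back to today."""
        return cost / (1 + rate) ** years

    WASTE_COST = 1_000_000_000  # hypothetical $1B waste bill due in 100 years
    for rate in (0.005, 0.02, 0.05):
        pv = present_value(WASTE_COST, years=100, rate=rate)
        print(f"discount rate {rate:.1%}: present value ${pv:,.0f}")
    # 0.5% -> ~$607M, 2% -> ~$138M, 5% -> ~$8M: a single modeling choice moves
    # the cost of the same physical liability by two orders of magnitude.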
13 minutes | May 2, 2022
LW - What DALL-E 2 can and cannot do by Swimmer963
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What DALL-E 2 can and cannot do, published by Swimmer963 on May 1, 2022 on LessWrong. I got access to DALL-E 2 earlier this week, and have spent the last few days (probably adding up to dozens of hours) playing with it, with the goal of mapping out its performance in various areas – and, of course, ending up with some epic art. Below, I've compiled a list of observations made about DALL-E, along with examples. If you want to request art of a particular scene, or to see what a particular prompt does, feel free to comment with your requests. DALL-E's strengths Stock photography content It's stunning at creating photorealistic content for anything that (this is my guess, at least) has a broad repertoire of online stock images – which is perhaps less interesting because if I wanted a stock photo of (rolls dice) a polar bear, Google Images already has me covered. DALL-E performs somewhat better at discrete objects and close-up photographs than at larger scenes, but it can do photographs of city skylines, or National Geographic-style nature scenes, tolerably well (just don't look too closely at the textures or detailing). Some highlights: Clothing design: DALL-E has a reasonable if not perfect understanding of clothing styles, and especially for women's clothes and with the stylistic guidance of "displayed on a store mannequin" or "modeling photoshoot" etc., it can produce some gorgeous and creative outfits. It does especially plausible-looking wedding dresses – maybe because wedding dresses are especially consistent in aesthetic, and online photos of them are likely to be high quality? Close-ups of cute animals. DALL-E can pull off scenes with several elements, and often produce something that I would buy was a real photo if I scrolled past it on Tumblr. Close-ups of food. These can be a little more uncanny valley – and I don't know what's up with the apparent boiled eggs in there – but DALL-E absolutely has the plating style for high-end restaurants down. Jewelry. DALL-E doesn't always follow the instructions of the prompt exactly (it seems to randomize whether the big pendant is amber or amethyst) but the details are generally convincing and the results are almost always really pretty. Pop culture and media DALL-E "recognizes" a wide range of pop culture references, particularly for visual media (it's very solid on Disney princesses) or for literary works with film adaptations like Tolkien's LOTR. For almost all media that it recognizes at all, it can render it in almost-arbitrary art styles. [Tip: I find I get more reliably high-quality images from the prompt "X, screenshots from the Miyazaki anime movie" than just "in the style of anime", I suspect because Miyazaki has a consistent style, whereas anime more broadly is probably pulling in a lot of poorer-quality anime art.] Art style transfer Some of the most impressively high-quality output involves specific artistic styles. DALL-E can do charcoal or pencil sketches, paintings in the style of various famous artists, and some weirder stuff like "medieval illuminated manuscripts". IMO it performs especially well with art styles like "impressionist watercolor painting" or "pencil sketch" that are a little more forgiving of imperfections in the details.
Creative digital art DALL-E can (with the right prompts and some cherrypicking) pull off some absolutely gorgeous fantasy-esque art pieces. Some examples: The output when putting in more abstract prompts (I've run a lot of "[song lyric or poetry line], digital art" requests) is hit-or-miss, but with patience and some trial and error, it can pull out some absolutely stunning – or deeply hilarious – artistic depictions of poetry or abstract concepts. I kind of like using it in this way because of the sheer variety; I never know where it's going to go with a prompt...
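As a minimal illustration of the prompting tip above (my sketch, not the author's; the subjects and helper function are made up for illustration), the pattern is simply a specific, consistent style anchor appended to the subject:

    # Anchor prompts on one specific, consistent style rather than a broad genre.
    def style_prompt(subject: str,
                     style: str = "screenshots from the Miyazaki anime movie") -> str:
        """Compose an image-generation prompt from a subject and a style anchor."""
        return f"{subject}, {style}"

    for subject in ("a castle floating above the clouds",
                    "a lighthouse in a thunderstorm"):
        print(style_prompt(subject))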
2 minutes | May 1, 2022
LW - How confident are we that there are no Extremely Obvious Aliens? by Logan Zoellner
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How confident are we that there are no Extremely Obvious Aliens?, published by Logan Zoellner on May 1, 2022 on LessWrong. Robin Hanson's writing on Grabby Aliens is interesting to me, since it seems to be one of the more sound attempts to apply mathematical reasoning to the Fermi Paradox. Unfortunately, it still relies on anthropics, so I wouldn't be the slightest bit surprised if it was off by an order of magnitude (or two) in either direction. What I would like to know (preferably from someone with a strong astronomy background) is: how confident are we that there are no (herein defined) Extremely Obvious Aliens? Extremely Obvious Aliens Define Extremely Obvious Aliens in the following way: They colonize every single star that they encounter by building a Dyson swarm around it that reduces visible radiation by at least 50%. They expand in every direction at a speed of at least 0.5c. They have existed for at least 1 billion years. If such aliens existed, it should be really easy to detect them by just looking for a cluster of galaxies that is 50% dimmer than it should be and at least 0.5 billion light-years across. How confident are we that there are no Extremely Obvious Aliens? As with Grabby Aliens, it is safe to say there are no Extremely Obvious Aliens in the Solar System. Nor, for that matter, are there any Extremely Obvious Aliens within 0.5 BLY of the Milky Way Galaxy. So, for my astronomy friends: What is the biggest radius for which we can confidently say there are 0 Extremely Obvious Aliens? The best answer I can come up with is SLOAN, which was done at a redshift of z=0.1, which I think corresponds to a distance of 1.5 BLY. Is this accurate? Namely, is it safe to say (with high confidence) there are no Extremely Obvious Aliens within 1.5 BLY of Earth? Is there another survey that would let us raise this number even higher? What is the theoretical limit (using something like JWST)? Has someone written a good paper answering questions like these already? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
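The redshift-to-distance figure the author asks about can be sanity-checked with the low-redshift Hubble-law approximation d ≈ cz/H0 (my sketch, not the author's; assumes H0 ≈ 70 km/s/Mpc, and the linear approximation is only rough at z = 0.1):

    # Rough check of the z = 0.1 -> distance claim via Hubble's law, d ~ c*z/H0.
    C_KM_S = 299_792.458   # speed of light, km/s
    H0 = 70.0              # assumed Hubble constant, km/s/Mpc
    LY_PER_MPC = 3.2616e6  # light-years per megaparsec

    z = 0.1
    d_mpc = C_KM_S * z / H0
    d_bly = d_mpc * LY_PER_MPC / 1e9
    print(f"d ~ {d_mpc:.0f} Mpc ~ {d_bly:.2f} billion light-years")
    # ~428 Mpc, i.e. ~1.4 BLY -- consistent with the ~1.5 BLY quoted above.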
21 minutes | May 1, 2022
EA - A tale of 2.75 orthogonality theses by Arepo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A tale of 2.75 orthogonality theses, published by Arepo on May 1, 2022 on The Effective Altruism Forum. tl;dr-tl;dr You can summarise this whole post as ‘we shouldn’t confuse theoretical possibility with likelihood, let alone with theoretical certainty’. tl;dr I’m concerned that EA AI-advocates tend to equivocate between two or even three different forms of the orthogonality thesis using a motte-and-bailey argument, and that this is encouraged by misleading language in the two seminal papers. The motte (the trivially defensible position) is the claim that it is theoretically possible to pair almost any motivation set with high intelligence and that AI will therefore not necessarily be benign or human-friendly. The inner bailey (a nontrivial but plausible position with which it’s equivocated) is the claim that there’s a substantial chance that AI will be unfriendly and non-benign, and that caution is wise until we can be very confident that it won't. The outer bailey (a still less defensible position with which both are also equivocated) is the claim that we should expect almost no relationship, if any, between intelligence and motivations, and therefore that AI alignment is extremely unlikely. This switcheroo overemphasises the chance of hostile AI, and so might be causing us to overemphasise the priority of AI work. Motte: the a priori theoretical possibility thesis In the paper that introduced the term ‘orthogonality thesis’, Bostrom gave a handful of arguments against a very strong relationship between intelligence and motivation, e.g. A member of an intelligent social species might also have motivations related to cooperation and competition: like us, it might show in-group loyalty, a resentment of free-riders, perhaps even a concern with reputation and appearance. By contrast, an artificial mind need not care intrinsically about any of those things, not even to the slightest degree. This seems a reasonable way of dispelling the idea that AI is obviously guaranteed to behave in ‘moral’ ways: all of what we typically think of as intelligence has a common root (Earth-specific evolution), and thus could only be one branch of a much larger tree - of which we have a very biased view. This and arguments like it focus on theoretical possibility: they aim to establish the very weak thesis that almost no pairing of intelligence and motivation is logically inconsistent or ruled out by physics. But ‘orthogonality’ seems to have been a poor choice of name for this argument. ‘Orthogonality’ is not normally a statistical concept, so it has no natural interpretation. But by far the most upvoted comments on these two stats.stackexchange threads explicitly understand it as ‘not correlated’, an interpretation that would imply the much stronger outer bailey - that AI alignment is extremely unlikely. This ambiguity continues in the other prominent paper on the subject, General Purpose Intelligence: Arguing the Orthogonality Thesis, in which Stuart Armstrong argues in more depth for ‘a narrower version of the [orthogonality] thesis’. For example, Armstrong initially states that he’s arguing for the thesis that ‘high-intelligence agents can exist having more or less any final goals’ - i.e. theoretical possibility - but then adds that he will ‘be looking at proving the
still weaker thesis [that] the fact of being of high intelligence provides extremely little constraint on what final goals an agent could have’ - which I think Armstrong meant as ‘there are very few impossible pairings of high intelligence and motivation’, but which much more naturally reads to me as ‘high intelligence is almost equally as likely to be paired with any set of motivations as any other’. He goes on to describe a purported counterthesis to ‘orthogonality’, which he labels ‘convergence’, but which I would call n...
12 minutes | May 1, 2022
EA - EA Forum Lowdown: April 2022 by NunoSempere
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum Lowdown: April 2022, published by NunoSempere on May 1, 2022 on The Effective Altruism Forum. Imagine a tabloid counterpart to the EA Forum digest's weather report. This publication gives an opinionated curation of EA forum posts published during April 2022, according to my whim. I probably have a bias towards forecasting, longtermism, evaluations, wit, and takedowns. You can sign up for this newsletter on substack. Index How did we get here? Monthly highlights Finest newcomers Keen criticism Underupvoted underdogs How did we get here? Back and forth discussions On occasion, a few people discuss a topic at some length in the EA forum, going back and forth and considering different perspectives across a few posts. There is a beauty to it. On how much of one’s life to give to EA Altruism as a central purpose (a) struck a chord with me: After some thought I have decided that the descriptor that best fits the role altruism plays in my life is that of a central purpose. A purpose can be a philosophy and a way of living. A central purpose transcends a passion, even considering how intense and transformative a passion can be. It carries a far deeper significance in one’s life. When I describe EA as my purpose, it suggests that it is something that my life is built around; a fundamental and constitutive value. Of course, effective altruism fits into people’s lives in different ways and to different extents. For many EAs, an existing descriptor adequately captures their perspective. But there are many subgroups in EA for whom I think it has been helpful to have a more focused discussion on the role EA plays for them. I would imagine that a space within EA for purpose-driven EAs could be particularly useful for this subset, while of little interest to the broader community. In contrast, Eric Neyman's My bargain with the EA machine (a) explains how someone who isn't quite as fanatically altruistic established his boundaries in his relationship with the EA machinery: I’m only willing to accept a bargain that would allow me to attain a higher point than what I would attain by default – but besides that, anything is on the table. I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space, hanging out with the awesome people there during lunch and after work. Note that there is a tension between creating inclusive spaces that would include people like Eric and creating spaces restricted to committed altruists. As a sign of respect, I left a comment on Eric’s post here. Jeff Kaufman reflects on the increasing demandingness in EA (a). In response, a Giving What We Can (GWWC) pledger explains that he just doesn't care (a) all that much what the EA community thinks about him. He writes (lightly edited for clarity): "my relative status as an EA is just not very important to me... no amount of focus on direct work by people across the world is likely to make me feel inadequate or rethink this... we [GWWC pledgers] are perfectly happy to be by far the most effectively altruistic person we know of within dozens of miles".
On decadence and caution I found the contrast between EA Houses: Live or Stay with EAs Around The World (a) and posts such as Free-spending EA might be a big problem for optics and epistemics (a) and The Vultures Are Circling (a) striking and amusing. Although somewhat forceful, the posts presenting considerations to the contrary probably didn’t move the spending plans already in motion. Personal monthly highlights I appreciated the ruthlessness of Project: bioengineering an all-female breed of chicken to end chick culling (a) Luke Muehlhauser writes Features that make a report especially helpful ...