The coming decades will be marked by extraordinary new technological innovations. They could provide us with enormous benefits—or bring us to the brink of disaster. What happens next hinges on whether we can figure out how to handle the risks. So, if you’re aiming at positive impact in your life, one of the best things you might do is to join the effort to make sure the most powerful technologies we are building work for the benefit of all humanity.
Here’s why.
Breakthroughs in biotechnology such as CRISPR gene editing and mRNA vaccines may give researchers the power to fight genetic disorders and infectious diseases, or even help eradicate malaria-carrying mosquitoes. But this same breakneck progress in our power to manipulate biology also makes it significantly easier to unleash catastrophic pandemics, even by accident. Already, it’s possible for researchers to fine-tune and synthesize viruses known to be dangerous. Right now, only a few dozen trusted scientists are able to do this. But as costs fall, this capability could spread to thousands of people. If we keep building the tools to manufacture pandemics before we build the tools to robustly defend against them, the consequences could be dire.
Developments in AI will be no less momentous. AI systems can already explain jokes, recreate artistic styles, and use “common sense” across domains. Soon, they’ll be able to speed drug discovery and research into green energy. It looks entirely possible that this kind of progress simply won’t stop. In that case, the state of the art in AI will surpass human abilities not just in certain narrow tasks, but much more comprehensively. Advanced AI might then be used to entrench discrimination and empower dictators more effectively than ever before—or we could simply lose control of these systems altogether, just as they become too powerful to contain.
These are the two technologies whose risks we think have the greatest combination of scale, tractability, and neglect. Forecasters on Metaculus, a community prediction platform, put the risk that synthetic biology causes a catastrophe killing at least 95% of the world’s population by 2100 at 1%. For AI, that figure is 6%. Yet almost no one is working on these issues, and there is much we can do. If more dedicated people join the effort to navigate the risks wisely and safely, we can make sure these technologies work for everyone’s benefit, now and in the future.
But what does that mean for you? What are the most impactful ways to help?
Your most important decision
If you want to make a positive difference in the world, one decision stands out above all the others: How will you use your career?
Consider climate change. Behavior changes make a real difference: Recycling averts about 0.15 tonnes of CO2 emissions per year, and giving up driving altogether averts just over 2 tonnes. But there’s a limit to the good you can do with simple changes like this. You can’t drive less than never, or recycle more waste than you produce. On the other hand, donating $1,000 to the very best climate-focused charities appears to avert something closer to 100 tonnes of CO2 emissions: the equivalent of recycling for more than 500 years. That’s an incredible step up. But unless you’re extremely wealthy, your most valuable contribution is likely your own time, spent working directly on the issues you care about. After all, skilled and dedicated people are needed to turn other people’s donations into real change, and donations are useful only insofar as they empower people to bring about that change.
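If it helps to see that comparison laid out, here is a minimal sketch of the arithmetic behind the “500 years” figure, using the rough estimates quoted above (the precise numbers vary by study):

```python
# Rough, illustrative figures quoted above (tonnes of CO2 averted)
recycling_per_year = 0.15   # tonnes averted per year of recycling
no_driving_per_year = 2.0   # tonnes averted per year of not driving at all
donation_effect = 100.0     # tonnes averted by ~$1,000 to a top climate charity

# Express the donation in "years of recycling" and "years of never driving"
print(donation_effect / recycling_per_year)   # ~667 years of recycling
print(donation_effect / no_driving_per_year)  # ~50 years of never driving
```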
This is true more generally: Your career is likely your biggest opportunity to make a difference. That holds whether you want to work on steering new technologies for the better or on some other pressing problem.
In 2011, we cofounded a nonprofit called 80,000 Hours, which provides free advice for people who want to do good with their careers (it’s named after the approximate number of hours you’ll spend in your working life). The team at 80,000 Hours has spent years researching the question: if you want to use your career to work on a profoundly important issue, how should you decide what to do? Based on that research, we suggest that you grapple with three key questions. Each question tells us something about how best to positively influence the impacts of AI and biotechnology.
Finding the right problem to work on
First: How pressing is the problem you want to focus on?
In other words, how much impact can people have by choosing to work on it? The most pressing problems in the world combine three factors:
- They’re large in scale (they significantly affect large numbers of people–solving them would be a huge deal)
- They’re neglected (not nearly enough effort has been spent on them)
- They’re tractable (progress is possible with extra effort)
Take the risks from biotechnology. They’re certainly huge in scale: COVID-19 has killed over one million Americans and over twenty million people abroad, and engineered pandemics could be far more destructive still.
Yet they’re also neglected: Humanity is doing far too little to prevent the next pandemic, natural or otherwise. The U.S. has only modestly increased its investment in pandemic preparedness, leaving many crucial programs unfunded. For instance, the Biological Weapons Convention–the international body that oversees the global prohibition on bioweapons–has less funding than a typical McDonald’s restaurant. After 9/11, the U.S. spent a trillion dollars on foreign interventions, created the Department of Homeland Security, and radically transformed its foreign policy. COVID-19 killed hundreds of times as many people, and yet the U.S. has done almost nothing in response. Meanwhile, forecasters put the risk of a pandemic that kills at least 95% of the global population this century at an unnervingly high 1%.
What about AI risk? It could be one of the most important problems we’ll face. For instance, AI systems could empower future totalitarian regimes by tightening dictators’ grip on their populations. Or we could lose control entirely to AI systems that don’t share our values. At least according to AI researchers themselves (even those not focused on reducing the risks), this isn’t just idle speculation. In a recent survey of machine learning experts, the median respondent assigned a 5% probability to an outcome from advanced AI as bad as human extinction.
It’s also shockingly neglected. Currently, for every 100 or so people working to advance the capabilities of AI, there is roughly one person researching how to prevent AI from inflicting catastrophic harm.
So, when we step back and consider which issues look especially pressing, powerful emerging technologies stand out: they have the potential to imperil our future like few other things, but their risks are hugely underappreciated.
Making the most of your contribution
Second: How can you make a big impact on the problem?
Some solutions to important problems work far better than other well-intentioned approaches–and some approaches can even be counterproductive. By picking the right solution to focus on, you could 100x your impact.
What do the most promising solutions look like for reducing biological risks? Even though the field is relatively new, we already know several concrete measures that dedicated people could help scale up. First, we need ongoing surveillance for emerging pathogen outbreaks, such as the Nucleic Acid Observatory project that has grown out of MIT, which aims to sequence wastewater to spot exponentially growing new pathogens. Second, when a new outbreak is identified, we need large stocks of next-generation personal protective equipment so that essential workers can keep the economy functioning, plus adaptable rapid tests to track the outbreak. Third, we need to maintain mRNA vaccine production capacity, and speed up the process of testing and deploying new vaccines, to bring the outbreak to an end.
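As a purely illustrative sketch of the surveillance idea (not the Nucleic Acid Observatory’s actual pipeline), the snippet below fits an exponential growth rate to made-up daily counts of an unfamiliar sequence in wastewater samples and flags it if the counts are doubling quickly:

```python
import math

# Hypothetical daily read counts of an unfamiliar genetic sequence in wastewater samples
daily_counts = [3, 4, 7, 11, 19, 33, 58]

# Fit log(count) = a + r * day by simple least squares; r is the daily exponential growth rate
days = list(range(len(daily_counts)))
logs = [math.log(c) for c in daily_counts]
mean_d = sum(days) / len(days)
mean_l = sum(logs) / len(logs)
r = sum((d - mean_d) * (l - mean_l) for d, l in zip(days, logs)) / sum((d - mean_d) ** 2 for d in days)

doubling_time = math.log(2) / r
print(f"Estimated doubling time: {doubling_time:.1f} days")
if doubling_time < 7:
    print("Flag for follow-up: counts are growing exponentially.")
```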
Alternatively, how could you have a big impact on how AI turns out? To make sure AI development goes well, we need a solution to the technical “alignment problem”–the problem of ensuring that an AI system will perform as intended, even as it becomes more capable than we are. We also need progress on AI governance, such as better proposals for whether and how AI systems should be used.
Finding a fulfilling career
That brings us to the third, and final, question: What is your personal fit with the career path you’re considering?
Take climate change: perhaps you’re an excellent writer and public speaker, but you’ve never enjoyed scientific research. In that case, you’ll probably make a greater difference by working in politics or policy to promote green energy than by working on clean-tech R&D.
Looking at biotechnology and AI, there are many pathways to impact, each fitting very different skill sets. Some of the biggest challenges are highly technical–like developing defensive technologies for pandemics, or coming up with new insights on the alignment problem.
But we also need new organizations to implement the most promising solutions, and for that we need people with skills in entrepreneurship, management, operations, accounting, and fundraising. We also need community builders, to support people who want to work on all these problems. And we need communicators to spread awareness of the solutions. Navigating the most important technologies on the horizon is a team effort. It will require people with a whole range of backgrounds and strengths.
What next?
In biosecurity and AI, and in many other fields, we need people to use their careers to help humanity get its act together.
There are plenty of other options for doing good with your career, too, including many that don’t involve shaping new technology. That’s why we created 80,000 Hours: to help you figure out which high-impact career is best for you. It’s totally free, and represents thousands of hours of research on how you can actually make a difference.
You can also support people working on these issues indirectly, by donating to organizations doing good work in the area. The Longtermism Fund is one place to start.
With careful thought, people can find careers that are engaging and satisfying, and enable them to have a tremendous positive impact. We’ve seen this over and over again. And the stakes are high: There’s never been a better time to find the high-impact career in which you’ll thrive.
William MacAskill is an Associate Professor of Philosophy at Oxford University and a Senior Research Fellow at the Global Priorities Institute. He is the author of What We Owe the Future (Basic Books, August 2022).
Ben Todd is the President and Founder of 80,000 Hours, a non-profit that conducts research on which careers have the largest positive social impact.