
Planning for the end of the world

Published at 04:01 AM

When faced with a minor problem, what do you do? I try to stay rational and reason my way out. I try to account for the uncertainties. As each variable becomes clearer, so does the path to what I want.

But sometimes, the variables never become clearer. Sometimes, the more you read and think about the problem, the fuzzier they get. I’ve realized thinking about the future often falls into this category.

In an effort to maximize my time at university, I have been trying to plan the next few years of my life. Interestingly, this runs contrary to what older (wiser) people tell me: that it doesn’t matter what you study; most people end up doing something completely unrelated.

Let me justify why I still worry about having a plan. Last month, I had a conversation with a mentor about a career trajectory towards AI safety research. He explained that because nobody can sufficiently predict what will happen, the best thing I can do is to make decisions with the best information I have at the time. That is to say, plans naturally become less binding as their timeframes increase, and that’s okay, because short-term clarity is still a net positive.

Therefore, I’ve attempted to map out the different scenarios on my mind. These are the biggest uncertainties I’ve been thinking about:

Artificial general intelligence, the alignment problem, and a post-AGI world

Why I’m worried and uncertain

If you are unfamiliar, the 80,000 Hours executive summary is fantastic:

The 80,000 Hours summary

I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

via:

Problem profiles: Risks from power-seeking AI systems | 80,000 Hours

The future of AI is difficult to predict. But while AI systems could have substantial positive effects, there's a growing consensus about the dangers of AI.

80000hours.org

I know that I came to school wanting to study CS, because I found it interesting and I thought I could be good at it. I also came to school wanting to have some sort of impactful career (I say impact in the sense that orgs like 80,000 Hours use the word).

The issue of AGI and aligning AGI is thus a natural fit. But obviously, it is an extremely difficult problem to work on. In terms of timelines for artificial general intelligence, there is no consensus on whether we will have a fast or a slow takeoff, when AGI or superintelligence will arrive, or any of that. I haven’t properly sat down and worked out my own views either.

As of March 3rd, 2025, the prediction market Metaculus¹ has 2026/10/18 as the median date for the release of the first weakly general AI system. The first fully general AI system comes a bit later, at 2030/06/07.

That is not a long time. That could be before I finish my first degree.

My attempts to plan

In November 2024, I had a chat with the president of the UBC AI Safety club. I asked him for advice on my next steps as someone wanting to make an impact on the issue. If you’re curious, these were the raw notes I jotted down for my “rough next steps:”

Notes on rough next steps (25 November, 2024)

Scaling knowledge

  • Under technical knowledge:
    • Build mental model of paradigms
    • AI safety fundamentals self-paced course
    • ARENA self-paced or next term
    • 80,000 hours overview of career path
    • Understand core arguments of each paradigm to solving alignment (why researchers think this method may work or not work)
    • Read 80,000 hours problem page - “preventing an AI-related catastrophe”
  • Under governance knowledge:
    • Read 80,000 hours overview of career path
  • In general:
    • Podcasts: Future of Life, the Lex Fridman episode with Dario (first and last sections especially pertinent), the 80,000 Hours podcast, Hear This Idea

Gaining experience

  • In general:
    • 80,000 hours 1-on-1 advising
    • Talking to people in the field
    • EAG
    • EA taco tuesdays
    • AI safety slacks
  • Under technical research:
    • Undergrad research opportunities
      • Lucy’s reference to ECE prof working on safety
  • Under governance research:
    • Research fellowships (think tanks)
  • Building a ladder of tests, and just trying to do the work

Spend time figuring out what to work on

  • Personal fit is very important. What’s most impactful is a very subjective, personal thing.
  • Impact is distributed exponentially, so if it takes x hours to go from the 5th most impactful option to the 4th most impactful, it may be worth the time.

It became clear to me in that meeting that I felt I was too late. Compared to the people I knew working in that field, I’m 3-4 years younger, with much less knowledge, experience, and career capital. That is something I cannot change quickly.

The other piece is something I read (and perhaps shared with you) over winter (keep in mind the general stance of those on LessWrong):

via:

Orienting to 3 year AGI timelines — LessWrong

My median expectation is that AGI will be created 3 years from now. I share some thoughts on how to orient to and plan for short AGI timelines.

www.lesswrong.com

In the 3-4 years it would take me to build up what I see as the quasi-required career and knowledge capital, it may already be too late. Thinking that, I no longer wanted to put all my eggs in this basket.

I’m still reflecting on this reconfiguration of ideals. Immediately, and out of pragmatism (fear of the world ending), my priorities partly shifted toward having more life experiences. I’ve spent more time working on music, going out, eating with friends, etc.

That’s not to say I’ve given it up. In fact, many arguments against this have been presented to me — I just don’t see myself as the hero in this story. Statistically, not everyone can be destined for greatness. At the same time, I guess it is an easy cop-out to say there is no reason for me to work so hard because it’s likely not going to happen anyway. I’m not sure.

Taking stock

Specialization-wise, I’ve pivoted to targeting Cognitive Sciences (CS + linguistics + psychology + philosophy), partly because I find it more interesting, and partly because I think the soft skills would be valuable to have in 1) a world where I’m working on aligning AGI, or 2) a post-AGI world — whatever that looks like.

Over the past year, I’ve also been heavily involved with the UBC AI Safety Club. This has given me many opportunities and connections, and I’m very grateful to have met peers that share this interest with me.

  • Last term, I took part in the club’s AI policy reading group, exploring the EU AI Act, California’s SB-1047 bill, and more.
  • This term, I am taking their Intro to AI Alignment course, guided by the French Center for AI Safety’s AI Safety Atlas. This has already contributed to my upskilling; I have a better big-picture view of both the technical and governance sides of AI safety.

Looking forward — what I hope to do

Over the past 6 months, I passed up multiple opportunities to do technical research in alignment or interpretability because I felt I lacked the technical fundamentals. I feared I would be wasting my time—and theirs. Whether that was the right mindset to have or not, I’m not sure. The pressing nature of the issue just makes me feel rushed.

  • I hope to be in a place where I can try to gain some real-life experience soon. That’s #2 on the rough next steps doc, with scaling knowledge being #1.

  • I keep up with technological developments, but less so developments in governance.

Political instability, world order, and the alt-right pipeline

An aside: I recognize that as someone privileged, I’m often able to ignore many of the real-life impacts of these decisions. I also believe that the way to counteract this is to practice empathy. As much as I can.

Why I’m worried — AGI

The more I read about AGI, the more I believe good governance could save us. The more I read about the state of governance around the world… yeah, we’re fucked.

I believe we are seeing a prime example of incentive misalignment. There is no financial incentive for the large AGI labs — the labs that have the monetary, human, and political capital I believe is needed to solve alignment — to do anything other than accelerate.

via:

How Do AI Companies Fine-Tune Policy? Examining Regulatory Capture in AI Governance | RAND

Regulatory capture of AI policy could allow industry actors to co-opt regulatory regimes to prioritize private over public welfare. We present results from interviews with 17 AI policy experts on potential outcomes and mechanisms of capture.

www.rand.org

Regulatory capture and lobbying will make it even harder for governments to regulate the industry. This is on top of the already-slow bureaucratic process.

Why I’m worried — world order

It is quite concerning to see what is going on in the US. It doesn’t really affect me (like I said earlier), except that I think it sets a dangerous precedent for the rest of the world.

I won’t say much more, but on a personal level, it is sad to see people I once called close say things that invalidate and commodify/politicize my existence.

I don’t know what to do on this end. I really don’t. I will do my research and vote in the next Thai election, I guess.

What now?

Clearly, I only have concepts of a plan — as more information comes in and consensus forms, I hope to achieve more clarity. The one mantra I can take away from trying to connect all of these uncertainties is this: as long as I keep making decisions guided by the best information available at the time, and I still make time to do things that bring me joy, I should be ok.

A nothingburger, I know.

Sometimes there are just a lot of issues you feel strongly about, but that you can’t do anything about — two of my biggest uncertainties happen to fall squarely into that category. Being able to take stock, though, has helped me worry less.

Related: Don’t think to write, write to think.

This article was the catalyst for thinking about and writing this post. I’ve been mulling it over for a while, and thought that once I achieved more clarity I would try to record my views somewhere.

via:

Don’t think to write, write to think – Herbert Lui

This is one of the lessons that every writer comes to appreciate: writing is thinking. Writing is not the artifact of thinking, it’s the actual thinking process. There’s no shortage of great quotes on this topic, the implications are less clear: Writing is the planning process and the final product: You don’t design a final […]

herbertlui.net

This is one of the lessons that every writer comes to appreciate: writing is thinking. Writing is not the artifact of thinking, it’s the actual thinking process.

Footnotes

  1. https://en.wikipedia.org/wiki/Efficient-market_hypothesis


Previous Post
On Apple Exclaves
Next Post
nroottag - Turning any Bluetooth device into an Airtag