Thoughts at Midnight

These are the thoughts that sometimes keep me awake at night.

These are things I don’t want to think about. These are things I’ve spent hours thinking about, never productively. They are worrying, but unlike typical worries in my life, it is fundamentally impractical to take steps to resolve or mitigate them, after which I may rest assured that I’ve done my best. The reason is that they also happen to be either untestable/unfalsifiable or only testable if one incurs absurd and irreversible costs, mainly dying.

Sometimes I explain them away to myself successfully and move on. Sometimes I read what I’ve written and think about these thoughts and do the cognitive equivalent of looking at them funny, which is how I expect most readers to feel if they get that far — why would anybody be bothered, or afraid, or soul-crushingly panicked about these things? Life is so busy, there are literally more than sixty-four items on my HabitRPG to-do list, and besides, there are so many serious global issues humanity is actually facing right now, and people who are actually deprived of basic rights and resources and have to struggle to stay alive. How can I possibly be bothered by these absurd, remote thoughts?

But I know that other times I do feel those emotions exactly. And if I stare just right, I can feel those emotions bubbling beneath the surface in me. Sometimes I can’t explain the issues away to myself, and a deep soul-sucking pang grows in my stomach. I’m irrational — I’m afraid of some of these thoughts — and I have submitted to the fact that there are some edges of my irrationality that are not worth the effort to fix when simply not thinking about them works better.

Sometimes these thoughts make me wish I were not so rational. Sometimes they even make me wish I were religious; it would be easier if (I believed) consciousness were, somehow, special. I suspect if I tried really hard, I could make myself believe something like that sincerely. But I think that’s a betrayal of myself I’m not willing to commit. I think there are better ways to remain happy.

I want to maximize happiness. Thinking about more general moral principles will help with that, but the remoteness of these particular thoughts is such that I doubt I’ll ever have to make a choice that would benefit from me having thought about them. At least, I think the chance is small enough to not be worth the negative utility spent thinking about them.

So: “The only thing we have to fear is fear itself.”

But I feel frustrated: not thinking about something just doesn’t seem like a solution. I don’t know how to come to terms with just how irrational happiness fundamentally is. And I still can’t resist thinking about them sometimes…

If you’re reading this post from the top down as a casual follower of this blog or just leisurely browsing: well, I don’t know whether or not these thoughts bother most people, but if any of this sounds like you, consider that reading on might make you miserable or unproductive. Like I said, thinking about these things sometimes does exactly that to me, and it has never made me happier. (Writing about it can make me happier, somewhat, because when I write, my thoughts are slowed down enough that I will think of many more random tangents, and some of the random tangents are amusing. Or sometimes the fact that I’d think of those random tangents in particular is amusing. Metahumor! Here’s a somewhat random tangent that also demonstrates how you are not alone if you decide not to read something that might negatively impact you: I chose not to read Friendship is Optimal: Caelum est Conterrens because Eliezer Yudkowsky calls it “the first and only effective horror novel I have ever read, since unlike Lovecraft, it contains things I actually find scary.” I don’t know whether I can resist the temptation for the rest of my life, or if I should, but that’s the state of affairs for now.)

You get the idea. I won’t be bothered in the slightest if nobody reads this. The main goal of this post is just for me to organize my deeply charged thoughts into an emotionless medium. After they’re written down, they look a lot more manageable; I have managed a constructive proof that they’re all composed of ideas and relations that are simple and mundane enough to be expressible with human language without me needing to make up any words. (This is probably a rationalization to some degree. I don’t know. Maybe writing stuff down is just my magic ritual placebo. But if the placebo’s effect is good enough, I’m damn well going to take it.)

Of course, that raises the question: why am I still posting this? Well, I see a small chance that maybe there are people out there who think like me, and knowing some troubled soul out here shares their thoughts might make them feel less alone, more relieved, and happier. I don’t think I’ve told anybody else yet, but this is how I felt after I read On the Boredom Problem. Actually, I still feel relieved after rereading it. I remember realizing some time ago how similar my thoughts had been to those of somebody else who happened to blog about them. I can’t exactly pinpoint what inspired me to write this — if the source was where I faintly remember it to be, then the author has already deleted it (no, not the one ending in “zq”) — but I do think my intuition needs some correction from System 2 in that regard. But if that happens, it’s just a bonus.

See, I’m being relentlessly rational again.

Also, it’s interesting that my former self thought The Minister’s Black Veil was worthy of an unsubtle reference. I should really post more often and less stringently. (← I don’t remember what my narrative purpose was when I wrote this sentence in my draft for this post or if there was a purpose at all, but I’m leaving it in because I want to.)

Also, it makes me happy to go on random tangents in blog posts, so, since this whole post is about irrational things, I don’t feel bad about doing that.

But tangents aren’t tangents without a main stem to wander from, so, continuing the explication of the long list to follow: I’m also deliberately using a lot of somewhat technical terms from or references to related circles without linking them, just to make it harder for myself or anybody else to get lost in a maze of hyperlinks when there are better things to be done. They shouldn’t be hard to look up if you really want to go that way. (A technical term summary: in Bostrom’s information hazard typology, I’d classify these thoughts as distraction hazards and neuropsychological hazards.) I already feel guilty enough about indirectly linking you to My Little Pony: Friendship is Magic technological singularity fanfiction earlier in this post. You didn’t miss that, did you?

Some of the arguments have counterarguments after them. Like I said, sometimes I find them convincing; sometimes I don’t.

Dear future self: go do your homework!

  • What happens when I die?

    Really?

    I’m not a subscriber to some pearly gates vision of heaven or some threat of eternal fire-and-brimstone torture. It is trivial to come up with elaborate and mutually exclusive visions of them, like Wonko’s. They tend to be complex enough visions that I feel fine discounting them, since I can produce billions of other visions that are equally credible and equally bad, or equally credible and good enough to counter the bad ones.

    It’s supposed to be annihilation, of course. Just… not existing. Even that is kind of scary, but my worst fear here is perhaps that it will be actively felt nothingness (which surely has a very low Kolmogorov complexity, compared to the other options so far): I’ll be a disembodied thinking mind floating in an ether of eternal sensory deprivation. I don’t know how I’ll handle that. In eternity, all things happen with probability one. I’ll go crazy. I’ll experience some meditative periods of blissful calm too, but craziness strikes me as the (ironically) more stable state of mind, in the sense that transitioning from blissful calm to insanity is easier than transitioning from insanity to blissful calm. It’s a Markov chain satisfying a few inequalities, is what I’m trying to say. And the inequalities point the bad way (there’s a tiny sketch of what I mean after this list). Oh, inequalities.

    But what else could it be?

    Even if some form of heaven exists (and I can imagine some nontraditional heaven built from abstract, relatively simple principles like maximizing pleasure) — won’t eternal happiness get boring after a while?

    Alternatively: what if quantum immortality actually happens, and I end up surviving indefinitely, physically, in my own beyond-astronomically unlikely branch of the universe? What if the rest of my world is destroyed and I lose everything I hold dear, but some jumble of my atoms harboring a continuous strand of my consciousness keeps perceiving and thinking in the positive-probability universes where the pattern of atoms doesn’t get messed up, simply because those universes are the only places the atoms can keep perceiving and thinking?

    There is, ever so slightly, some comfort to be had from the idea that everybody else in the world — my friends, my family, the influential people on TV and in our brands and news feeds, even Donald Knuth (I have no idea why I picked Donald Knuth, but that choice is definitely part of a specific thought I have formulated before, probably because of the strange copy of Volume I of The Art of Computer Programming that I found in my dad’s office a long time ago while bored, a copy so old that it teaches you to use self-modifying code for subroutines to return to their callers, and I’ll keep running with it because why not), will go through the same things I do (possibly in different universes), until I wonder…
  • What if I’m special?

    What if metaphysical solipsism is true, and the entire world is a figment of my imagination? That would be an existence of crushing loneliness. Possibly inescapable, even by death, which might turn out to be a component of the fantasy construct world.

    It’s worth noting that I’m not actually bothered by any implications this may have that the life I live and the world I interact with are “fake” — without an alternate reference frame I call “real” to compare this world with, what meaning or purpose could I have for calling it “fake”? I can still discover things about it and receive feedback from it and feel happy or surprised or accomplished or reassured. (At least, I think this has bothered me on significantly fewer of the occasions that I’ve thought about it, compared to some of the other things in this post. But being bothered is a very irrational feeling, which is the whole point.)

    Of course, this is all resting on the assumption that I don’t ever gain access to a “real” reference frame. So, alternatively — what if I wake up and discover I’ve been a twenty-dimensional being experiencing a playback, or even fictional narrative, of this human body’s life?

    What if the chain goes on beyond that? What if it goes on forever as a natural consequence of quantum immortality?

    This is related to the simulation hypothesis but has, I think, more worrisome implications about identity for me.

    Of course, a twenty-dimensional being would probably not have the problems shrugging off a few decades’ immersion in an approximately-three-plus-one-dimensional being’s life that the approximately-three-plus-one-dimensional being imagines it would have. So maybe for all “I” care it’s the same as death. Or even better, because I keep existing as a fragment of a greater mind, minus all the turmoil of thinking weird philosophical thoughts like these. Or at least, those thoughts will be bequeathed to a being that is not really “me” any more.
  • Okay, moving beyond metaphysics and into the real world, for some definition of “real”, but it doesn’t get less bothersome to me: What happens when the technological singularity comes (if it does)?

    Of course, there are many troubling scenarios that could follow from a misprogrammed superoptimizer, which seems so ridiculously easy to make compared to a “friendly” superoptimizer. Still (from a selfish subjectively-experienced point of view, not totaling up the missing utility over billions of actual or potential human lives), it can’t be worse than death, right?

    Yeah, I suppose so. Plus, I can console myself that there are practical things I can do to ever so slightly decrease the risk, like learn to be more rational (read: put off homework by reading LessWrong sequences instead, ignoring the blatantly ironic irrationality of that choice). I think I psychologically handle potential crises like these just fine when I’m confident I’ve taken steps to mitigate them, like I talked about very early in this post and in one of my MIT admissions essays. No, the problem is that even from an optimistic point of view, under the best-case scenario, I’m not really sure what my place will be in such a society. What to do, when computers are orders of magnitude of orders of magnitude better than all of us at everything?

    Alright, it’s not a given that self-improving computational power truly works so magically. But the scenario doesn’t have to be as extreme as a singularity, either, to be bothersome. I thrive on the dream of proving an important theorem or teaching an influential course or contributing to a miracle cure or maybe even writing a popular book — doing something that matters. And yet proofs are, by their abstract nature, something that computers really ought to be better than humans at. The other things are decreasingly abstract, but it still seems foolishly anthropocentric to suppose that we won’t figure out good algorithms for them.

    My miserable set of neurons won’t have any ideas anywhere close to the machine’s optimality. Even if I’m on the team that helps bring the Singularity about, there will be no more achievements or room for self-actualization afterwards.

    What happens? Do we flesh-and-bone beings just fade away into a cloud of hedonism? I’m trying to think about ways we might work around that by becoming cyborgs, but how will we find new things to optimize?

    Maybe I’m underestimating my own psychological adaptivity. People can probably get used to not being special after living in a world like that, even if the transition is stressful. So why add extra stress by thinking about it now? Alternatively, maybe it is a helpful step I can take now to think about hobbies that don’t depend on being original or competitive. Enough people enjoy idyllic farming or fishing out there, don’t they?

    Separately: what if my innate resistance to such a universe is actually just a symptom of a broken or excessively anthropocentric moral system? Is the idea that the best possible universe is one tiled with infinitely many minds in endless hedonic loops just an uncomfortable truth we have to come to terms with? (Yes, this is the most troublesome consequence of plain act utilitarianism for me, and I’m still considering just biting the bullet and accepting it.) Do we have a rational basis for being as bothered by the threat of a paperclip optimizer as we are? It doesn’t affect us in the present.
  • What if I get somehow duplicated — not in the genetic sense, but with absolute accuracy at the atomic level, with the result that the duplicate is me, with my thoughts and my personality and everything else constituting my identity, but suddenly in a different environment? This could mean so many things.

    What if somebody tries to implement acausal trade with me by simulating such a duplicate and threatening to torture it?

    What if multiple people make mutually exclusive requests of me with such threats?

    (Actually, I get it now. All I have to do is solidly declare right now, before technology is advanced enough for any entity to plausibly scan my brain for simulation, that I will not accept acausal blackmail and I will never consider changing my mind about it, and nobody will be motivated to acausally blackmail me. I think… There’s the concern that a superintelligence, being a superintelligence, would still find it child’s play to convince me to change my mind. On the other hand, the concept of acausal blackmail seems obvious enough that it can’t slip by my mental defenses without sounding a lot of alarms. I deal with math enough to have some familiarity with mental certainties. Therefore, I might actually stand a chance. In any case, a superintelligence would probably find far easier paths to its goal, right? Was this meant to be a comforting parenthetical?)

    Even if it’s not one of these absurd scenarios, the mere idea that it’s possible for me to suddenly transition into a different state of living without any spirituality or different dimensions being involved is not comforting. Some part of me is even worried that something will actually happen as I type these sentences about transitions. If nothing else, it would be a darkly, humorously appropriate time to decide to start simulating somebody, right?

    On the other hand, I am not sure whether or not I like this if it also implies I’m living in a universe where I’m important or accomplished enough that I get simulated for whatever reason. And maybe my simulators would be tactful about it.

    Also, it’s quite conceivable that the computational resources needed to make a duplicate accurate enough for the most troubling of these scenarios aren’t going to be lying around. If they are, they’ll be in a higher dimension, and I’ll have even less reason to believe that those beings would care about me in particular.
  • What if a gamma ray burst hits Earth?

    What if it hits the opposite hemisphere of Earth, so I get to enjoy a few suffocating minutes in extreme temperatures, realizing that I’m about to die with no warning? I don’t know if this is scientifically accurate. I don’t really want to know, either.

    I admit the counterargument is that our human lives are absolutely microscopic on a cosmic scale and we don’t know of any gamma ray bursts in recent times, so the probability is pretty low. There are several other world-ending scenarios, though.
  • Do we have good a priori reasons to think that the end of the world might be near? Is the doomsday argument valid? Is the great filter argument valid? There are lots of current events that could be observed to bolster this case…

    Alternatively, considering how ridiculously close we have come to nuclear catastrophe on several occasions, what if this is actually evidence for quantum immortality?
  • If either the doomsday argument or the threat of the technological singularity is valid, should I have children? Will they have to live a life in which some of these fears are even more pressing than those I face? I still have the consolation (if you can call it that) that I might naturally grow old enough to become satisfied with my accomplishments and at peace with all of this, and then depart for the next Platonic realm without regrets. Things get a lot more tense the later into the future one is born, as, tautologically, any children I might have would be.

    On the other hand, I don’t want to contribute to idiocracy either.
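
A postscript on the Markov chain remark from the first item, now that everything is laid out: here is a minimal sketch of the two-state chain I was gesturing at. The transition probabilities p and q are made up for illustration; the only assumption doing any work is the bad inequality, namely that escaping insanity (q) is less likely than falling into it (p).

```python
# A toy two-state Markov chain over the states [calm, crazy].
# p and q are invented numbers; only p > q matters for the point.
import numpy as np

p = 0.10  # assumed P(calm -> crazy) per "time step" of eternity
q = 0.01  # assumed P(crazy -> calm); the bad inequality is q < p

# Row-stochastic transition matrix: each row is the current state.
P = np.array([[1 - p, p],
              [q, 1 - q]])

# Stationary distribution pi, satisfying pi = pi @ P and sum(pi) == 1.
# For a two-state chain it is simply (q, p) / (p + q).
pi = np.array([q, p]) / (p + q)
print(pi)       # -> [0.0909..., 0.9090...]
print(pi @ P)   # same vector, confirming stationarity
```

In the long run the chain spends a fraction p / (p + q) of eternity in the crazy state, which is more than half whenever p > q, no matter how small both numbers are. That is all the inequalities are saying.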

This post really, really doesn’t have a point. It’s just random things my brain spits out at me to annoy me or something. So I’m not even going to try to write a conclusion. What a mess.

Anyway, I’m going to sleep now. (It’s getting further from midnight for me.)

(note: the commenting setup here is experimental and I may not check my comments often; if you want to tell me something instead of the world, email me!)