Evolution as the Great Filter

2025-06-11

I've long admired/feared those think-y, cool personal pages that outline psychosis-inducing-feeling (or perhaps psychosis-induced?) pet theories, and now I have one of my own to put out there. Which is that the process of evolution is bugged. Any sufficiently conscious and intelligent life which evolves will inevitably ruin its planet.

There's a thought experiment called the "paperclip maximiser"[1] in the field of artificial general intelligence ethics.

"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
– Nick Bostrom

I propose that the process of evolution is kind of like a paperclip maximiser, in that both are blind optimisers. One cares about nothing but producing as many paperclips as possible. The other cares about nothing but local, short-to-medium-term reproductive success. Both of these are myopic optimisations. The paperclip maximiser because... it's going to turn all matter into paperclips (that is too many paperclips). Evolution because traits which are highly adaptive locally and in the short-to-medium term can be planetarily maladaptive in the long term - and when this happens, evolution produces doomed offspring.

The capability for conceptual/abstract thought, and the capability to self-model[2], are strongly adaptive traits. The former enables an intelligence to mentally model its experience - so it can, say, use tools or develop agriculture. But it would have no motive to do things like that without a self-concept. The ability to conceptualise the self as a discrete agent, and thus to insert this conceptual agent into those mental models, is what enables autonomous strategic agency.

In the evolutionary meta, this seems overpowered. It will pretty reliably allow such an intelligence to dominate intelligences which are driven by instinct and/or lack self-concept.

The problem is that, while these traits are adaptive at local scale, they are deeply maladaptive at planetary scale. They enable intelligences to exceed ecological constraints faster than they can model and internalise them, and this mismatch of timescales leads to a maladaptive evolutionary feedback lag. It becomes possible for the intelligence to fuck around, but not find out until it's done some really dumb shit, because it can do dumb shit faster than the dumb shit can be evolutionarily selected against. It can, for a time, avoid the need to be ecologically adapted in order to succeed - in fact, it can achieve supernormal short-term success by becoming ecologically maladapted. This is the basis of planetary-scale overshoot.
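As a toy sketch of that feedback lag (my own illustration, not anything from the sources above - the `run` function, its parameters and its numbers are all made up), consider a population whose growth responds to the state of its world as it was `lag` steps ago rather than as it is now:

```python
# Toy delayed-feedback growth model: the population's growth rate responds to
# the resource/density situation as it was `lag` steps in the past. With no
# lag it settles at the carrying capacity; with a long enough lag it has
# already done the damage before the signal arrives.

def run(lag: int, steps: int = 200, r: float = 0.4, K: float = 1.0) -> list[float]:
    """Discrete logistic-style model where growth is driven by N[t - lag]."""
    history = [0.05] * (lag + 1)  # small founding population
    for _ in range(steps):
        n_now = history[-1]
        n_then = history[-1 - lag]  # the state the population "sees" and reacts to
        n_next = n_now + r * n_now * (1 - n_then / K)
        history.append(max(n_next, 0.0))  # clamp at zero: extinction is absorbing
    return history

if __name__ == "__main__":
    for lag in (0, 3, 8):
        peak = max(run(lag))
        print(f"lag={lag}: peak population = {peak:.2f} (carrying capacity K=1.0)")
```

With no lag the population approaches the carrying capacity and stops; with a modest lag it overshoots and oscillates; with a long enough lag it overshoots so hard that the correction, when it finally lands, takes it to zero.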

A species with these traits will easily outcompete others and achieve planetary scale. Any remaining animist/ecocentric cultures or tendencies within it will also be "outcompeted" (absorbed, overwhelmed) by species-centric, extractive ones. Restraint and reciprocity are wonderful, sustainable, ecocentric cultural traits, but cultures which embody them are inherently poor at scaling and reproducing.

After that, you have a planet dominated by a species which sees itself as separate from the rest of existence and whose culture is thus as extractive as its technology level permits. At some point an industrial revolution will kick off, rapidly raise that technology level, and catapult the species into planetary overshoot.

The consequences of that don't necessarily mean extinction. But a planetary civilisation that has extracted all the easily reachable resources from its planet, rendered it a far less hospitable habitat and then, rudely, collapsed is going to make it - at best - enormously harder for any future civilisation to reach an equivalent technological level[3].

This is what is happening to humans. Without us having achieved even Kardashev one. Perhaps it's what happened to a lot of other species too.


  1. Ethical Issues in Advanced Artificial Intelligence, Nick Bostrom ↩︎

  2. Have some sort of ego - conceptualise oneself as an agent which exists separately from experience. Which, as an aside, is not how things actually are (see: nonduality). ↩︎

  3. Riddley Walker is a cool book. ↩︎