Fixative patterns

[epistemic status: pretty sure there is a thing in this direction from experience and observation, but my model of why may well be wrong, and I’m likely missing parts]

While memeplex mining, I’ve noticed that some topics tug you towards fixation much more than others. These ideas, while they can be powerful and contain genuine insight, tend to be applied far too widely unless watched extraordinarily closely, and are dramatically less trustworthy than even normal human reasoning. Once I noticed this property, I followed the advice of a friend, Herb Doughty, and made a list in order to explore it.

I’ll give a few examples which I’ve noticed in myself or others, and my model of why this happens.

  • Excessive meta
  • Simulation hypothesis (e.g. using anthropic evidence to guess about other layers and acausal trade)
  • Evolutionary Psychology
  • Shadowy powerful groups (the idea of Moloch seems to cause a non-agentic pattern to fall into this category, which is super neat as a memetic device for combating a harmful pattern)

It’s interesting to note that overt or subtle ‘antibody memes’ discouraging trust in each of these hang around the culture, generally accompanied by a vague feeling that it’s low status to go too far into them.

From these examples, I’ve picked out three common themes:

  1. Incredibly broad applicability, leading to novelty superstimuli. Each of these ideas has the potential to interlink with an unusually large fraction of your thoughts, leading to endless new branches of thought to explore and many chances for it to be brought to mind.
  2. Lack of negative evidence. In none of these cases is it easy to get rapid feedback when you’re wrong, and in one case it may be metaphysically impossible. This may throw off the normal grow-then-trim cycle of exploration.
  3. They help you play tribal politics. The lower items on the list all have the potential to give you an edge in the cognitive arms race of the EEA (environment of evolutionary adaptedness).

It’s generally bad to go into positive feedback loops and endlessly obsess over one thing, so we presumably have an architecture which usually resists that. However, when a concept can attach to many things at once and generate lots of reward, the fail-safes against obsessiveness can be overwhelmed. Usually some form of negative reinforcement stops this cycle, but some ideas don’t have good sources of negative evidence.

tl;dr: Things which helped in the EEA give more reward; things which are hyper-general and give a stream of novel insight spread more and are activated more often; things you can’t test and realize are wrong don’t get corrected, so their generating pattern does not get trimmed.

I suspect that some people would benefit from being warned about the potential for obsession, while others would benefit from getting some of the core insights from these concepts rather than being put off entirely by the antibody memes.

Also, there seems to be another class which has similar effects but is mostly powered by #3 alone:

  • Things which high-status members of your perceived tribe want you to do
  • Interpersonal conflicts

Crony beliefs also seem related, but are more pragmatic and less likely to lead to full obsession.

I’d be interested in more examples; I imagine I’ve not run into all of these 🙂
