Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Raemon's Comments

Is a near-term, self-sustaining Mars colony impossible?

Huh. An interesting thing this points to is that when/if there are eventually self-sustaining colonies on Mars, there's a chance the supply chains will be much more legible (because they'll have been built from scratch, rather than on top of tons of spaghetti towers).

Should we stop using the term 'Rationalist'?

I didn't think of 'what others call us' as the topic of this post, and I think it's much harder to change.

On the construction of the self

This is a neat conceptualization.

Should we stop using the term 'Rationalist'?

For the past year, I've used "rationalist" to mean "person who has made a serious study of truthseeking skills", and used "LessWrong folk" or "Berkeley Rationality Community" or other more specific names to refer to the second group.

A man dies and is sent to hell

Sure is a clickbaity event title

Should we stop using the term 'Rationalist'?

"Aspirationalist" is actually... maybe surprisingly good as a word? Aspiring Rationalist was bad because it was too long, but I might actually be able to use Aspirationalist in conversation.

It has the downside of getting even more obviously abbreviated to "Aspie Rat", but, well, maybe that's fine. :P

Comment on "Endogenous Epistemic Factionalization"

Curated.

I like that this post took a very messy, complicated subject, and picked one facet of it to gain a really crisp understanding of. (MIRI's 2018 Research Direction update goes into some thoughts on why you might want to become deconfused on a subject, and the Rocket Alignment Problem is a somewhat more narrativized version.)

I personally suspect that the principles Zack points to here aren't the primary principles at play for why epistemic factions form. But, it is interesting to explore that even when you strip away tons of messy-human-specific-cognition (i.e. propensity for tribal loyalty for ingroup protection reasons), a very simple model of purely epistemic agents may still form factions.

I also really liked that Zack lays out his reasoning very clearly, with coding steps that you can follow along with. I should admit that I haven't followed along all the way through (I got about a third of the way through before realizing I'd need to set aside more time to really process it). So, this curation is not an endorsement that all his coding checks out. The bar for Curated is, unfortunately, not the bar for Peer Review. (But! Later on when we get to the 2020 LessWrong Review, I'd want this sort of thing checked more thoroughly.)

It is still relatively uncommon on LessWrong for someone to even rise to the bar of "clearly lays out their reasoning in a very checkable way", and when someone does that while making a point that seems interesting and important-if-true, it seems good to curate it.

Raemon's Scratchpad

I had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!)

Disclaimer: I am not making much effort to not ramble in this post.

A couple takeaways:

1. Working Memory Limits

One key problem is that I introduce so many points, subpoints, and subthreads that I overwhelm people's working memory (where human working memory limits are roughly "4-7 chunks").

It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress.

So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4."

(Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about)

...

2. I just don't finish sente

I frequently don't finish sentences, whether in person voice or in text (like emails). I've known this for awhile, although I kinda forgot recently. I switch abruptly to a new sentence when I realize the current sentence isn't going to accomplish the thing I want, and I have a Much Shinier Sentence Over Here that seems much more promising.

But, people don't understand why I'm making the leap from one half-finished thought to another.

So, another simple habit is "make sure to finish my god damn sentences, even if I become disappointed in them halfway through"

...

3. Use Mindful Cognition Tuning to train on *what is easy for people to follow*, as well as to improve the creativity/usefulness of my thoughts.

I've always been rambly. But a thing that I think has made me EVEN MORE rambly in the past 2 years is a mindful-thinking-technique, where you notice all of your thoughts on the less-than-a-second level, so that you can notice which thought patterns are useful or anti-useful.

This has been really powerful for improving my thought-quality. I'm fairly confident that I've become a better programmer and better thinker because of it.

But, it introduces even more meta-thoughts for me to notice while I'm articulating a sentence, which distract me from the sentence itself.

What I realized last weekend was: I can use Mindful Cognition to notice what types of thoughts/sentences are useful for *other people's comprehension of me*, not just how useful my original thought processes are.

The whole point of the technique is to improve your feedback loop (both speed and awareness), which makes it easier to do deliberate practice. I think if I just apply that towards Being More Comprehensible, it'll change from being a liability in rambliness to an asset.

What are the best tools for recording predictions?

I think I might have phrased the OP as "hey, is there a reason to use Foretold or Metaculus over Prediction Book?", and it sounds in both cases like they're really optimized for a different thing.

What are the best tools for recording predictions?

That makes sense. Thanks for chiming in. 
