Rationality vs. Reasoning
Tomorrow’s events:
Live Journaling w/ Peter Limberg. Daily @ 8:00 AM ET. Patreon event. 90 mins.
The Glass Bead Game w/ The Metabeaders. Every Saturday @ 4:00 PM ET. RSVP here. 120 mins.
Newly posted event:
Seducing the Galaxy Brain w/ Jote Lamar and Peter Limberg. June 10th @ 6:00 PM ET. RSVP here.
An event to (maybe) get excited about:
Counterculture Is Not Dead, It's Just Sleeping in a Dark Forest w/ Caroline Busta. June 3rd at 12:00 PM ET. RSVP here.
Caroline Busta, the founder of New Models, visits The Stoa to discuss her excellent article titled The Internet Didn’t Kill Counterculture—You Just Won’t Find It on Instagram. We will explore the notion of the ‘dark forest’ in the dark forest that is The Stoa. Our boy Yancey Strickler has been thinking a lot about this dark forest thing, and I have written about it in a previous entry as well.
***
May 21st, 2021
This might be my last entry journaling on reasoning, but it probably will not be. In my last few entries I wrote about how arguments differ from opinions, a taxonomy of fallacies and conversational heuristics that stem from the straw man fallacy, and the importance of guarding one’s premises.
I am nerding out on this topic for some reason, and I am not quite sure why. I am sensing it is because good reasoning is a missing piece in the ecology of practices for a lot of people in the various scenes The Stoa flirts with. I’ll discuss the rationalist scene first, then the Sensemaking Web, to examine how they both relate to reasoning.
The rationality (rat) scene, of which our friends like LessWrong and CFAR are representative, has some crossover with this reasoning thing, but it is talking about something different. Rationality, in my understanding of how this scene uses the word, can be split into “epistemic rationality” and “instrumental rationality.”
The former is about refining the accuracy of your beliefs, and the latter is about achieving your values. According to Eliezer Yudkowsky, to achieve the latter, you’ll need to do the former. This is not about arriving at beliefs that perfectly correspond to reality, but more about being “less wrong.”
How they go about being less wrong is by learning about cognitive biases and heuristics, and the rat scene has developed a lot of cool tools housed in a coherent framework that maps over nicely to some of the best tech from the psychotherapy and self-help scenes. Some cool methods/models they've developed …
Debugging: a process for spotting small ‘bugs’ in decision making, then applying methods to ‘debug’ them. The idea here is that small changes can lead to compounding benefits.
MurphyJitsu: a process of envisioning potential ‘failure modes’ of your plans in an iterative fashion such that if your plan still fails, you’d be greatly surprised.
Credence Calibration: credence is a term from probability theory meaning the strength of one’s belief, expressed as a percentage. The rats have developed some cool ways to calibrate one’s credence levels.
Hamming question: a simple question inspired by the mathematician Richard Hamming: what are the important problems in your life, and what is stopping you from working on them?
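To make the credence calibration idea concrete, here is a minimal sketch of how you might score your own track record. The data and function names are hypothetical illustrations of mine, not from any LessWrong or CFAR material; the Brier score is just one standard way to measure how well stated credences match reality.

```python
def brier_score(predictions):
    """Mean squared error between stated credences and actual outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

def calibration_buckets(predictions):
    """Group predictions by credence (nearest 10%) and report the hit rate.
    A well-calibrated forecaster's 70% bucket should be right ~70% of the time."""
    buckets = {}
    for p, o in predictions:
        key = round(p, 1)  # bucket credences to the nearest 10%
        buckets.setdefault(key, []).append(o)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

# Hypothetical track record: (stated credence, actual outcome as 0 or 1)
track_record = [(0.9, 1), (0.9, 1), (0.9, 0), (0.7, 1), (0.7, 0), (0.5, 1)]

print(brier_score(track_record))
print(calibration_buckets(track_record))
```

In this made-up record, claims made at 90% confidence came true only two times out of three, which is the kind of gap a calibration practice is meant to surface.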
The cool thing about the rat scene is that they have a slew of these tools. I use them, and I reason that they are important to use. I do lovingly dunk on rats a bit though, and I've called them out before on what I sense is their “ameliorate-my-anxiety-through-control” temperament. I also notice the following trend: most people who get plugged into the rat scene slowly drift to the postrational (postrat) scene.
I think grokking rationality is good, as is engaging in an ‘applied rationality,’ and I think getting lost in postrationality is good, too. I also sense that an ‘applied postrationality’ is a thing waiting to be born.
Circling back to reasoning, though, and how it is somewhat different from rationality. I view rationality as something that affords the conditions needed to reason well, and reasoning well is a different skill than setting up the conditions to do so. I also think reasoning well is about reasoning wildly, which can seemingly paradoxically undermine the presuppositions of rationality itself. This might be too jazzy of a claim, but I view this claim as yes/anding rationality, not negating it.
You totally need to gain the talent stack of being rational, but I sense shedding the propositional architecture that is required to focus on improving one’s rationality is also needed. When that happens, you get to tap into a daemonic language. I’ll pause this thread though, as there are a lot of suppressed premises to unpack here. I will pivot now to how people in the Sensemaking Web approach reasoning.
It seems like the really smart “influencers” associated with the Sensemaking Web have tons of coherent arguments, but they do not have an academic philosophy background. In other words, they know how to reason well, but they are not familiar with the ‘metalanguage’ of reasoning, meaning they do reasoning but they do not have a language to explain how they are reasoning. This is where all the reasoning books I have been mentioning in the last few entries are handy. They help with learning, speaking, and teaching the metalanguage of reasoning.
The people who are attracted to the Sensemaking Web, and adjacent spaces, are not really plugged into this metalanguage of reasoning thing, as they are more plugged into psychotechnologies like meditation, mindfulness, psychedelics, embodiment practices, trauma work, shadow work, and intersubjective practices like Circling. I think all of these things are fucking cool, and they are needed, but there is a fashionable thing going on here with them.
Reasoning is not fashionable, and perhaps it should not be, but I sense it can be made stylish. I do think it is good to know what deductive and inductive reasoning are, how to structure your arguments, and when to guard your premises. Having a toolbox of formal and informal fallacies can also be helpful.
The open question for me though is this: what is the minimum viable metalanguage of reasoning that one needs? You can get too geeky about all of this shit, and this is where I would argue that the failure modes of learning the metalanguage of reasoning occur.
I am called to continue to journal about this now, but it is prudent for me to stop here, because time is limited and this is quite the topical pivot. Before I go, I’ll tease out what I sense the failure modes of learning the metalanguage of reasoning are:
Neutered wisdom, logicbros, and weaponized metalanguage.
Oh man. This tripartite failure mode of learning the metalanguage of reasoning sounds fucking delicious. I am excited. My nerd out will continue tomorrow.
***
Support The Stoa @ patreon.com/the_stoa
Receive coaching from Peter and others @ thestoa.ca/coaches