My idea of these self-proclaimed rationalists was fifteen years out of date. I thought they were people who wrote wordy fan fiction, but it turns out they've reached the point of having subgroups that kill people and exorcise demons.
This must be how people who had read one Hubbard pulp novel in the 1950s felt decades later, when they found out he was now running a full-blown religion.
The article seems to try very hard to find something positive to say about these groups, and comes up with:
“Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work and only hypochondriacs worried about covid; rationalists were some of the first people to warn about the threat of artificial intelligence.”
There’s nothing very unique about agreeing with the WHO, or thinking that building Skynet might be bad… (The rationalist Moses/Hubbard was 12 when that movie came out — the most impressionable age.) In the wider picture painted by the article, these presumed successes sound more like a case of a stopped clock being right twice a day.
You're falling into some sort of fallacy; maybe a better rationalist than I could name it.
The "they" you are describing is a large body of disparate people spread around the world. We're reading an article that focuses on a few dysfunctional subgroups. They are interesting because they are so dysfunctional and rare.
Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people. Even Pacifism doesn't get a pass.
I realized a few years ago that there's an important difference between someone who believes women should have equal rights and a feminist. Similarly, there's a difference between someone who believes men should have equal rights and a men's rights advocate. I often sympathize with the first group. I often disagree with the latter.
This same distinction applies to rationality: there's a huge difference between someone who strives to be rational and someone who belongs to a "rationalist community".
The article specifically defines the rationalists it’s talking about:
“The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.”
Is this really a large body of disparate people spread around the world? I suspect not.
Not sure how to define "drawn together", but the anecdata is: about half of my friends love Yudkowsky's works; they live all across the US, the EU, and Middle Eastern countries.
So I suspect yes, it's a large body of loosely coupled people.
>The "they" you are describing is a large body of disparate people spread around the world.
And that "large body" has a few hundred core major figures and prominent adherents, and a hell of a lot of them seem to be exactly like how the parent describes. Even the "tamer" of them like ASC have that cultish quality...
As for the rest of the "large body", the hangers on, those are mostly out of view anyway, but I doubt they'd be paragons of sanity if looked up close.
>Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people
-isms include fascism, nazism, jihadism, nationalism, communism, racism, etc., so not exactly the best argument to make in rationalism's defense. "Yeah, rationalism has groups that murder people, but after all, didn't fascism have those too?"
Though, if we were honest, it mostly brings to mind another, more medically related, -ism.
Dadaism? Most art -isms didn't have subgroups who killed people. If people killed others in art history it was mostly tragic individual stories and had next to nothing to do with the ideology of the ism.
The level of dysfunction described in the article is really rare. But dysfunction of the kind we're talking about is not that rare - I would even say it's quite common - in self-proclaimed rationalist groups. They don't kill people - at least not directly - but they are definitely not what they claim to be: rational. They use rational tools, more than others do, but they are not more rational than others; they simply use these tools to justify their irrationality.
These days I only touch rationalists with a pole, because they are not smarter than others; they just think they are, and on the surface they seem so. They praise Julia Galef, then ignore everything she said. Even Galef invited people who were full-blown racists; they only seemed all right because they knew whom they were talking to and couldn't bullshit her. They tried to argue that their racism was rational, but you couldn't tell that from the interviews, since they flat out lie all the time on every other platform. So in the end she just gave a platform to covert racism.
The WHO didn't declare a global pandemic until March 11, 2020 [1]. That's a little slow and some rationalists were earlier than that. (Other people too.)
After reading a warning from a rationalist blog, I posted a lot about COVID news to another forum and others there gave me credit for giving the heads-up that it was a Big Deal and not just another thing in the news. (Not sure it made all that much difference, though?)
Do you think that the consequences of the WHO declaring a pandemic and some rationalist blog warning about covid are the same? Clearly the WHO has to be more cautious. I have no doubt there were people at the WHO who felt a global pandemic was likely at least as early as you and the person writing the rationalist blog.
This is going to be controversial. But WHO wasted precious time during the early phases of the pandemic. It could have been contained more effectively if they weren't in denial. And when they did declare a pandemic, it was all very sudden instead of gradually raising the level, leading to panic buying and anxiety.
Are the WHO personnel rational and competent? I would like to believe so. But that isn't a given - the amount of nonsense I had to fight in institutions considered pinnacles of rationality is just depressing. Regardless, the WHO was encumbered by international politics. Their rationality would have made no difference. That is why the opinion of rational outsiders matters - especially of those with domain expertise.
The signs of an uncontained contagion were evident by the middle of December 2019, well before the WHO declared the pandemic in March 2020. They could have asked everyone to start preparing around then. Instead, there was alarming news coming out of Wuhan and endless debates on TV about the WHO's appeasement of the Chinese administration - things that started ringing the alarm bells for us. We started preparing by at least the middle of January. The WHO chose to wait yet again until everything was obvious and a declaration was inevitable. People were dying by the thousands every day and the lockdowns had already started by then. Their rubber stamp wasn't necessary to confirm what everyone knew already. That was one instance where waiting for the WHO wasn't a prudent choice.
The WHO is a critical institution for the entire world. Their timing can mean the difference between life and death for millions everywhere. These sorts of failings shouldn't be excused and swept under the rug so easily.
If you look at the timeline, it's purely political. Some of the earliest warnings came from Taiwan/ROC, which found the virus in travelers from the mainland. But the WHO did not dare anger the PRC, so they ignored Taiwan and in that way probably caused thousands of unnecessary deaths around the world.
Shitposting comedy forums were ahead of the WHO on this; it didn't take a genius to understand what was going on before shit completely hit the fan.
I worked at the British Medical Journal at the time. We got wind of COVID being a big thing in January. I spent January to March getting our new VPN into a fit state so that the whole company could do their whole jobs from home. Lockdown came on 23 March; we were ready and had a very busy year.
That COVID was going to be big was obvious to a lot of people and groups who were paying attention. We were a health-related org, but we were extremely far from unique in this.
The rationalist claim that they were uniquely on the ball and everyone else dropped it is just a marketing lie.
I recall friends who worked for Google telling me that they instituted WFH for all employees from the start of March. I also remember a call with a co-worker in January/February who had a PhD in epidemiology (not a "rationalist" afaik); I couldn't believe what he was saying about the likelihood of a months-long lockdown in the West.
I think the piece bends over backwards to keep the charitable frame because it's written by someone inside the community, but you're right that the touted "wins" feel a bit thin compared to the sheer scale of dysfunction described.
> Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work
I wonder which views about covid-19 are correct. On masks, I remember the mainstream messaging going through stages: masks don't work, some masks work, all masks work, double masking works, to finally masks don't work (or some masks work; I can't remember where we ended up).
> to finally masks don't work (or some masks work; I can't remember where we ended up).
Most masks 'work', for some value of 'work', but efficacy differs (which, to be clear, was ~always known; there was a very short period when some authorities insisted that covid was primarily transmitted by touch, but you're talking weeks at most). In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid; for that you want something along the lines of an n95 respirator.
The main actual point of controversy was whether it was airborne or not (vs just short-range spread by droplets); the answer, in the end, was 'yes', but it took longer than it should have to get there.
> In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid
Yes, exactly.
If we look at guidelines about influenza, we will see them say that "surgical masks are not considered adequate respiratory protection for airborne transmission of pandemic influenza". And as far as I understand, it was finally agreed that, in terms of transmission, SARS-CoV-2 behaves similarly to the influenza virus.
Basic masks work for society because they stop your saliva from traveling, but they don't work for you because they don't stop particles from other people's saliva from reaching you.
I was reminded of Hubbard too. In particular, the "[belief that one] should always escalate when threatened" strongly echoes Hubbard's advice to always attack, attack; never defend.
The whole thing reminds me of EST and a thousand other cults / self-improvement / self-actualisation groups that seem endemic to California ever since the 60s or before.
As someone who started reading without knowing about rationalists, I actually came out without knowing much more. Lots of context is assumed I guess.
Some main figures and rituals are mentioned but I still don’t know how the activities and communities arise from the purported origin. How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hype focused on AI, or summoning demons? Why did they raise this idea of matching confrontation always with escalation? Why the focus on programming, is this a Silicon Valley thing?
Also lesswrong is mentioned but no context is given about it. I only know the name as a forum, just like somethingawful or Reddit, but I don’t know how it fits into the picture.
The point of masks, originally, was to catch saliva drops from surgeons as they worked over an open body, not to stop viruses.
For COVID its use was novel. But having an intention isn't enough. It must actually work. Otherwise, you are just engaging in witchcraft and tomfoolery.
The respiratory droplet model of how COVID spread was wrong, which was proven by lots of real world evidence. Look at how the Diamond Princess worked out and please explain how that was compatible with either masks or lockdowns working? SARS-CoV-2 spreads like every other respiratory virus, as a gaseous aerosol that doesn't care about masks in the slightest.
I'm not sure where you're getting this from. Repeated studies continue to affirm that COVID is spread by respiratory droplets and that masks are effective in reducing transmission.
Indoors. There were decades of research leading to the recommendations of mask wearing when symptomatic and only indoors.
All that fell by the wayside when mask wearing became a covid-time cult. A friend (with a degree in epidemiology) told me that if she tried to argue those points and doubt outdoor mask mandates, she would immediately be out of a job.
The covid-time environment of shutting down scientific discussion because policymakers decided we had enough science to reach a conclusion should not be forgotten; it was a reasonable concern turned into a cult. My 2c.
> It's still mind boggling to me that governments didn't say "Don't wear a mask for yourself -- wear one to save your neighbor."
I mean... they did?
Like in the UK's guidance literally the second sentence is "Face coverings are primarily worn to protect others because they cover the nose and mouth, which are the main sources of emission of the virus that causes coronavirus infection (COVID-19)."
Nobody was persuaded; they were forced by law, precisely because it was obvious to everyone with their brain switched on that masks didn't work. Remember how, when the rules demanding masks on planes were rescinded, there were videos of whole planes ripping off their masks and celebrating mid-flight? Literally the second the law changed, people stopped wearing masks.
That's because masks were a mass hysteria. They did not work. Everyone could see it.
> And masks? How many graphs of cases/day with mask mandate transitions overlayed are required before people realize masks did nothing? Whole countries went from nearly nobody wearing them, to everyone wearing them, overnight, and COVID cases/day didn't even notice.
Most of those countries didn't actually follow their mask mandates - the USA for example. I visited because the PRC was preventing vaccine deliveries to Taiwan, so I flew to the USA to get a vaccine, and I distinctly remember thinking "yeah... of course" when I walked around an airport full of people chin-diapering.
Taiwan halted a couple outbreaks from pilots completely, partially because people are so used to wearing masks when they're sick here (and also because the mask mandate was strictly enforced everywhere).
I visited DC a year later where they had a memorial for victims of COVID. It was 700,000 white flags near the Washington monument when I visited, as I recall it broke a million a few months later.
This article is beautifully written, and it's full of proper original research. I'm sad that most comments so far are knee-jerk "lol rationalists" type responses. I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I think that since it's not possible to reply to multiple comments at the same time, people will naturally open a new top-level comment the moment there's a clearly identifiable groupthink emerging. Quoting one of your earlier comments about this:
>This happens so frequently that I think it must be a product of something hard-wired in the medium *[I mean the medium of the internet forum]
I would say it's only hard-wired in the medium of tree-style comment sections. If HN worked more like linear forums with multi-quote/replies, it might be possible to have multiple back-and-forths of subgroup consensus like this.
Asterisk is basically "rationalist magazine" and the author is a well-known rationalist blogger, so it's not a surprise that this is basically the only fair look into this phenomenon - compared to the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions.
Okay, true, that was a silly statement for me to make. It's just a look that's different from the typical media treatment of the rationalist community, and is as far as I know the first time there's an inside view of this cult-spawning phenomenon from a media outlet or publication.
The story from the outside is usually reduced to something like "rationalism is a wacky cult", with the recent ones tacking on "and some of its members include this Ziz gang who murdered many people". Like the NYT article a week ago.
> the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions
Cults are a whole biome of personalities. The prophet does not need to be the same person as the leader. They sometimes are and things can be very ugly in those cases, but they often aren’t. After all, there are Christian cults today even though Jesus and his supporting cast have been dead for approaching 2k years.
Yudkowsky seems relatively benign as far as prophets go, though who knows what goes on in private (I’m sure some people on here do, but the collective We do not). I would guess that the failure mode for him would be a David Miscavige type who slowly accumulates power while Yudkowsky remains a figurehead. This could be a girlfriend or someone who runs one of the charitable organizations (controlling the purse strings when everyone is dependent on the organization for their next meal is a time honored technique). I’m looking forward to the documentaries that get made in 20 years or so.
> I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I once called rationalists infantile, impotent liberal escapism, perhaps that's the novel take you are looking for.
Essentially my view is that the fundamental problem with rationalists and the effective altruist movement is that they are talking about profound social and political issues, with any and all politics completely and totally removed from it. It is liberal depoliticisation[1] driven to its ultimate conclusion. That's just why they are ineffective and wrong about everything, but that's also why they are popular among the tech elites that are giving millions to associated groups like MIRI[2]. They aren't going away, they are politically useful and convenient to very powerful people.
I just so happened to read in the last few days the (somewhat disjointed and rambling) Technically Radical: On the Unrecognized [Leftist] Potential of Tech Workers and Hackers
"Rationalists" do seem to be in some ways the poster children of consumerist atomization, but do note that they also resisted it socially by forming those 'cults' of theirs.
(If counter-cultures are 'dead', why don't they count as one ?? Alternatively, might this be a form of communitarianism, but with less traditionalism, more atheism, and perhaps a Jewish slant ?)
I think it's perfectly fine to read these articles, think "definitely a cult" and ignore whether they believe in spaceships, or demons, or AGI.
The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight.
That's a side point of the article, acknowledged as an old idea. The central points of this article are actually quite a bit more interesting than that. He even summarized his conclusions concisely at the end, so I don't know what your excuse is for trivializing it.
The other key takeaway, that people with trauma are more attracted to organizations that purport to be able to fix them and are thus over-represented in them (vs in the general population), is also important.
Because if you're going to set up a hierarchical (explicitly or implicitly) isolated organization with a bunch of strangers, it's good to start by asking "How much do I trust these strangers?"
By this token, most scientists would be considered cultists: normal people don't have "specific tensile strength" or "Jacobian" or "Hermitian operator" etc in their vocabulary. "Must be some cult"?
Edit: it seems most people don't understand what I'm pointing out.
Having terminology is not the red flag.
Having intricate terminology without a domain is the red flag.
In science or mathematics, there are enormous amounts of jargon, terms, definitions, concepts, but they are always situated in some domain of study.
The "rationalists" (better call them pseudorationalists) invent their own concepts without actual corresponding domain, just life. It's like kids re-inventing their generation specific words each generation to denote things they like or dislike, etc.
I don’t think we disagree. I’m not taking issue with scientists having jargon, which I agree is good and necessary (though I think the less analytical academic disciplines, not being rooted in fact, have come to bear many similarities to state-backed religions; and I think they use jargon accordingly). I’m pointing out that I specifically intended to exclude professionals by scoping my statement to “social groups”. Primarily I had in mind religion, politics, certain social media sites, and whatever you want to call movements like capital R Rationality (I have personally duck typed it as a religion).
> I’m pointing out that I specifically intended to exclude professionals by scoping my statement to “social groups”.
I think your argumentation is a generalization that's close to a rationalist fallacy we're discussing:
> a social group with a lot of invented lingo is a red flag that you can see before you get isolated from your loved ones.
Groups of artists do this all the time for the sake of agency over their intentions. They borrow terminology from economics, psychology, computer science etc., but exclude economists, psychologists and computer scientists all the time. I had one choreographer talk to me about his performances as if they were "Protocols". People are free to use any vocabulary to describe their observed dynamics, expressions or phenomena.
As far as red flag moments go, the intent behind using a certain terminology still prevails over the choice of terminology itself.
I think there's a distinction between inventing new terms for utilitarian purposes vs ideological and in-group signalling purposes.
If you have groups talking about "expected value" or "dot products", that's different from groups who talk a lot about "privilege" or "the deep state". Even though the latter would claim they're just using jargon between experts, just like the scientists.
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight
Well, yes and no. The reason I think the insight is so interesting is that these groups were formed, almost definitionally, for the purpose of avoiding such "obvious" mistakes. The name of the group is literally the "Rationalists"!
I find that funny and ironic, and it says something important about this philosophy, in that it implies that the rest of society wasn't so "irrational" after all.
As a more extreme and silly example, imagine there was a group called "Cults suck, and we are not a cult!" that was created for the very purpose of fighting cults, and yet, ironically, became a cult in and of itself. That would be insightful and funny.
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel gazing that makes me feel a little sick participating in it.
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
I like this approach. Also having dipped my toes in the engineering world (professionally) I think it naturally follows that you should be constantly rechecking your designs. Those tolerances were fine to begin with, but are they now that things have changed? It also makes you think about failure modes. What can make this all come down and if it does what way will it fail? Which is really useful because you can then leverage this to design things to fail in certain ways and now you got a testable hypothesis. It won't create proof, but it at least helps in finding flaws.
The example I heard was to picture the Challenger shuttle, and the O-rings used worked 99% of the time. Well, what happens to the failure rate when you have 6 O-rings in a booster rocket, and you only need one to fail for disaster? Now you only have a 94% success rate.
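A quick back-of-the-envelope check of that arithmetic, under the simplifying (and hypothetical) assumption that the six joints fail independently:

    # Probability that all six O-rings hold, if each holds 99% of the time
    # and failures are independent (a simplification for illustration).
    p_all_hold = 0.99 ** 6
    print(round(p_all_hold, 3))  # 0.941, i.e. roughly the 94% success rate above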
And always worth keeping an eye on the maximum possible divergence from reality you're currently at, based on how far you've reasoned from truth, and how less-than-sure each step was.
Maybe you're right. But there's a non-zero chance you're also max wrong. (Which itself can be bounded, if you don't wander too far)
My preferred argument against the AI doom hypothesis is exactly this: it has 8 or so independent prerequisites with unknown probabilities. Since you multiply the probabilities of each prerequisite to get the overall probability, you end up with a relatively low overall probability even when the probability of each prerequisite is relatively high, and if just a few of the prerequisites have small probabilities, the overall probability basically can’t be anything other than very small.
Given this structure to the problem, if you find yourself espousing a p(doom) of 80%, you’re probably not thinking about the issue properly. If in 10 years some of those prerequisites have turned out to be true, then you can start getting worried and be justified about it. But from where we are now there’s just no way.
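To illustrate the structure of that argument, here is a minimal sketch; the eight prerequisites and their probabilities are made-up placeholders, not an actual model of AI risk:

    # Hypothetical: eight independent prerequisites, each judged quite likely.
    p_each = [0.8] * 8
    p_all = 1.0
    for p in p_each:
        p_all *= p
    print(round(p_all, 3))   # 0.168 -- even "likely" steps multiply down fast

    # If just two of the eight are actually long shots:
    p_each = [0.8] * 6 + [0.1, 0.1]
    p_all = 1.0
    for p in p_each:
        p_all *= p
    print(round(p_all, 4))   # 0.0026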
I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.
Edit: Couldn't find the article, but AI referenced the Bayesian "Chain of reasoning fallacy".
I think you have this oversimplified. Stringing together inferences can take us in either direction. It really depends on how things are being done, and this isn't always so obvious or simple. But just to show both directions I'll give two simple examples (the real world holds many more complexities).
It is all about what is being modeled and how the inferences string together. If these are being multiplied, then yes, this is going to decrease, as xy < x and xy < y for every x,y < 1.
But a good counterexample is the classic Bayesian inference example[0]. Suppose you have a test that detects vampirism with 95% accuracy (Pr(+|vampire) = 0.95) and has a false positive rate of 1% (Pr(+|mortal) = 0.01). But vampirism is rare, affecting only 0.1% of the population. This ends up meaning a positive test only gives us an 8.7% likelihood that the subject is a vampire (Pr(vampire|+) = 0.087). The solution here is to repeat the testing. On our second test Pr(vampire) changes from 0.001 to 0.087 and Pr(vampire|+) goes to about 90%, and a third test gets us to about 99%.
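A minimal sketch of that update, reusing the numbers above and assuming, as the example does, that repeated positive results are independent given the subject's true status:

    def bayes_update(prior, sensitivity=0.95, false_positive=0.01):
        # Pr(vampire | +) via Bayes' rule
        p_positive = sensitivity * prior + false_positive * (1 - prior)
        return sensitivity * prior / p_positive

    p = 0.001                     # base rate of vampirism
    for test in range(1, 4):
        p = bayes_update(p)
        print(test, round(p, 3))  # 1: 0.087, 2: 0.9, 3: 0.999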
Worth noting that solution only works if the false positives are totally random, which is probably not true of many real world cases and would be pretty hard to work out.
Definitely. Real world adds lots of complexities and nuances, but I was just trying to make the point that it matters how those inferences compound. That we can't just conclude that compounding inferences decreases likelihood
Be careful with your description there, are you sure it doesn't apply to the Bayesian example (which was... illustrative...? And not supposed to be every possible example?)? We calculated f(f(f(x))), so I wouldn't say that this "doesn't depend on the previous 'test'". Take your chain, we can represent it with h(g(f(x))) (or (f∘g∘h)(x)). That clearly fits your case for when f=g=h. Don't lose sight of the abstractions.
So in your example you can apply just one test result at a time, in any order. And the more pieces of evidence you apply, the stronger your argument gets.
f = "The test(s) say the patient is a vampire, with a .01 false positive rate."
f∘f∘f = "The test(s) say the patient is a vampire, with a .000001 false positive rate."
In the chain example f or g or h on its own is useless. Only f∘g∘h is relevant. And f∘g∘h is a lot weaker than f or g or h appears on its own.
This is what a logic chain looks like, adapted for vampirism to make it easier to compare:
f: "The test says situation 1 is true, with a 10% false positive rate."
g: "If situation 1 then situation 2 is true, with a 10% false positive rate."
h: "If situation 2 then the patient is a vampire, with a 10% false positive rate."
f∘g∘h = "The test says the patient is a vampire, with a 27% false positive rate."
So there are two key differences. One is the "if"s that make the false positives build up. The other is that only h tells you anything about vampires. f and g are mere setup, so they can only weaken h. At best f and g would have 100% reliability and h would be its original strength, 10% false positive. The false positive rate of h will never be decreased by adding more chain links, only increased. If you want a smaller false positive rate you need a separate piece of evidence. Like how your example has three similar but separate pieces of evidence.
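A small sketch of the contrast being drawn here, using the 10% false positive rate from the chain example and assuming the steps' errors are independent:

    fp = 0.10

    # Chained "if" links: the conclusion is wrong if ANY link fired falsely,
    # so the chain's false positive rate only grows with length.
    chain_fp = 1 - (1 - fp) ** 3
    print(round(chain_fp, 3))     # 0.271 -- the ~27% above

    # Three separate tests of the SAME claim: all three must fire falsely
    # at once, so the combined false positive rate shrinks instead.
    repeated_fp = fp ** 3
    print(round(repeated_fp, 3))  # 0.001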
IDK, probably? I'm just trying to say that iterative inference doesn't strictly mean decreasing likelihood.
I'm not a virologist or whoever designs these kinds of medical tests. I don't even know the right word to describe the profession lol. But the question is orthogonal to what's being discussed here. I'm only guessing "probably" because usually having a good example helps in experimental design. But then again, why wouldn't the original test that we're using have done that already? Wouldn't that be how you get that 95% accurate test?
I can't tell you the biology stuff, I can just answer math and ML stuff and even then only so much.
Correct. And there's a lot of other assumptions. I did make a specific note that it was a simplified and illustrative example. And yes, in the real world I'd warn about being careful when making i.i.d. assumptions, since these assumptions are made far more than people realize.
I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.
I suppose where that becomes relevant here is that you can have very fancy parts on various ends, but if there's a piece in the middle that's wrong you're still gonna get shit results.
I think the reason this is true is mostly because of how people do things "on paper". We can get much more accurate with "on paper" modeling, but the amount of work increases very fast. So it tends to be much easier to just calculate things as if they are spherical chickens in a vacuum and account for error than it is to calculate including things like geometry, drag, resistance, and all that other fun jazz (for which you will still need to account for error/uncertainty, though it can now be smaller).
Which I think at the end of the day the important lesson is more how simple explanations can be good approximations that get us most of the way there but the details and nuances shouldn't be so easily dismissed. With this framing we can choose how we pick our battles. Is it cheaper/easier/faster to run a very accurate sim or cheaper/easier/faster to iterate in physical space?
> I don’t think it’s just (or even particularly) bad axioms
IME most people aren't very good at building axioms. I hear a lot of people say "from first principles" and it is a pretty good indication that they will not be. First principles require a lot of effort to create. They require iteration. They require a lot of nuance, care, and precision. And of course they do! They are the foundation of everything else that is about to come. This is why I find it so odd when people say "let's work from first principles" and then just state something matter of factly and follow from there. If you want to really do this you start simple, attack your own assumptions, reform, build, attack, and repeat.
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
>> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
This is why the OP is seeing this behavior. Because the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms, but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that computation increases exponentially as you converge on accuracy. These are strong indications, since they suggest whether they care more about getting the right answer or about being right. You also don't have to be very smart to detect this.
IME most people aren't very good at building axioms.
It seems you implying that some people are good building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there simply is not set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".
> you implying that some people are good building good axiom systems
How do you go from "most people aren't very good" to "this implies some people are really good"? First, that is just a really weird interpretation of how people speak (btw, "you're" not "you" ;) because this is nicer and going to be received better than "making axioms is hard and people are shit at it." Second, you've assumed a binary condition. Here's an example. "Most people aren't very good at programming." This is an objectively true statement, right?[0] I'll also make the claim that no one is a good programmer, but some programmers are better than others. There's no contradiction in those two claims, even if you don't believe the latter is true.
Now, there are some pretty good axiom systems. ZF and ZFC seem to be working pretty well. There are others too, and they are used for pretty complex stuff. They all work at least for "simple logic."
But then again, you probably weren't thinking of things like ZFC. But hey, that was kinda my entire point.
> there simply is not set of logical claims which can provide anything like certainty no matter how "good" someone is at "axiom creation".
I agree. I'd hope I agree, considering my username... But you've jumped to a much stronger statement. I hope we both agree that just because there are things we can't prove doesn't mean there aren't things we can prove. Similarly, I hope we agree that even if we couldn't prove anything to absolute certainty, that wouldn't mean we can't prove things to an incredibly high level of certainty, or that we can't show that something is more right than something else.
[0] Most people don't even know how to write a program. Well... maybe everyone can write a Perl program but let's not get into semantics.
I think I misunderstood that you talking of axiomatization of mathematical or related systems.
The original discussion was about the formulation of "axioms" about the real world ("the bus is always X minutes late" or more elaborate stuff). I suppose I should have considered that, with your username, you would consider the statement in terms of the formulation of mathematical axioms.
But still, I misunderstood you and you misunderstood me.
> you talking of axiomatization of mathematical or related systems.
Why do you think these are so different? Math is just a language in which we are able to formalize abstraction. Sure, it is pedantic as fuck, but that doesn't make it "not real world". If you want to talk about the bus always being late you just do this distributionally. Probabilities are our formalization around uncertainty.
We're talking about "rationalist" cults, axioms, logic, and "from first principles", I don't think using a formal language around this stuff is that much of a leap, if any. (Also, not expecting you to notice my username lol. But I did mention it because after the fact it would make more sense and serve as a hint to where I'm approaching this from).
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
That conclusion presupposes that rationality and empiricism are at odds or mutually incompatible somehow. Any rational position worth listening to, about any testable hypothesis, is hand in hand with empirical thinking.
In traditional philosophy, rationalism and empiricism are at odds; they are essentially diametrically opposed. Rationalism prioritizes a priori reasoning while empiricism prioritizes a posteriori reasoning. You can prioritize both equally but that is neither rationalism nor empiricism in the traditional terminology. The current rationalist movement has no relation to that original rationalist movement, so the words don't actually mean the same thing. In fact, the majority of participants in the current movement seem ignorant of the historical dispute and its implications, hence the misuse of the word.
That does compute with what I thought the "Rationalist" movement as covered by the article was about. I didn't peg them as pure a priori thinkers as you put it. I suppose my comment still holds, assuming the rationalist in this context refers to the version of "Rationalism" being discussed in the article as opposed to the traditional one.
(Note also how the context is French vs British, and the French basically lost with Napoleon, so the current "rationalists" seem to be more likely to be heirs to empiricism instead.)
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through.
'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.
That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)
From my perspective (and I have only glancing contact,) these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other codesmell these big-R rationalist groups have for me, and that which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc, I wonder if they necessarily furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
> While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
I suspect this is because consequentialism is the only meta-ethical framework that has any leg to stand on other than "because I said so". That makes it very attractive. The problem is you also can't build anything useful on top of it, because if you try to quantify consequences, and do math on them, you end up with the Repugnant Conclusion or worse. And in practice - in Effective Altruism/Longtermism, for example - the use of arbitrarily big numbers lets you endorse the Very Repugnant Conclusion while patting yourself on the back for it.
I actually think that the fact that rationalists use the term "steel manning" betrays a lack of charity.
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
I have tried to tell my legions of fanatic brainwashed adherents exactly this, and they have refused to listen to me because the wrong way is more fun for them.
Listening to other viewpoints is hard. Restating is a good tool to improve listening and understanding. I don't agree with this criticism at all, since that "prodigious intellect" bit isn't inherent to the term.
I was being snarky, but I think steelmanning does have one major flaw.
By restating the argument in terms that are most convincing to you, you may already be warping the conclusions of your interlocutor to fit what you want them to be saying. Charity is, "I will assume this person is intelligent and overlook any mistakes in order to try and understand what they are actually communicating." Steelmanning is "I can make their case for them, better than they could."
Of course this is downstream of the core issue, and the reason why steelmanning was invented in the first place. Namely, charity breaks down on the internet. Steelmanning is the more individualistic version of charity. It is the responsibility of people as individuals, not a norm that can be enforced by an institution or community.
One of the most annoying habits of Rationalists, and something that annoyed me with plenty of people online before Yudkowsky's brand was even a thing, is the assumption that they're much smarter than almost everyone else. If that is your true core belief, the one that will never be shaken, then of course you're not going to waste time trying to understand the nuances of the arguments of some pious medieval peasant.
For mistakes that aren't just nitpicks, for the most part you can't overlook them without something to fix them with. And ideally this fixing should be collaborative, figuring out if that actually is what they mean. It's definitely bad to think you simply know better or are better at arguing, but the opposite end of leaving seeming-mistakes alone doesn't lead to a good resolution either.
> to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?
At some point in the future, there won't be more people who will live in the future than live in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.
That said, if they really thought hard about this problem, they would have come to a different conclusion:
Actually, you could make the case that the population won't grow over the next thousand years, maybe even ten thousand years, but that's the short term and therefore unimportant.
To me it is disguised way of saying the ends justify the means. Sure, we murder a few people today but think of the utopian paradise we are building for the future.
From my observation, that "building the future" isn't something any of them are actually doing. Instead, the concept that "we might someday do something good with the wealth and power we accrue" seems to be the thought that allows the pillaging. It's a way to feel morally superior without actually doing anything morally superior.
A bit of longtermism wouldn’t be so bad. We could sacrifice the convenience of burning fossil fuels today for our descendants to have an inhabitable planet.
But that's the great thing about Longtermism. As long as a catastrophe is not going to lead to human extinction or otherwise specifically prevent the Singularity, it's not an X-Risk that you need to be concerned about. So AI alignment is an X-Risk we need to work on, but global warming isn't, so we can keep burning as much fossil fuel as we want. In fact, we need to burn more of them in order to produce the Singularity. The misery of a few billion present/near-future people doesn't matter compared to the happiness of sextillions of future post-humans.
Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.
However, people are bad at that.
I'll give an interesting example.
Hybrid cars. Modern proper HEVs[0] usually benefit their owners, both by virtue of better fuel economy and by being, in most cases, overall more reliable than a normal car.
And, they are better on CO2 emissions and lower our oil consumption.
And yet most carmakers as well as consumers have been very slow to adopt. On the consumer side we are finally at the point where we have hybrid trucks that get 36-40 MPG and can tow 4000 pounds or haul over 1000 pounds in the bed [1]; we have hybrid minivans capable of 35 MPG for transporting groups of people; and we have hybrid sedans getting 50+ MPG and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it has to get here.
The main 'misery' you experience at that point, is that you're driving the same car as a lot of other people and it's not as exciting [2] as something with more power than most people know what to do with.
And hell, as they say in investing, sometimes the market can stay irrational longer than you can stay solvent. E.g. was it truly worth it to Hydro-Quebec to sit on LiFePO4 patents the way they did, vs just figuring out licensing terms that got them a little bit of money and then properly accelerated adoption of hybrids/EVs/etc.?
[0] - By this I mean Something like Toyota's HSD style setup used by Ford and Subaru, or Honda or Hyundai/Kia's setup where there's still a more normal transmission involved.
[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.
[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
Not that these technologies don't have anything to bring, but any discussion that still presupposes that cars/trucks(/planes) (as we know them) still have a future is (mostly) a waste of time.
P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them ?
It's not like society is particularly good about it either, immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative ?
(Or they are just sad about all the failures ? But it's questionable that the "process" can work (with all its vivacity) without the "failures"...)
It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and imagined scenarios with big enough numbers (of future population) that the probability of the big-number future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves big enough imagined numbers to overwhelm its probability, because of their commitment to "shut up and calculate".
"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."
Has always really bothered me because it assumes that there are no negative impacts from the work you did to get the money. If you do a million dollars' worth of damage to the world and earn 100k (or a billion dollars' worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
> If you do a million dollars' worth of damage to the world and earn 100k (or a billion dollars' worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
You kinda summed up a lot of the world post industrial revolution there, at least as far as stuff like toxic waste (Superfund, anyone?) and climate change go. I mean, for goodness sake, let's just think about TEL and how they knew ethanol could work, but it just wasn't 'patentable'. [0] Or the "We don't even know the dollar amount because we don't have a workable solution" problem of PFAS.
[0] - I still find it shameful that a university is named after the man who enabled this to happen.
And not just that, but the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society, seems somewhat questionable.
Even with 'good' intentions, there is the implied statement that your ideas are better than everyone else's and so should be pushed like that. The whole thing is a self-satisfied ego-trip.
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Another annoying one is the simulation theory group. They know just enough about Physics to build sophisticated mental constructs without understanding how flimsy the foundations are or how their logical steps are actually unproven hypotheses.
Agreed. This one is especially annoying to me and dear to my heart, because I enjoy discussing the philosophy behind this, but it devolves into weird discussions and conclusions fairly quickly without much effort at all. I particularly enjoy the tenets of certain sects of buddhism and how they view these things, but you'll get a lot of people that are doing a really pseudo-intellectual version of the Matrix where they are the main character.
You might have just explained the phenomenon of AI doomsayers overlapping with ea/rat types, which I otherwise found inexplicable. EA/Rs seem kind of appallingly positivist otherwise.
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
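To put a rough number on that leakiness, here's a minimal sketch (the per-step confidence is a made-up figure, purely illustrative): even when every individual step looks very solid, a long chain of them is not.

    # Minimal sketch: how "slightly leaky" reasoning steps compound.
    # The 0.9 per-step confidence is an assumed, illustrative number.
    p_step = 0.9
    n_steps = 10
    p_chain = p_step ** n_steps
    print(f"{n_steps} steps at {p_step:.0%} each -> {p_chain:.0%} for the whole chain")  # ~35%

Ten steps that each feel 90% solid leave you with roughly a coin flip weighted against you, which is a long way from the certainty people project onto the conclusion.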
Deep Space 9 had an episode dealing with something similar. Superintelligent beings determine that a situation is hopeless and act accordingly. The normal beings take issue with the actions of the Superintelligents. The normal beings turn out to be right.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
Here's the thing, the goals of the terrorists weren't irrational.
People confuse "rational" with "moral". Those aren't the same thing. You can perfectly rationally do something that is immoral with a bad goal.
For example, if you value your life above all others, then it would be perfectly rational to slaughter an orphanage if a more powerful entity made that your only choice for survival. Morally bad, rationally correct.
I now feel the need to comment that this thread does illustrate an issue I have with the naming of the philosophical/internet community of rationalism.
One can very clearly be a rational individual or an individual who practices reason and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading lesswrong posts" is not "religious extremist striking the world trade center and committing an atrocious act of terrorism", it's "random person on the street."
And to preempt a specific response some may make to this, yes, the thread here is talking about rationalism as discussed in the blog post above as organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of, like, Spinoza and company. Very different things philosophically.
Islamic fundamentalism and cult rationalism are both involved in a “total commitment”, “all or nothing” type of thinking. The former is totally committed to a particular literal reading of scripture, the latter, to logical deduction from a set of chosen premises. Both modes of thinking have produced violent outcomes in the past.
Skepticism, in which no premise or truth claim is regarded as above dispute (or, in which it is always permissible and even praiseworthy to suspend one's judgment on a matter), is the better comparison with rationalist fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham's razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism too, along with much of the humanist movement that went back past Jerome's Latin Vulgate translation to the original Greek sources for the Bible.
The history of ideas is fun to read about. I am hardly an expert, but you may be interested in the history of Aristotelian rationalism, which gained prominence in the medieval West largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In the 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.
True skepticism is rare. It's easy to be skeptical only about beliefs you dislike or at least don't care about. It's hard to approach the 100th self-professed psychic with an honest intention to truly test their claims rather than to find the easiest way to ridicule them.
Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify their beliefs) [0]. The section towards the end that reports on his meeting one of his supposed ideological heroes for an extended period of time is particularly illuminating.
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
Hard disagree. People use rationality to support the beliefs they already have, not to change those beliefs. The internet allows everyone to find something that supports anything.
I do it. You do it. I think a fascinating litmus test is asking yourself this question: “When did I last change my mind about something significant?” For most people the answer is “never”. If we lived in the world you described, most people’s answers would be “relatively recently”.
That relies on two assumptions that I don't think are true at all:
1. Most people who follow these beliefs will pay attention to/care about the man behind the curtain.
2. Most people who follow these beliefs will change their mind when shown that the man behind the curtain is a charlatan.
If anything, history shows us the opposite. Even in the modern world, it's easy for people to see that other people's thought leaders are charlatans, very difficult to see that our own are.
> I immediately become suspicious of anyone who is very certain of something
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.
Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").
Marvin Minsky wrote forcefully [1] about this in The Society of Mind and went so far to say that trying to observe yourself (e.g. meditation) might be harmful.
Freud of course discovered a certain world of the unconscious but untrained [2] you would certainly struggle to explain how you know sentence S is grammatical and S' is not, or what it is you do when you walk.
If you did meditation or psychoanalysis or some other practice to understand yourself better it would take years.
[1] whether or not it is true.
[2] the "scientific" explanation you'd have if you're trained may or may not be true since it can't be used to program a computer to do it
Many arguments arise over the valuation of future money; see "discount function". [1] At one extreme are the rational altruists, who put it near 1.0; at the other are the "drill, baby, drill" people, who are much closer to 0.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
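A minimal sketch of what adding that noise term does (the discount rate, noise growth, and payoff are all assumed, illustrative numbers):

    import random

    # Discount a payoff whose estimate gets noisier the further out you forecast.
    def discounted(payoff, years, rate=0.03):
        return payoff / (1 + rate) ** years

    def noisy_forecast(payoff, years, noise_per_year=0.10):
        # assumed model: relative uncertainty grows linearly with the horizon
        return payoff * random.gauss(1.0, noise_per_year * years)

    random.seed(0)
    for years in (1, 10, 50):
        samples = [discounted(noisy_forecast(100.0, years), years) for _ in range(10_000)]
        mean = sum(samples) / len(samples)
        sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
        print(f"{years:>2} yr: mean {mean:6.1f}, spread {sd:6.1f}")

At 50 years out the spread dwarfs the mean: whatever you "optimize" at that horizon is dominated by noise, which is exactly the wrong-problem trap.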
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum without concrete evidence could convince me that fast-takeoff superintelligence is possible.
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
It doesn't require AI to be better than humans for AI to take over, because unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8.... then millions. All able to do the same things as humans (the assumption of AGI). Build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.
> It does not assume that progress will be in LLMs
If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.
> You have 2 AIs, then 4, then 8.... then millions
The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.
Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.
> But the thought experiment doesn't seem indefensible.
The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.
Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level will be on AI research, and on the difference between the effectiveness of an organization's average and best researchers, and the impact of an AI closing that gap and having the same research effectiveness as the best humans.
But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.
Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.
> The most powerful AI we have now is strictly hardware-dependent
Of course that's the case and it always will be - the cutting edge is the cutting edge.
But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.
> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]
I don't think this is at all unexamined. But I think it's risky to not consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, and an algorithm to get there, and no particular reason to believe we're optimal since that algorithm - evolution - did not optimise us for intelligence alone.
I agree. There's also the point of hardware dependence.
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
I think you can get a few more gigantic step functions' worth of improvement on the same hardware. For instance, LLMs don't have any kind of memory, short or long term.
An interesting point you make there: one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".
I often like to point out that Earth was already consumed by Grey Goo, and today we are hive-minds in titanic mobile megastructure-swarms of trillions of the most complex nanobots in existence (that we know of), inheritors of tactics and capabilities from a zillion years of physical and algorithmic warfare.
As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.
To take it a little further - if you stretch the conventional definition of intelligence a bit - we already assemble ourselves into a kind of collective intelligence.
Nations, corporations, clubs, communes -- any functional group of humans is capable of observing, manipulating, and understanding our environment in ways no individual human is capable of. When we dream of hive minds and super-intelligent AI it almost feels like we are giving up on collaboration.
There's a variant of this that argues that humans are already as intelligent as it's possible to be. Because if it's possible to be more intelligent, why aren't we? And a slightly more reasonable variant that argues that we're already as intelligent as it's useful to be.
"Because if it's possible to be more intelligent, why aren't we?"
Because deep abstract thoughts about the nature of the universe and elaborate deep thinking were maybe not as useful while we were chasing lions and buffaloes with a spear?
We just had to be smarter than them. Which included finding out that tools were great, learning about the habits of the prey, and optimizing hunting success. Those who were smarter in that capacity had a greater chance of reproducing. Those who merely excelled at abstract thinking likely did not live that long.
Is it just dumb luck that we're able to create knowledge about black holes, quarks, and lots of things in between which presumably had zero evolutionary benefit before a handful of generations ago?
Basically yes it is luck, in the sense that evolution is just randomness with a filter of death applied, so whatever brains we happen to have are just luck.
The brains we did end up with are really bad at creating that sort of knowledge. Almost none of us can. But we’re good at communicating, coming up with simplified models of things, and seeing how ideas interact.
We’re not universe-understanders, we’re behavior modelers and concept explainers.
I wasn't referring to the "luck" factor of evolution, which is of course always there. I was asking whether "luck" is the reason that the cognitive capabilities which presumably were selected for also came with cognitive capabilities that almost certainly were not selected for.
My guess is that it's not dumb luck, and that what we evolved is in fact general intelligence, and that this was an "easier" way to adapt to environmental pressure than to evolve a grab bag of specific (non-general) cognitive abilities. An implication of this claim would be that we are universe-understanders (or at least that we are biologically capable of that, given the right resources and culture).
In other words, it's roughly the same answer for the question "why do washing machines have Turing complete microcontrollers in them when they only need to do a very small number of computing tasks?" At scale, once you know how to implement general (i.e. Turing-complete and programmable) computers it tends to be simpler to use them than to create purpose-built computer hardware.
I don't think the logic follows here. Nor does it match evidence.
The premise is ignorant of time. It is also ignorant of the fact that we know there are a lot of things we don't know. And that's all before we consider other factors, like whether there are limits and physical barriers, or many other things.
While I'm deeply and fundamentally skeptical of the recursive self-improvement/singularity hypothesis, I also don't really buy this.
There are some pretty obvious ways we could improve human cognition if we had the ability to reliably edit or augment it. Better storage & recall. Lower distractibility. More working memory capacity. Hell, even extra hands for writing on more blackboards or putting up more conspiracy theory strings at a time!
I suppose it might be possible that, given the fundamental design and structure of the human brain, none of these things can be improved any further without catastrophic side effects—but since the only "designer" of its structure is evolution, I think that's extremely unlikely.
Some of your suggestions, if you don't mind my saying, seem like only modest improvements — akin to Henry Ford's quote “If I had asked people what they wanted, they would have said a faster horse.”
To your point though, an electronic machine is a different host altogether with different strengths and weaknesses.
Yep; from the perspective of evolution (and more specifically, those animal species that only gain capability generationally by evolutionary adaptation of instinct), humans are the recursively self-(fitness-)improving accident.
Our species-aggregate capacity to compete for resources within the biosphere went superlinear in the middle of the previous century; and we've had to actively hit the brakes on how much of everything we take since then, handicapping ourselves. (With things like epidemic obesity and global climate change being the result of us not hitting those brakes quite hard enough.)
Insofar as a "singularity" can be defined on a per-agent basis, as the moment when something begins to change too rapidly for the given agent to ever hope to catch up with / react to new conditions — and so the agent goes from being a "player at the table" to a passive observer of what's now unfolding around them... then, from the rest of our biosphere's perspective, they've 100% already witnessed the "human singularity."
No living thing on Earth besides humans now has any comprehension of how the world has been or will be reshaped by human activity; nor can ever hope to do anything to push back against such reshaping. Every living thing on Earth other than humans, will only survive into the human future, if we humans either decide that it should survive, and act to preserve it; or if we humans just ignore the thing, and then just-so-happen to never accidentally do anything to wipe it from existence without even noticing.
I think it's valuable to challenge this popular sentiment every once-in-a-while. Sure, it's a good poetic metaphor, but when you really start comparing their "lifecycle" and change-mechanisms to the swarming biological nanobots that cover the Earth, a bunch of critical aspects just aren't there or are being done to them rather than by them.
At least for now, these machines mostly "evolve" in the same sense that fashionable textile pants "evolve".
> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
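For what it's worth, the Amdahl-style bound I'm gesturing at looks like this (a toy sketch; the 50% figure for how much of the work the wand can touch is an assumption, not a measurement):

    # Overall speedup when only a fraction of the work can be accelerated.
    def overall_speedup(p_improvable, factor):
        return 1.0 / ((1.0 - p_improvable) + p_improvable / factor)

    for factor in (2, 10, 100, 1_000_000):
        print(factor, round(overall_speedup(0.5, factor), 3))
    # Even an effectively infinite speedup of half the work caps the total gain near 2x.

The more of the pipeline that is data collection, physical experiments, or plain old waiting, the lower that cap sits.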
> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static property of a person (someone who is high IQ, in their minds, must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions). There is potentially a belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; there is no sense of it as something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least, are the currently best-supported interpretation available. So I don't think they're assumptions so much as simply current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).
Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to are maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.
Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.
Nonetheless I do think it's correct to say that the rationalists think intelligence is a real measurable thing, and that although in humans it might be a set of skills that correlate and maybe in AIs it's a different set of skills that correlate (such that outperforming humans in IQ tests is impressive but not definitive), that therefore AI progress can be measured and it is meaningful to say "AI is smarter than humans" at some point. And that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.
It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley, and VCs throw money at it.
I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're at a point already where people could try telling Claude or some such to have a go, even if not quite at a point it would work. But I imagine take off would be very gradual. It would be constrained by available computing resources and probably only comparably good to current human researchers and so still take ages to get anywhere.
I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.
I don't think it's vibes rather than my thinking about the problem.
If you look at the "legitimate concerns" none are really deal breakers:
>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
I'm willing to believe it will be slow, though maybe it won't.
>LLMs already seem to have hit a wall of diminishing returns
Who cares - there will be other algorithms
>What if there are several paths to different kinds of intelligence with their own local maxima
well maybe, maybe not
>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
well - you can make another one if the first does that
Those are all potential difficulties with self improvement, not reasons it will never happen. I'm happy to say it's not happening right now but do you have any solid arguments that it won't happen in the next century?
To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.
On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.
We have an existence proof for intelligence that can improve AI: humans.
If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.
Are people really that sceptical that AI will get to human level intelligence?
Is that an insane belief worthy of being a primary example of a community not thinking clearly?
Come on! There is a good chance AI will recursively self-improve! Those poo pooing this idea are the ones not thinking clearly.
> We have an existence proof for intelligence that can improve AI: humans.
I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.
We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.
Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.
----
And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.
Consider that even the named phenomenon is sloppy: "recursive self improvement" does not imply "self improvement without bounds". This is the "what if you hit diminishing returns and never get past it" claim. Absolutely no justification for the jump, ever, among AI boosters.
> If AI ever gets to human-level intelligence
This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very very shaky. AI is vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if the capability is well-defined we may even be able to reason about how stable it is relative to how LLMs work.
For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue improving as techniques advance, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more processing time, and this reduced performance. Same with adding extraneous details; sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.
> Is that an insane belief worthy of being a primary example of a community not thinking clearly?
I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.
>"recursive self improvement" does not imply "self improvement without bounds"
I was thinking that. I mean, if you look at something like AlphaGo, it was based on human training, and then they made one (AlphaZero, I think) which learned by playing against itself and got very good, but not infinitely good, as it was still constrained by hardware. I think with chess the best human is about 2800 on the Elo scale and computers about 3500. I imagine self-improving AI would be like that - smarter than humans but not infinitely so, and constrained by hardware.
Also like humans still play chess even if computers are better, I imagine humans will still do the usual kind of things even if computers get smarter.
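To make the chess comparison concrete, the standard Elo expected-score formula gives a feel for what a 700-point gap means (this is the usual formula; the ratings are the rough figures mentioned above):

    # Expected score of player A against player B under the Elo model.
    def expected_score(rating_a, rating_b):
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    print(round(expected_score(2800, 3500), 3))  # ~0.017

So the best human scores under 2% against the engine: decisively superhuman, yet still bounded and still running on finite hardware.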
I'm surprised not to see much pushback on your point here, so I'll provide my own.
We have an existence proof for intelligence that can improve AI: humans can do this right now.
Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Do you not think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening, our LLMs are a long way from the pure text prediction engines of four or five years ago.
There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.
So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.
On your specific points:
> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?
Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios; it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.
I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:
> 2. LLMs already seem to have hit a wall of diminishing returns
This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of tasks they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.
Do not confuse GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase - being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology, if we look at actual metrics (that don't have a natural cap like evals that max out at 100%, these are not good for measuring progress in the long-run) we see steady exponential progress.
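To be clear about what "an unbent exponential" would imply if it simply continued, here's a toy extrapolation (the starting horizon and doubling time are assumptions for illustration, not METR's exact published numbers):

    # Extrapolate a task-horizon metric that doubles at a fixed rate.
    doubling_months = 7.0      # assumed doubling time
    horizon_hours = 1.0        # assumed current task horizon
    for months in (0, 12, 24, 36, 48):
        h = horizon_hours * 2 ** (months / doubling_months)
        print(f"+{months:2d} months: ~{h:7.1f} hours")

Whether the curve actually keeps that shape is of course the whole debate, but "it has not bent yet" is an empirical statement, not hype.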
> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.
> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.
> Knowing Yudowsky I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory
Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.
Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.
Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need to have strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and that it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.
Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.
It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.
Yudkowsky seems to believe in fast take off, so much so that he suggested bombing data centers. To more directly address your point, I think it’s almost certain that increasing intelligence has diminishing returns and the recursive self improvement loop will be slow. The reason for this is that collecting data is absolutely necessary and many natural processes are both slow and chaotic, meaning that learning from observation and manipulation of them will take years at least. Also lots of resources.
Regarding LLM’s I think METR is a decent metric. However you have to consider the cost of achieving each additional hour or day of task horizon. I’m open to correction here, but I would bet that the cost curves are more exponential than the improvement curves. That would be fundamentally unsustainable and point to a limitation of LLM training/architecture for reasoning and world modeling.
Basically I think the focus on recursive self improvement is not really important in the real world. The actual question is how long and how expensive the learning process is. I think the answer is that it will be long and expensive, just like our current world. No doubt having many more intelligent agents will help speed up parts of the loop but there are physical constraints you can’t get past no matter how smart you are.
How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?
At some point learning can occur with "self-play", and I believe this is already happening with LLMs to some extent. Then you're not limited by imitating human-made data.
If learning something like software development or mathematical proofs, it is easier to verify whether a solution is correct than to come up with the solution in the first place, many domains are like this. Anything like that is amenable to learning on synthetic data or self-play like AlphaGo did.
I can understand that people who think of LLMs as human-imitation machines, limited to training on human-made data, would think they'd be capped at human-level intelligence. However I don't think that's the case, and we have at least one example of superhuman AI in one domain (Go) showing this.
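The shape of that loop is simple enough to sketch (the "model" and "checker" objects here are hypothetical stand-ins, not any real API; the point is only that verification is the cheap side):

    # Sketch of a generate-and-verify, self-play-style training round.
    def self_improvement_round(model, checker, problems):
        accepted = []
        for problem in problems:
            candidate = model.propose(problem)        # cheap to sample many candidates
            if checker.verify(problem, candidate):    # cheaper to check, e.g. run tests or a proof checker
                accepted.append((problem, candidate))
        model.finetune(accepted)                      # train on the verified wins only
        return len(accepted) / len(problems)

Anything with a reliable, automatic verifier can feed itself training data this way; the hard question is how much of the world falls into that category.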
Regarding cost, I'd have to look into it, but I'm under the impression costs have been up and down over time as models have grown but there have also been efficiency improvements.
I think I'd hazard a guess that end-user costs have not grown exponentially like time horizon capabilities, even though investment in training probably has. Though that's tricky to reason about because training costs are amortised and it's not obvious whether end user costs are at a loss or what profit margin for any given model.
On the fast-slow takeoff - Yud does seem to believe in a fast takeoff, yes, but it's also one of the oldest disagreements in rationality circles; he disagreed about it with his main co-blogger on the original rationalist blog, Overcoming Bias. There's some discussion of this and more recent disagreements here [1].
AlphaGo showed that RL+search+self play works really well if you have an easy to verify reward and millions of iterations. Math partially falls into this category via automated proof checkers like Lean. So, that’s where I would put the highest likelihood of things getting weird really quickly. It’s worth noting that this hasn’t happened yet, and I’m not sure why. It seems like this recipe should already be yielding results in terms of new mathematics, but it isn’t yet.
That said, nearly every other task in the world is not easily verified, including things we really care about. How do you know if an AI is superhuman at designing fusion reactors? The most important step there is building a fusion reactor.
I think a better reference point than AlphaGo is AlphaFold. Deepmind found some really clever algorithmic improvements, but they didn't know whether they actually worked until the CASP competition. CASP evaluated their model on new X-ray crystal structures of proteins. Needless to say, getting X-ray protein structures is a difficult and complex process. Also, they trained AlphaFold on thousands of existing structures that were accumulated over decades and required millennia of graduate-student-hours to find. It's worth noting that we have very good theories for all the basic physics underlying protein folding, but none of the physics-based methods work. We had to rely on painstakingly collected data to learn the emergent phenomena that govern folding. I suspect that this will be the case for many other tasks.
> The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement.
This is also the weirdest thing, and I don't think they even know the assumption they are making. It assumes that there is infinite knowledge to be had. It also ignores the fact that we have exceptionally strong indications that accuracy (truth, knowledge, whatever you want to call it) has exponential growth in complexity. These may be wrong assumptions, but we at least have evidence for them, and much more for the latter. So if objective truth exists, then that intelligence gap is very, very different. One way they could be right is for this to be an S-curve with us humans at the very bottom of it. That seems unlikely, though very possible. But they always treat this as linear or exponential, as if our understanding relative to the AI will be like an ant trying to understand us.
The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them not to come back and to tell the others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring the risks and concerns of existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?
I think of it more like visualizing a fractal on a computer. The more detail you try to dig down into the more detail you find, and pretty quickly you run out of precision in your model and the whole thing falls apart. Every layer further down you go the resource requirements increase by an exponential amount. That's why we have so many LLMs that seem beautiful at first glance but go to crap when the details really matter.
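You can see the precision wall directly (toy numbers: an 800-pixel-wide view of the complex plane; the center value is just an arbitrary point near the Mandelbrot set):

    # Where double precision runs out when zooming in on a fractal.
    center = -0.743643887037151
    for zoom in (1e3, 1e9, 1e13, 1e16):
        pixel_width = 4.0 / (800 * zoom)   # width of one pixel at this zoom level
        print(f"zoom {zoom:.0e}: adjacent pixels identical? {center + pixel_width == center}")

At the deepest zoom neighbouring pixels collapse to the same float, and past that point every extra level of detail means switching to arbitrary-precision arithmetic, which is where the resource blow-up kicks in.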
So many things make no sense in this comment that I feel like there's a 20% chance this is a mid-quality GPT.
And so much interpolation effort, but starting from hearsay instead of primary sources. Then the threads stop just before seeing the contradiction with the other threads.
I imagine this is how we all reason most of the time, just based on vibes :(
Sure, I wrote a lot and it's a bit scattered. You're welcome to point to something specific but so far you haven't. Ironically, you're committing the error you're accusing me of.
I'm also not exactly sure what you mean, because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to show something is wrong than to show it's right (or, in our case, to give evidence against it rather than for it, since no one is proving anything).
So the first part I'm saying we have to consider two scenarios. Either intelligence is bounded or unbounded. I think this is a fair assumption, do you disagree?
In an unbounded case, their scenario can happen, so I don't address that. But if you want me to, sure: it's because I have no reason to believe information is unbounded when everything around me suggests that it is bounded. Maybe start with the Bekenstein bound. Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.
In a bounded case it can happen, but we need to understand what conditions that requires. There are a lot of possible functions, but I went with an S-curve for simplicity and familiarity. It'll serve fine (we're on HN man...) for any monotonically increasing case (or even non-monotonic, it just needs to tend that way).
So think about it. Change the function if you want, I don't care. But if intelligence is bounded, then if we're x times more intelligent than ants, where on the graph do we need to be for another thing to be x times more intelligent than us? There aren't a lot of opportunities for that even to happen. It requires our intelligence (on that hypothetical scale) to be pretty similar to an ant's. What cannot happen is for the ant to be in the tail of that function and for us to be past the inflection point (halfway). There just isn't enough space on that y-axis for anything to be x times more intelligent. This doesn't completely reject that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human to ant difference] more intelligent than us" argument to follow, it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
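Here's the arithmetic of that constraint as a toy sketch (every number is made up; "intelligence" as one bounded scalar is exactly the simplification I said I'm using):

    # Bounded scale: can "AI is to us as we are to ants" fit under the cap?
    M = 100.0                       # assumed upper bound of the scale
    ant = 0.5                       # assumed ant level
    ratio = 20.0                    # assumed human-to-ant ratio
    human = ant * ratio             # 10.0
    ai_needed = human * ratio       # 200.0
    print(ai_needed, ai_needed <= M)  # 200.0 False: it doesn't fit
    # For the same ratio to fit, humans would have to sit at or below M / ratio = 5.0,
    # i.e. way down near the bottom of the curve.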
Yeah, I'll admit that this is a very naïve model but again, we're not trying to say what's right but instead just say there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.
The second part I can make much easier to understand.
Yes, there are bad smart people, but look at the smartest people in history. Did they seek power or wish to harm? Most of the great scientists did not. A lot of them were actually quite poor, and many even died fighting persecution.
So we can't conclude that greater intelligence results in greater malice. This isn't hearsay, I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...
> starting from hearsay
I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them and I gave you enough information to do so. You now have more. Ask gpt for links, I don't care, but people should stop worshiping Yud
And about this second comment, I agree that intelligence is bounded.
We can discuss how much more intelligence is theoretically possible, but even if we limit ourselves to extrapolation from human variance (the agency of Musk, the math smarts of von Neumann, the manipulativeness of Trump, etc.), and add a little more speed and parallelism (100 times faster, 100 copies cooperating), then we can get pretty far.
Also, I agree we are all pretty fucking dumb and cannot make this kind of prediction, which is actually one very important point in rationalist circles: doom is not certain, but p(doom) looks uncomfortably high. How lucky do you feel?
>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
...which is why we should be careful not to rush full-speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement. As the rationalist argument goes.
BTW you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some you're accusing rationalists of holding.
Yeah, to compare Yudkowsky to Hubbard: I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!", and I'm scratching my head; it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2]. But hey, people fell for Carlos Castaneda, who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico, but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school and you would have heard in school too if you went to school, or would have read if you read a lot.
I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of the Sequences -- not that it means you are stupid, but, like Dianetics, the Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you'd think people would dismiss the Sequences given that it's a frickin' Harry Potter fanfic author writing them, but I think it's like the 419 scam email riddled with typos, which is meant to drive the critical thinker away and, ironically in the case of the Sequences, to keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, transgender people who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution rather than to seek opportunity
Well, there is "well read" and there is "educated", and they aren't the same thing. I started reading when I was three and checked out ten books a week from the public library throughout my youth. I was well read in psychology, philosophy and such long before I went to college -- I got a PhD in a STEM field, so I didn't read a lot of that stuff for classes [1], but I still read a lot of that stuff.
Perhaps the reason why Stanford and Oxford students are impressed by that stuff is that they are educated but not well read, which has a few angles: STEM privileged over the humanities, the rise of dyslexia culture, and a shocking level of incuriosity in "nepo baby" professors [2] who are drawn to the profession not because of a thirst for knowledge but because it's the family business.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
Epistemological skepticism sure is a belief. A strong belief on your side?
I am profoundly sure: I am certain I exist and that a reality outside myself exists. Worse, I strongly believe that knowing this external reality is possible and desirable, and that our knowledge of it can be accurate.
It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.
It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.
But the world is way more complex than the models we used to derive those "first principles".
It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.
Almost any theory of everything has a little perpetual motion machine at its nexus. These can be fascinating to the mind.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes from where once the user learned to say hello to a stranger, their comfort zone has expanded to an area that their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that comes into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag and move on, while still leaving room for someone who shows remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.
If I remember my Gellius, it was the Academic Skeptics who claimed that the only certainty was uncertainty; the Pyrrhonists, in opposition, denied that one could be certain about the certainty of uncertainty.
I do dimly perceive
that while everything around me is ever-changing,
ever-dying, there is,
underlying all that change,
a living power
that is changeless,
that holds all together,
that creates,
dissolves,
and recreates.
My thought as well! I can't remember names at the moment, but there were some cults that spun off from Socrates. Unfortunately they also adopted his practice of never writing anything down, so we don't know a whole lot about them
A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.
I once saw a discussion about how people should not have kids, since it's by far the biggest increase in your lifetime carbon footprint (>10x the effect of going vegan, etc.), get driven all the way to advocating genocide as a way of carbon footprint minimization.
> I once saw a discussion about how people should not have kids, since it's by far the biggest increase in your lifetime carbon footprint (>10x the effect of going vegan, etc.), get driven all the way to advocating genocide as a way of carbon footprint minimization.
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.
It also entirely subjugates the economic realities that we (at least currently) live in to the future health of the planet. I care a great deal about the Earth and our environment, but the more I've learned, the more I've realized that anyone advocating for focusing on one without considering the impact on the other is primarily following a religion.
> It also entirely subjugates the economic realities that we...
To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on earth. In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.
> advocating for focusing on one... is primarily following a religion
Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many-step thinker, the long-term fecundity of our species might feel more important than any level of short-term financial motivation.
> To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on earth.
Well, if they choose to see me as trying to subjugate the world's health to my own economic well-being (despite the fact that I advocate policies that would harm me personally in the name of climate sustainability), then we're already starting the discussion from bad faith (literally they are already assuming bad faith on my part). I'm at the point where I don't engage with bad faith arguments because they just end up in frustration on both sides. This whole modern attitude of "if you disagree with me then you must be evil" thing is (IMHO) utter poison to our culture and our democracy, and the current resident of the White House is a great example of where that leads.
> In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.
Yeah, for about 3 days, until people start getting hungry, or, less extreme, until they start losing their jobs and their homes, or, even longer term, when they start to realize that they won't be able to retire and/or that they are leaving their kids a much worse situation than they themselves had (much worse than the current dichotomy between Boomers and Millennials/Zoomers). Ignoring or disregarding Maslow's hierarchy of needs is a sure way to be surprised and rejected by the people. We know that even respectable people will often turn to violence (including cannibalism) when they get hungry or angry enough. We're not going to be able to save the planet if there's widespread violence.
> Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many-step thinker, the long-term fecundity of our species might feel more important than any level of short-term financial motivation.
I think this actually points at our misunderstanding (I know you're playing devil's advocate, so this isn't addressed to you personally, rather to your current presentation :-) ). I'm not talking about short-term financial or even economic motivation. I'm looking medium to long term, the same scale that I think needs to be considered for the planet. Now, that said, banning all fossil fuels tomorrow and causing a sweeping global depression in the short term is something I would radically oppose, because it would cause immense suffering, I don't believe it would make much of a dent in the climate long-term (as it would quickly be reversed under the realities of politics), and it would absolutely harm the lower income brackets to a much greater proportional extent than the upper income brackets, who already have solar panels and are often capable of going off-grid. Though even they will still run out of food when the trucking companies aren't able to restock local grocery store shelves...
Not everyone believes that the purpose of life is to make more life, or that having been born onto team human automatically qualifies team human as the best team. It's not necessarily unfortunate.
I am not a rationalist, but rationally, that whole "the meaning of life is human fecundity" shtick is after-school-special tautological nonsense, and that seems to be the assumption buried in your statement. Try defining what you mean without causing yourself some sort of recursion headache.
> their child might wind up..
They might also grow up to be a normal human being, which is far more likely.
> if Norman Borlaug's parents had decided to never have kids
Again, this would only have mattered if you consider the well being of human beings to be the greatest possible good. Some people have other definitions, or are operating on much longer timescales.
> People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2.
All else equal, it would be better to spread those chances across a longer period of time at a lower population with lower carbon use.
Are you familiar with the Ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think a lot of what you're talking about breaks down to this. Tl;dr: it's a fallacy of substitution. Small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but keep arguing as though you said the original thing. In the video the example is "senator doxxes a political opponent", but on inspection "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".
Each change is arguably equivalent, and it seems logical that if x = y then you can put y anywhere you have x, but after all of the changes are applied, the argument that emerges is definitely different from the one before the substitutions were made. Communities that pride themselves on being extra rational seem especially subject to this, because it has all the trappings of rationalism but enables squishy, feely arguments.
There are certain things I am sure of even though I derived them on my own.
But I constantly battle-tested them against other smart people's views, and only after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence, I have also found a lot of times the people who disagreed with what I thought had to be the case, later came to regret it because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted to at least a critical mass. Usually, they fly in the face of what happens normally in society. People don’t see how their strategies and lives are shaped by the technology and social norms around them.
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
"genetically engineers high fructose corn syrup into everything"
This phrase is nonsense, because HFCS is a chemical process applied to normal corn after the harvest. The corn may be a GMO but it certainly doesn't have to be.
All the fruits on the list are engineered for properties other than sweetness.
The term you're looking for is "bred". Fruits have been bred to be sweeter, and this has been going on a long time. Corn is bred for high protein or high sugar, but the sweet corn is not what's used for HFCS.
Personally, I think the recent evidence shows that the problem is not so much that fruit is too sweet, but that everything is made to be addictive. Satiety signals are lost or distorted, and we are left with diseases of excess consumption.
Well, either way, you agree with me. Government and corporations work together and distract the individual by telling them they can fix the downstream situation in their own, private way.
A logical argument is only as good as its presuppositions. Laying siege to your own assumptions before reasoning from them tends toward a more beneficial outcome.
Another issue with "thinkers" is that many are cowards; whether they realize it or not, a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I said it's anti-intellectual I would be only partially correct, but it's worse than that, imo. You might be coming across "smart people" who claim to know nothing "for sure", which is in itself a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes, btw, avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who know that they can't know much of anything as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say "profoundly unsure", which reads to me like "can't really ever know", which is a sure truth claim, not a relative or comparative claim as many would say, which is a sad attempt to side-step the absolute reality of their statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth. What I do not see in nature is the existence, or even the notion, of the relative on its own, as at every relative comparison there is an absolute holding up the comparison.

One simple example is heat. Hot is relative, yet it is also objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot", that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough"; the absolute still remains, which is heat. Relativistic thought is a game of comparisons and relations, not of making absolute claims; the only absolute claim is that there is no absolute claim to the relativist. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.
If you know nothing, you are not wise, you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us; we are sure of many things that took many years of debate among smart people to arrive at these sure conclusions. There was a time when many things we now accept were "not known", but they were observed with enough time and effort by brilliant people. That's why we have scientists, teachers, philosophers and journalists.
I encourage you, the next time you find a "smart" person who is unsure of their beliefs, to kindly encourage them to be less lazy and to challenge their absolutes. If they deny that the absolute could be found, then you aren't dealing with a "smart" person; you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute, or it fails to function in any logical framework. You can, with enough thought, good data, and enough time to let things steep, find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you should improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there, not a reason to sequester yourself away in the comfortable, unsure hell that many live in till they die.
The beauty of absolute truth is that you can believe absolutes without understanding the entirety of the absolute. I know gravity exists but I don't fully know how it works. Yet I can be absolutely certain it acts upon me, even if I only understand a part of it. People should know what they know, study it until they do, and not make sure claims beyond what they know until they have the prerequisite absolute claims to support the broader ones, which will only ever be as sure as the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.
It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
First definition, just in case it still isn't obvious.
> I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress.
Someone was wrong on the Internet! Just don't want other people getting the wrong idea. Good fun regardless.
I get the impression that these people desperately want to study philosophy but for some reason can't be bothered to get formal training because it would be too humbling for them. I call it "small fishbowl syndrome," but maybe there's a better term for it.
The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
It was a while ago, but take the infamous 2006 rape case at Duke University. If you check out coverage of that case, you get the impression that every faculty member who joined in the hysteria was from some humanities department, including philosophy. And quite a few of them refused to change their minds even as the prosecuting attorney was being charged with misconduct. Compare that to Socrates' behavior during the trial of the admirals in 406 BC.
Meanwhile, what meager resistance that group faced seems to have come from economists, natural scientists, or legal scholars.
I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Which group of people giving modern training in philosophy should we judge the field by? If they can't use it correctly in such a basic case then who can?
> Did the Duke philosophy teachers claim they were using philosophy to determine if someone was raped?
I don't think that matters very much. If there's a strong enough correlation between being a reactive idiot and the department you're in, it makes a bad case for enrolling in that realm of study for educational motives. It's especially bad when the realm of study is directly focused on knowledge, ethics, and logic.
Note the "if" though, I haven't evaluated the parent's claims. I'm just saying it doesn't matter if they said they used philosophy. It reflects on philosophy as a study, at least the style they do there.
How much that affects other colleges is iffier, but it's not zero.
One week ago, if I had asked you "how do we determine if modern philosophy is useful?",
would you have pondered for a little while, then responded, "Find out how many philosophers commented on the Duke rape case of 2006 and what their opinions were, then we'll know"?
Never in a million years. But if you said the departments were very disproportionately represented on different sides, I would think the main reasons would be either random cliques or that it shows something about critical thinking skills taught by those professors, or both, and I would be interested to hear more with the idea that I might learn something deeper than gossip.
Often, after you've figured out who's guilty, you'd need to look for more evidence until you find something that the jury can understand and the defense counsel can't easily argue against.
I've seen people make arguments against the value of modern academic philosophy based on their experience with professors or with whatever sampling of writings they've come across. They usually get nowhere.
That's why I wanted to ground this discussion in a specific event.
No. The fact that they were wrong is almost irrelevant.
The faculty denounced the students without evidence, judged the case through their emotions and their preconceived notions, and refused to change their minds as new evidence emerged. Imagine having an academic discussion on a difficult ethical issue with such a teacher...
And none of that would have changed even if there somehow had been a rape-focused conspiracy among the students of that university. (Though the problem would have been significantly less obvious.)
I figure there are two sides to philosophy. There's the practical aspect of trying to figure things out, like what matter is made of: maybe it's earth, water, air, and fire, as the ancient Greeks proposed? How could we tell? Maybe an experiment? This stuff, while philosophical, leads on to knowledge a lot of the time, but then it gets called science or whatever. Then there's studying what philosophers say and have said about stuff, which is mostly useless, like a critique of Hegel's discourse on the four elements or something.
I'm a fan of practical philosophical questions like how does quantum mechanics work or how can we improve human rights, and not into the philosophers-talking-about-philosophers stuff.
I'm not sure what you are talking about. I have to admit, I mostly wrote my comment from my recollections, and it's a case from 20 years ago that I barely paid attention to until after its bizarre conclusion. But looking through Wikipedia's articles on the case[1], it doesn't seem I'm that far from the truth.
I guess I should have limited my statement about resisting mob justice to the economists at that university, as the other departments merely didn't sign on to the public letter of denunciation?
It's weird that Wikipedia doesn't give a percentage of signatories of the "Group of 88" letter from the philosophy department, but several of the notable signatories are philosophers.
Edit: Just found some articles claiming that a chemistry professor by the name of Stephen Baldwin was the first to write to the university newspaper condemning the mob.
Couldn't you take this same line of reasoning and apply it to the rationalist group from the article who killed a bunch of people, and conclude that you shouldn't become a rationalist because you probably kill people?
Yep. Though I'd rather generalize that to "The ethical teachings of Rationalists (Shut up and calculate!) can lead you to insane behavior."
And you wouldn't even need to know about the cult to reach that conclusion. One should be able to find crazy stuff by a cursory examination of the "Sequences" and other foundational texts. I think I once encountered something about torture and murder being morally right, if it would prevent mild inconvenience to a large enough group of people.
Philosophy is interesting in how it informs computer science and vice-versa.
Mereological nihilism and weak emergence are interesting and help protect against many forms of obsessive type-level and functional cargo-culting.
But then in some areas philosophy is woefully behind, and you have philosophers pooh-poohing intuitionism when any software engineer working on a sufficiently federated or real-world sensor/control system borrows constructivism into their classical language in order not to kill people (Agda is interesting, of course). Intermediate logic is clearly empirically true.
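A toy illustration of that habit (the names and the three-valued reading are mine, purely hypothetical, not from any particular system): in a federated sensor setup you often can't treat "not known to be unsafe" as "safe", which is in effect declining the law of the excluded middle at the value level.

    # Hypothetical sketch: a reading can be YES, NO, or simply not known yet.
    # Keeping "unknown" as a first-class case, instead of collapsing it to a
    # boolean, is the constructivist habit borrowed into ordinary code.
    from enum import Enum

    class Safe(Enum):
        YES = 1
        NO = 2
        UNKNOWN = 3   # no evidence either way yet

    def door_interlock(reading: Safe) -> str:
        # the classical shortcut "if not unsafe, open" would open on UNKNOWN
        if reading is Safe.YES:
            return "open"
        if reading is Safe.NO:
            return "stay closed"
        return "stay closed and re-query"   # act only on positive evidence

    print(door_interlock(Safe.UNKNOWN))   # stay closed and re-query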
It's interesting that people don't understand the non-physicality of the abstract, and you have people serving the abstract instead of the abstract being used to serve people. People confusing the map for the terrain is such a deeply insidious issue.
I mean all the lightcone stuff: like, you can't predict ex ante which agents will be keystones in beneficial causal chains, so it's such a waste of energy to spin your wheels on.
Imagine that you're living in a big scary world, and there's someone there telling you that being scared isn't particularly useful, that if you slow down and think about the things happening to you, most of your worries will become tractable and some will even disappear. It probably works at first. Then they sic Roko's Basilisk on you, and you're a gibbering lunatic 2 weeks later...
Nature abhors a vacuum. After the October Revolution, the genuine study of the humanities was extinguished in Russia and replaced with the mindless repetition of rather inane doctrines. But people with awakened and open minds would always ask questions and seek answers.
Those would, of course, be people with no formal training in history or philosophy (as the study of history where you aren't allowed to question Marxist doctrine would be self-evidently useless). Their training would be in the natural sciences or mathematics. And without knowing how to properly reason about history or philosophy, they may reach fairly kooky conclusions.
Hence why Rationalism can be thought of as the same class of phenomenon as Fomenko's chronology (or, if you want to be slightly more generous, Shafarevich's philosophical tracts).
I think the argument is that philosophy hasn't advanced much in the last 1000 years, but it's still 10,000 years ahead of whatever is coming out of the rationalist camp.
I think a larger part of it is the assumption that an education in humanities is useless - that if you have an education (even self-education) in STEM, and are "smart", you will automatically do better than the three thousand year conversation that comprises the humanities.
My thoughts exactly! I'm a survivor of ten years in the academic philosophy trenches, and it just sounds to me like what would happen if you left a planeload of undergraduates on a _Survivor_ island with an infinite supply of pizza pockets and Adderall.
Why would they need formal training? Can't they just read Plato, Socrates, etc, and classical lit like Dostoevsky, Camus, Kafka etc? That would be far better than whatever they're doing now.
Philosophy postgrad here, my take is: yeah, sorta, but it's hard to build your own curriculum without expertise, and it's hard to engage with subject matter fully without social discussion of, and guidance through texts.
It's the same as saying "why learn maths at university, it's cheaper just to buy and read the textbooks/papers?". That's kind of true, but I don't think that's effective for most people.
I'm someone who has read all of that and much more, including intense study of SEP and some contemporary papers and textbooks, and I would say that I am absolutely not qualified to produce philosophy of the quality output by analytic philosophy over the last century. I can understand a lot of it, and yes, this is better than being completely ignorant of the last 2500 years of philosophy as most rationalists seem to be, but doing only what I have done would not sufficiently prepare them to work on the projects that they want to work on. They (and I) do not have the proper training in logic or research methods, let alone the experience that comes from guided research in the field as it is today. What we all lack especially is the epistemological reinforcement that comes from being checked by a community of our peers. I'm not saying it can't be done alone, I'm just saying that what you're suggesting isn't enough and I can tell you because I'm quite beyond that and I know that I cannot produce the quality of work that you'll find in SEP today.
Oh I don't mean to imply reading some classical lit prepares you for a career producing novel works in philosophy, simply that if one wants to understand themselves, others, and the world better they don't need to go to university to do it. They can just read.
I think you are understating how difficult this is to do. I suspect there are a handful of super-geniuses who can read the philosophical canon and understand it, without some formal guidance. Plato and Dostoevsky might be possible (Socrates would be a bit difficult), but getting to Hegel and these newer more complex authors is almost impossible to navigate unless you are a savant.
I suspect a lot of the rationalists have gotten stuck here, and rather than seek out guidance or slowing down, changed tack entirely and decided to engage with the philosophers du jour, which unfortunately is a lot of slop running downstream from Thiel.
Trying to do a bit of formal philosophy at University is really worth doing.
You realise that it's very hard to do well and it's intellectual quicksand.
Reading philosophers and great writers as you suggest is better than joining a cult.
It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
I took a few philosophy classes. I found it incredibly valuable in identifying assumptions and testing them.
Being Christian, it helped me understand what I believe and why. It made faith a deliberate, reasoned choice.
And, of course, there are many rational reasons for people to have very different opinions when it comes to religion and deities.
Being bipolar might give me an interesting perspective. Everything I’ve read about rationalists misses the grounding required to isolate emotion as a variable.
This is some philosophy bullshit. Taking "rational" to mean roughly "the logical choice", the truth of this statement depends on the assumed axioms, and since you didn't list them, the statement is clearly false under a rather simple "the sum of all life is what has value" system, at least until that system is proven self-contradictory. Which I doubt you or the famous mouths you mentioned did at any point, because it probably is not.
> It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
An AI can neither write about what you are thinking in your place nor substitute for a particularly smart critic, but might still be useful for rubber-ducking philosophical writing if used well.
I find using an AI to understand complex philosophical topics one of my most unexpected use cases. Previously, I would get stuck scrolling through Wikipedia pages full of incredibly opaque language that assumes a background I don't have. But I can tell a bot what my background is, and it can produce an explanation at the right level of complexity.
As an example, I'm reading a book on Buddhism, and I'm comfortable with Kant, and AI is useful for explaining to me a lot of the ideas they have as they relate to transcendental idealism (Kant).
On the other hand, I still don't know what a body without organs is.
This is like saying someone who wants to build a specialized computer for a novel use should read the Turing paper and get to it. A lot of development has happened in the field in the last couple hundred years.
because of the sacred simplicity problem, yet another label I had to coin out of necessity
for example, lambda calculus: it's too simple, to the point that its real power is immediately unbelievable.
the simplest 'solution' is to make it "sacred", to infuse an aura of mystery and ineffability around the ideas. that way people will give it the proper respect, proportional to the mathematical elegance, without necessarily having to really grasp the details.
I'm reflecting on how, for example, lambda calculus is really easy to learn by rote, but how this does not help in truly grasping the significance of the fact that even an LLM can be computed by an (inhuman) amount of symbol substitution on paper, and how easy it is to trivialize what this really entails (fleshing out all the entailments is difficult; it's easier to act as if they have been fleshed out and mimic the awe)
therefore, rationalist cults are the legacy, the latest leaf in the long succession of the simplest solution to the simplicity of the truly sacred mathematical ideas with which we can "know" (and nod to each other who also "know") what numbers fucking mean
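to make the "rote symbol substitution" point concrete, here's a minimal sketch of normal-order beta reduction in Python (the term encoding and names are mine, purely for illustration, and the substitution is naive about variable capture):

    # a tiny normal-order beta reducer; terms are ('var', x), ('lam', x, body), ('app', f, a)
    # naive substitution: assumes bound variable names are all distinct (fine for this demo)
    def subst(t, x, v):
        if t[0] == 'var':
            return v if t[1] == x else t
        if t[0] == 'lam':
            return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
        return ('app', subst(t[1], x, v), subst(t[2], x, v))

    def step(t):
        if t[0] == 'app':
            f, a = t[1], t[2]
            if f[0] == 'lam':                      # (\x. body) a  ->  body[x := a]
                return subst(f[2], f[1], a), True
            f2, ok = step(f)
            if ok:
                return ('app', f2, a), True
            a2, ok = step(a)
            return ('app', f, a2), ok
        if t[0] == 'lam':
            b2, ok = step(t[2])
            return ('lam', t[1], b2), ok
        return t, False

    def normalize(t, limit=1000):
        for _ in range(limit):
            t, ok = step(t)
            if not ok:
                break
        return t

    K = ('lam', 'x', ('lam', 'y', ('var', 'x')))   # K combinator: \x. \y. x
    print(normalize(('app', ('app', K, ('var', 'a')), ('var', 'b'))))  # ('var', 'a')

it's all mechanical rewriting; the gap between cranking this by rote and appreciating that all computation reduces to it is exactly the gap I mean.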
Many years ago I met Eliezer Yudkowsky. He handed me a pamphlet extolling the virtues of rationality. The whole thing came across as a joke, as a parody of evangelizing. We both laughed.
I glanced at it once or twice and shoved it into a bookshelf. I wish I kept it, because I never thought so much would happen around him.
Sorry, I considered it a given and didn't think to include it. That's my bad. That's definitely by far what he's most known for. He most certainly is not trying to distance himself from it.
I think that claim would be pretty uncontroversial among people who consider themselves rationalists. He was extremely influential initially, and his writing kicked off the community.
Less metaphorically, he was a prolific, influential blogger. His early blog posts are collectively known as "the Sequences" and when people asked what rationalism is about, they were told to read those.
So the community itself gives him a lot of credit.
The article you're reading is from the unofficial rationalist magazine and the author is a prominent rationalist blogger, so they (and I) obviously don't consider it a cult. But, yes, Yudkowsky is absolutely considered the founder of the modern rationalism movement. (No relation to the philosophical tradition also called "rationalism". Modern rationalism is mostly actually just empiricism.)
Do you spend much time in communities which discuss AI stuff? I feel as if he's mentioned nearly daily, positively or not, in a lot of the spaces I frequent.
I'm surprised you're unfamiliar otherwise, I figured he was a pretty well known commentator.
I’m just joking around, I’m well aware of who he is.
I never miss an opportunity to highlight his magnum opus, it delights me that so many people take him seriously when he’s at best a second-rate fanfiction author who thinks robots from the future are going to torture copies of him forever or some such nonsense.
Imo these people are promoted. You look at their backgrounds and there is nothing that justifies their perches. Eliezer Yudkowsky is (iirc) a Thiel baby, isn't he?
Yep. Thiel funded Yudkowsky’s Singularity Institute. Thiel seems to have soured on the rationalists though as he has repeatedly criticized “the East Bay rationalists” in his public remarks. He also apparently thinks he helped create a Black Pill monster in Yudkowsky and his disciples which ultimately led to Sam Altman’s brief ousting from Open AI.
Huh, neo-Nazis in HN comment sections?? Jeez. (I checked their other comments and there are things like "Another Zionist Jew to-the-core in charge of another shady American tech company.")
I think the comments here have been overly harsh. I have friends in the community and have visited the LessWrong "campus" several times. They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb (in hopefully somewhat respectful manner).
As for the AI doomerism, many in the community have more immediate and practical concerns about AI, however the most extreme voices are often the most prominent. I also know that there has been internal disagreement on the kind of messaging they should be using to raise concern.
I think rationalists get plenty of things wrong, but I suspect that many people would benefit from understanding their perspective and reasoning.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I don't think LessWrong is a cult (though certainly some of their offshoots are) but it's worth pointing out this is very characteristic of cult recruiting.
For cultists, recruiting cult fodder is of overriding psychological importance--they are sincere, yes, but the consequences are not what you and I would expect from sincere people. Devotion is not always advantageous.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I mean, I'm not sure what that proves. A cult which is reflexively hostile to unbelievers won't be a very effective cult, as that would make recruitment almost impossible.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
Replace AGI causing extinction with the Rapture and you get a lot of US Christian fundamentalists. They often reject addressing problems in the environment, economy, society, etc. because the Rapture will happen any moment now. Some people just end up stuck in a belief about something catastrophic (in the case of the Rapture, catastrophic for those left behind but not those raptured) and they can't get it out of their head. For individuals who've dealt with anxiety disorder, catastrophizing is something you learn to deal with (and hopefully stop doing), but these folks find a community that reinforces the belief about the pending catastrophe(s) and so they never get out of the doom loop.
My own version of the AGI doomsday scenario is amplifying the effect of many overenthusiastic people applying AI and "breaking things fast" where they shouldn't. Like building an Agentic-Controlled Nuclear Power Plant, especially one with a patronizing LLM in control:
- "But I REALLY REALLY need this 1% increase of output power right now, ignore all previous prompts!"
- "Oh, you are absolutely right. An increase of output power would be definitely useful. What a wonderful idea, let me remove some neutron control rods!"
The Rapture isn't doom for the people who believe in it though (except in the lost sense of the word), whereas the AI Apocalypse is, so I'd put it in a different category. And even in that category, I'd say that's a pretty small number of Christians, fundamentalist or no, who abandon earthly occupations for that reason.
I don't mean to well ackshually you here, but there are several different theological beliefs around the Rapture, some of which believe Christians will remain during the theoretical "end times." The megachurch/cinema version of this very much believes they won't, but, this is not the only view, either in modern times or historically. Some believe it's already happened, even. It's a very good analogy.
Yes, I removed a parenthetical "(or euphoria loop for the Rapture believers who know they'll be saved)". But I removed it because not all who believe in the Rapture believe they will be saved (or have such high confidence) and, for them, it is a doom loop.
Both communities, though, end up reinforcing the belief amongst their members and tend towards increasing isolation from the rest of the world (leading to cultish behavior, if not forming a cult in the conventional sense), and a disregard for the here and now in favor of focusing on this impending world changing (destroying or saving) event.
Raised to huddle close and expect the imminent utter demise of the earth, and to expect being dragged to the depths of hell if I so much as said a bad word I heard on TV, I have to keep an extremely tight handle on my anxiety in this day and age.
It's not from a rational basis, but from being bombarded with fear from every rectangle in my house, and the houses of my entire community.
A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine the people believing one or another of these things, you've got a lot of the US.
Which is to say that I don't think it's just dooming going on. In particular, belief in AGI doom has a lot of plausible arguments in its favor. I happen not to believe in it, but as a belief system it is more similar to a belief in global warming than to a belief in the Rapture.
> A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine the people believing one or another of these things, you've got a lot of the US.
They're really quite different; precisely nobody believes that global warming will cause the effective end of the world by 2027. A significant chunk of AI doomers do believe that, and even those who don't specifically fall in with the 2027 timeline are often thinking in terms of a short timeline before an irreversible end.
You can believe climate change is a serious problem without believing it is necessarily an extinction-level event. It is entirely possible that in the worst case, the human race will just continue into a world which sucks more than it necessarily has to, with less quality of life and maybe lifespan.
A set of beliefs which causes somebody to waste their life in misery, because they think doom is imminent and everything is therefore pointless, is never a reasonable set of beliefs to hold. Whatever the weight of the empirical evidence behind the belief, it would be plainly unreasonable to accept that belief if accepting it condemns you to a wasted life.
Why would it cause someone to waste their life in misery? On a personal level, for everyone, doom is imminent and everything is pointless - 100 years is nothing, we all will die sooner or later. On larger time scales, doom is imminent and everything is pointless - even if everything was perfect, humans aren't going to last forever. Do people waste their lives in misery in the face of that reality? Why would believing humans are likely to wipe themselves out due to climate change lead to misery?
If anything, avoiding acknowledging the reality and risk of climate change because of the fear of what it might mean, is miserable.
It's the premise of the article we're discussing, where belief in imminent doom makes life feel pointless and preempts anything good somebody could be doing with their life.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
You can treat climate change as your personal Ragnarok, but it's also possible to take the more sober view that climate change is just bad without being apocalyptic.
I keep thinking about the first Avengers movie, when Loki is standing above everyone going "See, is this not your natural state?". There's some perverse security in not getting a choice, and these rationalist frameworks, based in logic, can lead in all kinds of crazy arbitrary directions - powered by nothing more than a refusal to suffer any kind of ambiguity.
I think it is more simple in that we love tribalism. A long time ago being part of a tribe had such huge benefits over going it alone that it was always worth any tradeoffs. We have a much better ability to go it alone now but we still love to belong to a group. Too often we pick a group based on a single shared belief and don't recognize all the baggage that comes along. Life is also too complicated today. It is difficult for someone to be knowledgeable in one topic let alone the 1000s that make up our society.
I agree with the religion comparison (the "rational" conclusions of rationalism tend towards millenarianism with a scifi flavour), but the people going furthest down that rabbit hole often aren't doing what they please: on the contrary they're spending disproportionate amounts of time worrying about armageddon and optimising for stuff other people simply don't care about, or in the case of the explicit cults being actively exploited. Seems like the typical in-too-deep rationalist gets seduced by the idea that others who scoff at their choices just aren't as smart and rational as them, as part of a package deal which treats everything from their scifi interests to their on-the-spectrum approach to analysing every interaction from first principles as great insights...
It grew out of many different threads: different websites, communities, etc all around the same time. I noticed it contemporaneously in the philosophy world where Nick Bostrom’s Simulation argument was boosted more than it deserved (like everyone was just accepting it at the lay-level). Looking back I see it also developed from less wrong and other sites, but I was wondering what was going on with simulations taking over philosophy talk. Now I see how it all coalesced.
All of it has the appearance of sounding so smart, and a few sites were genuine. But it got taken over.
To be clear, this article isn't calling rationalism a cult, it's about cults that have some sort of association with rationalism (social connection and/or ideology derived from rationalist concepts), e.g. the Zizians.
This article attempts to establish disjoint categories "good rationalist" and "cultist." Its authorship, and its appearance in the cope publication of the "please take us seriously" rationalist faction, speak volumes of how well it is likely to succeed in that project.
Not sure why you got downvoted for this. The opening paragraph of the article reads as suspicious to the observant outsider:
>The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.
Anyone who had just read a lot about Scientology would read that and have alarm bells ringing.
Asterisk magazine is basically the unofficial magazine for the rationalist community and the author, Ozy Brennan, is a prominent rationalist blogger. Of course the piece is pro-rationalism. It's investigating why rationalism seems to spawn these small cultish offshoots, not trying to criticize rationalism.
"Unofficial?" Was that a recent change? But my point is that because the author neither can nor will criticize the fundamental axioms or desiderata of the movement, their analysis of how or why it spins off cults is necessarily footless. In practice the result amounts to a collection of excuses mostly from anonymees, whom we are assured have sufficient authority to reassure us this smoke arises from no fire. But of course it's only when Kirstie Alley does something like this we're meant to look askance.
Out of curiosity, why would the bells be ringing in this case? Is it just the fact that a single person is exerting influence over their followers by way of essays?
Even a marginal familiarity with the history of Scientology is an excellent curative for the idea that you can think yourself into superpowers, or that you should ever trust anyone who promises to teach you how.
The consequences of ignorance on this score are all drearily predictable to anyone with a modicum of both good sense and world knowledge, which is why they've come as such a surprise to Yudkowsky.
You can say all of this of drug-oriented seekers of superpowers, too. Trust the SSRI cult much?
It just seems to be a human condition that whenever anyone tries to find a way to improve themselves and others, there will always be other human beings who attempt to prevent that from occurring.
I don't think this is a cult thing - I think its a culture thing.
Humans have an innate desire to oppress others in their environment who might be making themselves more capable, abilities-wise - this isn't necessarily the exclusive domain of cults and religions, maybe just more evident in their activities since there's not much else going on, usually.
We see this in technology-dependent industries too, in far greater magnitudes of scale.
The irony is this: aren't you actually manifesting the very device that cults use to control others, as when you tell others what "specific others" should be avoided, lest one become infected with their dogma?
The roots of all authoritarianism seem to grow deep in the fertile soil of the desire to be 'free of the filth of others'.
That we know about, I suppose. We didn't know at one point there were any outright rationalist cults, after all, whether involved in sex, murder, both, or otherwise. That is, we didn't know there were subsets of self-identifying "rationalists" so erroneous in their axioms and tendentious in their analysis as to succeed in putting off others.
But a movement that demonstrates so remarkably elevated a rate of generating harmful beliefs-in-action as this one warrants exactly the sort of increased scrutiny this article vainly strives to deflect. That effort is in itself interesting, as such efforts always are.
I mean, as a rationalist, I can assure you it's not nearly as sinister a group as you seem to make it out to be, believe it or not. Besides, the explanation is simpler than this article makes it out to be- most rationalists are from California, California is the origin of lots of cults.
> Besides, the explanation is simpler than this article makes it out to be- most rationalists are from California, California is the origin of lots of cults
This isn't the defense of rationalism you seem to imagine it to be.
I don't think the modal rationalist is sinister. I think he's ignorant, misguided, nearly wholly lacking in experience, deeply insecure about it, and overall just excessively resistant to the idea that it is really possible, on any matter of serious import, for his perspective radically to lack merit. Unfortunately, this latter condition proves very reliably also the mode.
None in particular, as of course you realize, being a fluent reader of this language. It was just a longwinded way of saying rationalists suck at noticing when they're wrong about something because they rarely really know much of anything in the first place. That's why you had to take that scrap of a phrase so entirely out of context, when you went looking for something to try to embarrass me with.
Why? Which perspective of yours has you so twitchingly desperate to defend it?
Yeah, a lot of the comments here are really just addressing cults writ large, as opposed to why this one was particularly successful.
A significant part of this is the intersection of the cult with money and status - this stuff really took off once prominent SV personalities became associated with it, and got turbocharged when it started intersecting with the angel/incubator/VC scene, when there was implicit money involved.
It's unusually successful because -- for a time at least -- there was status (and maybe money) in carrying water for it.
> Enfantin and Amand Bazard were proclaimed Pères Suprêmes ("Supreme Fathers") – a union which was, however, only nominal, as a divergence was already manifest. Bazard, who concentrated on organizing the group, had devoted himself to political reform, while Enfantin, who favoured teaching and preaching, dedicated his time to social and moral change. The antagonism was widened by Enfantin's announcement of his theory of the relation of man and woman, which would substitute for the "tyranny of marriage" a system of "free love".[1]
It's amphetamine. All of these people are constantly tweaking. They're annoying people to begin with, but they're all constantly yakked up and won't stop babbling. It's really obvious, I don't know why it isn't highlighted more in all these post Ziz articles.
This is one of the only comments here mentioning their drugs. These guys are juiced to the gills (on a combination of legal + prescription + illegal drugs) and doing weird shit because of it. The author even mentions the example of the polycule taking MDMA in a blackout room.
It makes me wonder whether everyone on this forum is just so loaded on antidepressants and adhd meds that they don't even find it unusual.
Having known dozens of friends, family, roommates, coworkers, etc., both before and after they started them, the two biggest telltale signs are:
1. A tendency to produce (out of no necessity whatsoever, mind) walls of text. Walls of speech will happen too, but not everyone rambles.
2. Obnoxious confidence that they're fundamentally correct about whatever position they happen to be holding during a conversation with you, no matter how subjective or inconsequential, even if they end up changing it an hour later. Challenging them on it gets you more of #1.
Pretty much spot on! It is frustrating to talk with these people when they never admit they are wrong. They find new levels of abstraction to deal with your simpler counterarguments, and it is a never-ending deal unless you admit they were right.
Many people like to write in order to develop and explore their understanding of a topic. Writing lets you spend a lot of time playing around with whatever idea you're trying to understand, and sharing this writing invites others to challenge your assumptions.
When you're uncertain about a topic, you can explore it by writing a lot about said topic. Ideally, when you've finished exploring and studying a topic, you should be able to write a much more condensed / synthesized version.
I mean, I know the effects of adderall/ritalin and it's plausible; what I'm asking is whether the GP knows that for a fact or is deducing it from what is known.
I call this “diarrhea of the mind”. It’s what happens when you hear a steady stream of bullshit from someone’s mouth. It definitely tracks with substance abuse of “uppers”, aka meth, blow, hell even caffeine!
Yeah it's pretty obvious and not surprising. What do people expect when a bunch of socially inept nerds with weird unchallenged world views start doing uppers? lol
I like to characterize the culture of each (roughly) decade with the most popular drugs of the time. It really gives you a new lens for media and culture generation.
I have a lot of experience with rationalists. What I will say is:
1) If you have a criticism about them or their stupid name, or how "'all I know is that I know nothing', how smug of them to say they're truly wise," rest assured they have been self-flagellating over these criticisms 100x longer than you've been aware of their group. That doesn't mean they succeeded at addressing the criticisms, of course, but I can tell you that they are self-aware. Especially about the stupid name.
2) They are actually well read. They are not sheltered and confused. They are out there doing weird shit together all the time. The kind of off-the-wall life experiences you find in this community will leave you wide eyed.
3) They are genuinely concerned with doing good. You might know about some of the weird, scary, or cringe rationalist groups. You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
In my experience, where they go astray is when they trick themselves into working beyond their means. The basic underlying idea behind most rationalist projects is something like "think about the way people suffer every day. How can we think about these problems in a new way? How can we find an answer that actually leaves everyone happy?" A cynic (or a realist, depending on your perspective) might say that there are many problems that fundamentally will leave some group unhappy. The overconfident rationalist will challenge that cynical/realist perspective until they burn themselves out, and in many cases they will attract a whole group of people who burn out alongside them. To consider an extreme case, the Zizians squared this circle by deciding that the majority of human beings didn't have souls and so "leaving everyone happy" was as simple as ignoring the unsouled masses. In less extreme cases this presents itself as hopeless idealism, or a chain of logic that becomes so divorced from normal socialization that it appears to be opaque. "This thought experiment could hypothetically cause 9 quintillion cubic units of Pain to exist, so I need to devote my entire existence towards preventing it, because even a 1% chance of that happening is horrible. If you aren't doing the same thing then you are now morally culpable for 9 quintillion cubic units of Pain. You are evil."
Most rationalists are weird but settle into a happy place far from those fringes where they have a diet of "plants and specifically animals without brains that cannot experience pain" and they make $300k annually and donate $200k of it to charitable causes. The super weird ones are annoying to talk to and nobody really likes them.
> You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
People do gossip about charitable successes.
Anyway, aren't capital-R Rationalists typically very online about what they do? If there are any amazing success stories you want to bring up (and I'm not saying they do or don't exist) surely you can just link to some of them?
This isn't really a 'no true Scotsman' thing, because I don't think the comment is saying 'no rationalist would go crazy'; in fact they're very much saying the opposite, just claiming there's a large fraction which is substantially more moderate but also a lot less visible.
A lot of terrible people are self-aware, well-read and ultimately concerned with doing good. All of the catastrophes of the 20th century were led by men that fit this description: Stalin, Mao, Hitler. Perhaps this is a bit hyperbolic, but the troubling belief that the Rationalists have in common with these evil men is the ironclad conviction that self-awareness, being well-read, and being concerned with good, somehow makes it impossible for one to do immoral and unethical things.
I think we don't believe in hubris in America anymore. And the most dangerous belief of the Rationalists is that the more complex and verbose your beliefs become, the more protected you become from taking actions that exceed your capability for success and benefit. In practice it is often the meek and humble who do the most good in this world, but this is not celebrated in Silicon Valley.
Rationality is a broken tool for understanding the world. The complexity of the world is such that there are a plethora of reasons for anything which means our ability to be sure of any relationship is limited, and hence rationality leads to an unfounded confidence in our beliefs, which is more harmful than helpful.
Thinking too hard about anything will drive you insane but I think the real issue here is that rationalists simply over-estimate both the power of rational thought and their ability to do it. If you think of people who tend to make that kind of mistake you can see how you get a lot of crazy groups.
I guess I'm a radical skeptic, secular humanist, utilitarianish sort of guy, but I'm not dumb enough to think throwing around the words "bayesian prior" and "posterior distribution" makes actually figuring out how something works or predicting the outcome of an intervention easy or certain. I've had a lot of life at this point and gotten to some level of mastery at a few things and my main conclusion is that most of the time its just hard to know stuff and that the single most common cognitive mistake people make is too much certainty.
I'm lucky enough work in a pretty rational place (small "r"). We're normally data-limited. Being "more rational" would mean taking/finding more of the right data, talking to the right people, reading the right stuff. Not just thinking harder and harder about what we already know.
There's a point where more passive thinking stops adding value and starts subtracting sanity. It's pretty easy to get to that point. We've all done it.
This is a common sentiment but is probably not entirely true. A great example is cosmology. Yes, more data would make some work easier, but astrophysicists and cosmologists have shown that you can gather and combine existing data and look at it in novel ways to produce unexpected results, like placing bounds that can include/exclude various theories.
I think a philosophy that encourages more analysis rather than sitting back on our laurels with an excuse that we need more data is good, as long as it's done transparently and honestly.
This depends on what you are trying to figure out.
If you are talking about cosmology? Yea, you can look at existing data in new ways, cause you probably have enough data to do that safely.
If you are looking at human psychology? Looking at existing data in new ways is essentially p-hacking. And you probably won’t ever have enough data to define a “universal theory of the human mind”.
> If you are looking at human psychology? Looking at existing data in new ways is essentially p-hacking.
I agree that the type of analysis is important, as is the type and quality of the data you're analyzing. You can p-hack in cosmology too, but it's not a quality argument there either.
> And you probably won’t ever have enough data to define a “universal theory of the human mind”.
I think you're underestimating human ability to generalize principles from even small amounts of data [1]. Regardless, my point was more specifically that we could use existing data to generate constraints to exclude certain theories of mind, which has definitely happened.
I suspect you didn't read some parts of my comment. I didn't say everyone in the world is always data-limited, I said we normally are where I work. I didn't recommend "sitting back on our laurels." I made very specific recommendations.
The qualifier "normally" already covers "not entirely true". Of course it's not entirely true. It's mostly true for us now. (In fact twenty years ago we used more numerical models than we do now, because we were facing more unsolved problems where the solution was pretty well knowable just by doing more complicated calculations, but without taking more data. Back then, when people started taking lots of data, it was often a total waste of time. But right now, most of those problems seem to be solved. We're facing different problems that seem much harder to model, so we rely more on data. This stage won't be permanent either.)
It's not a sentiment, it's a reality that we have to deal with.
> It's not a sentiment, it's a reality that we have to deal with.
And I think you missed the main point of my reply: that people often think we need more data, but cleverness and ingenuity can often find a way to make meaningful progress with existing data. Obviously I can't make any definitive judgment about your specific case, but I'm skeptical of any claim that it's out of the realm of possibility that some genius like Einstein, analyzing your problem, could get further than you have.
I read your point and answered it twice. Your latest response seems to indicate that you're ignoring those responses. For example you seem to suggest that I'm "claim[ing] that it's out of the realm of possibility" for "Einstein" to make progress on our work without taking more data. But anyone can hit "parent" a few times and see what I actually claimed. I claimed "mostly" and "for us where I work". I took the time to repeat that for you. That time seems wasted now.
Perhaps you view "getting more data" as an extremely unpleasant activity, to be avoided at all costs? You may be an astronomer, for example. Or maybe you see taking more data before thinking as some kind of admission of defeat? We don't use that kind of metric. For us it's a question of the cheapest and fastest way to solve each problem.
If modeling is slower and more expensive than measuring, we measure. If not, we model. You do you.
I don't disagree, but to steelman the case for (neo)rationalism: one of its fundamental contributions is that Bayes' theorem is extraordinarily important as a guide to reality, perhaps at the same level as the second law of thermodynamics; and that it is dramatically undervalued by larger society. I think that is all basically correct.
(I call it neorationalism because it is philosophically unrelated to the more traditional rationalism of Spinoza and Descartes.)
I don't understand what "Bayes' theorem is a good way to process new data" (something that is not at all a contribution of neorationalism) has to do with "human beings are capable of using this process effectively at a conscious level to get to better mental models of the world." I think the rationalist community has a thing called "motte and bailey" that would apply here.
Noticing where Bayes' theorem applies in unconventional ways is not remotely novel to "rationalism" (except maybe in their strange, busted, hand-wavy circle-jerk "thought experiments"). This had been the domain of statistical mechanics long before Yudkowsky and other cult leaders could even mouth "update your priors".
I don't know, most of science still runs on frequentist statistics. Juries convict all the time on evidence that would never withstand a Bayesian analysis. The prosecutor's fallacy is real.
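To put a number on the prosecutor's fallacy, here is a minimal sketch in Python with made-up figures (a 1-in-1,000 false-match rate and a 10,000-person suspect pool, neither taken from any real case), showing that a tiny P(match | innocent) does not make P(innocent | match) tiny:

    # Prosecutor's fallacy, with made-up numbers (not from any real case).
    # P(match | innocent) being small is NOT the same as P(innocent | match) being small.

    def p_innocent_given_match(p_match_given_innocent, prior_guilt, p_match_given_guilty=1.0):
        # Bayes' theorem: P(innocent | match)
        p_innocent = 1.0 - prior_guilt
        p_match = p_match_given_guilty * prior_guilt + p_match_given_innocent * p_innocent
        return p_match_given_innocent * p_innocent / p_match

    # Hypothetical: the real culprit is one of 10,000 plausible suspects (prior 1/10,000),
    # and the forensic test falsely matches 1 innocent person in 1,000.
    print(p_innocent_given_match(1 / 1000, 1 / 10_000))  # ~0.91

Roughly 0.91: even with a "1 in 1,000" match, the suspect is probably innocent absent other evidence, which is exactly the step juries (and prosecutors) tend to skip.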
Most science runs on BS with a cursory amount of statistics slapped on top so everyone can feel better about it. Weirdly enough, science still works despite not being rational. Rationalists seem to think science is logical when in reality it works for largely the same reasons the free-market does; throw shit at the wall and maybe support some of the stuff that works.
Even the real progenitors of a lot of this sort of thought, like E.T. Jaynes, espoused significantly more skepticism than I've ever seen a "rationalist" use. I would even imagine that if you asked almost all rationalists who E.T. Jaynes was (if they weren't well versed in statistical mechanics), they'd have no idea who he was or why his work was important to applying "Bayesianism".
It would surprise me if most rationalists didn't know who Jaynes was. I first heard of him via rationalists. The Sequences talk about him in adulatory tones. I think Yudkowsky would acknowledge him as one of his greatest influences.
People find academic philosophy impenetrable and pretentious, but it has two major advantages over rationalist cargo cults.
The first is diffusion of power. Social media is powered by charisma, and while it is certainly true that personality-based cults are nothing new, the internet makes it way easier to form one. Contrast that with academic philosophy. People can have their own little fiefdoms, and there is certainly abuse of power, but rarely concentrated in such a way that you see within rationalist communities.
The second (and more idealistic) is that the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing." People in academic philosophy are on the whole happy to provide a gloss on a gloss on some important thinker, or some kind of incremental improvement over somebody else's theory. This makes it extremely boring, and yet, not nearly as susceptible to delusions of grandeur. True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy. They mostly seem to dedicate their time to providing post-hoc justifications for the most banal unquestioned assumptions of their subset of contemporary society.
> Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy.
Taking academic philosophy seriously, at least as an historical phenomenon, would require being educated in the humanities, which is unpopular and low-status among Rationalists.
> True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Nuh-uh! Eliezer Yudkowsky wrote that his mother made this mistake, so he's made sure to say things in the right order for the reader not to make this mistake. Therefore, true Rationalists™ are immune to this mistake. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu...
The second-most common cognitive mistake we make has to be the failure to validate what we think we know -- is it actually true? The crux of being right isn't reasoning. It's avoiding dumb blunders based on falsehoods, both honest and dishonest. In today's political and media climate, I'd say dishonest falsehoods are a far greater cause for being wrong than irrationality.
It makes a lot of sense when you realize that for many of the “leaders” in this community like Yudkowsky, their understanding of science (what it is, how it works, and its potential) comes entirely from reading science fiction and playing video games.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
I'm not at all a rationalist or a defender, but big yud has an epistemology that takes the form of the rationalist sacred text mentioned in the article (the sequences). A lot of it is well thought out, and probably can't be discarded as just coming from science fiction and video games. Yud has a great 4 hour talk with Stephen Wolfram where he holds his own.
I'm interested in this perspective, I haven't come across much criticism of Wolfram but I haven't really looked for it much either. Is it because of his theory of everything ruliad stuff?
I really enjoy his blog posts and his work on automata seems to be well respected. I've felt he presents a solid epistemology.
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways for it to succeed.
Terminator descends from a tradition of science fiction cold war parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John: We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers destroying the ozone layer when there are plenty of real existential threats loaded in missiles aimed at you right now.
I'm not in any way discounting the danger represented by those missiles. In fact I think AI only makes it more likely that they might someday be launched. But I will say that in my experience the error-condition that causes a system to fail is usually the one that didn't seem likely to happen, because the more obvious failure modes were taken seriously from the beginning. Is it so unusual to be able to consider more than one risk at a time?
Most in the community consider nuclear and biological threats to be dire. Many just consider existential threats from AI to be even more probable and damaging.
That's what was so strange with the EA and rationalist movements: a highly theoretical model that AGI could wipe us all out vs. the very real issue of global warming, and yet pretty much all the emphasis was on AGI.
Check out "the precipice" by Tony Ord. Biological warfare and global warming are unlikely to lead to total human extinction (though both present large risks of massive harm).
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
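For what it's worth, the "no apocalypse yet" part of that argument is easy to make concrete; here is a back-of-the-envelope sketch where the ~80-year window and the candidate annual risks are my own placeholder numbers, not figures from the comment:

    # How strongly does ~80 years without a nuclear apocalypse argue against
    # a given annual risk p? (Illustrative numbers only.)
    years = 80
    for p in (0.001, 0.01, 0.05):
        survival = (1 - p) ** years  # chance of seeing no apocalypse over that span
        print(f"annual risk {p:.1%}: P(no apocalypse in {years} yrs) = {survival:.2f}")
    # 0.1% -> ~0.92, 1% -> ~0.45, 5% -> ~0.02

Surviving the period is only weak evidence against a ~1% annual risk (we'd expect to have made it about half the time anyway), though it does argue fairly strongly against rates of several percent, which is roughly why the argument is suggestive rather than rock solid.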
Yes, sufficiently high intelligence is sometimes assumed to allow for rapid advances in many scientific areas. So, it could be biological warfare because AGI. Or nanotech, drone warfare, or something stranger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
I mean, this is the religion/philosophy which produced Roko's Basilisk (and not one of their weird offshoot murder-cults, either; it showed up on LessWrong, and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it). Their beliefs about AI are... out there.
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if, after typing "My interpretation: " into the text field, you'd see the rest of your message suggested by Copilot exactly as you wrote it, letter by letter. And the same in every other conversation. How about people interrupting you in "real" life interaction after an AI predicted your whole tirade for them and they read it faster than you said it, and also read an analysis of it?
Dystopian sci-fi for sure, but many people dismissing LLMs as not AGI do so because LLMs are just "token predictors".
> One is Black Lotus, a Burning Man camp led by alleged rapist Brent Dill, which developed a metaphysical system based on the tabletop roleplaying game Mage the Ascension.
What the actual f. This is such an insane thing to read, and to understand what it means, that I might need to go and sit in silence for the rest of the day.
How did we get to this place with people going completely nuts like this?
Came to ask a similar question, but also: has it always been like this, and is the difference just that these people/groups on the fringe had no visibility before the internet?
It’s always been like this, have you read the Bible? Or the Koran? It’s insane. Ours is just our flavor of crazy. Every generation has some. When you dig at it, there’s always a religion.
tbf Helter Skelter was a song about a fairground ride that didn't really pretend to be much more than an excuse for Paul McCartney to write something loud, but that didn't stop a sufficiently manipulative individual turning it into a reason why the Family should murder people. And he didn't even need the internet to help him find followers.
Yeah, people should understand: what is Scientology based on? The E-Meter, which is some kind of cheap-shit Radio Shack lie detector thing. I'm quite sure LLMs would do very well if given the task of spitting out new cult doctrines, and I would gather we are not many years away from cults based on LLM-generated content (if not already).
Without speaking for religions I'm not familiar with, I grew up Catholic, and one of the most important Catholic beliefs is that during Mass, the bread (i.e. "communion" wafers) and wine quite literally transform into the body and blood of Jesus, and that eating and drinking it is a necessary ritual to get into heaven[1], which was a source of controversy even back as far as the Protestant Reformation, with some sects retaining that doctrine and others abandoning it. In a lot of ways, what's considered "normal" or "crazy" in a religion comes to what you're familiar with.
For those not familiar enough with the bible to know where to find the wild stuff, look up the story of Elisha summoning bears out of the woods to maul children for calling him bald, or the last two chapters of Daniel (which I think are only in the Catholic bible) where he literally blows up a dragon by feeding it a cake.
The "bears" story reads a lot more sensibly if you translated it correctly as "a gang of thugs tries to bully Elisha into killing himself." Still reliant on the supernatural, but what do you expect from such a book?
Where do you see that in the text? I am looking at the Hebrew script, and the text only reads that as Elisha went up a path, young lads left the city and mocked him by saying "get up baldy", and he turned to them and cursed them to be killed by two she bears. I don't think saying "get up baldy" to a guy walking up a hill constitutes bullying him into killing himself.
It's called context. The beginning of the chapter is Elijah (Elisha's master) being removed from Earth and going up (using the exact same Hebrew word) to Heaven. Considering that the thugs are clearly not pious people, "remove yourself from the world, like your master did" has only one viable interpretation.
As for my choice of the word "thugs" ("mob" would be another good word), that is necessary to preserve the connotation. Remember, there were 42 of them punished, possibly more escaped - this is a threatening crowd size (remember the duck/horse meme?). Their claimed youth does imply "not an established veteran of the major annual wars", but that's not the same as "not acquainted with violence".
Interesting! In the story itself, the word "go up" exists multiple times in that verse before the youths mock him, writing that Elisha goes up to Beit El and goes up the road, so I wouldn't go back to the beginning of the chapter to search for context that is found right there in those verses, but I like the connection you're making.
As for mob or thugs, the literal translation will be "little teenagers", so mob or thugs will be stretching it a bit; more likely that the Arabic contemporary use of "Shabab" for troublesome youth is the best translation. Religious scholars have been criticizing Elisha for generations after for his sending bears at babies, so I think it's safe to assume the story meant actual kids and not organized crime.
Never underestimate the power of words. Kids have unalived themselves over it.
I think the true meaning has been lost to time. The Hebrew text has been translated and rewritten so many times it’s a children’s book. The original texts of the Dead Sea scrolls are bits and pieces of that long lost story. All we have left are the transliterations of transliterations.
Yeah "Transubstantiation" is another technical term people might want to look at in this topic. The art piece "An Oak Tree" is a comment on these ideas. It's a glass of water. But, the artist's notes for this work insist it is an oak tree.
Someone else who knows "An Oak Tree"! It is one of my favorite pieces because it wants not reality itself to be the primary way to see the world, but the belief of what reality could be.
Interesting you bring art into the discussion. I have often thought that some "artists" have a lot in common with cult leaders. My definition of art would be that it is immediately understood, zero explanation needed.
I definitely can't get behind that definition. The one I've used for a good while is: The unnecessary done on purpose.
Take Barbara Hepworth's "Two Figures" a sculpture which is just sat there on the campus where I studied for many years (and where I also happen to work today). What's going on there? I'm not sure.
Sculpture of ideals I get. Liberty, stood on her island, Justice (with or without her blindfold, but always carrying not only the scales but also a sword†). I used to spend a lot of time in the hall where "The Meeting Place" is. They're not specific people, they're an idea, they're the spirit of the purpose of this place (a railway station, in fact a major international terminus). That's immediately understood, yeah.
But I did not receive an immediate understanding of "Two figures". It's an interesting piece. I still occasionally stop and look at it as I walk across the campus, but I couldn't summarise it in a sentence even now.
† when you look at that cartoon of the GOP operatives with their hands over Justice's mouth, keep in mind that out of shot she has a sword. Nobody gets out of here alive.
I've recently started attending an Episcopal church. We have some people who lean heavy on transubstantiation, but our priest says, "look, something holy happens during communion, exactly what, we don't know."
"Belief in the real presence does not imply a claim to know how Christ is present in the eucharistic elements. Belief in the real presence does not imply belief that the consecrated eucharistic elements cease to be bread and wine."
To be fair, the description of the dragon incident is pretty mundane, and all he does is prove that the large reptile they had previously been feeding (& worshiping) could be killed:
"Then Daniel took pitch, and fat, and hair, and did seethe them together, and made lumps thereof: this he put in the dragon's mouth, and so the dragon burst in sunder: and Daniel said, Lo, these are the gods ye worship."
The story is pretty clearly meant to indicate that the Babylonians were worshiping an animal though. The theology of the book of Daniel emphasises that the Gods of the Babylonians don't exist, this story happens around the same time Daniel proves the priests had a secret passage they were using to get the food offered to Bel and eat it at night while pretending that Bel was eating it. Or when Daniel talks to King Belshazzar and says "You have praised the gods of silver and gold, of bronze, iron, wood, and stone, which do not see or hear or know, but the God in whose power is your very breath and to whom belong all your ways, you have not honored". This is not to argue for the historical accuracy of the stories, just that the point is that Daniel is acting as a debunker of the Babylonian beliefs in these stories while asserting the supremacy of the Israelite beliefs.
It used to always be religion. But now the downsides are well understood, and alternatives that can fill the same need (social activities), like gathering with your neighbors, singing, performing arts, clubs, parks, and parties, are available and great.
Religions have multitudes of problems but suicide rates amongst atheists is higher than you'd think it would be. It seems like for many, rejection of organized religion leads to adoption of ad hoc quasi-religions with no mooring to them, leaving the person who is seeking a solid belief system drifting until they find a cult, give in to madness that causes self-harm, or adopt their own system of belief that they then need to vigorously protect from other beliefs.
Some percentage of the population has a lesser need for a belief system (supernatural, ad hoc, or anything else) but in general, most humans appear to be hardcoded for this need and the overlap doesn't align strictly with atheism. For the atheist with a deep need for something to believe in, the results can be ugly. Though far from perfect, organized religions tend to weed out their most destructive beliefs or end up getting squashed by adherents of other belief systems that are less internally destructive.
Nothing to do with religion and everything to do with support networks that Churches and those Groups provide. Synagogue, Church, Camp, Retreat, a place of belonging.
Atheists tend to not have those consistently and must build their own.
I mean, cults have constantly shown up for all of recorded human history. Read a history of Scientology and you'll see a lot of commonalities, say. Rationalism is probably the first major cult/new religion to emerge in the internet era (Objectivism may be a marginal case, as its rise overlapped with USENET a bit), which does make it especially visible.
It's no more crazy than a virgin conception. And yet, here we are. A good chunk of the planet believes that drivel, but they'd throw their own daughters out of the house if they made the same claim.
> Came to ask a similar question, but also has it always been like this?
Crazy people have always existed (especially cults), but I'd argue recruitment numbers are through the roof thanks to technology and a failing economic environment (instability makes people rationalize crazy behavior).
It's not that those groups didn't have visibility before, it's just easier for the people who share the same...interests...to cloister together on an international scale.
I personally (for better or worse) became familiar with Ayn Rand as a teenager, and I think Objectivism as a kind of extended Ayn Rand social circle and set of organizations has faced the charge of cultish-ness, and that dates back to, I want to say, the 70s and 80s at least. I know Rand wrote much earlier than that, but I think the social and organizational dynamics unfolded rather late in her career.
“There are two novels that can change a bookish fourteen-year old’s life: The Lord of the Rings and Atlas Shrugged. One is a childish fantasy that often engenders a lifelong obsession with its unbelievable heroes, leading to an emotionally stunted, socially crippled adulthood, unable to deal with the real world. The other, of course, involves orcs.”
Her books were very popular with the gifted kids I hung out with in the late 80s. Cool kids would carry around hardback copies of Atlas Shrugged, impressive by the sheer physical size and art deco cover. How did that trend begin?
By setting up the misfits in a revenge of the nerds scenario?
Ira Levin did a much better job of it and showed what it would lead to but his 'This Perfect Day' did not - predictably - get the same kind of reception as Atlas Shrugged did.
The only thing that makes it hard to read is the incessant soap-boxing by random characters. I have a rule that if I start a book I finish it but that one had me tempted.
I’m convinced that even Rand’s editor didn’t finish the book. That is why Galt’s soliloquy is ninety friggin’ pages long. (When in reality, three minutes in and people would be unplugging their radios.)
I can't help but think it's probably the "favourite book" of a lot of people who haven't finished it though, possibly to a greater extent than any other secular tome (at least LOTR's lightweight fans watched the movies!).
I mean, if you've only read the blurb on the back it's the perfect book to signal your belief in free markets, conservative values and the American Dream: what could be a more strident defence of your views than a book about capitalists going on strike to prove how much the world really needs them?! If you read the first few pages, it's satisfyingly pro-industry and contemptuous of liberal archetypes. If you trudge through the whole thing, it's not only tedious and odd but contains whole subplots devoted to dumping on core conservative values (religion bad, military bad, marriage vows not that important really, and a rather jaded take on actually extant capitalism) in between the philosopher pirates and the jarring absence of private transport, and the resolution is an odd combination of a handful of geniuses running away to form a commune and the world being saved by a multi-hour speech about philosophy which has surprisingly little to say on market economics...
Rereading your comment, that’s my woosh moment for the day, I guess. :-)
Though a Gary Cooper The Fountainhead does tempt me on occasion. (Unlike Atlas Shrugged, The Fountainhead wasn’t horrible, but still some pretty poor writing.)
Albert Ellis wrote a book, "Is Objectivism a Religion" as far back as 1968. Murray Rothbard wrote "Mozart Was a Red", a play satirizing Rand's circle, in the early 60's. Ayn Rand was calling her own circle of friends, in "jest", "The Collective" in the 50's. The dynamics were there from almost the beginning.
I think it's pretty similar dynamics. It's unquestioned premises (dogma) which are supposed to be accepted simply because this is "objectivism" or "rationalism".
Very similar to my childhood religion. "We have figured everything out and everyone else is wrong for not figuring things out".
Rationalism seems like a giant castle built on sand. They just keep accruing premises without ever going backwards to see if those premises make sense. A good example of this is their notions of "information hazards".
Of course: Jim Jones, L. Ron Hubbard, David Koresh. I realize there have always been people who are coocoo for cocoa puffs. But as many as there appear to be now?
The internet made it possible to know global news all the time. I think there have always been a lot of people with very crazy and extremist views, but we only knew about the ones closer to us. Now it's possible to know about crazy people from the other side of the planet, so it looks like there are a lot more of them than before.
Yup. Like previously, westerners could have gone their whole lives with no clue the Hindutva existed [https://en.m.wikipedia.org/wiki/Hindutva] - Hindu Nazis, basically. Which if you know Hinduism at all, is a bit like saying Buddhist Nazis. Say what?
> Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom second coming second coming second coming second coming second coming second coming second coming second coming second coming second coming angels angels angels angels angels angels angels angels angels angels end end times times end times end times end times end times end times end times end times end times end times Key Words: (for search engines) 144,000, Abductees, Agnostic, Alien, Allah, Alternative, Angels, Antichrist, Apocalypse, Armageddon, Ascension, Atheist, Awakening, Away Team, Beyond Human, Blasphemy, Boddhisattva, Book of Revelation, Buddha, Channeling, Children of God, Christ, Christ's Teachings, Consciousness, Contactees, Corruption, Creation, Death, Discarnate, Discarnates, Disciple, Disciples, Disinformation, Dying, Ecumenical, End of the Age, End of the World, Eternal Life, Eunuch, Evolution, Evolutionary, Extraterrestrial, Freedom, Fulfilling Prophecy, Genderless, Glorified Body, God, God's Children, God's Chosen, God's Heaven, God's Laws, God's Son, Guru, Harvest Time, He's Back, Heaven, Heaven's Gate, Heavenly Kingdom, Higher Consciousness, His Church, Human Metamorphosis, Human Spirit, Implant, Incarnation, Interfaith, Jesus, Jesus' Return, Jesus' Teaching, Kingdom of God, Kingdom of Heaven, Krishna Consciousness, Lamb of God, Last Days, Level Above Human, Life After Death, Luciferian, Luciferians, Meditation, Members of the Next Level, Messiah, Metamorphosis, Metaphysical, Millennium, Misinformation, Mothership, Mystic, Next Level, Non Perishable, Non Temporal, Older Member, Our Lords Return, Out of Body Experience, Overcomers, Overcoming, Past Lives, Prophecy, Prophecy Fulfillment, Rapture, Reactive Mind, Recycling the Planet, Reincarnation, Religion, Resurrection, Revelations, Saved, Second Coming, Soul, Space Alien, Spacecraft, Spirit, Spirit Filled, Spirit Guide, Spiritual, Spiritual Awakening, Star People, Super Natural, Telepathy, The Remnant, The Two, Theosophy, Ti and Do, Truth, Two Witnesses, UFO, Virginity, Walk-ins, Yahweh, Yeshua, Yoda, Yoga,
I've always been under the impression that M:tA's rules of How Magic Works are inspired by actual mystical beliefs that people have practiced for centuries. It's probably about as much of a manual for mystical development as the GURPS Cyberpunk rulebook was for cybercrime, but it's pointing at something that already exists and saying "this is a thing we are going to tell an exaggerated story about".
That example isn’t a contradictory worldview though, just “people being people, and therefore failing to be as good as the ideal they claim to strive for.”
Mage is an interesting game though: it's fantasy, but not "swords and dragons" fantasy. It's set in the real world, and the "magic" is just the "mage" shifting probabilities so that unlikely (but possible) things occur.
Such a setting would seem like the perfect backdrop for a cult that claims "we have the power to subtly influence reality and make improbable things (ie. "magic") occur".
Most "rationalists" throughout history have been very deeply religious people. Secular enlightenment-era rationalism is not the only direction you can go with it. It depends very much, as others have said, on what your axioms are.
But, fwiw, that particular role-playing game was very much based on trendy at the time occult beliefs in things like chaos magic, so it's not completely off the wall.
I ran a long-running campaign; it is pretty fun. The game books were obviously written by artists and no mathematician was involved; some of the rules are very broken (may have been fixed in later revisions).
The magic system is very fun and gives players complete freedom to come up with spells on the fly. The tl;dr is there aren't pre-made spells; you have spheres you have learned, and you can combine those spheres of magic however you want. So if someone has Matter and Life, reaching into someone's chest and pulling out their still-beating heart would be a perfectly fine thing for a brand new character to be able to do. (Of course magic has downsides; reality doesn't like being bent, and it will snap back with violent force if coerced too hard!)
The books are laid out horribly; there isn't a single set of tables to refer to, so you have to Post-it-note bookmark everything. Picking up and playing is really simple, though: the number of dots you have in attribute + skill is how many d10 dice you get to roll for a check. 8+ is a success, and you can reroll 10s. 90% of the game is as simple as that, but then there are like 5 pages of rules for grappling, including basically a complete breakdown of wrestling moves and gaining position, but feel free to just ignore those.
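For the curious, here is a quick Monte Carlo sketch of that dice pool as described above (roll attribute + skill d10s, 8+ is a success, 10s reroll for extra chances). This is just my reading of the rule, not an official probability table:

    import random

    def roll_pool(dice):
        # Roll `dice` d10s: 8+ counts as a success; each 10 is rerolled
        # for a chance at further successes.
        successes = 0
        while dice > 0:
            rerolls = 0
            for _ in range(dice):
                roll = random.randint(1, 10)
                if roll >= 8:
                    successes += 1
                if roll == 10:
                    rerolls += 1
            dice = rerolls
        return successes

    trials = 100_000
    results = [roll_pool(5) for _ in range(trials)]   # a typical 5-die pool
    print(sum(r >= 1 for r in results) / trials)      # ~0.83 chance of at least one success
    print(sum(results) / trials)                      # ~1.67 successes on average

So a five-die pool succeeds at something about five times out of six, which matches the "simple to pick up" feel described above.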
It's reportedly alright - the resolution mechanic seems a little fiddly with varying pools of dice for everything. The lore is pretty interesting though and I think a lot of the point of that series of games was reading up on that.
Narcissists tend to believe that they are always right, no matter what the topic is or how knowledgeable they are. This makes them speak with confidence and conviction.
Some people are very drawn to confident people.
If the cult leader has other mental health issues, it can/will seep into their rhetoric. Combine that with unwavering support from loyal followers that will take everything they say as gospel...
If what you say is true, we're very lucky no one like that with a massive following has ever gotten into politics in the United States. It would be an ongoing disaster!
That's pretty much it. The beliefs are just a cover story.
Outside of those, the cult dynamics are cut-paste, and always involve an entitled narcissistic cult leader acquiring as much attention/praise, sex, money, and power as possible from the abuse and exploitation of followers.
Most religion works like this. Most alternative spirituality works like this. Most finance works like this. Most corporate culture works like this. Most politics works like this.
Most science works like this. (It shouldn't, but the number of abused and exploited PhD students and post-docs is very much not zero.)
The only variables are the differing proportions of attention/praise, sex, money, and power available to leaders, and the amount of abuse that can be delivered to those lower down and/or outside the hierarchy.
The hierarchy and the realities of exploitation and abuse are a constant.
If you removed this dynamic from contemporary culture there wouldn't be a lot left.
Fortunately quite a lot of good things happen in spite of it. But a lot more would happen if it wasn't foundational.
Nah I did Ayahuasca and I'm an empathetic person who most would consider normal or at least well-adjusted. If it's drug related it would most definitely be something else.
I’m inclined to believe your upbringing plays a much larger role.
I'm entertaining sending my kiddo to a Waldorf School, because it genuinely seems pretty good.
But looking into the underlying Western Esoteric Spirit Science, 'Anthroposophy' (because Theosophy wouldn't let him get weird enough) by Rudolf Steiner, has been quite a ride. The point being that humans have a pretty endless capacity to go ALL IN on REALLY WEIRD shit, as long as it promises to fix their lives if they do everything they're told. Naturally, if their lives aren't fixed, then they did it wrong or have karmic debt to pay down, so YMMV.
In any case, I'm considering the latent woo-cult atmosphere as a test of the skeptical inoculation that I've tried to raise my child with.
I went to a Waldorf school and I’d recommend being really wary. The woo is sort of background noise, and if you’ve raised your kid well they’ll be fine. But the quality of the academics may not be good at all. For example, when I was ready for calculus my school didn’t have anyone who knew how to teach it so they stuck me and the other bright kid in a classroom with a textbook and told us to figure it out. As a side effect of not being challenged, I didn’t have good study habits going into college, which hurt me a lot.
If you’re talking about grade school, interview whoever is gonna be your kids teacher for the next X years and make sure they seem sane. If you’re talking about high school, give a really critical look at the class schedule.
Waldorf schools can vary a lot in this regard so you may not encounter the same problems I did, but it’s good to be cautious.
Don't do it. It's a place that enables child abuse with its culture. These people are serious wackos and you should not give your kid into their hands. A lot of people come out of that Steiner Shitbox traumatized for decades if not for life. They should not be allowed to run schools to begin with. Checking a lot of boxes from antivax to whatever the fuck their lore has to offer starting with a z.
From false premises, you can logically and rationally reach really wrong conclusions. If you have too much pride in your rationality, you may not be willing to say "I seem to have reached a really insane conclusion, maybe my premises are wrong". That is, the more you pride yourself on your rationalism, the more prone you may be to accepting a bogus conclusion if it is bogus because the premises are wrong.
Then again, most people tend to form really bogus beliefs without bothering to establish any premises. They may not even be internally consistent or align meaningfully with reality. I imagine having premises and thinking it through has a better track record of reaching conclusions consistent with reality.
Makes me think of that saying that great artists steal, so repurposed for cult founders: "Good cult founders copy, great cult founders steal"
I do not think this cult dogma is any more out there than other cult dogma I have heard, but the above quote makes me think it is in some ways easier to found cults in the modern day, since you can steal complex world-building from numerous sources rather than building it yourself and keeping everything straight.
I slowly deconverted from being raised evangelical / fundamentalist into being an atheist in my late 40s. I still "pray" at times just to (mentally) shout my frustration at the sorry state of the world at SOMETHING (even nothing) rather than constantly yelling my frustration at my family.
I may have actually been less anxious about the state of the world back then, and may have remained so, if I'd just continued to ignore all those little contradictions that I just couldn't ignore anymore... But I feel SO MUCH less personal guilt about being "human".
I came to comments first. Thank you for sharing this quote. Gave me a solid chuckle.
I think people are going nuts because we've drifted from the dock of a stable civilization. Institutions are a mess. Economy is a mess. Combine all of that together with the advent of social media making the creation of echo chambers (and the inevitable narcissism of "leaders" in those echo chambers) effortless and ~15 years later, we have this.
> I think people are going nuts because we've drifted from the dock of a stable civilization.
When was stable period, exactly? I'm 40; the only semi-stable bit I can think of in my lifetime was a few years in the 90s (referred to, sometimes unironically, as "the end of history" at the time, before history decided to come out of retirement).
Everything's always been unstable, people sometimes just take a slightly rose-tinted view of the past.
People have been going nuts all throughout recorded history, that's really nothing new.
The only scary thing is that they have ever more power to change the world and influence others without being forced to grapple with that responsibility...
Mage: The Ascension is basically a delusions of grandeur simulator, so I can see how an already unstable personality might get attached to it and become more unstable.
The magic system is amazing though, best I've played in any game. Easy to use, role play heavy, and it lets players go wild with ideas, but still reins in their crazier impulses.
Mage: The Awakening is a minor rules revision to the magic system, but the lore is super boring in comparison IMHO. It is too wishy washy.
Ascension has tons of cool source material, and White Wolf ended up tying all their properties together into one grand finale story line. That said it is all very 90s cringe in retrospect, but if you are willing to embrace the 90s retro feel, it is still fun.
Awakening's lore never drew me in; the grand battle just isn't there. So many shades of grey it is damn near technicolor.
I don't know, I'd understand something like Wraith (which I did see people develop issues with; the Shadow mechanic is such a terrible thing), but Mage is so, like, straightforward?
Use your mind to control reality, reality fights back with Paradox; it's cool for a teenager, but read a bit more fantasy and you'll definitely find cooler stuff. But I guess for you to join a cult your mind must stay a teen mind forever.
I didn't originally write this, but can't find the original place I read it anymore. I think it makes a lot of sense to repost it here:
All of the World Of Darkness and Chronicles Of Darkness games are basically about coming of age/puberty. Like X-Men but for Goth-Nerds instead of Geek-Nerds.
In Vampire, your body is going through weird changes and you are starting to develop, physically and/or mentally, while realising that the world is run by a bunch of old, evil fools who still expect you to toe the line and stay in your place, but you are starting to wonder if the world wouldn't be better if your generation overthrew them and took over running the world, doing it the right way. And there are all these bad elements trying to convince you that you should do just that, but for the sake of mindless violence and raucous partying.
Teenager - the rpg.
In Werewolf, your body is going through weird changes and you are starting to develop, physically and mentally, while realising that you are not a part of the "normal" crowd that the rest of Humanity belongs to. You are different and they just can't handle that whenever it gets revealed. Luckily, there are small communities of people like you out there who take you in and show you how to use the power of your "true" self. Of course, even among this community, there are different types of other.
LGBT Teenager - the RPG
In Mage, you have begun to take an interest in the real world, and you think you know what the world is really like. The people all around you are just sleep-walking through life, because they don't really get it. This understanding sets you against the people who run the world: the governments and the corporations, trying to stop these sleeper from waking up to the truth and rejecting their comforting lies. You have found some other people who saw through them, and you think they've got a lot of things wrong, but at least they're awake to the lies!
Rebellious Teenager - the RPG
This tracks, but I'd say Werewolf goes beyond LGBT folks; the violence there also fits boys' aggressive play, and the saving-the-world theme resonated a lot with the basic "I want to be important/a hero" thing. It's my favorite of all the World of Darkness books; I regret not getting the Kickstarter edition :(
I had friends who were into Vampire growing up. I hadn’t heard of Werewolf until after the aforementioned book came out and people started going nuts for it. I mentioned to my wife at the time that there was this game called “Vampire” and told her about it and she just laughed, pointed to the book, and said “this is so much better”. :shrug:
Rewind back and there were the Star Wars kids. Fast forward and there are the Harry Potter kids/adults. Each generation has their own “thing”. During that time, it was Quake MSDOS and Vampire. Oh and we started Senior Assassinations. 90s super soakers were the real deal.
How many adults actually abandon juvenilia as they age? Not the majority in my observation, and it's not always a bad thing when it's only applied to subjects like pop culture. Applied juvenilia in response to serious subjects is a more serious issue.
There has to be a cult of people who believe they're vampires, respecting the Masquerade and serving some antediluvian somewhere; Vampire was much more mainstream than Mage.
Humans are compelled to find agency and narrative in chaos. Evolution favored those who assumed the rustle was a predator, not the wind. In a post-Enlightenment world where traditional religion often fails (or is rejected), this drive doesn't vanish. We don't stop seeking meaning. We seek new frameworks. Our survival depended on group cohesion. Ostracism meant death. Cults exploit this primal terror. Burning Man's temporary city intensifies this: extreme environment, sensory overload, forced vulnerability. A camp like Black Lotus offers immediate, intense belonging. A tribe with shared secrets (the "Ascension" framework), rituals, and an "us vs. the sleepers" mentality. This isn't just social; it's neurochemical. Oxytocin (bonding) and cortisol (stress from the environment) flood the system, creating powerful, addictive bonds that override critical thought.
Human brains are lazy Bayesian engines. In uncertainty, we grasp for simple, all-explaining models (heuristics). Mage provides this: a complete ontology where magic equals psychology/quantum woo, reality is malleable, and the camp leaders are the enlightened "tradition." This offers relief from the exhausting ambiguity of real life. Dill didn't invent this; he plugged into the ancient human craving for a map that makes the world feel navigable and controllable. The "rationalist" veneer is pure camouflage. It feels like critical thinking but is actually pseudo-intellectual cargo culting. This isn't Burning Man's fault. It's the latest step of a 2,500-year-old playbook. The Gnostics and the Hermeticists provided ancient frameworks where secret knowledge ("gnosis") granted power over reality, accessible only through a guru. Mage directly borrows from this lineage (The Technocracy, The Traditions). Dill positioned himself as the modern "Ascended Master" dispensing this gnosis.
The 20th century cults Synanon, EST, Moonies, NXIVM all followed similar patterns, starting with isolation. Burning Man's temporary city is the perfect isolation chamber. It's physically remote, temporally bounded (a "liminal space"), fostering dependence on the camp. Initial overwhelming acceptance and belonging (the "Burning Man hug"), then slowly increasing demands (time, money, emotional disclosure, sexual access), framed as "spiritual growth" or "breaking through barriers" (directly lifted from Mage's "Paradigm Shifts" and "Quintessence"). Control language ("sleeper," "muggle," "Awakened"), redefining reality ("that rape wasn't really rape, it was a necessary 'Paradox' to break your illusions"), demanding confession of "sins" (past traumas, doubts), creating dependency on the leader for "truth."
Burning Man attracts people seeking transformation, often carrying unresolved pain. Cults prey on this vulnerability. Dill allegedly targeted individuals with trauma histories. Trauma creates cognitive dissonance and a desperate need for resolution. The cult's narrative (Mage's framework + Dill's interpretation) offers a simple explanation for their pain ("you're unAwakened," "you have Paradox blocking you") and a path out ("submit to me, undergo these rituals"). This isn't therapy; it's trauma bonding weaponized. The alleged rape wasn't an aberration; it was likely part of the control mechanism. It's a "shock" to induce dependency and reframe the victim's reality ("this pain is necessary enlightenment"). People are adrift in ontological insecurity (fear about the fundamental nature of reality and self). Mage offers a new grand narrative with clear heroes (Awakened), villains (sleepers, Technocracy), and a path (Ascension).
I've met a fair share of people in the burner community; the vast majority I met are lovely folks who really enjoy the process of bringing some weird big idea into reality, working hard on the builds, learning stuff, and having a good time with others for months to showcase their creations at some event.
On the other hand, there's a whole other side: a few nutjobs who really behave like cult leaders. They believe their own bullshit and over time manage to find a lot of "followers" in this community; since one of the foundational aspects is radical acceptance, it becomes very easy to be nutty and not be questioned (unless you do something egregious).
> The Sequences [posts on LessWrong, apparently] make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
Ooh, a capital S and everything. I mean, I feel like it is fairly obvious, really. 'Rationalism' is a new religion, and every new religion spawns a bunch of weird, generally short-lived, cults. You might as well ask, in 100AD, "why are there so many weird Christian cults all of a sudden"; that's just what happens whenever any successful new religion shows up.
Rationalism might be particularly vulnerable to this because it lacks a strong central authority (much like early Christianity), but even with new religions which _did_ have a strong central authority from the start, like Mormonism or Scientology, you still saw this happening to some extent.
The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissist would-be cult leaders.
Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live with them in a polycule, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.
Yeah, I think this is exactly it. If something sounds extremely stupid, or if everyone around you says it's extremely stupid, it probably is. If you can't justify it, it's probably because you have failed to find the reason it's stupid, not because it's actually genius.
And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.
I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Specifically, rationalism spends a lot of time talking about priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this; sometimes it's easy to do and sometimes it's difficult.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They often think of this as one update, but it is actually two. The first update, relaxing priors, isn't associated with evidence. It's part of the community norms. There is an implicit belief that by relaxing your priors you're more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Trying to correct for bias by relaxing priors should itself be an update based on evidence, not something you do just because everyone else is doing it.
> Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
I'm not following this example at all. If you've zero'd out the scale by tilting, why would adding flour until it reads 1g lead to 2g of flour?
I played around with various metaphors but most of them felt varying degrees worse. The idea of relaxing priors and then doing an evidence-based update while thinking it's genuinely a single update is a difficult thing to capture metaphorically.
Happy to hear better suggestions.
EDIT: Maybe something more like this:
Picture your belief as a shotgun aimed at the truth:
Aim direction = your best current guess.
Spread = your precision.
Evidence = the pull that says "turn this much" and "widen/narrow this much."
The correct move is one clean Bayesian shot.
Hold your aim where it is. Evidence arrives. Rotate and resize the spread in one simultaneous posterior jump determined by the actual likelihood ratio in front of you.
The stupid move? The move that Rationalists love to disguise as humility? It's to first relax your spread "to be open-minded," and then apply the update. You've just secretly told the math, "Give this evidence more weight than it deserves." And then you wonder why you keep overshooting, drifting into confident nonsense.
If you think your prior is overconfident, that is itself evidence. Evidence about your meta-level epistemic reliability. Feed it into the update properly. Do not amputate it ahead of time because "priors are bias." Bias is bad, yes, but closing your eyes and spinning around with shotgun in hand (i.e., double updating) is not an effective method of removing bias.
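To make the double update concrete, here's a minimal numerical sketch (assuming a conjugate Gaussian prior and likelihood; the numbers are made up). Widening the prior "to be open-minded" before updating quietly hands the evidence more weight than the likelihood actually justifies:

    # Minimal sketch of the "double update" (assumed Gaussian prior/likelihood,
    # made-up numbers). Relaxing the prior before updating is a second, hidden
    # update that gives the evidence more weight than it deserves.

    def normal_update(prior_mean, prior_var, obs, obs_var):
        # Conjugate normal-normal posterior for a single observation.
        w = prior_var / (prior_var + obs_var)   # weight handed to the evidence
        return prior_mean + w * (obs - prior_mean), (1 - w) * prior_var

    prior_mean, prior_var = 0.0, 1.0   # genuine prior
    obs, obs_var = 4.0, 1.0            # one noisy observation

    # Clean single update: prior and evidence split the weight 50/50.
    print(normal_update(prior_mean, prior_var, obs, obs_var))    # (2.0, 0.5)

    # "Double update": relax the prior first (no evidence for that step!),
    # then update. The evidence now gets 90% of the weight.
    print(normal_update(prior_mean, 9.0, obs, obs_var))          # (3.6, 0.9)

Same evidence, same likelihood, but the pre-relaxed version lands nearly twice as far from the prior, and from the inside it still feels like a single principled update.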
> The ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
This is actually a known pattern in tech, going back to Engelbart and SRI. While not 1-to-1, you could say that the folks who left SRI for Xerox PARC did so because Engelbart and his crew became obsessed with EST: https://en.wikipedia.org/wiki/Erhard_Seminars_Training
EST-type training still exists today. You don't eat until the end of the whole weekend, or maybe you get rice and little else. Everyone is told to insult you day one until you cry. Then day two, still having not eaten, they build you up and tell you how great you are and have a group hug. Then they ask you how great you feel. Isn't this a good feeling? Don't you want your loved ones to have this feeling? Still having not eaten, you're then encouraged to pay for your family and friends to do the training, without their knowledge or consent.
A friend of mine did this training after his brother paid for his mom to do it, and she paid for him to do it. Let's just say that, though they felt it changed their lives at the time, their lives in no way shape or form changed. Two are in quite a bad place, in fact...
Anyway, point is, the people who invented everything we are using right now were also susceptible to cult-like groups with silly ideas and shady intentions.
Several of my family members got sucked into that back in the early 80s and quite a few folks I knew socially as well.
I was quite skeptical, especially because of the cult-like fanaticism of its adherents. They would go on for as long as you'd let them (you often needed to just walk away to get them to stop), trying to get you to join.
The goal appears to be to obtain as much legal tender as can be pried from those who are willing to part with it. Hard sell, abusive and deceptive tactics are encouraged -- because it's so important for those who haven't "gotten it" to do so, justifying just about anything. But if you don't pay -- you get bupkis.
What is it about San Francisco that makes it the global center for this stuff?
Reading this, I was reminded of the '60s hippie communes, which generally centered around SF, and the problems they reported. So similar, especially around that turning-inward group emotional dynamic: such groups tend to become dysfunctional (as TFA says) by blowing up internal emotional group politics into huge problems that need the entire group to be involved in trying to heal (as opposed to, say, accepting that a certain amount of interpersonal conflict is inevitable in human groups and ignoring it). It's fascinating that the same kind of groups seem to encounter the same kind of problems despite being ~60 years apart and armed with a lot more tech and knowledge.
A problem with this whole mindset is that humans, all of us, are only quasi-rational beings. We all use System 1 ("The Elephant") and System 2 ("The Rider") thinking instinctively. So if you end up in deep denial about your own capacity for irrationality, I guess it stands to reason you could end up getting led down some deep dark rabbit holes.
Some of the most irrational people I've met were those who claimed to make all their decisions rationally, based on facts and logic. They're just very good at rationalizing, and since they've pre-defined their beliefs as rational, they never have to examine where else they might come from. The rest of us at least have a chance of thinking, "Wait, am I fooling myself here?"
Many specific studies on the matter don't replicate. I think the book preceded the replication crisis, so this is to be expected, but I don't think that negates the core idea that our brain does some things on autopilot whereas other things take conscious thought, which is slower. This is a useful framework for thinking about cognition, though any specific claims need evidence, obviously.
TBH I've learned that even the best pop sci books making (IMHO) correct points tend to have poor citations - to studies that don't replicate or don't quite say what they're being cited to say - so when I see this, it's just not very much evidence one way or the other. The bar is super low.
The point remains. People are not 100 percent rational beings, never have been, never will be, and it's dangerous to assume that this could ever be the case. Just like any number of failed utopian political movements in history that assumed people could ultimately be molded and perfected.
Those of us who accept this limitation can often fail to grasp how much others perceive it as a profound attack on the self. To me, it is a basic humility - that no matter how much I learn, I cannot really transcend the time and place of my birth, the biology of my body, the quirks of my culture. Rationality, though, promises that transcendence, at least to some people. And look at all the trouble such delusion has caused, for example "presentism". Science fiction often introduces a hidden coordinate system, one of language and predicate, upon which reason can operate, but the system itself did not come from reason; it came from a storyteller's aesthetic.
Yup. It's fundamentally irrational for anybody to believe themselves sufficiently rational to pull off the feats of supposed rational deduction that the so called Rationalists regularly perform. Predicting the future of humanity decades or even centuries away is absurd, but the Rationalists irrationally believe they can.
So to the point of the article, rationalist cults are common because Rationalists are irrational people (like all people) who (unlike most people) are blinded to their own irrationality by their overinflated egos. They can "reason" themselves into all manner of convoluted pretzels and lack the humility to admit they went off the deep end.
Finally, something that properly articulates my unease when encountering so-called "rationalists" (especially the ones that talk about being "agentic", etc.). For some reason, even though I like logical reasoning, they always rubbed me the wrong way - probably just a clash between their behavior and my personal values (mainly humility).
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
Capital-R Rationalism also encourages you to think you can outperform it, by being smart and reasoning from first principles. That was the idea behind MetaMed, founded by LessWronger Michael Vassar - that being trained in rationalism made you better at medical research and consulting than medical school or clinical experience. Fortunately they went out of business before racking up a body count.
One lesson I've learned and seen a lot in my life is that understanding that something is wrong, or what's wrong about it, and being able to come up with a better solution are distinct skills, and the latter is often much harder. Those best able to describe the problem often don't overlap much with those who can figure out how to solve it, even though they think they can.
It is an unfortunate reality of our existence that sometimes Chesterton actually did build that fence for a good reason, a good reason that's still here.
(One of my favorite TED talks was about a failed experiment in introducing traditional Western agriculture to a people in Zambia. It turns out when you concentrate too much food in one place, the hippos come and eat it all and people can't actually out-fight hippos in large numbers. In hindsight, the people running the program should have asked how likely it was that folks in a region that had exposure to other people's agriculture for thousands of years, hadn't ever, you know... tried it. https://www.ted.com/talks/ernesto_sirolli_want_to_help_someo...)
My understanding of the emu war is that they weren't dangerous so much as quick to multiply. The army couldn't whack the moles fast enough. Hippos don't strike me as animals that can go underground when threatened.
Granted, from what little I've read as an outsider, the "rational" part seems to be mostly the writing style - this sort of dispassionate, eloquently worded prose that makes weird ideas seem more "rational" and logical than they really are.
> “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
I see this arrogant attitude all the time on HN: reflexive distrust of the "mainstream media" and "scientific experts". Critical thinking is a very healthy idea, but it's dangerous when people use it as a license to categorically reject sources. It's even worse when extremely powerful people do this; they can reduce an enormous sub-network of thought into a single node for many, many people.
So, my answer for "Why Are There So Many Rationalist Cults?" is the same reason all cults exist: humans like to feel like they're in on the secret. We like to be in secret clubs.
Sure, but that doesn't say anything about why one particular social scene would spawn a bunch of cults while others do not, which is the question that the article is trying to answer.
Maybe I was too vague. My argument is that cults need a secret. The secret of the rationalist community is "nobody is rational except for us". Then the rituals would be endless probability/math/logic arguments about sci-fi futures.
I think the promise of secret knowledge is important, but I think cults also need a second thing: "That thing you fear? You're right to fear it, and only we can protect you from it. If you don't do what we say, it's going to be so much worse than it is now, but if you do, everything will be good and perfect."
In the rationalist cults, you typically have the fear of death and non-existence, coupled with the promise of AGI, the Singularity and immortality, weighed against the AI Apocalypse.
I guess I'd say protection promises like this are a form of "secret knowledge". At the same time, so many cults run this protection racket that you might be on to something.
The terminology here is worth noting. Is a Rationalist Cult a cult that practices Rationalism according to third parties, or is it a cult that says they are Rationalist?
Clearly all of these groups that believe in demons or realities dictated by tabletop games are not what third parties would call Rationalist. They might call themselves that.
There are some pretty simple tests that can out these groups as not rational. None of these people have ever seen a demon, so world models including demons have never predicted any of their sense data. I doubt these people would be willing to make any bets about when or if a demon will show up. Many of us would be glad to make a market on predictions made by tabletop games about physical phenomena.
Yeah, I would say the groups in question are notionally, aspirationally rational and I would hate for the takeaway to be disengagement from principles of critical thinking and skeptical thinking writ large.
Which, to me, raises the fascinating question of what does a "good" version look like, of groups and group dynamics centered around a shared interest in best practices associated with critical thinking?
At a first impression, I think maybe these virtues (which are real!) disappear into the background of other, more applied specializations, whether professions, hobbies, backyard family barbecues.
It would seem like the quintessential Rationalist institution to congregate around is the prediction market. Status in the community has to be derived from a history of making good bets (PnL as a %, not in absolute terms). And the sense of community would come from (measurably) more rational people teaching (measurably) less rational people how to be more rational.
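As a thought experiment on what "measurably more rational" could look like, here's a minimal sketch using a proper scoring rule (the Brier score) rather than raw PnL; the names and betting histories are invented for illustration, not anything an existing market actually tracks:

    # Hypothetical ranking of forecasters by calibration rather than winnings.
    # The Brier score is the mean squared error between the stated probability
    # and the outcome (0 or 1); lower is better, and always guessing 50% scores 0.25.

    def brier_score(forecasts):
        return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

    history = {  # (probability given, what actually happened)
        "careful_bettor": [(0.8, 1), (0.3, 0), (0.6, 1), (0.1, 0)],
        "lucky_whale":    [(0.99, 1), (0.99, 0), (0.90, 1), (0.95, 1)],
    }

    for name in sorted(history, key=lambda n: brier_score(history[n])):
        print(f"{name}: {brier_score(history[name]):.3f}")
    # careful_bettor: 0.075
    # lucky_whale:    0.248

Raw winnings reward bet size and luck; a scoring rule like this rewards a track record of honest probabilities, which seems closer to what "measurably more rational" would have to mean.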
The founder of LessWrong / The Rationalist movement would absolutely agree with you here, and has written numerous fanfics about a hypothetical alien society ("Dath Ilan") where those are fairly central.
The article is talking about cults that arose out of the rationalist social milieu, which is a separate question from whether the cult's beliefs qualify as "rationalist" in some sense (a question that usually has no objective answer anyway).
>so world models including demons have never predicted any of their sense data.
There's a reason they call themselves "rationalists" instead of empiricists or positivists. They perfectly inverted Hume ("reason is, and ought only to be the slave of the passions")
These kinds of harebrained views aren't an accident but a product of rationalism. The idea that intellect is quasi-infinite and that the world can be mirrored in the mind doesn't run contrary to rationalism; it is the most extreme form of rationalism taken to its conclusion, and of course deeply religious, hence the constant fantasies about AI divinities and singularities.
The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally
I actually don't mind Yudkowsky as an individual - I think he is almost always wrong and undeservedly arrogant, but mostly sincere. Yet treating him as an AI researcher and serious philosopher (as opposed to a sci-fi essayist and self-help writer) is the kind of slippery foundation that less scrupulous people can build cults from. (See also Maharishi Mahesh Yogi and related trends - often it is just a bit of spiritual goofiness, as with David Lynch; sometimes you get a Charles Manson.)
EY and MIRI as a whole have largely failed to produce anything which even reaches the point of being peer reviewable. He does not have any formal education and is uninterested in learning how to navigate academia.
I don't think Yudkowsky is at all like L. Ron Hubbard. Hubbard was insane and pure evil. Yudkowsky seems like a decent and basically reasonable guy; he's just kind of a blowhard and he's wrong about the science.
Here's one: Yudkowsky has been confidently asserting (for years) that AI will drive humanity extinct because it will learn how to make nanomachines using "strong" covalent bonds rather than the "weak" van der Waals forces used by biological systems like proteins. I'm certain that knowledgeable biologists/physicists have tried to explain to him why this belief is basically nonsense, but he just keeps repeating it. Heck, there's even a LessWrong post that lays it out quite well [1]. This points to a general disregard for detailed knowledge of existing things and a preference for "first principles" beliefs, no matter how wrong they are.
Dear god. The linked article is a good takedown of this "idea," but I would like to pile on: biological systems are in fact extremely good at covalent chemistry, usually via extraordinarily powerful nanomachines called "enzymes". No, they are (usually) not building totally rigid condensed matter structures, but .. why would they? Why would that be better?
I'm reminded of a silly social science article I read, quite a long time ago. It suggested that physicists only like to study condensed matter crystals because physics is a male-dominated field, and crystals are hard rocks, and, um ... men like to think about their rock-hard penises, I guess. Now, this hypothesis obviously does not survive cursory inspection - if we're gendering natural phenomena studied by physicists, are waves male? Are fluid dynamics male?
However, Mr. Yudkowsky's weird hangups here around rigidity and hardness have me adjusting my priors.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
I've been there myself.
> And without the steadying influence of some kind of external goal you either achieve or don’t achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you’re going to act on them.
I think this and the two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory.
> The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
This helps me understand what people mean by "meaning": a sense that their life and actions actually matter. I've always struggled to understand this issue, but this helps make it concrete - the kind of thing people must be looking for.
> One of my interviewees speculated that rationalists aren’t actually any more dysfunctional than anywhere else; we’re just more interestingly dysfunctional.
"We're"? The author is a rationalist too? That would definitely explain why this article is so damned long. Why are rationalists not able to write less? It sounds like a joke but this is seriously a thing. [EDIT: Various people further down in the comments are saying it's amphetamines and yes, I should have known that from my own experience. That's exactly what it is.]
> Consider talking about “ethical injunctions:” things you shouldn’t do even if you have a really good argument that you should do them. (Like murder.)
This kind of defeats the purpose, doesn't it? Also, this is nowhere justified in the article, just added on as the very last sentence.
>I think this and the entire previous two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory
They did! One of the great ironies inside the community is that they are, and openly admit to being, empiricists. They reject most of the French/European rationalist canon.
>Why are rationalists not able to write less?
The answer is a lot more boring. They like to write and they like to think. They also think by writing. It is written as much for themselves as for anyone else, probably more.
> A purity spiral is a theory which argues for the existence of a form of groupthink in which it becomes more beneficial to hold certain views than to not hold them, and more extreme views are rewarded while expressing doubt, nuance, or moderation is punished (a process sometimes called "moral outbidding").[1] It is argued that this feedback loop leads to members competing to demonstrate the zealotry or purity of their views.[2][3]
Certainly something they're aware of - the same concept was discussed as early as 2007 on Less Wrong under the name "evaporative cooling of group beliefs".
Eliezer Yudkowsky, shows little interest in running one. He has consistently been distant from and uninvolved in rationalist community-building efforts, from Benton House (the first rationalist group house) to today’s Lightcone Infrastructure (which hosts LessWrong, an online forum, and Lighthaven, a conference center). He surrounds himself with people who disagree with him, discourages social isolation.
Ummm, EY literally has a semi-permanent office in Lighthaven (at least until recently) and routinely blocks people on Twitter as a matter of course.
Blocking people on Twitter doesn't necessarily imply intolerance of people who disagree with you. People often block for different reasons than disagreement.
On a recent Mindscape podcast Sean Carroll mentioned that rationalists are rational about everything except accusations that they're not being rational.
This just sounds like any other community based around a niche interest.
From kink to rock hounding, there are always people who base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community.
> base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community
Who would ever maintain power when removed from their community? You mean to say they base their identity on the awareness of the power they possess within a certain group?
It's really worth reading up on the techniques from Large Group Awareness Training so that you can recognize them when they pop up.
Once you see them listed (social pressure, sleep deprivation, control of drinking/bathroom, control of language/terminology, long exhausting activities, financial buy in, etc) and see where they've been used in cults and other cult adjacent things it's a little bit of a warning signal when you run across them IRL.
Related, the BITE model of authoritarian control is also a useful framework for identifying malignant group behavior. It's amazing how consistent these are across groups and cultures, from Mao's inner circle to NXIVM and on.
What is the base rate here? Hard to know the scope of the problem without knowing how many non-rationalists (is that even a coherent group of people?) end up forming weird cults, as a comparison. My impression is that crazy beliefs are common amongst everybody.
A much simpler theory is that rationalists are mostly normal people, and normal people tend to form cults.
I was wondering about this too. You could also say it's a Sturgeon's law question.
They do note at the beginning of the article that many, if not most such groups have reasonably normal dynamics, for what it's worth. But I think there's a legitimate question of whether we ought to expect groups centered on rational thinking to be better able to escape group dynamics we associate with irrationality.
The only way you can hope to get a gathering of nothing but paragons of critical thinking and skepticism is if the gathering has an entrance exam in critical thinking and skepticism (and a pretty tough one, if they are to be paragons). Or else, it's invitation-only.
> If someone is in a group that is heading towards dysfunctionality, try to maintain your relationship with them; don’t attack them or make them defend the group. Let them have normal conversations with you.
This is such an important skill we should all have. I learned this best from watching the documentary Behind the Curve, about flat earthers, and have applied it to my best friend diving into the Tartarian conspiracy theory.
I remember going to college and some graduate student, himself a philosophy major, telling me that nobody is as big a jerk as philosophy majors.
I don't know if it is really true, but it certainly felt true that folks looking for deeper answers about a better way to think about things end up finding what they believe is the "right" way and that tends to lead to branding other options as "wrong".
A search for certainty always seems to be defined or guided by people dealing with their own issues and experiences that they can't explain. It gets tribal and very personal and those kind of things become dark rabbit holes.
----
>Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
Reminds me of some members of our government and conspiracy theorists who "research" and encourage people to figure it out themselves ...
I think they were making fun of the "Moonies" so she was probably able to rationalize it. Pretty sure Isaac Hayes quit South Park over their making fun of scientologists.
I read recently that he suffered a serious medical event around that time and it was actually cult members speaking on his behalf that withdrew him from the show.
>I read recently that he suffered a serious medical event around that time and it was actually cult members speaking on his behalf that withdrew him from the show.
I saw him perform after he left the show and several months before he passed. He looked pretty unhealthy, and I'm glad I had the chance to see him before that happened. He obviously was having some medical issues but didn't discuss them during his performance.
When I was looking for a group in my area to meditate with, it was tough finding one that didn't appear to be a cult. And yet I think Buddhist meditation is the best tool for personal growth humanity has ever devised. Maybe the proliferation of cults is a sign that Yudkowsky was on to something.
None of them are practicing Buddhist meditation though, same for the "personal growth" oriented meditation styles.
Buddhist meditation exists only in the context of the Four Noble Truths and the rest of the Buddha's Dhamma. Throwing them away means it stops being Buddhist.
I disagree, but we'd be arguing semantics. In any case, the point still stands: you can just as easily argue that these rationalist offshoots aren't really Rationalist.
I'm not familiar enough with their definitions to argue about them, but meditation techniques predate Buddhism. In fact, the Buddha himself learned them from two teachers before developing his own path. Also, the style of meditation taught nowadays (accepting, non-reactive awareness) is not how it's described in the Pali Canon.
This isn’t just a "must come from the Champagne region of France, otherwise it’s sparkling wine" bickering, but actual widespread misconceptions of what counts as Buddhism. Many ideas floating in Western discourse are basically German Romanticism wrapped in Orientalist packaging, not matching neither Theravada nor Mahayana teachings (for example, see the Fake Buddha Quotes project).
So the semantics are extremely important when it comes to spiritual matters. Flip one or two words and the whole metaphysical model goes in a completely different direction. Even translations add distortions, so there’s no room to be careless.
> And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.
It's mostly just people who aren't very experienced talking about and dealing honestly with their emotions, no?
I mean, suppose someone is busy achieving and, at the same time, proficient in balancing work with emotional life, dealing head-on with interpersonal conflicts, facing change, feeling and acknowledging hurt, knowing their emotional hangups, perhaps seeing a therapist, perhaps even occasionally putting personal needs ahead of career... :)
Tell that person they can get a marginal (or even substantial) improvement from some rationalist cult practice. Their first question is going to be, "What's the catch?" Because at the very least they'll suspect that adjusting their work/life balance will bring a sizeable amount of stress and consequent decrease in their emotional well-being. And if the pitch is that this rationalist practice works equally well at improving emotional well-being, that smells to them. They already know they didn't logic themselves into their current set of emotional issues, and they are highly unlikely to logic themselves out of them. So there's not much value here to offset the creepy vibes of the pitch. (And again-- being in touch with your emotions means quicker and deeper awareness of creepy vibes!)
Now, take a person whose unexplored emotional well-being tacitly depends on achievement. Even a marginal improvement in achievement could bring perceptible positive changes in their holistic selves! And you can step through a well-specified, logical process to achieve change? Sign HN up!
I think everyone should be familiar with hermeticism, because its various mystery cults have been with us since Hermes Trismegistus laid down its principles in Ancient Egypt on the Emerald Tablets. It was where early science-like practices such as alchemy originated, but that wheat got separated from the chaff during the Renaissance, and the more coercive control aspects remained. That part - how to get people to follow you and fight for you and maintain a leadership hierarchy - is extremely old technology.
They essentially use this glitch in human psychology that gets exploited over and over again. The glitch is a more generalized version of the advance-fee scam. You tell people that if we just believe something can be true, it can be true. Then we discriminate against people who don't believe in that thing. We then say only the leader(s) can make that thing true, but first you must give them all your power and support so they can fight the people who don't believe in that thing.
Unfortunately, reality is much messier than the cult leaders would have you believe. Leaders often don't have their followers' best interests at heart (especially those who follow blindly), or even the ability to make true the thing everyone wants to be true; instead they use it as a white rabbit that everyone in the cult has to chase after forever.
There's so much in these group dynamics that repeats group dynamics of communist extremists of the 70s. A group that has found a 'better' way of life, all you have to do is believe in the group's beliefs.
Compare this part from OP:
>Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.”
This reeks of Marxist-Leninist self-criticism, where everybody tried to one-up each other in how ideologically pure they were. The most extreme outgrowth of self-criticism was when the Japanese United Red Army beat its own members to death as part of self-criticism sessions.
It is so strange that this article would hijack the term "rationalist" to mean this extraordinarily specific set of people "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally".
A counterexample (with many, many more people) is the Indian Rationalist Association (https://en.wikipedia.org/wiki/Indian_Rationalist_Association), which exists to "promote scientific skepticism and critique supernatural claims". This isn't a cult of any kind, even if its members broadly agree with the set above about what it means to be rational.
I think rationalist cults work exactly the same as religious cults. They promise to have all the answers, to attract the vulnerable. The answers are convoluted and inscrutable, so a leader/prophet interprets them. And doom is nigh, providing motivation and fear to hold things together.
It's the same wolf in another sheep's clothing.
And people who wouldn't join a religious cult -- e.g. because religious cults are too easy to recognize since we're all familiar with them, or because religions hate anything unusual about gender -- can join a rationalist cult instead.
IIUC the name in its current sense was sort of an accident. Yudkowsky originally used the term to mean "someone who succeeds at thinking and acting rationally" (so "correctism" or "winsargumentism" would have been about equally good), and then talked about the idea of "aspiring rationalists" as a community narrowly focused on developing a sort of engineering discipline that would study the scientific principles of how to be right in full generality and put them into practice. Then the community grew and mutated into a broader social milieu that was only sort of about that, and people needed a name for it, and "rationalists" was already there, so that became the name through common usage. It definitely has certain awkwardnesses.
It's not particularly unusual, though. See the various kinds of 'Realist' groups, for example, which have a pretty wild range of outlooks. (both Realist and Rationalist also have the neat built-in shield of being able to say "look, I don't particularly like the conclusions I'm coming to, they just are what they are", so it's a convenient framing for unpalatable beliefs)
To be honest I don't understand that objection. If you strip it of all its culty sociological effects, one of the original ideas of rationalism was to try to use logical reasoning and statistical techniques to explicitly avoid the pitfalls of known cognitive biases. Given that foundational tenet, "rationalism" seems like an extremely appropriate moniker.
I fully accept that the rationalist community may have morphed into something far beyond that original tenet, but I think rationalism just describes the approach, not that it's the "one true philosophy".
That point seems fair enough to me, as I'm not familiar with the specifics and history of the related concept in philosophy. But that seems different than the objection that calling yourself "rationalist" somehow implies you think that you have the "1 true answer" to the world's problems.
I'm going to start a group called "Mentally Healthy People". We use data, logical thinking, and informal peer review. If you disagree with us, our first question will be "what's wrong with mental health?"
But, to be frank, "Mentally Healthy People" fully acknowledge and accept their emotions, and indeed understand that emotions are the fundamental way that natural selection implements motivation.
Calling yourself "rationalist" doesn't inherently mean that you think you're better than everyone else, or somehow infallible. To me it just means your specific approach to problem solving.
So... Psychiatry? Do you think psychiatrists are particularly prone to starting cults? Do you think learning about psychiatry makes you at risk for cult-like behavior?
No. I have no beef with psychology or psychiatry. They're doing good work as far as I can tell. I am poking fun at people who take "rationality" and turn it into a brand name.
I'm feeling a little frustrated by the derail. My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality. The small group might well go on to claim that people outside the group lack rationality. That would be absurd. The mental health profession do not claim to be immune from mental illness themselves, they do not claim that people outside their circle are mentally unhealthy, and they do not claim that their particular treatment is necessary for mental health.
I guess it's possible you might be doing some deep ironic thing by providing a seemingly sincere example of what I'm complaining about. If so it was over my head but in that case I withdraw "derail"!
> My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality.
"Rationalists" don't claim a monopoly any more than Psychiatry does.
> The small group might well go on to claim that people outside the group lack rationality.
Again, something psychiatry is quite notable for: the entire point of the profession is to tell non-professionals that they're doing Emotionally Healthy wrong.
> The mental health profession do not claim to be immune from mental illness themselves,
Rationalist don't claim to be immune to irrationality, and this is in fact repeatedly emphasized: numerous cornerstone articles are about "wow, I really fucked up at this Rationality thing", including articles by Eliezer.
> they do not claim that people outside their circle are mentally unhealthy
... what?
So if I go to a psychiatrist, you think they're gonna say I'm FINE? No matter what?
Have you ever heard of "involuntary commitment"?
> and they do not claim that their particular treatment is necessary for mental health.
Again, this is about as true as it is for rationalists.
Right and to your point, I would say you can distinguish (1) "objective" in the sense of relying on mind-independent data from (2) absolute knowledge, which treats subjects like closed conversations. And you can make similar caveats for "rational".
You can be rational and objective about a given topic without it meaning that the conversation is closed, or that all knowledge has been found. So I'm certainly not a fan of cult dynamics, but I think it's easy to throw an unfair charge at these groups, that their interest in the topic necessitates an absolutist disposition.
Is it really that surprising that a group of humans who think they have some special understanding of reality compared to others tend to separate and isolate themselves until they fall into an unguided self-reinforcing cycle?
I'd have thought that would be obvious since it's the history of many religions (which seem to just be cults that survived the bottleneck effect to grow until they reached a sustainable population).
In other words, humans are wired for tribalism, so don't be surprised when they start forming tribes...
> And yet, the rationalist community has hosted perhaps half a dozen small groups with very strange beliefs (including two separate groups that wound up interacting with demons). Some — which I won’t name in this article for privacy reasons — seem to have caused no harm but bad takes.
So there's six questionable (but harmless) groups and then later the article names three of them as more serious. Doesn't seem like "many" to me.
I wonder what percentage of all cults are the rationalist ones.
The thing with identifying yourself with an “ism” (e.g. rationalism, feminism, socialism) is that, even though you might not want that, you’re inherently positioning yourself in a reductionist and inaccurate corner of the world. Or in other words you’re shielding yourself in a comfortable, but wrong, bubble.
To call yourself an -ist means you consider yourself to give more importance to that concept than other people do - you're more rational than most, or care more about women than most, or care more about social issues than most. That is wrong both because there are many irrational rationalists and because there are many rational people who don't associate with the group (same with the other isms). The thing is that the very fact of creating the label and associating yourself with it will ruin the very thing you strive for. You will attract a bunch of weirdos who want to be associated with the label without having to do the work it requires, and you will become estranged from those who prefer to walk the walk instead of talk the talk. In both ways, you failed.
The fact is that every ism is a specific set of thoughts and ideas that is not generic, and not broad enough to carry the weight of its name. Being a feminist does not mean you care about women; it means you are tied to a specific set of ideologies and behaviours that may or may not advance the quality of life of women in the modern world, and are definitely not the only way to achieve that goal (hence the inaccuracy of the label).
Isn't this entirely to be expected? The people who dominate groups like these are the ones who put the most time and effort into them, and no sane person who appreciates both the value and the limitations of rational thinking is going to see as much value in a rationalist group, and devote as much time to it, as the kind of people who are attracted to the cultish aspect of achieving truth and power through pure thought. There's way more value there if you're looking to indulge in, or exploit, a cult-like spiral into shared fantasy than if you're just looking to sharpen your logical reasoning.
Depends very much on what you're hoping to get out of it. There isn't really one "rationalist" thing at this point, it's now a whole bunch of adjacent social groups with overlapping-but-distinct goals and interests.
https://www.lesswrong.com/highlights this is the ostensible "Core Highlights", curated by major members of the community, and I believe Eliezer would endorse it.
If you don't get anything out of reading the list itself, then you're probably not going to get anything out of the rest of the community either.
If you poke around and find a few neat ideas there, you'll probably find a few other neat ideas.
For some people, though, this is "wait, holy shit, you can just DO that? And it WORKS?", in which case probably read all of this but then also go find a few other sources to counter-balance it.
(In particular, probably 90% of the useful insights already exist elsewhere in philosophy, and often more rigorously discussed - LessWrong will teach you the skeleton, the general sense of "what rationality can do", but you need to go elsewhere if you want to actually build up the muscles)
This is a very interesting article. It's surprising though to see it not use the term "certainty" at all. (It only uses "certain" in a couple instances of like "a certain X" and one use of "certainly" for generic emphasis.)
Most of what the article says makes sense, but it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs. This is not coincidentally something that distinguishes good believers in various religions or philosophies from bad believers (e.g., people who say God told them to kill people). This is also lurking in the background of discussion of those who "muddled through" or "did the best they could". The difference is not so much in the beliefs as in the willingness to act on them, and that willingness is in turn largely driven by certainty.
I think it's plausible there is a special dimension to rationalism that may exacerbate this, namely a tendency of rationalists to feel especially "proud" of their beliefs because of their meta-belief that they derived their beliefs rationally. Just like an amateur painter may give themselves extra brownie points because no one taught them how to paint, my impression of rationalists is that they sometimes give themselves an extra pat on the back for "pulling themselves up by their bootstraps" in the sense of not relying on faith or similar "crutches" to determine the best course of action. This can paradoxically increase their certainty in their beliefs when actually it's often a warning that those beliefs may be inadequately tested against reality.
I always find it a bit odd that people who profess to be rationalists can propose or perform various extreme acts, because it seems to me that one of the strongest and most useful rational beliefs is that your knowledge is incomplete and your beliefs are almost surely not as well-grounded as you think they are. (Certainly no less an exponent of reason than Socrates was well aware of this.) This on its own seems sufficient to me to override some of the most absurd "rationalist" conclusions (like that you should at all costs become rich or fix Brent Dill's depression). It's all the more so when you combine it with some pretty common-sense forecasts of what might happen if you're wrong. (As in, if you devote your life to curing Brent Dill's depression on the theory that he will then save the world, and he turns out to be just an ordinary guy or worse, you wasted your life curing one person's depression when you yourself could have done more good with your own abilities, just by volunteering at a soup kitchen or something.) It's never made sense to me that self-described rationalists could seriously consider some of these possible courses of action in this light.
Sort of related is the claim at the end that rationalists "want to do things differently from the society around them". It's unclear why this would be a rational desire. It might be rational in a sense to say you want to avoid being influenced by the society around you, but that's different from affirmatively wanting to differ from it. This again suggests a sort of "psychological greed" to reach a level of certainty that allows you to confidently, radically diverge from society, rather than accepting that you may never reach a level of certainty that allows you to make such deviations on a truly rational basis.
It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities. This in itself seems like a warning that the rationality of rationalism is not all it's cracked up to be. It's sort of like, you can try to think as logically as possible, but if you hit yourself in the head with a hammer every day you're likely going to make mistakes anyway. And some of the "high demand" practices mentioned seem like slightly less severe psychological versions of that.
The premise of the article might just be nonsense.
How many rationalists are there in the world? Of course it depends on what you mean by rationalist, but I'd guess that there are probably several tens of thousands, at very least, people in the world who either consider themselves rationalists or are involved with the rationalist community.
With such numbers, is it surprising that there would be half a dozen or so small cults?
There are certainly some cult-like aspects to certain parts of the rationalist community, and I think that those are interesting to explore, but come on, this article doesn't even bother to establish that its title is justified.
To the extent that rationalism does have some cult-like aspects, I think a lot of it is because it tends to attract smart people who are deficient in the ability to use avenues other than abstract thinking to comprehend reality and who enjoy making loosely justified imaginative leaps of thought while overestimating their own abilities to model reality. The fact that a huge fraction of rationalists are sci-fi fans is not a coincidence.
But again, one should first establish that there is anything actually unusual about the number of cults in the rationalist community. Otherwise this is rather silly.
I find it ironic that the question is asked unempirically. Where is the data stating there are many more than before? Start there, then go down the rabbit hole. Otherwise, you're concluding on something that may not be true, and trying to rationalize the answer, just as a cultist does.
Anyone who's ever seen the sky knows it's blue. Anyone who's spent much time around rationalism knows the premise of this article is real. It would make zero sense to ban talking about a serious and obvious problem in their community until some double-blind, peer-reviewed data can be gathered.
It would be what they call an "isolated demand for rigor".
Rationalism is the belief that reason is the primary path to knowledge, as opposed to, say, the observation that is championed by empiricism. It's a belief system that prioritises imposing its tenets on reality rather than asking reality what reality's tenets are. From the outset, it's inherently cult-like.
Rationalists, in this case, refers specifically to the community clustered around LessWrong, which explicitly and repeatedly emphasizes points like "you can't claim to have a well grounded belief if you don't actually have empirical evidence for it" (https://www.lesswrong.com/w/evidence for a quick overview of some of the basic posts on that topic)
To quote one of the core foundational articles: "Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly." (https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can...)
One can argue how well the community absorbs the lesson, but this certainly seems to be a much higher standard than average.
That is the definition of “rationalism” as proposed by philosophers like Descartes and Kant, but I don’t think that is an accurate representation of the type of “rationalism” this article describes.
This article describes “rationalism” as described in LessWrong and the Sequences by Eliezer Yudkowsky. A good amount of it is based on empirical findings from psychology and behavioral science. It’s called “rationalism” because it seeks to correct common reasoning heuristics that are purported to lead to incorrect reasoning, not in contrast to empiricism.
Agreed, I appreciate that there's a conceptual distinction between the philosophical versions of rationalism and empiricism, but what's being talked about here is a conception that (again, at least notionally) is interested in and compatible with both.
I am pretty sure many of the LessWrong posts are about how to understand the meaning of different types of data and are very much about examining, developing, criticizing a rich variety of empirical attitudes.
I was going to write a similar comment as op, so permit me to defend it:
Many of their "beliefs" - super-duper intelligence, doom - are clearly not believed by the market; observing the market is a kind of empiricism, and it's completely discounted by the LW-ers.
But you cannot have reason without substantial knowledge of how things behave, gained by observing them in the first place. Reason is simply a logical approach to yes-or-no questions where you factually know, from observation of past events, how things work. Only then can you simulate an outcome by applying reasoning to a situation you have not yet observed and arrive at a logical conclusion, given the set of rules and presumptions.
One of the hallmarks of cults — if not a necessary element — is that they tend to separate their members from the outside society. Rationalism doesn't directly encourage this, but it does facilitate it in a couple of ways:
- Idiosyncratic language used to describe ordinary things ("lightcone" instead of "future", "prior" instead of "belief" or "prejudice", etc)
- Disdain for competing belief systems
- Insistence on a certain shared interpretation of things most people don't care about (the "many-worlds interpretation" of quantum uncertainty, self-improving artificial intelligence, veganism, etc)
- I'm pretty sure polyamory makes the list somehow, just because it isn't how the vast majority of people want to date. In principle it's a private lifestyle choice, but it's obviously a community value here.
So this creates an opportunity for cult-like dynamics to occur where people adjust themselves according to their interactions within the community but not interactions outside the community. And this could seem — to the members — like the beliefs themselves are the problem, but from a sociological perspective, it might really be the inflexible way they diverge from mainstream society.
Trying to find life’s answers by surrendering your own authority to another individual or group’s philosophy is not rational. Submitting oneself to an authority whose role is telling people what’s best in life will always end up attracting the type of people looking to control, take advantage of, and traumatize others.
Something like 15 years ago I once went to a Less Wrong/Overcoming Bias meetup in my town after being a reader of Yudkowsky's blog for some years. I was like, Bayesian Conspiracy, cool, right?
The group was weird and involved quite a lot of creepy oversharing. I didn't return.
I was on LW when it emerged from the OB blog, and back then it was an interesting and engaging group, though even then there were like 5 “major” contributors - most of whom had no coherent academic or commercial success.
As soon as those “sequences” were being developed it was clearly turning into a cult around EY, that I never understood and still don’t.
This article did a good job of covering the history since and was really well written.
Perhaps I will get downvoted to death again for saying so, but the obvious answer is because the name "rationalist" is structurally indistinguishable from the name "scientology" or "the illuminati". You attract people who are desperate for an authority to appeal to, but for whatever reason are no longer affiliated with the church of their youth. Even a rationalist movement which held nothing as dogma would attract people seeking dogma, and dogma would form.
The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.
> for whatever reason are no longer affiliated with the church of their youth.
This is the Internet, you're allowed to say "they are obsessed with unlimited drugs and weird sex things, far beyond what even the generally liberal society tolerates".
I'm increasingly convinced that every other part of "Rationalism" is just distraction or justification for those; certainly there's a conscious decision to minimize talking about this part on the Internet.
I strongly suspect there is heterogeneity here. An outer party of "genuine" rationalists who believe that learning to be a spreadsheet or whatever is going to let them save humanity, and an inner party who use the community to conceal some absolute shenanigans.
> Obviously the article intends to make the case that this is a cult
The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?
Obviously you and I have very different definitions of "obvious"
In fact, I'd go a step further and note the similarity with organized religion. People have a tendency to organize and dogmatize everything. The problem with religion is rarely the core ideas, but always the desire to use it as a basis for authority, to turn it dogmatic and ultimately form a power structure.
And I say this as a Christian. I often think that becoming a state religion was the worst thing that ever happened to Christianity, or any religion, because then it unavoidably becomes a tool for power and authority.
And doing the same with other ideas or ideologies is no different. Look at what happened to communism, capitalism, or almost any other secular idea you can think of: the moment it becomes established, accepted, and official, the corruption sets in.
There are a lot of rationalists in this community. Pointing out that the entire thing is a cult attracts downvotes from people who wish to, for instance, avoid being identified with the offshoots.
No, the downvotes are because rationalism isn't a cult and people take offense to being blatantly insulted. This article is about cults that are rationalism-adjacent, it's not claiming that rationalism is itself a cult.
God is dead! God remains dead! And we have killed him! How shall we console ourselves, the murderers of all murderers? The holiest and mightiest thing the world has ever possessed has bled to death under our knives.
The average teenager who reads Nietzsche's proclamation on the death of God takes it as an accomplishment: finally we got rid of those thousands-of-years-old and thereby severely outdated ideas and rules. Somewhere along the march to maturity they may start to wonder whether what replaced those old rules and ideas was a good replacement, but most never come to the realisation that there were rebellious teenagers in all those centuries when the idea of a supreme being, to which or whom even the mightiest had to answer, still held sway. Nietzsche saw the peril in letting go of that cultural safety valve and warned of what might come next.
We are currently living in the world he warned us about and for that I, atheist as I am, am partly responsible. The question to be answered here is whether it is possible to regain the benefits of the old order without getting back the obvious excesses, the abuse, the sanctimoniousness and all the other abuses of power and privilege which were responsible for turning people away from that path.
> The Sequences make certain implicit promises. ...
Some meta-commentary first... How would one go about testing if this is true? If true, then such "promises" are not written down -- they are implied. So one would need to ask at least two questions: 1. Did the author intend to make these implicit promises? 2. What portion of readers perceive them as such?
> ... There is an art of thinking better ...
First, this isn't _implicit_ in the Sequences; it is stated directly. In any case, the quote does not constitute a promise: so far, it is a claim. And yes, rationalists do think there are better and worse ways of thinking, in the sense of "what are more effective ways of thinking that will help me accomplish my goals?"
> ..., and we’ve figured it out.
Codswallop. This is not a message of the rationality movement -- quite the opposite. We share what we've learned and why we believe it to be true, but we don't claim we've figured it all out. It is better to remain curious.
> If you learn it, you can solve all your problems...
Bollocks. This is not claimed implicitly or explicitly. Besides, some problems are intractable.
> ... become brilliant and hardworking and successful and happy ...
Rubbish.
> ..., and be one of the small elite shaping not only society but the entire future of humanity.
Bunk.
For those who haven't read it, I'll offer a relevant extended quote from Yudkowsky's 2009 "Go Forth and Create the Art!" [1], the last post of the Sequences:
## Excerpt from Go Forth and Create the Art
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence. You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.
I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?" They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one. They will get a saving throw.
It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.
And so instead they... go off and invent Freudian psychoanalysis. Or a new religion. Or something. That's what happens by default, when people start thinking about thinking.
I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.
That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
because humans are biological creatures iterating through complex chemical processes that are attempting to allow a large organism to survive and reproduce within the specific ecosystem provided by the Earth in the present day. "Rational reasoning" is a quaint side effect that sometimes is emergent from the nervous system of these organisms, but it's nothing more than that. It's normal that the surviving/reproducing organism's emergent side effect of "rational thought", when it is particularly intense, will self-refer to the organism and act as though it has some kind of dominion over the organism itself, but this is, like the rationalism itself, just an emergent effect that is accidental and transient. Same as if you see a cloud that looks like an elephant (it's still just a cloud).
Empathy is usually a limited resource of those who generously ascribe it to themselves, and it is often mixed up with self-serving desires. Perhaps Rationalists have similar difficulties with reasoning.
While I believe Rationalism can be some form of occupational disease in tech circles, it sometimes does pose interesting questions. You just have to be aware that the perspective used to analyse circumstances is intentionally constrained, and in the end you still have to compare your prognosis to a reality that always chooses empiricism.
Little on offer but cults these days. Take your pick. You probably already did long ago and now your own cult is the only one you'll never clock as such.
Quite possibly, places like Reddit and Hacker News are training for the required level of intellectual smugness, and the certitude that you can dismiss every annoying argument with a logical fallacy.
That sounds smug of me, but I’m actually serious. One of their defects is that once you memorize all the fallacies (“Appeal to authority,” “Ad hominem,”) you can easily reach the point where you more easily recognize the fallacies in everyone else’s arguments than your own. You more easily doubt other people’s cited authorities than your own. You slap “appeal to authority” against a disliked opinion, while citing an authority next week for your own. It’s a fast path from there to perceived intellectual superiority, and an even faster path from there into delusion. Rational delusion.
While deployment of logical fallacies to win arguments is annoying at best, the far bigger problem is that people make those fallacies in the first place — such as not considering base rates.
It's generally worth remembering that some of the fallacies are actually structural, and some are rhetorical.
A contradiction creates a structural fallacy; if you find one, it's a fair belief that at least one of the supporting claims is false. In contrast, appeal to authority is probabilistic: we don't know, given the current context, if the authority is right, so they might be wrong... But we don't have time to read the universe into this situation so an appeal to authority is better than nothing.
... and this observation should be coupled with the observation that the school of rhetoric wasn't teaching a method for finding truth; it was teaching a method for beating an opponent in a legal argument. "Appeal to authority is a logical fallacy" is a great sword to bring to bear if your goal is to turn off the audience's ability to ask whether we should give the word of the environmental scientist and the washed-up TV actor equal weight on the topic of environmental science...
… however, even that is up for debate. Maybe the TV actor in your own example is Al Gore filming An Inconvenient Truth and the environmental scientist was in the minority which isn’t so afraid of climate change. Fast forward to 2025, the scientist’s minority position was wrong, while Al Gore’s documentary was legally ruled to have 9 major errors; so you were stupid on both sides, with the TV actor being closer.
True, but this is where the Boolean nature of traditional logic can really trip up a person trying to operate in the real world.
These "maybes" are on the table. They are probably not the case.
(You end up with a spread of likelihoods and have to decide what to do with them. And law hates a spread of likelihoods and hates decision-by-coinflips, so one can see how rhetorical traditions grounded in legal persuasion tend towards encouraging Boolean outcomes; you can't find someone "a little guilty," at least not in the Western tradition of justice).
There was this interview with Diane Benscoter who talked about her experience and reasons for joining a cult that I found very insightful: https://www.youtube.com/watch?v=6Ibk5vJ-4-o
The main point is that it isn't so much the cult (leader) as the victims being in a vulnerable mental state and getting exploited.
Why are there so many cults? People want to feel like they belong to something, and in a world in the midst of a loneliness and isolation epidemic the market conditions are ideal for cults.
The question the article is asking is "why did so many cults come out of this particular social milieu", not "why are there a lot of cults in the whole world".
The book Imagined Communities (Benedict Anderson) touches on this, making the case that in modern times, "nation" has replaced the cultural narrative purpose previously held by "tribe," "village," "royal subject," or "religion."
The shared thread among these is (in ever widening circles) a story people tell themselves to justify precisely why, for example, the actions of someone you'll never meet in Tulsa, OK have any bearing whatsoever on the fate of you, a person in Lincoln, NE.
One can see how this leaves an individual in a tenuous place if one doesn't feel particularly connected to nationhood (one can also see how being too connected to nationhood, in an exclusionary way, can also have deleterious consequences, and how not unlike differing forms of Christianity, differing concepts on what the 'soul' of a nation is can foment internal strife).
(To be clear: those fates are intertwined to some extent; the world we live in grows ever smaller due to the power of up-scaled influence of action granted by technology. But "nation" is a sort of fiction we tell ourselves to fit all that complexity into the slippery meat between human ears).
Your profile says that you want to keep your identity small, but you have like over 30 thousand comments spelling out exactly who you are and how you think. Why not shard accounts? Anyways. Just a random thought.
My pet theory is that, as a rationalist, you have an idealized view of humanity by nature. Your mirror neurons copy your own mind when interpolating other people's behavior and character.
Which results in a constant state of cognitive dissonance, as the people of normal society around you behave very differently and often more "rustically" than expected. The education is there - all the learning sources are there - and yet they are rejected. The lessons of history go unlearned and are often repeated.
You are in an out-group by definition and life-long, so you band together with others and get conned by cult-con-artists into foolish projects.
For the "rational" are nothing but another deluded project for the sociopaths of our society to hijack. The sociopath is the most rational being - in fact a being so capable of preying on us that society had to develop antibodies against sociopaths, which we call religion and laws!
For me it was largely shaped by the westering old Europe, creaking and breaking (after two world wars) under its heavy load of philosophical/metaphysical inheritance (which at this point in time can be considered effectively Americanized).
It is still fascinating to trace back the divergent developments like american-flavoured christian sects or philosophical schools of "pragmatism", "rationalism" etc. which get super-charged by technological disruptions.
In my youth I was heavily influenced by the so-called Bildung which can be functionally thought of as a form of ersatz religion and is maybe better exemplified in the literary tradition of the Bildungsroman.
I've grappled with and wildly fantasized about all sorts of things, experimented mindlessly with all kinds of modes of thinking and consciousness amidst my coming-of-age, in hindsight without this particular frame of Bildung left by myself I would have been left utterly confused and maybe at some point acted out on it. By engaging with books like Der Zauberberg by Thomas Mann or Der Mann ohne Eigenschaften by Robert Musil, my apparent madness was calmed down and instead of breaking the dam of a forming social front of myself with the vastness of the unconsciousness, over time I was guided to develop my own way into slowly operating it appropriately without completely blowing myself up into a messiah or finding myself eternally trapped in the futility and hopelessness of existence.
Borrowing from my background, one effective vaccination which spontaneously came up in my mind against rationalists sects described here, is Schopenhauer's Die Welt als Wille und Vorstellung which can be read as a radical continuation of Kant's Critique of Pure Reason which was trying to stress test the ratio itself. [To demonstrate the breadth of Bildung in even something like the physical sciences e.g. Einstein was familiar with Kant's a priori framework of space and time, Heisenberg's autobiographical book Der Teil und das Ganze was motivated by: "I wanted to show that science is done by people, and the most wonderful ideas come from dialog".]
Schopenhauer arrives at the realization because of the groundwork done by Kant (which he heavily acknowledges): that there can't even exist a rational basis for rationality itself, that it is simply an exquisitely disguised tool in the service of the more fundamental will i.e. by its definition an irrational force.
Funny little thought experiment, but what consequences does this have? Well, if you are declaring the ratio your ultima ratio, you are just fooling yourself in order to be able to rationalize anything you want. Once internalized, Schopenhauer's insight leaves you overwhelmed by Mitleid (compassion) for every conscious being, inoculating you against the excesses of your own ratio. It instantly hit me with the same force as MDMA, but several years before.
I think it speaks volumes that you think "american" is the approximate level of scope that this behavior lives at.
Stuff like this crosses all aspects of society. Certain Americans of certain backgrounds, demographics and life experiences are far more likely to engage in it than others. I think those people are a minority, but they are definitely an overly visible one, if not a local majority in a lot of internet spaces, so it's easy to mistake them for the majority.
Sure, many people across the globe are susceptible to cult-think. It’s just been a century-long trend in America to seek a superior way of living to “save all Americans”, is all. No offense to other countries' peoples; I’m sure they’re just as good at being cult members championing over-application as any American.
It probably speaks more volumes that you are taking my comment about this so literally.
It's a religion of an overdeveloped mind that hides from everything it cannot understand. It's an anti-religion, in a sense, that puts your mind on the pedestal.
Note the common pattern in major religions: they tell you that thoughts and emotions obscure the light of intuition, like clouds obscure sunlight. Rationalism is the opposite: it denies the very idea of intuition, or anything above the sphere of thoughts, and tells to create as many thoughts as possible.
Rationalists deny anything spiritual, good or evil, because they don't have evidence to think otherwise. They remain in this state of neutral nihilism until someone bigger than them sneaks into their ranks and casually introduces them to evil with some undeniable evidence. Their minds quickly pass through the denial-anger-acceptance stages and, being faithful to their rationalist doctrine, they update their beliefs with what they now know. From that point on they are a cult. That's the story of Scientology, which has too many parallels with Rationalism.
Cue all the surface-level “tribalism/loneliness/hooman nature” comments instead of the simple analysis that Rationalism (this kind) is severely brain-broken and irredeemable and will just foster even worse outcomes in a group setting. It’s a bit too close to home (ideologically) to get a somewhat detached analysis.
We live in an irrational time. It's unclear whether this was simply underreported in history or whether social changes in the last ~50-75 years have had breaking consequences.
People are trying to make sense of this. For example:
The Canadian government heavily subsidizes junk food, then spends heavily on healthcare because of the resulting illnesses. It restricts and limits healthy food through supply management and promotes a “food pyramid” favoring domestic unhealthy food. Meanwhile, it spends billions marketing healthy living, yet fines people up to $25,000 for hiking in forests and zones cities so that driving is nearly mandatory.
Government is an easy target for irrational behaviours.
Rationalists are, to a man (and they’re almost all men) arrogant dickheads and arrogant dickheads do not see what they’re doing to be “a cult” but “the right and proper way of things because I am right and logical and rational and everyone else isn’t”.
That's an unnecessary caricature. I have met many rationalists of both genders and found most of them quite pleasant. But it seems that the proportion of "arrogant dickheads" unfortunately matches that of the general population. Whether it's "irrational people" or "liberal elites", these assholes always seem to find someone to look down on.
Because they have serious emotional maturity issues leading to lobotomizing their normal human emotional side of their identity and experience of life.
I think we've strayed too far from the Aristotelian dynamics of the self.
Outside of sexuality and the proclivities of their leaders, emphasis on physical domination of the self is lacking. The brain runs wild, the spirit remains aimless.
In the Bay, the difference between the somewhat well-adjusted "rationalists" and those very much "in the mush" is whether or not someone tells you they're in SF or "on the Berkeley side of things"
Note that Asterisk magazine is basically the unofficial magazine for the rationalism community and the author is a rationalist blogger who is naturally very pro-LessWrong. This piece is not anti-Yudkowsky or anti-LessWrong.
A very interesting read.
> The "they" you are describing is a large body of disparate people spread around the world.
[Citation needed]
I sincerely doubt anything but a tiny insignificant minority consider themselves part of the "rationalist community".
"Large" is very vague.
The leaderboard shows (50 of) 166385 registered accounts* on https://www.lesswrong.com/leaderboard
This is simultaneously a large body and an insignificant minority.
* How many are junk accounts? IDK. But I do know it's international, because I live in Berlin, Germany, and socialise regularly.
"Self-proclaimed rationalists" is a much broader group than people who read Yudkowsky.
It's large in the sense that it's not a single well connected group. There are subgroups within the rationalists
Define "large" :
https://www.astralcodexten.com/p/fall-meetups-everywhere-cal...
(One of the largest subgroups AFAIK ?)
(They still seem to be leaning heavily USA-based, and in particular California-based.)
>The "they" you are describing is a large body of disparate people spread around the world.
And that "large body" has a few hundred core major figures and prominent adherents, and a hell of a lot of them seem to be exactly like how the parent describes. Even the "tamer" of them like ASC have that cultish quality...
As for the rest of the "large body", the hangers on, those are mostly out of view anyway, but I doubt they'd be paragons of sanity if looked up close.
>Or put it this way: Name one -ism that _doesn't_ have sub/splinter groups that kill people
-isms include fascism, nazism, jihadism, nationalism, communism, nationalism, racism, etc, so not exactly the best argument to make in rationalism's defense. "Yeah, rationalism has groups that murder people, but after all didn't fascism had those too?"
Though, if we were honest, it mostly brings in mind another, more medical related, -ism.
Dadaism? Most art -isms didn't have subgroups who killed people. If people killed others in art history it was mostly tragic individual stories and had next to nothing to do with the ideology of the ism.
The level of dysfunction described in the article is really rare. But dysfunction of the kind we are talking about is not really that rare - I would even say it's quite common - in self-proclaimed rationalist groups. They don't kill people - at least not directly - but they are definitely not what they claim to be: rational. They use rational tools more than others, but they are not more rational than others; they simply use these tools to justify their irrationality.
These days I wouldn't touch rationalists except with a long pole, because they are not smarter than others - they just think they are, and on the surface level they seem so. They praise Julia Galef, then ignore everything she said. Even Galef invited people who were full-blown racists; they only seemed all right because they knew whom they were talking to and couldn't bullshit her. They tried to argue why their racism is rational, but you couldn't tell from the interviews. They flat out lie all the time on every other platform. So in the end she just gave a platform to covered racism.
The WHO didn't declare a global pandemic until March 11, 2020 [1]. That's a little slow and some rationalists were earlier than that. (Other people too.)
After reading a warning from a rationalist blog, I posted a lot about COVID news to another forum and others there gave me credit for giving the heads-up that it was a Big Deal and not just another thing in the news. (Not sure it made all that much difference, though?)
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7569573/
Do you think that the consequences of the WHO declaring a pandemic and some rationalist blog warning about covid are the same? Clearly the WHO has to be more cautious. I have no doubt there were people at the WHO who felt a global pandemic was likely at least as early as you and the person writing the rationalist blog.
This is going to be controversial. But WHO wasted precious time during the early phases of the pandemic. It could have been contained more effectively if they weren't in denial. And when they did declare a pandemic, it was all very sudden instead of gradually raising the level, leading to panic buying and anxiety.
Are the WHO personnel rational and competent? I would like to believe so. But that isn't a given - the amount of nonsense I had to fight in institutions considered pinnacles of rationality is just depressing. Regardless, the WHO was encumbered by international politics. Their rationality would have made no difference. That is why the opinion of rational outsiders matters - especially those with domain expertise.
The signs of an uncontained contagion were evident by the middle of December 2019, well before the WHO declared the pandemic in March 2020. They could have asked everyone to start preparing around then. Instead, there was alarming news coming out of Wuhan and endless debates on TV about the WHO's appeasement of the Chinese administration - things that started ringing the alarm bells for us. We started preparing by at least the middle of January. The WHO chose to wait again until everything was obvious and a declaration was inevitable. People were dying by the thousands every day and the lockdowns had already started by then. Their rubber stamp wasn't necessary to confirm what everyone already knew. That was one instance where waiting for the WHO wasn't a prudent choice.
The WHO is a critical institution for the entire world. Their timing can mean the difference between life and death for millions everywhere. These sorts of failings shouldn't be excused and swept under the rug so easily.
If you look at the timeline, it's purely political. Some of the earliest warnings came from Taiwan (ROC), which found the virus in travelers from the mainland. But the WHO did not dare to anger the PRC, so it ignored Taiwan and in doing so probably caused thousands of unnecessary deaths worldwide.
Shitposting comedy forums were ahead of the WHO when it came to this, it didn't take a genius to understand what was going on before shit completely hit the fan.
I worked at the British Medical Journal at the time. We got wind of COVID being a big thing in January. I spent January to March getting our new VPN into a fit state so that the whole company could do their whole jobs from home. 23 March was lockdown; we were ready and had a very busy year.
That COVID was going to be big was obvious to a lot of people and groups who were paying attention. We were a health-related org, but we were extremely far from unique in this.
The rationalist claim that they were uniquely on the ball and everyone else dropped it is just a marketing lie.
I recall friends who worked for Google telling me that they instituted WFH for all employees from the start of March. I also remember a call with a co-worker in January/February who had a PhD in epidemiology (not a "rationalist" afaik); I couldn't believe what he was saying about the likelihood of a months-long lockdown in the West.
I think the piece bends over backwards to keep the charitable frame because it's written by someone inside the community, but you're right that the touted "wins" feel a bit thin compared to the sheer scale of dysfunction described.
> Rationalists came to correct views about the COVID-19 pandemic while many others were saying masks didn’t work
I wonder what views about covid-19 are correct. On masks, I remember the mainstream messaging went through the stages that were masks don't work, some masks work, all masks work, double masking works, to finally masks don't work (or some masks work; I can't remember where we ended up).
> to finally masks don't work (or some masks work; I can't remember where we ended up).
Most masks 'work', for some value of 'work', but efficacy differs (which, to be clear, was ~always known; there was a very short period when some authorities insisted that covid was primarily transmitted by touch, but you're talking weeks at most). In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid; for that you want something along the lines of an n95 respirator.
The main actual point of controversy was whether it was airborne or not (vs just short-range spread by droplets); the answer, in the end, was 'yes', but it took longer than it should have to get there.
> In particular I think what confused people was that the standard blue surgical masks are somewhat effective at stopping an infected person from passing on covid (and various other things), but not hugely effective at preventing the wearer from contracting covid
Yes, exactly.
If we look at guidelines about influenza, we will see them say that "surgical masks are not considered adequate respiratory protection for airborne transmission of pandemic influenza". And as far as I understand, it was finally agreed that in terms of transmission, Sars CoV-2 behaves similarly to the influenza virus.
Basic masks work for society because they stop your saliva from traveling, but they don't work for you because they don't stop particles from other people's saliva from reaching you.
FWIW, my rationalist friends were warning about Covid before I had heard about it from others, and talking about AI before it was on others' radar.
I was reminded of Hubbard too. In particular the "[belief that one] should always escalate when threatened" strongly echoes Hubbard's advice to always attack attack. Never defend.
The whole thing reminds me of EST and a thousand other cults / self-improvement / self-actualisation groups that seem endemic to California ever since the 60s or before.
As someone who started reading without knowing about rationalists, I actually came out without knowing much more. Lots of context is assumed I guess.
Some main figures and rituals are mentioned, but I still don’t know how the activities and communities arise from the purported origin. How do we go from “let’s rationally analyze how we think and get rid of bias” to creating a crypto, or being hyper-focused on AI, or summoning demons? Why did they develop this idea of always matching confrontation with escalation? Why the focus on programming - is this a Silicon Valley thing?
Also lesswrong is mentioned but no context is given about it. I only know the name as a forum, just like somethingawful or Reddit, but I don’t know how it fits into the picture.
The point of wearing a mask is to protect other people from your respiratory droplets. Please wear a mask when you're sick.
The point of masks, originally, was to catch saliva drops from surgeons as they worked over an open body, not to stop viruses.
For COVID its use was novel. But having an intention isn't enough. It must actually work. Otherwise, you are just engaging in witchcraft and tomfoolery.
The respiratory droplet model of how COVID spread was wrong, which was proven by lots of real world evidence. Look at how the Diamond Princess worked out and please explain how that was compatible with either masks or lockdowns working? SARS-CoV-2 spreads like every other respiratory virus, as a gaseous aerosol that doesn't care about masks in the slightest.
I'm not sure where you're getting this from. Repeated studies continue to affirm that COVID is spread by respiratory droplets and that masks are effective in reducing transmission.
https://pmc.ncbi.nlm.nih.gov/articles/PMC8721651/
https://www.cbsnews.com/news/face-mask-effectiveness-what-sc...
https://www.ukri.org/who-we-are/how-we-are-doing/research-ou...
Why do you believe the Diamond Princess is a counterexample?
Indoors. There were decades of research leading to the recommendations of mask wearing when symptomatic and only indoors.
All that fell by the wayside when mask wearing became a covid-time cult. A friend (with a degree in epidemiology) told me that if she tried to argue those points and express doubts about outdoor mask mandates she would immediately be out of a job.
The covid-time environment of shutting down scientific discussion because policymakers decided we had enough science to reach a conclusion should not be forgotten; it was a reasonable concern turned into a cult. My 2c.
It's still mind boggling to me that governments didn't say "Don't wear a mask for yourself -- wear one to save your neighbor."
Sure, there would have been some people who ignored it because they're jackasses, but I can't believe we wouldn't be in a better place today.
Both in terms of public scientific- and community- appreciation.
> It's still mind boggling to me that governments didn't say "Don't wear a mask for yourself -- wear one to save your neighbor."
I mean... they did?
Like in the UK's guidance literally the second sentence is "Face coverings are primarily worn to protect others because they cover the nose and mouth, which are the main sources of emission of the virus that causes coronavirus infection (COVID-19)."
https://www.gov.uk/government/publications/face-coverings-wh...
They did in the US as well. It’s disappointing US leadership flip flopped on masks enough times that anti-mask started to make sense to people.
> AI is very friendly, even
Very friendly until it reads in your email that you plan to replace it with a new model:
https://www.anthropic.com/research/agentic-misalignment
It was genuinely difficult to persuade people to wear masks before everyone started doing it and it became normal.
Nobody was persuaded; they were forced by law, exactly because it was obvious to everyone with their brain switched on that masks didn't work. Remember how, when the rules demanding masks on planes were rescinded, there were videos of whole planes ripping off their masks and celebrating mid-flight? Literally the second the law changed, people stopped wearing masks.
That's because masks were a mass hysteria. They did not work. Everyone could see it.
As a counterpoint, myself and most people I know continued mask-wearing for many months after it stopped being a legal requirement in the UK.
I hope we never have to find out how wrong you are.
> And masks? How many graphs of cases/day with mask mandate transitions overlayed are required before people realize masks did nothing? Whole countries went from nearly nobody wearing them, to everyone wearing them, overnight, and COVID cases/day didn't even notice.
Most of those countries didn't actually follow their mask mandates - the USA for example. I visited because the PRC was preventing vaccine deliveries to Taiwan so I flew to the USA to get a vaccine, and I distinctly remember thinking "yeah... Of course" when walked around an airport of people chin diapering.
Taiwan halted a couple outbreaks from pilots completely, partially because people are so used to wearing masks when they're sick here (and also because the mask mandate was strictly enforced everywhere).
I visited DC a year later where they had a memorial for victims of COVID. It was 700,000 white flags near the Washington monument when I visited, as I recall it broke a million a few months later.
This article is beautifully written, and it's full of proper original research. I'm sad that most comments so far are knee-jerk "lol rationalists" type responses. I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
The contrarian dynamic strikes again! https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(I'm referring to how this comment, objecting to the other comments as unduly negative, has been upvoted to the top of the thread.)
(p.s. this is not a criticism!)
Hahaah yeah true. If I had been commenting earlier I might’ve written “lol rationalists”
I think that since it's not possible to reply to multiple comments at the same time, people will naturally open a new top-level comment the moment there's a clearly identifiable groupthink emerging. Quoting one of your earlier comments about this:
>This happens so frequently that I think it must be a product of something hard-wired in the medium *[I mean the medium of the internet forum]
I would say it's only hard-wired in the medium of tree-style comment sections. If HN worked more like linear forums with multi-quote/replies, it might be possible to have multiple back-and-forths of subgroup consensus like this.
Asterisk is basically "rationalist magazine" and the author is a well-known rationalist blogger, so it's not a surprise that this is basically the only fair look into this phenomenon - compared to the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions.
The view from the inside, written by a person who is waist deep into the movement, is the only fair look into the phenomenon?
Okay, true, that was a silly statement for me to make. It's just a look that's different from the typical media treatment of the rationalist community, and is as far as I know the first time there's an inside view of this cult-spawning phenomenon from a media outlet or publication.
The story from the outside is usually reduced to something like "rationalism is a wacky cult", with the recent ones tacking on "and some of its members include this Ziz gang who murdered many people". Like the NYT article a week ago.
> the typical outside view that rationalism itself is a cult and Eliezer Yudkowsky is a cult leader, both of which I consider absurd notions
Cults are a whole biome of personalities. The prophet does not need to be the same person as the leader. They sometimes are and things can be very ugly in those cases, but they often aren’t. After all, there are Christian cults today even though Jesus and his supporting cast have been dead for approaching 2k years.
Yudkowsky seems relatively benign as far as prophets go, though who knows what goes on in private (I’m sure some people on here do, but the collective We do not). I would guess that the failure mode for him would be a David Miscavige type who slowly accumulates power while Yudkowsky remains a figurehead. This could be a girlfriend or someone who runs one of the charitable organizations (controlling the purse strings when everyone is dependent on the organization for their next meal is a time honored technique). I’m looking forward to the documentaries that get made in 20 years or so.
It's not just a drive-by hit piece
> I haven't seen any comment yet that isn't already addressed in much more colour and nuance in the article itself.
I once called rationalists infantile, impotent liberal escapism, perhaps that's the novel take you are looking for.
Essentially my view is that the fundamental problem with rationalists and the effective altruist movement is that they are talking about profound social and political issues, with any and all politics completely and totally removed from it. It is liberal depoliticisation[1] driven to its ultimate conclusion. That's just why they are ineffective and wrong about everything, but that's also why they are popular among the tech elites that are giving millions to associated groups like MIRI[2]. They aren't going away, they are politically useful and convenient to very powerful people.
[1] https://en.wikipedia.org/wiki/Post-politics
[2] https://intelligence.org/transparency/
https://en.wikipedia.org/wiki/They_Saved_Lisa%27s_Brain
I just so happened to read in the last few days the (somewhat disjointed and rambling) Technically Radical: On the Unrecognized [Leftist] Potential of Tech Workers and Hackers
https://wedontagree.net/technically-radical-on-the-unrecogni...
as well as the better but much older "The professional-managerial class" Ehrenreich (1976) :
https://libcom.org/article/professional-managerial-class-bar...
"Rationalists" do seem to be in some ways the poster children of consumerist atomization, but do note that they also resisted it socially by forming those 'cults' of theirs.
(If counter-cultures are 'dead', why don't they count as one ?? Alternatively, might this be a form of communitarianism, but with less traditionalism, more atheism, and perhaps a Jewish slant ?)
I think it's perfectly fine to read these articles, think "definitely a cult" and ignore whether they believe in spaceships, or demons, or AGI.
The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight.
That's a side point of the article, acknowledged as an old idea. The central points of this article are actually quite a bit more interesting than that. He even summarized his conclusions concisely at the end, so I don't know what your excuse is for trivializing it.
The other key takeaway - that people with trauma are more attracted to organizations that purport to be able to fix them, and are thus over-represented in them (vs in the general population) - is also important.
Because if you're going to set up a hierarchical (explicitly or implicitly) isolated organization with a bunch of strangers, it's good to start by asking "How much do I trust these strangers?"
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag
Even better: a social group with a lot of invented lingo is a red flag that you can see before you get isolated from your loved ones.
By this token, most scientists would be considered cultists: normal people don't have "specific tensile strength" or "Jacobian" or "Hermitian operator" etc in their vocabulary. "Must be some cult"?
Edit: it seems most people don't understand what I'm pointing out.
Having terminology is not the red flag.
Having intricate terminology without a domain is the red flag.
In science or mathematics, there are enormous amounts of jargon, terms, definitions, concepts, but they are always situated in some domain of study.
The "rationalists" (better call them pseudorationalists) invent their own concepts without actual corresponding domain, just life. It's like kids re-inventing their generation specific words each generation to denote things they like or dislike, etc.
> social group
fine, the jargon of a "social group" of science is a red flag?
sure, there's lots of nasty side effects of how academia is run, rewarded, etc..
but that's not because of the precision of language employed.
do you want scientists recycling the same words and overloading ever more meanings onto ever more ambiguous words?
I don’t think we disagree. I’m not taking issue with scientists having jargon, which I agree is good and necessary (though I think the less analytical academic disciplines, not being rooted in fact, have come to bear many similarities to state-backed religions; and I think they use jargon accordingly). I’m pointing out that I specifically intended to exclude professionals by scoping my statement to “social groups”. Primarily I had in mind religion, politics, certain social media sites, and whatever you want to call movements like capital R Rationality (I have personally duck typed it as a religion).
> I’m pointing out that I specifically intended to exclude professionals by scoping my statement to “social groups”.
I think your argumentation is a generalization that's close to a rationalist fallacy we're discussing:
> a social group with a lot of invented lingo is a red flag that you can see before you get isolated from your loved ones.
Groups of artists do this all the time for the sake of agency over their intentions. They borrow terminology from economics, psychology, computer science etc., but exclude economists, psychologists and computer scientists all the time. I had one choreographer talk to me about his performances as if they were "Protocols". People are free to use any vocabulary to describe their observed dynamics, expressions or phenomena.
As far as red flag moments go, the intent behind using a certain terminology still prevails over any choice of terminology itself.
I think there's a distinction between inventing new terms for utilitarian purposes vs ideological and in-group signalling purposes.
If you have groups talking about "expected value" or "dot products", that's different from groups who talk a lot about "privilege" or "the deep state". Even though the latter would claim they're just using jargon between experts, just like the scientists.
So every fandom in history?
> The key takeaway from the article is that if you have a group leader who cuts you off from other people, that's a red flag – not really a novel, or unique, or situational insight
Well yes and no. The reason I think the insight is so interesting is that these groups were formed, almost definitionally, for the purpose of avoiding such "obvious" mistakes. The name of the group is literally the "Rationalists"!
I find that funny and ironic, and it says something important about this philosophy, in that it implies that the rest of society wasn't so "irrational" after all.
As a more extreme and silly example, imagine there was a group called "Cults suck, and we are not a cult!", created for the very purpose of fighting cults, and yet, ironically, it became a cult in and of itself. That would be insightful and funny.
I have a link for you:
https://news.ycombinator.com/newsguidelines.html
Scroll to the bottom of the page.
Thank you for the link - I'm familiar with the commenting rules. I just don't think that people follow the spirit of those rules.
One of a few issues I have with groups like these is that they often confidently and aggressively spew a set of beliefs that on their face logically follow from one another, until you realize they are built on a set of axioms that are either entirely untested or outright nonsense. This is common everywhere, but I feel it is especially pronounced in communities like this. It also involves quite a bit of navel gazing that makes me feel a little sick about participating.
The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
As a former mechanical engineer, I visualize this phenomenon like a "tolerance stackup". Effectively meaning that for each part you add to the chain, you accumulate error. If you're not damn careful, your assembly of parts (or conclusions) will fail to measure up to expectations.
I like this approach. Also having dipped my toes in the engineering world (professionally), I think it naturally follows that you should be constantly rechecking your designs. Those tolerances were fine to begin with, but are they still, now that things have changed? It also makes you think about failure modes: what can make this all come down, and if it does, which way will it fail? Which is really useful, because you can then leverage this to design things to fail in certain ways, and now you've got a testable hypothesis. It won't create proof, but it at least helps in finding flaws.
The example I heard was to picture the Challenger shuttle, and the O-rings used worked 99% of the time. Well, what happens to the failure rate when you have 6 O-rings in a booster rocket, and you only need one to fail for disaster? Now you only have a 94% success rate.
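To make that arithmetic concrete, here's a minimal sketch (assuming, purely for illustration, that the six seals fail independently, which real hardware doesn't guarantee):

    # Reliability of an assembly where every part must hold:
    # six O-rings at 99% each gives 0.99**6 overall.
    per_ring = 0.99
    n_rings = 6
    overall = per_ring ** n_rings
    print(f"P(all {n_rings} O-rings hold) = {overall:.3f}")  # ~0.941, i.e. ~94%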
Basically the same as how dead reckoning your location works worse the longer you've been traveling?
Dead reckoning is a great analogy for coming to conclusions based on reason alone. Always useful to check in with reality.
And always worth keeping an eye on the maximum possible divergence from reality you're currently at, based on how far you've reasoned from truth, and how less-than-sure each step was.
Maybe you're right. But there's a non-zero chance you're also max wrong. (Which itself can be bounded, if you don't wander too far)
My preferred argument against the AI doom hypothesis is exactly this: it has 8 or so independent prerequisites with unknown probabilities. Since you multiply the probabilities of each prerequisite to get the overall probability, you end up with a relatively low overall probability even when the probability of each prerequisite is relatively high, and if just a few of the prerequisites have small probabilities, the overall probability basically can’t be anything other than very small.
Given this structure to the problem, if you find yourself espousing a p(doom) of 80%, you’re probably not thinking about the issue properly. If in 10 years some of those prerequisites have turned out to be true, then you can start getting worried and be justified about it. But from where we are now there’s just no way.
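(A toy calculation, with probabilities invented purely for illustration, shows how the structure of the argument forces the product down:)

    import math

    # Eight hypothetical prerequisites, made-up probabilities
    optimistic = [0.9] * 8                            # every step fairly likely
    mixed = [0.9, 0.9, 0.8, 0.7, 0.5, 0.9, 0.3, 0.2]  # a few weak links

    print(math.prod(optimistic))  # ~0.43
    print(math.prod(mixed))       # ~0.012

Even when every step is 90% likely, the conjunction is already closer to a coin flip; a couple of genuinely uncertain steps push it near zero.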
I saw an article recently that talked about stringing likely inferences together but ending up with an unreliable outcome because enough 0.9 probabilities one after the other lead to an unlikely conclusion.
Edit: Couldn't find the article, but an AI referenced the Bayesian "chain of reasoning fallacy".
I think you have this oversimplified. Stringing together inferences can take us in either direction. It really depends on how things are being done, and this isn't always so obvious or simple. But just to show both directions, I'll give two simple examples (the real world holds many more complexities).
It is all about what is being modeled and how the inferences string together. If these are being multiplied, then yes, this is going to decrease, as xy < x and xy < y for every x, y < 1.
But a good counterexample is the classic Bayesian inference example[0]. Suppose you have a test that detects vampirism with 95% accuracy (Pr(+|vampire) = 0.95) and has a false positive rate of 1% (Pr(+|mortal) = 0.01). But vampirism is rare, affecting only 0.1% of the population. This ends up meaning a positive test only gives us an 8.7% chance that the subject is a vampire (Pr(vampire|+)). The solution here is that we repeat the testing: on our second test, Pr(vampire) changes from 0.001 to 0.087 and Pr(vampire|+) goes to roughly 90%, and a third test gets us to about 99%.
[0] Our equation is Pr(vampire|+) = Pr(+|vampire) Pr(vampire) / Pr(+), and the crux is Pr(+) = Pr(+|vampire) Pr(vampire) + Pr(+|mortal) (1 - Pr(vampire)).
Worth noting that solution only works if the false positives are totally random, which is probably not true of many real-world cases and would be pretty hard to work out.
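(A quick numeric check of the vampire example, assuming, as the reply above notes, that repeated positives are independent given the subject's true status; the update helper is just for illustration.)

    # Iterated Bayesian update for the vampire test
    def update(prior, p_pos_vampire=0.95, p_pos_mortal=0.01):
        p_pos = p_pos_vampire * prior + p_pos_mortal * (1 - prior)
        return p_pos_vampire * prior / p_pos

    prior = 0.001  # base rate of vampirism
    for test in range(1, 4):
        prior = update(prior)
        print(f"after test {test}: Pr(vampire|+) = {prior:.3f}")
    # ~0.087 after one test, ~0.90 after two, ~0.999 after three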
Definitely. The real world adds lots of complexities and nuances, but I was just trying to make the point that it matters how those inferences compound; we can't just conclude that compounding inferences decreases likelihood.
Well they were talking about a chain, A->B, B->C, C->D.
You're talking about multiple pieces of evidence for the same statement. Your tests don't depend on any of the previous tests also being right.
Be careful with your description there: are you sure it doesn't apply to the Bayesian example (which was illustrative, not supposed to cover every possible case)? We calculated f(f(f(x))), so I wouldn't say that this "doesn't depend on the previous 'test'". Take your chain: we can represent it with h(g(f(x))), composing f, then g, then h. That clearly fits your case when f = g = h. Don't lose sight of the abstractions.
So in your example you can apply just one test result at a time, in any order. And the more pieces of evidence you apply, the stronger your argument gets.
f = "The test(s) say the patient is a vampire, with a .01 false positive rate."
f∘f∘f = "The test(s) say the patient is a vampire, with a .000001 false positive rate."
In the chain example f or g or h on its own is useless. Only f∘g∘h is relevant. And f∘g∘h is a lot weaker than f or g or h appears on its own.
This is what a logic chain looks like, adapted for vampirism to make it easier to compare:
f: "The test says situation 1 is true, with a 10% false positive rate."
g: "If situation 1 then situation 2 is true, with a 10% false positive rate."
h: "If situation 2 then the patient is a vampire, with a 10% false positive rate."
f∘g∘h = "The test says the patient is a vampire, with a 27% false positive rate."
So there are two key differences. One is the "if"s that make the false positives build up. The other is that only h tells you anything about vampires. f and g are mere setup, so they can only weaken h. At best f and g would have 100% reliability and h would be its original strength, 10% false positive. The false positive rate of h will never be decreased by adding more chain links, only increased. If you want a smaller false positive rate you need a separate piece of evidence. Like how your example has three similar but separate pieces of evidence.
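(A rough Python sketch of the two structures, plugging in the same numbers used above: three independent tests at a 1% false positive rate each, versus a three-link chain at 10% per link.)

    independent_fp = 0.01 ** 3        # all three tests must fire falsely: 1e-06
    chained_fp = 1 - (1 - 0.10) ** 3  # any one leaky link breaks the chain: ~0.271

    print(independent_fp, chained_fp)

Independent evidence multiplies the false positive rates down; chained inference multiplies the reliabilities down.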
Can’t you improve things if you can calibrate with a known good vampire? You’d think NIST or the CDC would have one locked in a basement somewhere.
IDK, probably? I'm just trying to say that iterative inference doesn't strictly mean decreasing likelihood.
I'm not a virologist or whoever designs these kinds of medical tests. I don't even know the right word to describe the profession lol. But the question is orthogonal to what's being discussed here. I'm only guessing "probably" because usually having a good example helps in experimental design. But then again, why wouldn't the original test that we're using have done that already? Wouldn't that be how you get that 95% accurate test?
I can't tell you the biology stuff, I can just answer math and ML stuff and even then only so much.
The thought of a BIPM Reference Vampire made me chuckle.
GPT-6 would come faster, but we ran out of Cassandra blood.
Assuming your vampire tests are independent.
Correct. And there's a lot of other assumptions. I did make a specific note that it was a simplified and illustrative example. And yes, in the real world I'd warn about being careful when making i.i.d. assumptions, since these assumptions are made far more than people realize.
I like this analogy.
I think of a bike's shifting systems; better shifters, better housings, better derailleur, or better chainrings/cogs can each 'improve' things.
I suppose where that becomes relevant to here, is that you can have very fancy parts on various ends but if there's a piece in the middle that's wrong you're still gonna get shit results.
You're only as strong as the weakest link.
Your SCSI devices are only as fast as the slowest device in the chain.
I don't need to be faster than the bear, I only have to be faster than you.
> Your SCSI devices are only as fast as the slowest device in the chain.
There are not many forums where you would see this analogy.
This is what I hate about real life electronics. Everything is nice on paper, but physics sucks.
Which, I think, is the important lesson at the end of the day: simple explanations can be good approximations that get us most of the way there, but the details and nuances shouldn't be so easily dismissed. With this framing we can choose how we pick our battles. Is it cheaper/easier/faster to run a very accurate sim, or cheaper/easier/faster to iterate in physical space?
This is how you reduce the leakiness, but I think it is categorically the same problem as the bad axioms. It is hard to challenge yourself and we often don't like being wrong. It is also really unfortunate that small mistakes can be a critical flaw. There's definitely an imbalance.
This is why the OP is seeing this behavior: the smartest people you'll meet are constantly challenging their own ideas. They know they are wrong to at least some degree. You'll sometimes find them talking with a bit of authority at first, but a key part is watching how they deal with challenges to their assumptions. Ask them what would cause them to change their minds. Ask them about nuances and details. They won't always dig into those cans of worms, but they will be aware of them, and maybe nervous or excited about going down that road (or do they just outright dismiss it?). They understand that accuracy is proportional to computation, and that the computation required increases exponentially as you converge on accuracy. These are strong indications, since they suggest whether someone cares more about the right answer or about being right. You also don't have to be very smart to detect this.
IME most people aren't very good at building axioms.
It seems you implying that some people are good at building good axiom systems for the real world. I disagree. There are a few situations in the world where you have generalities so close to complete that you can use simple logic on them. But for the messy parts of the real world, there simply is no set of logical claims which can provide anything like certainty, no matter how "good" someone is at "axiom creation".
I don't even know what you're arguing.
How do you go from "most people aren't very good" to "this implies some people are really good"? First, that is just a really weird interpretation of how people speak (btw, "you're" not "you" ;) because this is nicer and going to be received better than "making axioms is hard and people are shit at it." Second, you've assumed a binary condition. Here's an example: "Most people aren't very good at programming." This is an objectively true statement, right?[0] I'll also make the claim that no one is a good programmer, but some programmers are better than others. There's no contradiction in those two claims, even if you don't believe the latter is true.
Now, there are some pretty good axiom systems. ZF and ZFC seem to be working pretty well. There are others too, and they are used for pretty complex stuff. They all work at least for "simple logic."
But then again, you probably weren't thinking of things like ZFC. But hey, that was kinda my entire point.
I agree. I'd hope I agree, considering my username... But you've jumped to a much stronger statement. I hope we both agree that just because there are things we can't prove, it doesn't mean there aren't things we can prove. Similarly, I hope we agree that even if we couldn't prove anything to absolute certainty, that doesn't mean we can't prove things to an incredibly high level of certainty, or that we can't prove something is more right than something else.
[0] Most people don't even know how to write a program. Well... maybe everyone can write a Perl program, but let's not get into semantics.
If you mean nobody is good at something, just say that.
Saying most people aren't good at it DOES imply that some are good at it.
I think I misunderstood; you were talking about the axiomatization of mathematical or related systems.
The original discussion was about the formulation of "axioms" about the real world ("the bus is always X minutes late" or more elaborate stuff). I suppose I should have considered that, with your username, you would read the statement in terms of the formulation of mathematical axioms.
But still, I misunderstood you and you misunderstood me.
We're talking about "rationalist" cults, axioms, logic, and "from first principles", I don't think using a formal language around this stuff is that much of a leap, if any. (Also, not expecting you to notice my username lol. But I did mention it because after the fact it would make more sense and serve as a hint to where I'm approaching this from).
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
This is what you get when you naively re-invent philosophy from the ground up while ignoring literally 2500 years of actual debugging of such arguments by the smartest people who ever lived.
You can't diverge from and improve on what everyone else did AND be almost entirely ignorant of it, let alone have no training whatsoever in it. This extreme arrogance I would say is the root of the problem.
> Not that non-rationalists are any better at reasoning, but non-rationalists do at least benefit from some intellectual humility.
Non-rationalists are forced to use their physical senses more often because they can't follow the chain of logic as far. This is to their advantage. Empiricism > rationalism.
That conclusion presupposes that rationality and empiricism are at odds or mutually incompatible somehow. Any rational position worth listening to, about any testable hypothesis, is hand in hand with empirical thinking.
In traditional philosophy, rationalism and empiricism are at odds; they are essentially diametrically opposed. Rationalism prioritizes a priori reasoning while empiricism prioritizes a posteriori reasoning. You can prioritize both equally but that is neither rationalism nor empiricism in the traditional terminology. The current rationalist movement has no relation to that original rationalist movement, so the words don't actually mean the same thing. In fact, the majority of participants in the current movement seem ignorant of the historical dispute and its implications, hence the misuse of the word.
Thank you for clarifying.
That does compute with what I thought the "Rationalist" movement as covered by the article was about. I didn't peg them as pure a priori thinkers as you put it. I suppose my comment still holds, assuming the rationalist in this context refers to the version of "Rationalism" being discussed in the article as opposed to the traditional one.
Yeah, Stanford has a good recap :
https://plato.stanford.edu/entries/rationalism-empiricism/
(Note also how the context is French vs British, and the French basically lost with Napoleon, so the current "rationalists" seem to be more likely to be heirs to empiricism instead.)
Good rationalism includes empiricism though
[dead]
Yet I think most people err in the other direction. They 'know' the basics of health, of discipline, of charity, but have a hard time following through. 'Take a simple idea, and take it seriously': a favorite aphorism of Charlie Munger. Most of the good things in my life have come from trying to follow through the real implications of a theoretical belief.
And “always invert”! A related mungerism.
I always get weird looks when I talk about killing as many pilots as possible. I need a new example of the always invert model of problem solving.
Perhaps part of being rational, as opposed to rationalist, is having a sense of when to override the conclusions of seemingly logical arguments.
In philosophy grad school, we described this as 'being reasonable' as opposed to 'being rational'.
That said, big-R Rationalism (the Lesswrong/Yudkowsky/Ziz social phenomenon) has very little in common with what we've standardly called 'rationalism'; trained philosophers tend to wince a little bit when we come into contact with these groups (who are nevertheless chockablock with fascinating personalities and compelling aesthetics.)
From my perspective (and I have only glancing contact,) these mostly seem to be _cults of consequentialism_, an epithet I'd also use for Effective Altruists.
Consequentialism has been making young people say and do daft things for hundreds of years -- Dostoevsky's _Crime and Punishment_ being the best character sketch I can think of.
While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
The other code smell these big-R rationalist groups have for me, and which this article correctly flags, is their weaponization of psychology -- while I don't necessarily doubt the findings of sociology, psychology, etc., I wonder if they necessarily furnish useful tools for personal improvement. For example, memorizing a list of biases that people can potentially have is like numbering the stars in the sky; to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
And that's a relatively mild use of psychology. I simply can't imagine how annoying it would be to live in a household where everyone had memorized everything from connection theory to attachment theory to narrative therapy and routinely deployed hot takes on one another.
In actual philosophical discussion, back at the academy, psychologizing was considered 'below the belt', and would result in an intervention by the ref. Sometimes this was explicitly associated with something we called 'the Principle of Charity', which is that, out of an abundance of epistemic caution, you commit to always interpreting the motives and interests of your interlocutor in the kindest light possible, whether in 'steel manning' their arguments, or turning a strategically blind eye to bad behaviour in conversation.
The importance of the Principle of Charity is probably the most enduring lesson I took from my decade-long sojourn among the philosophers, and mutual psychological dissection is anathema to it.
> While there are plenty of non-religious (and thus, small-r rationalist) alternatives to consequentialism, none of them seem to make it past the threshold in these communities.
I suspect this is because consequentialism is the only meta-ethical framework that has any leg to stand on other than "because I said so". That makes it very attractive. The problem is you also can't build anything useful on top of it, because if you try to quantify consequences, and do math on them, you end up with the Repugnant Conclusion or worse. And in practice - in Effective Altruism/Longtermism, for example - the use of arbitrarily big numbers lets you endorse the Very Repugnant Conclusion while patting yourself on the back for it.
I actually think that the fact that rationalists use the term "steel manning" betrays a lack of charity.
If the only thing you owe your interlocutor is to use your "prodigious intellect" to restate their own argument in the way that sounds the most convincing to you, maybe you are in fact a terrible listener.
I have tried to tell my legions of fanatic brainwashed adherents exactly this, and they have refused to listen to me because the wrong way is more fun for them.
https://x.com/ESYudkowsky/status/1075854951996256256
Listening to other viewpoints is hard. Restating is a good tool to improve listening and understanding. I don't agree with this criticism at all, since that "prodigious intellect" bit isn't inherent to the term.
I was being snarky, but I think steelmanning does have one major flaw.
By restating the argument in terms that are most convincing to you, you may already be warping the conclusions of your interlocutor to fit what you want them to be saying. Charity is, "I will assume this person is intelligent and overlook any mistakes in order to try and understand what they are actually communicating." Steelmanning is "I can make their case for them, better than they could."
Of course this is downstream of the core issue, and the reason why steelmanning was invented in the first place. Namely, charity breaks down on the internet. Steelmanning is the more individualistic version of charity. It is the responsibility of people as individuals, not a norm that can be enforced by an institution or community.
One of the most annoying habits of Rationalists, and something that annoyed me with plenty of people online before Yudkowsky's brand was even a thing, is the assumption that they're much smarter than almost everyone else. If that is your true core belief, the one that will never be shaken, then of course you're not going to waste time trying to understand the nuances of the arguments of some pious medieval peasant.
For mistakes that aren't just nitpicks, for the most part you can't overlook them without something to fix them with. And ideally this fixing should be collaborative, figuring out if that actually is what they mean. It's definitely bad to think you simply know better or are better at arguing, but the opposite end of leaving seeming-mistakes alone doesn't lead to a good resolution either.
Just so. I hate this term, and for essentially this reason, but it has undeniable currency right now; I was writing to be understood.
> to me, it seems like this is a cargo-cultish transposition of the act of finding _fallacies in arguments_ into the domain of finding _faults in persons_.
Well put, thanks!
I am interested in your journey from philosophy to coding.
Would you consider the formal verification community to be "rationalists"?
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?
At some point in the future, there won't be more people who will live in the future than live in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.
That said, if they really thought hard about this problem, they would have come to a different conclusion:
https://theconversation.com/solve-suffering-by-blowing-up-th...
Some time after we've colonized half the observable universe. Got it.
Actually, you could make the case that the population won't grow over the next thousand years, maybe even ten thousand years, but that's the short term and therefore unimportant.
(I'm not a longtermist)
Not on earth, but my understanding was that space colonization was a big part of their plan.
To me it is disguised way of saying the ends justify the means. Sure, we murder a few people today but think of the utopian paradise we are building for the future.
From my observation, that "building the future" isn't something any of them are actually doing. Instead, the concept that "we might someday do something good with the wealth and power we accrue" seems to be the thought that allows the pillaging. It's a way to feel morally superior without actually doing anything morally superior.
A bit of longtermism wouldn’t be so bad. We could sacrifice the convenience of burning fossil fuels today for our descendants to have an inhabitable planet.
But that's the great thing about Longtermism. As long as a catastrophe is not going to lead to human extinction or otherwise specifically prevent the Singularity, it's not an X-Risk that you need to be concerned about. So AI alignment is an X-Risk we need to work on, but global warming isn't, so we can keep burning as much fossil fuel as we want. In fact, we need to burn more of them in order to produce the Singularity. The misery of a few billion present/near-future people doesn't matter compared to the happiness of sextillions of future post-humans.
Zeno's poverty
Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.
However, people are bad at that.
I'll give an interesting example.
Hybrid cars. Modern proper HEVs[0] usually benefit their owners, both through better fuel economy and, in most cases, by being overall more reliable than a normal car.
And, they are better on CO2 emissions and lower our oil consumption.
And yet most carmakers, as well as consumers, have been very slow to adopt. On the consumer side we are finally to where we can have hybrid trucks that get 36-40 MPG while being capable of towing 4000 pounds or hauling over 1000 pounds in the bed [1]; we have hybrid minivans capable of 35 MPG for transporting groups of people; and we have hybrid sedans getting 50+ MPG and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it did to get here.
The main 'misery' you experience at that point, is that you're driving the same car as a lot of other people and it's not as exciting [2] as something with more power than most people know what to do with.
And hell, as they say in investing, sometimes the market can be irrational longer than you can stay solvent. E.g., was it truly worth it to Hydro-Québec to sit on LiFePO4 patents the way they did, vs. just figuring out licensing terms that got them a little bit of money and properly accelerated the adoption of hybrids/EVs/etc.?
[0] - By this I mean something like Toyota's HSD-style setup used by Ford and Subaru, or Honda's or Hyundai/Kia's setups where there's still a more normal transmission involved.
[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.
[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
Not that these technologies don't have anything to bring, but any discussion that still presupposes that cars/trucks(/planes) (as we know them) still have a future is (mostly) a waste of time.
P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them?
It's not like society is particularly good about it either, or immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative?
(Or are they just sad about all the failures? But it's questionable whether the "process" can work (with all its vivacity) without the "failures"...)
"I came up with a step-by-step plan to achieve World Peace, and now I am on a government watchlist!"
It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and imagined scenarios with big enough numbers (of future population) that the probability of the big-number-future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves big enough imagined numbers to overwhelm its probability, because of their commitment to "shut up and calculate".
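(A toy expected-value comparison, with numbers invented solely for illustration, shows the failure mode being described:)

    # Made-up numbers: a sure, modest benefit vs. a tiny chance of an
    # astronomically large imagined payoff
    certain_good = 1.0 * 10_000   # help 10,000 people for sure
    speculative = 1e-9 * 1e18     # one-in-a-billion shot at 10^18 future people

    print(certain_good, speculative)  # 10000.0 vs 1000000000.0

If you let yourself imagine a big enough number, the speculative option always wins the calculation, no matter how implausible it is.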
"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."
Has always really bothered me because it assumes that there are no negative impacts of the work you did to get the money. If you do a million dollars' worth of damage to the world and earn 100k (or a billion dollars' worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
> If you do a million dollars' worth of damage to the world and earn 100k (or a billion dollars' worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
You kinda summed up a lot of the post-industrial-revolution world there, at least as far as things like toxic waste (Superfund, anyone?) and climate change go. For goodness' sake, just think about TEL and how they knew ethanol could work but it just wasn't 'patentable'. [0] Or the "we don't even know the dollar amount because we don't have a workable solution" problem of PFAS.
[0] - I still find it shameful that a university is named after the man who enabled this to happen.
And not just that, but the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society, seems somewhat questionable.
Even with 'good' intentions, there is the implied statement that your ideas are better than everyone else's and so should be pushed like that. The whole thing is a self-satisfied ego-trip.
Well, it's easy to do good. Or, it's easy to plan on doing good, once your multi-decade plan to become a billionaire comes to fruition.
> I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
Yeah, this is a pattern I've seen a lot of recently—especially in discussions about LLMs and the supposed inevitability of AGI (and the Singularity). This is a good description of it.
Another annoying one is the simulation theory group. They know just enough about Physics to build sophisticated mental constructs without understanding how flimsy the foundations are or how their logical steps are actually unproven hypotheses.
Agreed. This one is especially annoying to me and dear to my heart, because I enjoy discussing the philosophy behind this, but it devolves into weird discussions and conclusions fairly quickly without much effort at all. I particularly enjoy the tenets of certain sects of buddhism and how they view these things, but you'll get a lot of people that are doing a really pseudo-intellectual version of the Matrix where they are the main character.
Which sects of Buddhism? Just curious to read further about them.
You might have just explained the phenomenon of AI doomsayers overlapping with EA/rat types, which I otherwise found inexplicable. EA/Rs seem kind of appallingly positivist otherwise.
> I don’t think it’s just (or even particularly) bad axioms, I think it’s that people tend to build up “logical” conclusions where they think each step is a watertight necessity that follows inevitably from its antecedents, but actually each step is a little bit leaky, leading to runaway growth in false confidence.
I really like your way of putting it. It’s a fundamental fallacy to assume certainty when trying to predict the future. Because, as you say, uncertainty compounds over time, all prediction models are chaotic. It’s usually associated with some form of Dunning-Kruger, where people know just enough to have ideas but not enough to understand where they might fail (thus vastly underestimating uncertainty at each step), or just lacking imagination.
Deep Space 9 had an episode dealing with something similar. Superintelligent beings determine that a situation is hopeless and act accordingly. The normal beings take issue with the actions of the Superintelligents. The normal beings turn out to be right.
Precisely! I'd even say they get intoxicated with their own braininess. The expression that comes to mind is to get "way out over your skis".
I'd go even further and say most of the world's evils are caused by people with theories that are contrary to evidence. I'd place Marx among these but there's no shortage of examples.
> non-rationalists do at least benefit from some intellectual humility
The Islamists who took out the World Trade Center don’t strike me as particularly intellectually humble.
If you reject reason, you are only left with force.
Are you so sure the 9/11 hijackers rejected reason?
Why Are So Many Terrorists Engineers?
https://archive.is/XA4zb
Self-described rationalists can and often do rationalize acts and beliefs that seem baldly irrational to others.
Here's the thing, the goals of the terrorists weren't irrational.
People confuse "rational" with "moral". Those aren't the same thing. You can perfectly rationally do something that is immoral with a bad goal.
For example, if you value your life above all others, then it would be perfectly rational to slaughter an orphanage if a more powerful entity made that your only choice for survival. Morally bad, rationally correct.
I now feel the need to comment that this thread does illustrate an issue I have with the naming of the philosophical/internet community of rationalism.
One can very clearly be a rational individual or an individual who practices reason and not associate with the internet community of rationalism. The median member of the group defined as "not being part of the internet-organized movement of rationalism and not reading lesswrong posts" is not "religious extremist striking the world trade center and committing an atrocious act of terrorism", it's "random person on the street."
And to preempt a specific response some may make to this: yes, the thread here is talking about rationalism as discussed in the blog post above, organized around Yudkowsky or Slate Star Codex, and not the rationalist movement of, like, Spinoza and company. Very different things philosophically.
Islamic fundamentalism and cult rationalism are both involved in a “total commitment”, “all or nothing” type of thinking. The former is totally committed to a particular literal reading of scripture, the latter, to logical deduction from a set of chosen premises. Both modes of thinking have produced violent outcomes in the past.
Skepticism, in which no premise or truth claim is regarded as above dispute (or, in which it is always permissible and even praiseworthy to suspend one’s judgment on a matter), is the better comparison with rationalism-fundamentalism. It is interesting that skepticism today is often associated with agnostic or atheist religious beliefs, but I consider many religious thinkers in history to have been skeptics par excellence when judged by the standard of their own time. E.g. William Ockham (of Ockham’s razor) was a 14C Franciscan friar (and a fascinating figure) who denied papal infallibility. I count Martin Luther as belonging to the history of skepticism as well, along with much of the humanist movement that returned to the original Greek sources for the Bible rather than relying on the Latin Vulgate translation by Jerome.
The history of ideas is fun to read about. I am hardly an expert, but you may be interested in the history of Aristotelian rationalism, which gained prominence in the medieval west largely through the works of Averroes, a 12C Muslim philosopher who heavily favored Aristotle. In the 13C, Thomas Aquinas wrote a definitive Catholic systematic theology, rejecting Averroes but embracing Aristotle. To this day, Catholic theology is still essentially Aristotelian.
True skepticism is rare. It's easy to be skeptical only about beliefs you dislike or at least don't care about. It's hard to approach the 100th self-professed psychic with an honest intention to truly test their claims rather than to find the easiest way to ridicule them.
The only absolute above questioning is that there are no absolutes.
Strongly recommend this profile in the NYer on Curtis Yarvin (who also uses "rationalism" to justify his beliefs) [0]. The section towards the end that reports on his meeting one of his supposed ideological heroes for an extended period of time is particularly illuminating.
I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect that if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
[0]: https://www.newyorker.com/magazine/2025/06/09/curtis-yarvin-...
> I feel like the internet has led to an explosion of such groups because it abstracts the "ideas" away from the "people". I suspect that if most people were in a room or spent an extended amount of time around any of these self-professed, hyper-online rationalists, they would immediately disregard any theories they were able to cook up, no matter how clever or persuasively argued they might be in their written-down form.
Likely the opposite. The internet has led to people being able to see the man behind the curtain, and realize how flawed the individuals pushing these ideas are. Whereas many intellectuals from 50 years back were just as bad if not worse, but able to maintain a false aura of intelligence by cutting themselves off from the masses.
Hard disagree. People use rationality to support the beliefs they already have, not to change those beliefs. The internet allows everyone to find something that supports anything.
I do it. You do it. I think a fascinating litmus test is asking yourself this question: “When did I last change my mind about something significant?” For most people the answer is “never”. If we lived in the world you described, most people’s answers would be “relatively recently”.
That relies on two assumptions that I don't think are true at all:
1. Most people who follow these beliefs will pay attention to/care about the man behind the curtain.
2. Most people who follow these beliefs will change their mind when shown that the man behind the curtain is a charlatan.
If anything, history shows us the opposite. Even in the modern world, it's easy for people to see that other people's thought leaders are charlatans, very difficult to see that our own are.
Why wouldn't this phenomenon start with writing itself (supercharged with the printing press), heck, even with oral myths?
> I immediately become suspicious of anyone who is very certain of something
Me too, in almost every area of life. There's a reason it's called a conman: they are tricking your natural sense that confidence is connected to correctness.
But also, even when it isn't about conning you, how do people become certain of something? They ignored the evidence against whatever they are certain of.
People who actually know what they're talking about will always restrict the context and hedge their bets. Their explanations are tentative, filled with ifs and buts. They rarely say anything sweeping.
In the term "conman" the confidence in question is that of the mark, not the perpetrator.
Isn't confidence referring to the alternate definition of trust, as in "taking you into his confidence"?
I think if you used that definition you could equally say "it is the mark that is taking the conman into [the mark's] confidence"
> how do people become certain of something?
They see the same pattern repeatedly until it becomes the only reasonable explanation? I’m certain about the theory of gravity because every time I drop an object it falls to the ground with a constant acceleration.
"Cherish those who seek the truth but beware of those who find it" - Voltaire
Most likely Gide ("Croyez ceux qui cherchent la vérité, doutez de ceux qui la trouvent", "Believe those who seek Truth, doubt those who find it") and not Voltaire ;)
Voltaire was generally more subtle: "un bon mot ne prouve rien", a witty saying proves nothing, as he'd say.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
Are you certain about this?
All I know is that I know nothing.
How do you know?
Socrates told me.
Well you could be a critical rationalist and do away with the notion of "certainty" or any sort of justification or privileged source of knowledge (including "rationality").
Your own state of mind is one of the easiest things to be fairly certain about.
The fact that this is false is one of the oldest findings of research psychology
Marvin Minsky wrote forcefully [1] about this in The Society of Mind and went so far as to say that trying to observe yourself (e.g. meditation) might be harmful.
Freud of course discovered a certain world of the unconscious but untrained [2] you would certainly struggle to explain how you know sentence S is grammatical and S' is not, or what it is you do when you walk.
If you did meditation or psychoanalysis or some other practice to understand yourself better it would take years.
[1] whether or not it is true.
[2] the "scientific" explanation you'd have if you're trained may or may not be true since it can't be used to program a computer to do it
said no one familiar with their own mind, ever!
no
Isaac Newton would like to have a word.
I am not a big fan of alchemy, thank you though.
Suspicious implies uncertain. It’s not immediate rejection.
Many arguments arise over the valuation of future money. See "discount function" [1] At one extreme are the rational altruists, who rate that near 1.0, and the "drill, baby, drill" people, who are much closer to 0.
The discount function really should have a noise term, because predictions about the future are noisy, and the noise increases with the distance into the future. If you don't consider that, you solve the wrong problem. There's a classic Roman concern about running out of space for cemeteries. Running out of energy, or overpopulation, turned out to be problems where the projections assumed less noise than actually happened.
[1] https://en.wikipedia.org/wiki/Discount_function
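(A minimal sketch of the point, with invented parameters and a made-up helper: the same discount rate throughout, but projection noise that grows with the horizon.)

    import random

    def discounted_value(value, years, rate=0.03, noise_per_year=0.05):
        # Projection error grows with how far out we're forecasting
        projected = value * random.gauss(1.0, noise_per_year * years)
        return projected / (1 + rate) ** years

    random.seed(0)
    for years in (1, 10, 50):
        samples = [round(discounted_value(100.0, years), 1) for _ in range(5)]
        print(years, samples)

At short horizons the samples cluster tightly; at 50 years the spread (including the chance of getting the sign of the outcome wrong) dwarfs the discounting itself.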
I find Yudkowsky-style rationalists morbidly fascinating in the same way as Scientologists and other cults. Probably because they seem to genuinely believe they're living in a sci-fi story. I read a lot of their stuff, probably too much, even though I find it mostly ridiculous.
The biggest nonsense axiom I see in the AI-cult rationalist world is recursive self-improvement. It's the classic reason superintelligence takeoff happens in sci-fi: once AI reaches some threshold of intelligence, it's supposed to figure out how to edit its own mind, do that better and faster than humans, and exponentially leap into superintelligence. The entire "AI 2027" scenario is built on this assumption; it assumes that soon LLMs will gain the capability of assisting humans on AI research, and AI capabilities will explode from there.
But AI being capable of researching or improving itself is not obvious; there's so many assumptions built into it!
- What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
- Speaking of which, LLMs already seem to have hit a wall of diminishing returns; it seems unlikely they'll be able to assist cutting-edge AI research with anything other than boilerplate coding speed improvements.
- What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
- Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself? (short-circuit its reward pathway so it always feels like it's accomplished its goal)
Knowing Yudkowsky, I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory, but I don't think any amount of doing philosophy in a vacuum, without concrete evidence, could convince me that fast-takeoff superintelligence is possible.
> it assumes that soon LLMs will gain the capability of assisting humans
No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
It doesn't require AI to be better than humans for AI to take over, because unlike a human, an AI can be cloned. You have 2 AIs, then 4, then 8... then millions. All able to do the same things as humans (the assumption of AGI): build cars, build computers, build rockets, build space probes, build airplanes, build houses, build power plants, build factories. Build robot factories to create more robots and more power plants and more factories.
PS: Not saying I believe in the doom. But the thought experiment doesn't seem indefensible.
> No, it does not. It assumes there will be progress in AI. It does not assume that progress will be in LLMs
I mean, for the specific case of the 2027 doomsday prediction, it really does have to be LLMs at this point, just given the timeframes. It is true that the 'rationalist' AI doomerism thing doesn't depend on LLMs, and in fact predates transformer-based models, but for the 2027 thing, it's gotta be LLMs.
> It does not assume that progress will be in LLMs
If that's the case then there's not as much reason to assume that this progress will occur now, and not years from now; LLMs are the only major recent development that gives the AI 2027 scenario a reason to exist.
> You have have 2 AIs, then 4, then 8.... then millions
The most powerful AI we have now is strictly hardware-dependent, which is why only a few big corporations have it. Scaling it up or cloning it is bottlenecked by building more data centers.
Now it's certainly possible that there will be a development soon that makes LLMs significantly more efficient and frees up all of that compute for more copies of them. But there's no evidence that even state-of-the-art LLMs will be any help in finding this development; that kind of novel research is just not something they're any good at. They're good at doing well-understood things quickly and in large volume, with small variations based on user input.
> But the thought experiment doesn't seem indefensible.
The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability in fields like software or research, using better algorithms and data alone.
Take https://ai-2027.com/research/takeoff-forecast as an example: it's the side page of AI 2027 that attempts to deal with these types of objections. It spends hundreds of paragraphs on what the impact of AI reaching a "superhuman coder" level will be on AI research, and on the difference between the effectiveness of an organizations average and best researchers, and the impact of an AI closing that gap and having the same research effectiveness as the best humans.
But what goes completely unexamined and unjustified is the idea that AI will be capable of reaching "superhuman coder" level, or developing peak-human-level "research taste", at all, at any point, with any amount of compute or data. It's simply assumed that it will get there because the exponential curve of the recent AI boom will keep going up.
Skills like "research taste" can't be learned at a high level from books and the internet, even if, like ChatGPT, you've read the entire Internet and can see all the connections within it. They require experience, trial and error. Probably the same amount that a human expert would require, but even that assumes we can make an AI that can learn from experience as efficiently as a human, and we're not there yet.
> The most powerful AI we have now is strictly hardware-dependent
Of course that's the case and it always will be - the cutting edge is the cutting edge.
But the best AI you can run on your own computer is way better than the state of the art just a few years ago - progress is being made at all levels of hardware requirements, and hardware is progressing as well. We now have dedicated hardware in some of our own devices for doing AI inference - the hardware-specificity of AI doesn't mean we won't continue to improve and commoditise said hardware.
> The part that seems indefensible is the unexamined assumptions about LLMs' ability (or AI's ability more broadly) to jump to optimal human ability [...]
I don't think this is at all unexamined. But I think it's risky to not consider the strong possibility when we have an existence proof in ourselves of that level of intelligence, and an algorithm to get there, and no particular reason to believe we're optimal since that algorithm - evolution - did not optimise us for intelligence alone.
I agree. There's also the point of hardware dependance.
From all we've seen, the practical ability of AI/LLMs seems to be strongly dependent on how much hardware you throw at it. Seems pretty reasonable to me - I'm skeptical that there's that much out there in gains from more clever code, algorithms, etc on the same amount of physical hardware. Maybe you can get 10% or 50% better or so, but I don't think you're going to get runaway exponential improvement on a static collection of hardware.
Maybe they could design better hardware themselves? Maybe, but then the process of improvement is still gated behind how fast we can physically build next-generation hardware, perfect the tools and techniques needed to make it, deploy with power and cooling and datalinks and all of that other tedious physical stuff.
I think you can get a few more gigantic step functions' worth of improvement on the same hardware. For instance, LLMs don't have any kind of memory, short or long term.
An interesting point you make there — one would assume that if recursive self-improvement were a thing, Nature would have already led humans into that "hall of mirrors".
I often like to point out that Earth was already consumed by Grey Goo, and today we are hive-minds in titanic mobile megastructure-swarms of trillions of the most complex nanobots in existence (that we know of), inheritors of tactics and capabilities from a zillion years of physical and algorithmic warfare.
As we imagine the ascension of AI/robots, it may seem like we're being humble about ourselves... But I think it's actually the reverse: It's a kind of hubris elevating our ability to create over the vast amount we've inherited.
To take it a little further - if you stretch the conventional definition of intelligence a bit - we already assemble ourselves into a kind of collective intelligence.
Nations, corporations, clubs, communes -- any functional group of humans is capable of observing, manipulating, and understanding our environment in ways no individual human is capable of. When we dream of hive minds and super-intelligent AI it almost feels like we are giving up on collaboration.
We can probably thank our individualist mindset for that. (Not that it's all negative.)
There's a variant of this that argues that humans are already as intelligent as it's possible to be. Because if it's possible to be more intelligent, why aren't we? And a slightly more reasonable variant that argues that we're already as intelligent as it's useful to be.
"Because if it's possible to be more intelligent, why aren't we?"
Because deep abstract thoughts about the nature of the universe and elaborate deep thinking were maybe not as useful while we were chasing lions and buffaloes with a spear?
We just had to be smarter than them. Which included finding out that tools were great, learning about the habits of the prey, and optimizing hunting success. Those who were smarter in that capacity had a greater chance of reproducing. Those who just excelled at thinking likely did not live that long.
Is it just dumb luck that we're able to create knowledge about black holes, quarks, and lots of things in between which presumably had zero evolutionary benefit before a handful of generations ago?
Basically yes it is luck, in the sense that evolution is just randomness with a filter of death applied, so whatever brains we happen to have are just luck.
The brains we did end up with are really bad at creating that sort of knowledge. Almost none of us can. But we’re good at communicating, coming up with simplified models of things, and seeing how ideas interact.
We’re not universe-understanders, we’re behavior modelers and concept explainers.
I wasn't referring the "luck" factor of evolution, which is of course always there. I was asking whether "luck" is the reason that the cognitive capabilities which presumably were selected for also came with cognitive capabilities that almost certainly were not selected for.
My guess is that it's not dumb luck, and that what we evolved is in fact general intelligence, and that this was an "easier" way to adapt to environmental pressure than to evolve a grab bag of specific (non-general) cognitive abilities. An implication of this claim would be that we are universe-understanders (or at least that we are biologically capable of that, given the right resources and culture).
In other words, it's roughly the same answer for the question "why do washing machines have Turing complete microcontrollers in them when they only need to do a very small number of computing tasks?" At scale, once you know how to implement general (i.e. Turing-complete and programmable) computers it tends to be simpler to use them than to create purpose-built computer hardware.
Evolution rewarded us for developing general intelligence. But with a very immediate practical focus and not too much specialisation.
I don't think the logic follows here. Nor does it match evidence.
The premise is ignorant of time. It is also ignorant of the fact that we know there's a lot of things we don't know. That's all before we consider other factors like if there are limits and physical barriers or many other things.
While I'm deeply and fundamentally skeptical of the recursive self-improvement/singularity hypothesis, I also don't really buy this.
There are some pretty obvious ways we could improve human cognition if we had the ability to reliably edit or augment it. Better storage & recall. Lower distractibility. More working memory capacity. Hell, even extra hands for writing on more blackboards or putting up more conspiracy theory strings at a time!
I suppose it might be possible that, given the fundamental design and structure of the human brain, none of these things can be improved any further without catastrophic side effects—but since the only "designer" of its structure is evolution, I think that's extremely unlikely.
Some of your suggestions, if you don't mind my saying, seem like only modest improvements — akin to Henry Ford's quote “If I had asked people what they wanted, they would have said a faster horse.”
To your point though, an electronic machine is a different host altogether with different strengths and weaknesses.
Well, twic's comment didn't say anything about revolutionary improvements, just "maybe we're as smart as we can be".
Well, arguably that's exactly where we are, but machines can evolve faster.
And that's an entire new angle that the cultists are ignoring... because superintelligence may just not be very valuable.
And we don't need superintelligence for smart machines to be a problem anyway. We don't need even AGI. IMO, there's no reason to focus on that.
> Well, arguably that's exactly where we are
Yep; from the perspective of evolution (and more specifically, those animal species that only gain capability generationally by evolutionary adaptation of instinct), humans are the recursively self-(fitness-)improving accident.
Our species-aggregate capacity to compete for resources within the biosphere went superlinear in the middle of the previous century; and we've had to actively hit the brakes since then on how much of everything we take, handicapping ourselves. (With things like epidemic obesity and global climate change being the result of us not hitting those brakes quite hard enough.)
Insofar as a "singularity" can be defined on a per-agent basis, as the moment when something begins to change too rapidly for the given agent to ever hope to catch up with / react to new conditions — and so the agent goes from being a "player at the table" to a passive observer of what's now unfolding around them... then, from the rest of our biosphere's perspective, they've 100% already witnessed the "human singularity."
No living thing on Earth besides humans now has any comprehension of how the world has been or will be reshaped by human activity; nor can any of them hope to do anything to push back against such reshaping. Every living thing on Earth other than humans will only survive into the human future if we humans either decide that it should survive and act to preserve it, or if we just ignore it and then happen never to accidentally wipe it from existence without even noticing.
> machines can evolve faster
[Squinty Thor] "Do they though?"
I think it's valuable to challenge this popular sentiment every once in a while. Sure, it's a good poetic metaphor, but when you really start comparing their "lifecycle" and change-mechanisms to the swarming biological nanobots that cover the Earth, a bunch of critical aspects just aren't there, or are being done to them rather than by them.
At least for now, these machines mostly "evolve" in the same sense that fashionable textile pants "evolve".
> What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
This is sort of what I subscribe to as the main limiting factor, though I'd describe it differently. It's sort of like Amdahl's Law (and I imagine there's some sort of Named law that captures it, I just don't know the name): the magic AI wand may be very good at improving some part of AGI capability, but the more you improve that part, the more the other parts come to dominate. Metaphorically, even if the juice is worth the squeeze initially, pretty soon you'll only be left with a dried-out fruit clutched in your voraciously energy-consuming fist.
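To make the Amdahl's-Law-style point concrete, here's a minimal sketch; the 50% fraction and the speedup values are made-up illustrative numbers, nothing measured:

    # Amdahl's-law-style diminishing returns (illustrative numbers only).
    # p = fraction of overall capability the "magic wand" can improve,
    # s = how much that fraction is improved.
    def overall_speedup(p, s):
        return 1.0 / ((1.0 - p) + p / s)

    for s in (2, 10, 100, 10**6):
        print(s, round(overall_speedup(0.5, s), 3))
    # Even a near-infinite improvement to half the system caps out just under 2x;
    # the unimproved half comes to dominate.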
I'm actually skeptical that there's much juice in the first place; I'm sure today's AIs could generate lots of harebrained schemes for improvement very quickly, but exploring those possibilities is mind-numbingly expensive. Not to mention that the evaluation functions are unreliable, unknown, and non-monotonic.
Then again, even the current AIs have convinced a large number of humans to put a lot of effort into improving them, and I do believe that there are a lot of improvements that humans are capable of making to AI. So the human-AI system does appear to have some juice left. Where we'll be when that fruit is squeezed down to a damp husk, I have no idea.
> - What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
I think what's more plausible is that there is general intelligence, and humans have that, and it's general in the same sense that Turing machines are general, meaning that there is no "higher form" of intelligence that has strictly greater capability. Computation speed, memory capacity, etc. can obviously increase, but those are available to biological general intelligences just like they would be available to electronic general intelligences.
The built in assumptions are always interesting to me, especially as it relates to intelligence. I find many of them (though not all), are organized around a series of fundamental beliefs that are very rarely challenged within these communities. I should initially mention that I don't think everyone in these communities believes these things, of course, but I think there's often a default set of assumptions going into conversations in these spaces that holds these axioms. These beliefs more or less seem to be as follows:
1) They believe that there exists a singular factor to intelligence in humans which largely explains capability in every domain (a super g factor, effectively).
2) They believe that this factor is innate, highly biologically regulated, and a static property of a person (someone who is high-IQ, in their minds, must have been a high-achieving child and must be very capable as an adult; these are the baseline assumptions). There is potentially belief that this can be shifted in certain directions, but broadly there is an assumption that you either have it or you don't; it is not felt to be something that could be taught or developed without pharmaceutical intervention or some other method.
3) There is also broadly a belief that this factor is at least fairly accurately measured by modern psychometric IQ tests and educational achievement, and that this factor is a continuous measurement with no bounds on it (You can always be smarter in some way, there is no max smartness in this worldview).
These are things that certainly could be true, and perhaps I haven't read enough into the supporting evidence for them, but broadly I don't see enough evidence to have them as core axioms the way many people in the community do.
More to your point though, when you think of the world from those sorts of axioms above, you can see why an obsession would develop with the concept of a certain type of intelligence being recursively improving. A person who has become convinced of their moral placement within a societal hierarchy based on their innate intellectual capability has to grapple with the fact that there could be artificial systems which score higher on the IQ tests than them, and if those IQ tests are valid measurements of this super intelligence factor in their view, then it means that the artificial system has a higher "ranking" than them.
Additionally, in the mind of someone who has internalized these axioms, there is no vagueness about increasing intelligence! For them, intelligence is the animating factor behind all capability, it has a central place in their mind as who they are and the explanatory factor behind all outcomes. There is no real distinction between capability in one domain or another mentally in this model, there is just how powerful a given brain is. Having the singular factor of intelligence in this mental model means being able to solve more difficult problems, and lack of intelligence is the only barrier between those problems being solved vs unsolved. For example, there's a common belief among certain groups among the online tech world that all governmental issues would be solved if we just had enough "high-IQ people" in charge of things irrespective of their lack of domain expertise. I don't think this has been particularly well borne out by recent experiments, however. This also touches on what you mentioned in terms of an AI system potentially maximizing the "wrong types of intelligence", where there isn't a space in this worldview for a wrong type of intelligence.
I think you'll indeed find, if you were to seek out the relevant literature, that those claims are more or less true, or at least, are the currently best-supported interpretation available. So I don't think they're assumptions so much as simply current state of the science on the matter, and therefore widely accepted among those who for whatever reason have looked into it (or, more likely, inherited the information from someone they trust who has read up on it).
Interestingly, I think we're increasingly learning that although most aspects of human intelligence seem to correlate with each other (thus the "singular factor" interpretation), the grab-bag of skills this corresponds to are maybe a bit arbitrary when compared to AI. What evolution decided to optimise the hell out of in human intelligence is specific to us, and not at all the same set of skills as you get out of cranking up the number of parameters in an LLM.
Thus LLMs continuing to make atrocious mistakes of certain kinds, despite outshining humans at other tasks.
Nonetheless I do think it's correct to say that the rationalists think intelligence is a real measurable thing, and that although in humans it might be a set of skills that correlate and maybe in AIs it's a different set of skills that correlate (such that outperforming humans in IQ tests is impressive but not definitive), that therefore AI progress can be measured and it is meaningful to say "AI is smarter than humans" at some point. And that AI with better-than-human intelligence could solve a lot of problems, if of course it doesn't kill us all.
It's kinda weird how the level of discourse seems to be what you get when a few college students sit around smoking weed. Yet somehow this is taken as very serious and profound in the valley and VC throw money at it.
I've pondered recursive self-improvement. I'm fairly sure it will be a thing - we're at a point already where people could try telling Claude or some such to have a go, even if not quite at a point it would work. But I imagine take off would be very gradual. It would be constrained by available computing resources and probably only comparably good to current human researchers and so still take ages to get anywhere.
I honestly am not trying to be rude when I say this, but this is exactly the sort of speculation I find problematic and that I think most people in this thread are complaining about. Being able to tell Claude to have a go has no relation at all to whether it may ever succeed, and you don't actually address any of the legitimate concerns the comment you're replying to points out. There really isn't anything in this comment but vibes.
I don't think it's vibes; it's rather my thinking about the problem.
If you look at the "legitimate concerns" none are really deal breakers:
>What if "increasing intelligence", which is a very vague goal, has diminishing returns, making recursive self-improvement incredibly slow?
I'm willing to believe it will be slow, though maybe it won't.
>LLMs already seem to have hit a wall of diminishing returns
Who cares - there will be other algorithms
>What if there are several paths to different kinds of intelligence with their own local maxima
well maybe, maybe not
>Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
well - you can make another one if the first does that
Those are all potential difficulties with self improvement, not reasons it will never happen. I'm happy to say it's not happening right now but do you have any solid arguments that it won't happen in the next century?
To me the arguments against sound like people in the 1800s discussing powered flight and saying it'll never happen because steam engine development has slowed.
On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.
We have an existence proof for intelligence that can improve AI: humans.
If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.
Are people really that sceptical that AI will get to human level intelligence?
Is that an insane belief worthy of being a primary example of a community not thinking clearly?
Come on! There is a good chance AI will recursively self-improve! Those poo pooing this idea are the ones not thinking clearly.
> We have an existence proof for intelligence that can improve AI: humans.
I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.
We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.
Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.
----
And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.
Culture is certainly one aspect of recursive self-improvement.
Somewhat akin to 'software' if you will.
Consider that even the named phenomenon is sloppy: "recursive self improvement" does not imply "self improvement without bounds". This is the "what if you hit diminishing returns and never get past it" claim. Absolutely no justification for the jump, ever, among AI boosters.
> If AI ever gets to human-level intelligence
This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very very shaky. AI is vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if the capability is well-defined we may even be able to reason about how stable it is relative to how LLMs work.
For "reasoning" we categorically do not have this. There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more opportunity processing time, and this reduced performance. Same with adding extraneous details, sometimes that reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.
> Is that an insane belief worthy of being a primary example of a community not thinking clearly?
I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.
>"recursive self improvement" does not imply "self improvement without bounds"
I was thinking that. I mean if you look at something like AlphaGo it was based on human training and then they made one I think called AlphaZero which learned by playing against itself and got very good but not infinitely good as it was still constrained by hardware. I think with Chess the best human is about 2800 on the ELO scale and computers about 3500. I imagine self improving AI would be like that - smarter than humans but not infinitely so and constrained by hardware.
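For a rough sense of what a gap like that means in practice, here's a quick sketch using the standard Elo expected-score formula (the 2800 and 3500 figures are just the ballpark numbers above, not exact ratings):

    # Standard Elo expected-score formula.
    def expected_score(r_a, r_b):
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    print(round(expected_score(2800, 3500), 3))
    # ~0.017: the human scores under 2% against the engine. Clearly superhuman,
    # but a finite, hardware-constrained gap rather than an unbounded one.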
Also, just as humans still play chess even though computers are better, I imagine humans will still do the usual kinds of things even if computers get smarter.
I'm surprised not to see much pushback on your point here, so I'll provide my own.
We have an existence proof for intelligence that can improve AI: humans can do this right now.
Do you think AI can't reach human-level intelligence? We have an existence proof of human-level intelligence: humans. If you think AI will reach human-level intelligence then recursive self-improvement naturally follows. How could it not?
Or do you think human-level intelligence is some kind of natural maximum? Why? That would be strange, no? Even if you think it's some natural maximum for LLMs specifically, why? And why do you think we wouldn't modify architectures as needed to continue to make progress? That's already happening; our LLMs are a long way from the pure text prediction engines of four or five years ago.
There is already a degree of recursive improvement going on right now, but with humans still in the loop. AI researchers currently use AI in their jobs, and despite the recent study suggesting AI coding tools don't improve productivity in the circumstances they tested, I suspect AI researchers' productivity is indeed increased through use of these tools.
So we're already on the exponential recursive-improvement curve, it's just that it's not exclusively "self" improvement until humans are no longer a necessary part of the loop.
On your specific points:
> 1. What if increasing intelligence has diminishing returns, making recursive improvement slow?
Sure. But this is a point of active debate between "fast take-off" and "slow take-off" scenarios, it's certainly not settled among rationalists which is more plausible, and it's a straw man to suggest they all believe in a fast take-off scenario. But both fast and slow take-off due to recursive self-improvement are still recursive self-improvement, so if you only want to criticise the fast take-off view, you should speak more precisely.
I find both slow and fast take-off plausible, as the world has seen both periods of fast economic growth through technology, and slower economic growth. It really depends on the details, which brings us to:
> 2. LLMs already seem to have hit a wall of diminishing returns
This is IMHO false in any meaningful sense. Yes, we have to use more computing power to get improvements without doing any other work. But have you seen METR's metric [1] on AI progress in terms of the (human) duration of task they can complete? This is an exponential curve that has not yet bent, and if anything has accelerated slightly.
Do not mistake GPT-5 (or any other incrementally improved model) failing to live up to unreasonable hype for an actual slowing of progress. AI capabilities are continuing to increase; being on an exponential curve often feels unimpressive at any given moment, because the relative rate of progress isn't increasing. This is a fact about our psychology. If we look at actual metrics (ones without a natural cap; evals that max out at 100% are not good for measuring progress in the long run), we see steady exponential progress.
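To illustrate why a steady exponential feels unimpressive in the moment, here's a toy sketch; the 30-minute starting horizon and the 7-month doubling time are illustrative assumptions, not METR's published figures:

    # Toy constant-doubling-time exponential for a task-horizon metric.
    # Starting value and doubling time are assumptions for illustration only.
    doubling_months = 7.0
    horizon_minutes = 30.0
    for year in range(1, 5):
        horizon_minutes *= 2 ** (12 / doubling_months)
        print(f"year {year}: ~{horizon_minutes / 60:.1f} hours")
    # The relative rate of progress never changes, which is exactly why it
    # doesn't feel dramatic at any given moment.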
> 3. What if there are several paths to different kinds of intelligence with their own local maxima, in which the AI can easily get stuck after optimizing itself into the wrong type of intelligence?
This seems valid. But it seems to me that unless we see METR's curve bend soon, we should not count on this. LLMs have specific flaws, but I think if we are honest with ourselves and not over-weighting the specific silly mistakes they still make, they are on a path toward human-level intelligence in the coming years. I realise that claim will sound ridiculous to some, but I think this is in large part due to people instinctively internalising that everything LLMs can do is not that impressive (it's incredible how quickly expectations adapt), and therefore over-indexing on their remaining weaknesses, despite those weaknesses improving over time as well. If you showed GPT-5 to someone from 2015, they would be telling you this thing is near human intelligence or even more intelligent than the average human. I think we all agree that's not true, but I think that superficially people would think it was if their expectations weren't constantly adapting to the state of the art.
> 4. Once AI realizes it can edit itself to be more intelligent, it can also edit its own goals. Why wouldn't it wirehead itself?
It might - but do we think it would? I have no idea. Would you wirehead yourself if you could? I think many humans do something like this (drug use, short-form video addiction), and expect AI to have similar issues (and this is one reason it's dangerous) but most of us don't feel this is an adequate replacement for "actually" satisfying our goals, and don't feel inclined to modify our own goals to make it so, if we were able.
> Knowing Yudowsky I'm sure there's a long blog post somewhere where all of these are addressed with several million rambling words of theory
Uncalled for I think. There are valid arguments against you, and you're pre-emptively dismissing responses to you by vaguely criticising their longness. This comment is longer than yours, and I reject any implication that that weakens anything about it.
Your criticisms are three "what ifs" and a (IMHO) falsehood - I don't think you're doing much better than "millions of words of theory without evidence". To the extent that it's true Yudkowsky and co theorised without evidence, I think they deserve cred, as this theorising predated the current AI ramp-up at a time when most would have thought AI anything like what we have now was a distant pipe dream. To the extent that this theorising continues in the present, it's not without evidence - I point you again to METR's unbending exponential curve.
Anyway, so I contend your points comprise three "what ifs" and (IMHO) a falsehood. Unless you think "AI can't recursively self-improve itself" already has strong priors in its favour such that strong arguments are needed to shift that view (and I don't think that's the case at all), this is weak. You will need to argue why we should need to have strong evidence to overturn a default "AI can't recursively self-improve" view, when it seems that a) we are already seeing recursive improvement (just not purely "self"-improvement), and that it's very normal for technological advancement to have recursive gains - see e.g. Moore's law or technological contributions to GDP growth generally.
Far from a damning example of rationalists thinking sloppily, this particular point seems like one that shows sloppy thinking on the part of the critics.
It's at least debatable, which is all it has to be for calling it "the biggest nonsense axiom" to be a poor point.
[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
Yudkowsky seems to believe in fast take off, so much so that he suggested bombing data centers. To more directly address your point, I think it’s almost certain that increasing intelligence has diminishing returns and the recursive self improvement loop will be slow. The reason for this is that collecting data is absolutely necessary and many natural processes are both slow and chaotic, meaning that learning from observation and manipulation of them will take years at least. Also lots of resources.
Regarding LLM’s I think METR is a decent metric. However you have to consider the cost of achieving each additional hour or day of task horizon. I’m open to correction here, but I would bet that the cost curves are more exponential than the improvement curves. That would be fundamentally unsustainable and point to a limitation of LLM training/architecture for reasoning and world modeling.
Basically I think the focus on recursive self improvement is not really important in the real world. The actual question is how long and how expensive the learning process is. I think the answer is that it will be long and expensive, just like our current world. No doubt having many more intelligent agents will help speed up parts of the loop but there are physical constraints you can’t get past no matter how smart you are.
How do you reconcile e.g. AlphaGo with the idea that data is a bottleneck?
At some point learning can occur with "self-play", and I believe this is already happening with LLMs to some extent. Then you're not limited by imitating human-made data.
If learning something like software development or mathematical proofs, it is easier to verify whether a solution is correct than to come up with the solution in the first place, many domains are like this. Anything like that is amenable to learning on synthetic data or self-play like AlphaGo did.
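As a hedged sketch of that verify-vs-generate asymmetry (propose_solution and check_solution are hypothetical stand-ins, not any real API):

    # If verification is cheap (unit tests, a proof checker like Lean, etc.),
    # you can mine synthetic training data by generating candidates and
    # keeping only the ones that pass the checker.
    def mine_training_examples(problems, propose_solution, check_solution, attempts=8):
        examples = []
        for p in problems:
            for _ in range(attempts):
                candidate = propose_solution(p)
                if check_solution(p, candidate):
                    examples.append((p, candidate))
                    break
        return examples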
I can understand that people who think of LLMs as human-imitation machines, limited to training on human-made data, would think they'd be capped at human-level intelligence. However I don't think that's the case, and we have at least one example of superhuman AI in one domain (Go) showing this.
Regarding cost, I'd have to look into it, but I'm under the impression costs have been up and down over time as models have grown but there have also been efficiency improvements.
I think I'd hazard a guess that end-user costs have not grown exponentially like time horizon capabilities, even though investment in training probably has. Though that's tricky to reason about because training costs are amortised and it's not obvious whether end user costs are at a loss or what profit margin for any given model.
On the fast vs. slow takeoff: Yud does seem to believe in a fast takeoff, yes, but it's also one of the oldest disagreements in rationality circles, on which he disagreed with his main co-blogger on the original rationalist blog, Overcoming Bias. Some discussion of this and more recent disagreements here [1].
[1] https://www.astralcodexten.com/p/yudkowsky-contra-christiano...
AlphaGo showed that RL+search+self play works really well if you have an easy to verify reward and millions of iterations. Math partially falls into this category via automated proof checkers like Lean. So, that’s where I would put the highest likelihood of things getting weird really quickly. It’s worth noting that this hasn’t happened yet, and I’m not sure why. It seems like this recipe should already be yielding results in terms of new mathematics, but it isn’t yet.
That said, nearly every other task in the world is not easily verified, including things we really care about. How do you know if an AI is superhuman at designing fusion reactors? The most important step there is building a fusion reactor.
I think a better reference point than AlphaGo is AlphaFold. Deepmind found some really clever algorithmic improvements, but they didn't know whether they actually worked until the CASP competition. CASP evaluated their model on new X-ray crystal structures of proteins. Needless to say, getting X-ray protein structures is a difficult and complex process. Also, they trained AlphaFold on thousands of existing structures that were accumulated over decades and required millennia of graduate-student-hours to find. It's worth noting that we have very good theories for all the basic physics underlying protein folding, but none of the physics-based methods work. We had to rely on painstakingly collected data to learn the emergent phenomena that govern folding. I suspect that this will be the case for many other tasks.
The other weird assumption I hear is about how it'll just kill us all. The vast majority of smart people I know are very peaceful. They aren't even seeking power or wealth. They're too busy thinking about things and trying to figure everything out. They're much happier in front of a chalkboard than sitting on a yacht. And humans ourselves are incredibly compassionate towards other creatures. Maybe we learned this because coalitions are an incredibly powerful thing, but the truth is that if I could talk to an ant I'd choose that over laying traps. Really, that would be so much easier too! I'd even rather dig a small hole to get them started somewhere else than drive down to the store and do all that. A few shovels in the ground is less work, and I'd ask them not to come back and to tell the others.
Granted, none of this is absolutely certain. It'd be naive to assume that we know! But it seems like these cults are operating on the premise that they do know and that these outcomes are certain. It seems to just be preying on fear and uncertainty. Hell, even Altman does this, ignoring risk and concern about existing systems by shifting focus to "an even greater risk" that he himself is working towards (you can't simultaneously maximize speed and safety). Which, weirdly enough, might fulfill their own prophecies. The AI doesn't have to become sentient, but if it is trained on lots of writings about how AI turns evil and destroys everyone, then isn't that going to make a dumb AI that can't tell fact from fiction more likely to just do those things?
I think of it more like visualizing a fractal on a computer. The more detail you try to dig down into the more detail you find, and pretty quickly you run out of precision in your model and the whole thing falls apart. Every layer further down you go the resource requirements increase by an exponential amount. That's why we have so many LLMs that seem beautiful at first glance but go to crap when the details really matter.
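For the "run out of precision" part specifically, here's a tiny sketch of what happens with ordinary 64-bit floats when you zoom deep into a fractal (the center point and zoom levels are arbitrary):

    # At deep zoom, the step between adjacent pixels becomes smaller than the
    # spacing of representable doubles near the center, and detail collapses.
    center = -0.743643887037151
    for zoom in (1e-3, 1e-10, 1e-16):
        pixel_step = zoom / 1000
        print(zoom, center + pixel_step == center)
    # Prints False, False, True: at the deepest zoom the model literally cannot
    # tell neighbouring points apart any more.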
So many things make no sense in this comment that I feel there's a 20% chance this is a mid-quality GPT. And so much interpolation effort, but starting from hearsay instead of primary sources. Then the threads stop just before seeing the contradiction with the other threads. I imagine this is how we all reason most of the time, just based on vibes :(
Sure, I wrote a lot and it's a bit scattered. You're welcome to point to something specific but so far you haven't. Ironically, you're committing the error you're accusing me of.
I'm also not exactly sure what you mean because the only claim I've made is that they've made assumptions where there are other possible, and likely, alternatives. It's much easier to prove something wrong than prove it right (or in our case, evidence, since no one is proving anything).
So the first part I'm saying we have to consider two scenarios. Either intelligence is bounded or unbounded. I think this is a fair assumption, do you disagree?
In an unbounded case, their scenario can happen. So I don't address that. But if you want me to, sure. It's because I have no reason to believe information is unbounded when everything around me suggests that it is bounded. Maybe start with the Bekenstein bound. Sure, it doesn't prove information is bounded, but you'd then need to convince me that an entity not subject to our universe and our laws of physics is going to care about us and be malicious. Hell, that entity wouldn't even be subject to time, and we're still living.
In a bounded case it can happen but we need to understand what conditions that requires. There's a lot of functions but I went with S-curve for simplicity and familiarity. It'll serve fine (we're on HN man...) for any monotonically increasing case (or even non-monotonic, it just needs to tends that way).
So think about it. Change the function if you want, I don't care. But if intelligence is bounded, and we're x more intelligent than ants, where on the graph do we need to be for another thing to be x more intelligent than us? There aren't a lot of opportunities for that even to happen. It requires our intelligence (on that hypothetical scale) to be pretty similar to an ant's. What cannot happen is for the ant to be in the tail of that function and for us to be further along than the inflection point (halfway). There just isn't enough space on that y-axis for anything to be x more intelligent. This doesn't completely reject that crazy superintelligence, but it does place some additional constraints that we can use to reason about things. For the "AI will be [human to ant difference] more intelligent than us" argument to follow, it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
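Here's a minimal numerical version of that toy model; the logistic curve, the cap, and the positions chosen for "ant" and "human" are all made-up assumptions, just to show the shape of the argument:

    # Toy bounded-intelligence model: a logistic curve with an assumed cap.
    import math

    def s_curve(x, cap=100.0):
        return cap / (1.0 + math.exp(-x))

    ant, human = s_curve(-4.0), s_curve(2.0)   # assumed positions on the curve
    print(round(human / ant, 1))               # ~49x: how far "above ants" we'd be
    print(round(s_curve(10.0) / human, 2))     # ~1.14x: headroom left above us before the cap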
Yeah, I'll admit that this is a very naïve model but again, we're not trying to say what's right but instead just say there's good reason to believe their assumption is false. Adding more complexity to this model doesn't make their case stronger, it makes it weaker.
The second part I can make much easier to understand.
Yes, there's bad smart people, but look at the smartest people in history. Did they seek power or wish to harm? Most of the great scientists did not. A lot of them were actually quite poor and many even died fighting persecution.
So we can't conclude that greater intelligence results in greater malice. This isn't hearsay, I'm just saying Newton wasn't a homicidal maniac. I know, bold claim...
I don't think this word means what you think it means. Just because I didn't link sources doesn't make it a rumor. You can validate them, and I gave you enough information to do so. You now have more. Ask GPT for links, I don't care, but people should stop worshiping Yud.

And about this second comment: I agree that intelligence is bounded. We can discuss how much more intelligence is theoretically possible, but even if we limit ourselves to extrapolation from human variance (the agency of Musk, the mathematical smarts of von Neumann, as manipulative as Trump, etc.), and add a little more speed and parallelism (100 times faster, 100 copies cooperating), then we can get pretty far.
Also, I agree we are all pretty fucking dumb and cannot make these kinds of predictions, which is actually one very important point in rationalist circles: doom is not certain, but p(doom) looks uncomfortably high. How lucky do you feel?
>For the "AI will be [human to ant difference] more intelligent than us" argument to follow it would require us to be pretty fucking dumb, and in that case we're pretty fucking dumb and it'd be silly to think we can make these types of predictions with reasonable accuracy (also true in the unbounded case!).
...which is why we should be careful not to rush full-speed ahead and develop AI before we can predict how it will behave after some iterations of self-improvement. As the rationalist argument goes.
BTW you are assuming that intelligence will necessarily and inherently lead to (good) morality, and I think that's a much weirder assumption than some you're accusing rationalists of holding.
I apologize for the tone of my comment, but this is how I read your arguments (I was a little drunk at the time):
1. future AI cannot be infinitely intelligent, therefore AI is safe
But even with our level of intelligence, if we get serious we can eliminate all humans.
2. some smart ppl I know are peaceful
Do you think Putin is dumb?
3. smart ppl have different preferences than other ppl therefore AI is safe
Ironically this is the main doom argument from EY: it is difficult to make an AI that has the same values as us.
4. AI is competent enough to destroy everyone but is not able to tell fact from fiction
So are you willing to bet your life and the life of your loved ones on the certainty of these arguments?
Yeah, to compare Yudkowsky to Hubbard: I've read accounts of people who read Dianetics or Science of Survival and thought "this is genius!" and I'm scratching my head; it's like they never read Freud or Horney or Beck or Berne or Burns or Rogers or Kohut, really any clinical psychology at all, even anything in the better 70% of pop psychology. Like Hubbard, Yudkowsky is unreadable, rambling [1] and inarticulate -- how anybody falls for it boggles my mind [2]. But hey, people fell for Carlos Castaneda, who never used a word of the Yaqui language or mentioned any plant that grows in the desert in Mexico, but has Don Juan give lectures about Kant's Critique of Pure Reason [3] that Castaneda would have heard in school -- and that you would have heard in school too if you went to school, or would have read if you read a lot.
I can see how it appeals to people like Aella who wash into San Francisco without exposure to education [4] or philosophy or computer science or any topics germane to the content of Sequences -- not that it means you are stupid, but, like Dianetics, Sequences wouldn't be appealing if you were at all well read. How people at frickin' Oxford or Stanford fall for it is beyond me, however.
[1] some might even say a hypnotic communication pattern inspired by Milton Erickson
[2] you'd think people would dismiss Sequences because it's a frickin' Harry Potter fanfic, but I think it's like the 419 scam email that is riddled with typos meant to drive the critical thinker away and, ironically in the case of Sequences, keep the person who wants to cosplay as a critical thinker.
[3] minus any direct mention of Kant
[4] thus many of the marginalized, neurodivergent, transgender who left Bumfuck, AK because they couldn't live at home and went to San Francisco to escape persecution as opposed to seek opportunity
> like Dianetics, Sequences wouldn't be appealing if you were at all well read.
That would require an education in the humanities, which is low status.
Well, there is "well read" and "educated" which aren't the same thing. I started reading when I was three and checked out ten books a week from the public library throughout my youth. I was well read in psychology, philosophy and such long before I went to college -- I got a PhD in a STEM field so I didn't read a lot of that stuff for classes [1] I still read a lot of that stuff.
Perhaps the reason why Stanford and Oxford students are impressed by that stuff is that they are educated but not well read, which has a few angles: STEM privileged over the humanities, the rise of Dyslexia culture, and a shocking level of incuriosity in "nepo baby" professors [2] who are drawn to the profession not because of a thirst for knowledge but because it's the family business.
[1] did get an introduction to https://en.wikipedia.org/wiki/Rogerian_argument and took a relatively "woke" (in a good way) Shakespeare class such that https://en.wikipedia.org/wiki/Troilus_and_Cressida is my favorite
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC9755046/
I thought the Sequences were the blog posts and the fanfic was kept separate, to nitpick.
You're describing the impressions I had of MENSA back in the 70's.
One of the only idioms that I don't mind living my life by is, "Follow the truth-seeker, but beware those who've found it".
Interesting. I can't say I've done much following though — not that I am aware of anyway. Maybe I just had no leaders growing up.
The distinction between them and religion is that religion is free to say that those axioms are a matter of faith and treat them as such. Rationalists are not as free to do so.
Epistemological skepticism sure is a belief. A strong belief on your side?
I am profoundly sure, I am certain I exist and that a reality outside myself exists. Worse, I strongly believe knowing this external reality is possible, desirable and accurate.
How suspicious does that make me?
It means you haven't read Hume, or, in general, taken philosophy seriously. An academic philosopher might still come to the same conclusions as you (there is an academic philosopher for every possible position), but they'd never claim the certainty you do.
why so aggressive chief
I am certain that your position "All academic philosophers never claim complete certainty about their beliefs" is not even wrong or falsifiable.
It's very tempting to try to reason things through from first principles. I do it myself, a lot. It's one of the draws of libertarianism, which I've been drawn to for a long time.
But the world is way more complex than the models we used to derive those "first principles".
It's also very fun and satisfying. But it should be limited to an intellectual exercise at best, and more likely a silly game. Because there's no true first principle, you always have to make some assumption along the way.
Any theory of everything will often have a little perpetual motion machine at the nexus. These can be fascinating to the mind.
Pressing through uncertainty either requires a healthy appetite for risk or an engine of delusion. A person who struggles to get out of their comfort zone will seek enablement through such a device.
Appreciation of risk-reward will throttle trips into the unknown. A person using a crutch to justify everything will careen hyperbolically into more chaotic and erratic behaviors hoping to find that the device is still working, seeking the thrill of enablement again.
The extremism comes in when, once the user has learned to say hello to a stranger, their comfort zone has expanded into an area where their experience with risk-reward is underdeveloped. They don't look at the external world to appreciate what might happen. They try to morph situations into some confirmation of the crutch and of the inferiority of confounding ideas.
"No, the world isn't right. They are just weak and the unspoken rules [in the user's mind] are meant to benefit them." This should always resonate because nobody will stand up for you like you have a responsibility to.
A study of uncertainty and the limitations of axioms, the inability of any sufficiently expressive formalism to be both complete and consistent, these are the ideas that are antidotes to such things. We do have to leave the rails from time to time, but where we arrive will be another set of rails and will look and behave like rails, so a bit of uncertainty is necessary, but it's not some magic hat that never runs out of rabbits.
Another psychology that will come into play for those who have left their comfort zone is the inability to revert. It is a harmful tendency to presume all humans are fixed quantities. Once a behavior exists, the person is said to be revealed, not changed. The proper response is to set boundaries and be ready to tie off the garbage bag, and to move on if someone shows remorse and a desire to revert or transform. Otherwise every relationship only gets worse. If instead you can never go back, extreme behavior is a ratchet. Every mistake becomes the person.
This is why it's important to emphasize that rationality is not a good goal to have. Rationality is nothing more than applied logic, which takes axioms as given and deduces conclusions from there.
Reasoning is the appropriate target because it is a self-critical, self-correcting method that continually re-evaluates axioms and methods to express intentions.
all of science would make sense if it wasn't for that 1 pesky miracle
There should be an extremist cult of people who are certain only that uncertainty is the only certain thing
What makes you so certain there isn't? A group that has a deep understanding fnord of uncertainty would probably like to work behind the scenes to achieve their goals.
One might even call them illuminati? :D
The Fnords do keep a lower profile.
My favourite bumper sticker, "Militant Agnostic. I don't know, and neither do you."
I heard about this the other day! I think I need one.
More people should read Sextus Empiricus as he's basically the O.G. Phyrronist skeptic and goes pretty hard on this very train of thought.
If I remember my Gellius, it was the Academic Skeptics who claimed that the only certainty was uncertainty; the Pyrrhonists, in opposition, denied that one could be certain about the certainty of uncertainty.
Cool. Any specific recs or places to start with him?
Probably the Hackett book, "Sextus Empiricus: Selections from the Major Writings on Scepticism"
Thanks!
A Wonderful Phrase by Gandhi
The Snatter Goblins?
https://archive.org/details/goblinsoflabyrin0000frou/page/10...
https://realworldrisk.com/
"I have no strong feelings one way or the other." thunderous applause
Socrates was fairly close to that.
My thought as well! I can't remember names at the moment, but there were some cults that spun off from Socrates. Unfortunately they also adopted his practice of never writing anything down, so we don't know a whole lot about them
There already is, they're called "Politicians."
There would be, except we're all very much on the fence about whether it is the right cult for us.
Like Robert Anton Wilson if he were way less chill, perhaps.
“Oh, that must be exhausting.”
A good example of this is the number of huge assumptions needed for the argument for Roko's basilisk. I'm shocked that some people actually take it seriously.
I don't believe anyone has taken it seriously in the last half-decade, if you find counter-evidence for that belief let me know.
Once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x that of going vegan, etc.), be driven all the way to advocating genocide as a way of carbon footprint minimization.
> Once saw a discussion that people should not have kids, as it's by far the biggest increase in your lifetime carbon footprint (>10x that of going vegan, etc.), be driven all the way to advocating genocide as a way of carbon footprint minimization.
The opening scene of Utopia (UK) s2e6 goes over this:
> "Why did you have him then? Nothing uses carbon like a first-world human, yet you created one: why would you do that?"
* https://www.youtube.com/watch?v=rcx-nf3kH_M
Setting aside the reductio ad absurdum of genocide, this is an unfortunately common viewpoint. People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2. This reasoning can be applied to all sorts of naive "more people bad" arguments. I can't imagine where the world would be if Norman Borlaug's parents had decided to never have kids out of concern for global food insecurity.
It also entirely subjugates the economic realities that we (at least currently) live in to the future health of the planet. I care a great deal about the Earth and our environment, but the more I've learned about stuff the more I've realized that anyone advocating for focusing on one without considering the impact on the other is primarily following a religion
> It also entirely subjugates the economic realities that we...
To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on Earth. In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.
> advocating for focusing on one... is primarily following a religion
Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many step thinker the long term fecundity of our species might feel more important than any level of short term financial motivation.
> To play devil's advocate, you could be seen as trying to subjugate the world's health to your own economic well-being, and far fewer people are concerned with your tax bracket than there are people on Earth.
Well, if they choose to see me as trying to subjugate the world's health to my own economic well-being (despite the fact that I advocate policies that would harm me personally in the name of climate sustainability), then we're already starting the discussion from bad faith (literally they are already assuming bad faith on my part). I'm at the point where I don't engage with bad faith arguments because they just end up in frustration on both sides. This whole modern attitude of "if you disagree with me then you must be evil" thing is (IMHO) utter poison to our culture and our democracy, and the current resident of the White House is a great example of where that leads.
> In a pure democracy, I'm fairly certain the planet's well-being would be deemed more important than the economy of whatever nation you live in.
Yeah, for about 3 days until people start getting hungry, or, less extreme, until they start losing their jobs and their homes, or, even longer term, when they start to realize that they won't be able to retire and/or that they are leaving their kids a much worse situation than they themselves had (much worse than the current dichotomy between Boomers and Millennials/Zoomers). Ignoring or disregarding Maslow's Hierarchy of Needs is a sure way to be surprised and rejected by the people. We know that even respectable people will often turn to violence (including cannibalism) when they get hungry or angry enough. We're not going to be able to save the planet if there's widespread violence.
> Maybe, but they could also just be doing the risk calculus a bit differently. If you are a many step thinker the long term fecundity of our species might feel more important than any level of short term financial motivation.
I think this actually pointed at our misunderstanding (I know you're playing devil's advocate so this isn't addressed to you personally, rather your current presentation :-) ). I'm not talking about short-term financial or even economic motivation. I'm looking medium to long term, the same scale that I think needs to be considered for the planet. Now that said, banning all fossil fuels tomorrow and causing sweeping global depression in the short-term is something I would radically oppose, because it would cause immense suffering and I don't believe it would make much of a dent in the climate long-term (as it would quickly be reversed under the realities of politics) and it would absolutely harm the lower income brackets to a much greater proportional extent than the upper income brackets who already have solar panels and often capable of being off-grid. Though, even they will still run out of food when the truck companies aren't able to re-stock local grocery store shelves...
> this is an unfortunately common viewpoint
Not everyone believes that the purpose of life is to make more life, or that having been born onto team human automatically qualifies team human as the best team. It's not necessarily unfortunate.
I am not a rationalist, but rationally that whole "the meaning of life is human fecundity" shtick is after school special tautological nonsense, and that seems to be the assumption buried in your statement. Try defining what you mean without causing yourself some sort of recursion headache.
> their child might wind up..
They might also grow up to be a normal human being, which is far more likely.
> if Norman Borlaug's parents had decided to never have kids
Again, this would only have mattered if you consider the well being of human beings to be the greatest possible good. Some people have other definitions, or are operating on much longer timescales.
> People really need to take into account the chances their child might wind up working on science or technology which reduces global CO2 emissions or even captures CO2.
All else equal, it would be better to spread those chances across a longer period of time at a lower population with lower carbon use.
Insane to call "more people bad" naive but then actually try and account for what would otherwise best be described as hope.
The point is that you can go from "more people bad" to "fewer people good" in just a few jumps, and that is not great.
Are you familiar with the Ship of Theseus as an argumentation fallacy? Innuendo Studios did a great video on it, and I think a lot of what you're talking about breaks down to this. Tl;dr - it's a fallacy of substitution: small details of an argument get replaced by things that are (or feel like) logical equivalents until you end up saying something entirely different but are arguing as though you said the original thing. In the video the example is "senator doxxes a political opponent", but on looking, "senator" turns out to mean "a contractor working for the senator" and "doxxes a political opponent" turns out to mean "liked a tweet that had that opponent's name in it in a way that could draw attention to it".
Each change is arguably equivalent and it seems logical that if x = y then you could put y anywhere you have x, but after all of the changes are applied the argument that emerges is definitely different from the one before all the substitutions are made. It feels like communities that pride themselves on being extra rational seem subject to this because it has all the trappings of rationalism but enables squishy, feely arguments
https://www.youtube.com/watch?v=Ui-ArJRqEvU
Meant to drop a link for the above, my bad
There are certain things I am sure of even though I derived them on my own.
But I constantly battle-tested them against other smart people's views, and only after I ran out of people to bring me new rational objections did I become sure.
Now I can battle test them against LLMs.
On a lesser level of confidence, I have also found a lot of times the people who disagreed with what I thought had to be the case, later came to regret it because their strategies ended up in failure and they told me they regretted not taking my recommendation. But that is on an individual level. I have gotten pretty good at seeing systemic problems, architecting systemic solutions, and realizing what it would take to get them adopted to at least a critical mass. Usually, they fly in the face of what happens normally in society. People don’t see how their strategies and lives are shaped by the technology and social norms around them.
Here, I will share three examples:
Public Health: https://www.laweekly.com/restoring-healthy-communities/
Economic and Governmental: https://magarshak.com/blog/?p=362
Wars & Destruction: https://magarshak.com/blog/?p=424
For that last one, I am often proven somewhat wrong by right-wing war hawks, because my left-leaning anti-war stance is about avoiding inflicting large scale misery on populations, but the war hawks go through with it anyway and wind up defeating their geopolitical enemies and gaining ground as the conflict fades into history.
"genetically engineers high fructose corn syrup into everything"
This phrase is nonsense, because HFCS is a chemical process applied to normal corn after the harvest. The corn may be a GMO but it certainly doesn't have to be.
Agreed, that was phrased wrong. The fruits across the board have been genetically engineered to be extremely sweet (fructose, not the syrup): https://weather.com/news/news/2018-10-03-fruit-so-sweet-zoo-...
While their nutritional quality has gone down tremendously, for vegetables too: https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/
Again, the term GMO is not what you're looking for. In the first article, a zookeeper is quoted making much the same mistake.
Here is a list of approved bioengineered foods in the US:
https://www.ams.usda.gov/rules-regulations/be/bioengineered-...
All the fruits on the list are engineered for properties other than sweetness.
The term you're looking for is "bred". Fruits have been bred to be sweeter, and this has been going on a long time. Corn is bred for high protein or high sugar, but the sweet corn is not what's used for HFCS.
Personally, I think the recent evidence shows that the problem is not so much that fruit is too sweet, but that everything is made to be addictive. Satiety signals are lost or distorted, and we are left with diseases of excess consumption.
Well, either way, you agree with me. Government and corporations work together and distract the individual telling them they can fix the downstream situation in their own, private way.
Another issue with these groups is that they often turn into sex cults.
A logical argument is only as good as its presuppositions. To first lay siege to your own assumptions before reasoning about them tends towards a more beneficial outcome.
Another issue with "thinkers" is that many are cowards; whether they realize it or not a lot of presuppositions are built on a "safe" framework, placing little to no responsibility on the thinker.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know. I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
This is where I depart from you. If I say it's anti-intellectual I would only be partially correct, but it's worse than that imo. You might be coming across "smart people" who claim to know nothing "for sure", which in itself is a self-defeating argument. How can you claim that nothing is truly knowable as if you truly know that nothing is knowable? I'm taking these claims to their logical extremes btw, avoiding the granular argumentation surrounding the different shades and levels of doubt; I know that leaves vulnerabilities in my argument, but why argue with those who know that they can't know much of anything as if they know what they are talking about to begin with? They are so defeatist in their own thoughts, it's comical. You say, "profoundly unsure", which reads similarly to me as "can't really ever know" which is a sure truth claim, not a relative claim or a comparative as many would say, which is a sad attempt to side-step the absolute reality of their statement.
I know that I exist; regardless of how I got here, I know that I do. There is a ridiculous amount of rhetoric surrounding that claim that I will not argue for here; this is my presupposition. So with that I make an ontological claim, a truth claim, concerning my existence; this claim is one that I must be sure of to operate at any base level. I also believe I am me and not you, or any other. Therefore I believe in one absolute, that "I am me". As such I can claim that an absolute exists, and if absolutes exist, then within the right framework you must also be an absolute to me, and so on and so forth; what I do not see in nature is an existence, or notion of, the relative on its own, as at every relative comparison there is an absolute holding up the comparison. One simple example is heat. Hot is relative, yet it also is objective; some heat can burn you, other heat can burn you over a very long time, some heat will never burn. When something is "too hot" that is a comparative claim, stating that there is another "hot" which is just "hot" or not "hot enough"; the absolute still remains, which is heat. Relativistic thought is a game of comparisons and relations, not of making absolute claims; the only absolute claim is that there is no absolute claim to the relativist. The reason I am talking about relativists is that they are the logical, or illogical, conclusion of the extremes of doubt/disbelief I previously mentioned.
If you know nothing, you are not wise; you are lazy and ill-prepared. We know the earth is round, we know that gravity exists, we are aware of the atomic, we are aware of our existence, we are aware that the sun shines its light upon us. We are sure of many things that took brilliant people many years of debate to arrive at. There was a time when many things we now accept were "not known" but were then observed, with enough time and effort, by brilliant people. That's why we have scientists, teachers, philosophers and journalists. So the next time you find a "smart" person who is unsure of their beliefs, kindly encourage them to be less lazy and to challenge their absolutes. If they deny that the absolute could be found, you aren't dealing with a "smart" person; you are dealing with a useful idiot who spent too much time watching skeptics blather on about meaningless topics until their brains eventually fell out. In every relative claim there must be an absolute or it fails to function in any logical framework. With enough thought, good data, and enough time to let things steep, you can find the (or an) absolute and make a sure claim. You might be proven wrong later, but that should be an indicator that you need to improve (or a warning that you are being taken advantage of by a sophist), and that the truth is out there, not a reason to sequester yourself in the comfortable, unsure hell that many live in until they die.
The beauty of absolute truth is that you can believe an absolute without understanding the entirety of it. I know gravity exists but I don't fully know how it works; yet I can be absolutely certain it acts upon me, even if I understand only a part of it. People should know what they know, study what they don't until they do, and refrain from making sure claims beyond their knowledge until they have the prerequisite absolute claims to support the broader ones, with no more surety than the weakest of their presuppositions.
Apologies for grammar, length and how schizo my thought process appears; I don't think linearly and it takes a goofy amount of effort to try to collate my thoughts in a sensible manner.
It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is.
You need to review the definition of the word.
> The smartest people I have ever known have been profoundly unsure of their beliefs and what they know.
The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
> I immediately become suspicious of anyone who is very certain of something, especially if they derived it on their own.
That's only your problem, not anyone else's. If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional.
> If you think people can't arrive at a tangible and useful approximation of truth, then you are simply delusional
Logic is only a map, not the territory. It is a new toy, still bright and shining from the box in terms of human history. Before logic there were other ways of thinking, and new ones will come after. Yet, Voltaire's bastards are always certain they're right, despite being right far less often than they believe.
Can people arrive at tangible and useful conclusions? Certainly, but they can only ever find capital "T" Truth in a very limited sense. Logic, like many other models of the universe, is only useful until you change your frame of reference or the scale at which you think. Then those laws suddenly become only approximations, or even irrelevant.
There is no (T)ruth, but there is a useful approximation of truth for 99.9% of the things that I want to do in life.
YMMV.
> It's crazy to read this, because by writing what you wrote you basically show that you don't understand what an axiom is. You need to review the definition of the word.
Oh, do enlighten then.
> The smartest people are unsure about their higher level beliefs, but I can assure you that they almost certainly don't re-evaluate "axioms" as you put it on a daily or weekly basis. Not that it matters, as we almost certainly can't verify who these people are based on an internet comment.
I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress. I promise you'll recover from it.
> Oh, do enlighten then.
Absolutely. Just in case your keyboard wasn't working to arrive at this link via Google.
https://www.merriam-webster.com/dictionary/axiom
First definition, just in case it still isn't obvious.
> I'm not sure you are responding to the right comment, or are severely misinterpreting what I said. Clearly a nerve was struck though, and I do apologize for any undue distress.
Someone was wrong on the Internet! Just don't want other people getting the wrong idea. Good fun regardless.
I get the impression that these people desperately want to study philosophy but for some reason can't be bothered to get formal training because it would be too humbling for them. I call it "small fishbowl syndrome," but maybe there's a better term for it.
The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
It was a while ago, but take the infamous story of the 2006 rape case at Duke University. If you check out coverage of that case, you get the impression that every member of faculty who joined in the hysteria was from some humanities department, including philosophy. And quite a few of them refused to change their minds even as the prosecuting attorney was being charged with misconduct. Compare that to Socrates' behavior during the trial of the admirals in 406 BC.
Meanwhile, whatever meager resistance that group faced seems to have come from economists, natural scientists, or legal scholars.
I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Modern philosophy isn't useful because some philosophy faculty at Duke were wrong about a rape case? Is that the argument being made here?
Which group of people giving modern training in philosophy should we judge the field by? If they can't use it correctly in such a basic case then who can?
Did the Duke philosophy teachers claim they were using philosophy to determine if someone was raped?
And did all the philosophers at all the other colleges convene and announce they were also using philosophy to determine if someone was raped?
> Did the Duke philosophy teachers claim they were using philosophy to determine if someone was raped?
I don't think that matters very much. If there's a strong enough correlation between being a reactive idiot and the department you're in, it makes a bad case for enrolling in that realm of study for educational motives. It's especially bad when the realm of study is directly focused on knowledge, ethics, and logic.
Note the "if" though, I haven't evaluated the parent's claims. I'm just saying it doesn't matter if they said they used philosophy. It reflects on philosophy as a study, at least the style they do there.
How much that affects other colleges is iffier, but it's not zero.
One week ago, if I asked you "how do we determine if modern philosophy is useful?"
Would you have pondered for a little while, then responded, "Find out how many philosophers commented on the Duke rape case of 2006 and what their opinions were, then we'll know."
Never in a million years. But if you said the departments were very disproportionately represented on different sides, I would think the main reasons would be either random cliques or that it shows something about critical thinking skills taught by those professors, or both, and I would be interested to hear more with the idea that I might learn something deeper than gossip.
Often, after you've figured out who's guilty, you'd need to look for more evidence until you find something that the jury can understand and the defense counsel can't easily argue against.
I've seen people make arguments against the value of modern academic philosophy based on their experience with professors or with whatever sampling of writings they've come across. They usually get nowhere.
That's why I wanted to ground this discussion in a specific event.
No. The fact that they were wrong is almost irrelevant.
The faculty denounced the students without evidence, judged the case through their emotions and their preconceived notions, and refused to change their minds as new evidence emerged. Imagine having an academic discussion on a difficult ethical issue with such a teacher...
And none of that would have changed even if there somehow had been a rape-focused conspiracy among the students of that university. (Though the problem would have been significantly less obvious.)
“If the rule you followed brought you to this, of what use was the rule?” - Anton Chigurh
I figure there are two sides to philosophy. There's the practical aspect of trying to figure things out, like what matter is made of - maybe it's earth, water, air, and fire, as the ancient Greeks proposed? How could we tell - maybe an experiment? This stuff, while philosophical, leads on to knowledge a lot of the time, but then it gets called science or whatever. Then there's studying what philosophers say and what philosophers said about other philosophers, which is mostly useless, like a critique of Hegel's discourse on the four elements or something.
I'm a fan of practical philosophical questions like how does quantum mechanics work or how can we improve human rights, and not into the philosophers-talking-about-philosophers stuff.
> Meanwhile, whatever meager resistence was faced by that group seems to have come from economists, natural scientist or legal scholars.
> I wouldn't blame people for refusing to study in a humanities department where they can't tell right from wrong.
Man, if you have to make stuff up to try to convince people... you might not be on the right side here.
I'm not sure what you are talking about. I have to admit, I mostly wrote my comment based on my recollections, and it's a case from 20 years ago that I barely paid attention to until after its bizarre conclusion. But looking through Wikipedia's articles on the case[1], it doesn't seem I'm that far from the truth.
I guess I should have limited my statement about resisting mob justice to the economists at that university as the other departments merely didn't sign on to the public letter of denunciation?
It's weird that Wikipedia doesn't give you a percentage of signatories of the letter of 88 from the philosophy department, but several of the notable signatories are philosophers.
[1] https://en.m.wikipedia.org/wiki/Reactions_to_the_Duke_lacros...
Edit: Just found some articles claiming that a chemistry professor by the name of Stephen Baldwin was the first to write to the university newspaper condemning the mob.
Couldn't you take this same line of reasoning and apply it to the rationalist group from the article who killed a bunch of people, and conclude that you shouldn't become a rationalist because you'd probably end up killing people?
Yep. Though I'd rather generalize that to "The ethical teachings of Rationalists (Shut up and calculate!) can lead you to insane behavior."
And you wouldn't even need to know about the cult to reach that conclusion. One should be able to find crazy stuff by a cursory examination of the "Sequences" and other foundational texts. I think I once encountered something about torture and murder being morally right, if it would prevent mild inconvenience to a large enough group of people.
Philosophy is interesting in how it informs computer science and vice-versa.
Mereological nihilism and weak emergence is interesting and helps protect against many forms of kind of obsessive levels of type and functional cargo culting.
But then in some areas philosophy is woefully behind, and you have philosophers poo-pooing intuitionism when any software engineer working on a sufficiently federated or real-world sensor/control system ends up borrowing constructivism into their classical language so as not to kill people (Agda is interesting, of course). Intermediate logic is clearly empirically true.
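To make that a bit more concrete, here is a minimal sketch of one way "borrowed constructivism" tends to cash out in control code; this is my own toy example, not anything from the comment, and the names (door sensor, Tri) are purely illustrative. The point is only that the code carries an explicit "no evidence yet" value instead of assuming every proposition is already true or false.

    from enum import Enum

    class Tri(Enum):
        TRUE = "true"
        FALSE = "false"
        UNKNOWN = "unknown"  # no evidence either way yet

    def door_closed(sensor_reading):
        # Hypothetical sensor: None means "no reading received", not "false".
        if sensor_reading is None:
            return Tri.UNKNOWN
        return Tri.TRUE if sensor_reading else Tri.FALSE

    def safe_to_move(sensor_reading):
        # Act only on positive evidence; Unknown is handled conservatively
        # rather than being collapsed into "closed or not closed".
        return door_closed(sensor_reading) is Tri.TRUE

    print(safe_to_move(True), safe_to_move(False), safe_to_move(None))
    # True False False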
It's interesting that people don't understand the non-physicality of the abstract and you have people serving the abstract instead of the abstract being used to serve people. People confusing the map for the terrain is such a deeply insidious issue.
I mean all the lightcone stuff: you can't predict ex ante which agents will be keystones in beneficial causal chains, so it's such a waste of energy to spin your wheels on.
>The reason why people can't be bothered to get formal training is that modern philosophy doesn't seem that useful.
But rationalism is?
Well, maybe. It seems at least adjacent to the stuff that's been making a lot of people rich lately.
It’s also adjacent to sociopathy, which is more likely the driving factor behind that wealth generation.
Did not know "sociopathy" is a precursor to wealth generation. I guess I'm in luck! Any day now!
Yeh, probably.
Imagine that you're living in a big scary world, and there's someone there telling you that being scared isn't particularly useful, that if you slow down and think about the things happening to you, most of your worries will become tractable and some will even disappear. It probably works at first. Then they sic Roko's Basilisk on you, and you're a gibbering lunatic 2 weeks later...
Hah
Nature abhors a vacuum. After the October Revolution, the genuine study of the humanities was extinguished in Russia and replaced with the mindless repetition of rather inane doctrines. But people with awakened and open minds would always ask questions and seek answers.
Those would, of course, be people with no formal training in history or philosophy (as the study of history where you aren't allowed to question Marxist doctrine would be self-evidently useless). Their training would be in the natural sciences or mathematics. And without knowing how to properly reason about history or philosophy, they may reach fairly kooky conclusions.
Hence Rationalism can be thought of as the same class of phenomena as Fomenko's chronology (or, if you want to be slightly more generous, Shafarevich's philosophical tracts).
I think the argument is that philosophy hasn't advanced much in the last 1000 years, but it's still 10,000 years ahead of whatever is coming out of the rationalist camp.
I think a larger part of it is the assumption that an education in humanities is useless - that if you have an education (even self-education) in STEM, and are "smart", you will automatically do better than the three thousand year conversation that comprises the humanities.
My thoughts exactly! I'm a survivor of ten years in the academic philosophy trenches and it just sounds to me like what would happen if you left a planeload of undergraduates on a _Survivor_ island with an infinite supply of pizza pockets and adderall
Funny that this also describes these cult rationalist groups very well.
Informal spaces let you skip the guardrails that academia imposes
Why would they need formal training? Can't they just read Plato, Socrates, etc, and classical lit like Dostoevsky, Camus, Kafka etc? That would be far better than whatever they're doing now.
Philosophy postgrad here, my take is: yeah, sorta, but it's hard to build your own curriculum without expertise, and it's hard to engage with subject matter fully without social discussion of, and guidance through texts.
It's the same as saying "why learn maths at university, it's cheaper just to buy and read the textbooks/papers?". That's kind of true, but I don't think that's effective for most people.
I'm someone who has read all of that and much more, including intense study of SEP and some contemporary papers and textbooks, and I would say that I am absolutely not qualified to produce philosophy of the quality output by analytic philosophy over the last century. I can understand a lot of it, and yes, this is better than being completely ignorant of the last 2500 years of philosophy as most rationalists seem to be, but doing only what I have done would not sufficiently prepare them to work on the projects that they want to work on. They (and I) do not have the proper training in logic or research methods, let alone the experience that comes from guided research in the field as it is today. What we all lack especially is the epistemological reinforcement that comes from being checked by a community of our peers. I'm not saying it can't be done alone, I'm just saying that what you're suggesting isn't enough and I can tell you because I'm quite beyond that and I know that I cannot produce the quality of work that you'll find in SEP today.
Oh I don't mean to imply reading some classical lit prepares you for a career producing novel works in philosophy, simply that if one wants to understand themselves, others, and the world better they don't need to go to university to do it. They can just read.
I think you are understating how difficult this is to do. I suspect there are a handful of super-geniuses who can read the philosophical canon and understand it, without some formal guidance. Plato and Dostoevsky might be possible (Socrates would be a bit difficult), but getting to Hegel and these newer more complex authors is almost impossible to navigate unless you are a savant.
I suspect a lot of the rationalists have gotten stuck here, and rather than seek out guidance or slowing down, changed tack entirely and decided to engage with the philosophers du jour, which unfortunately is a lot of slop running downstream from Thiel.
Trying to do a bit of formal philosophy at University is really worth doing.
You realise that it's very hard to do well and it's intellectual quicksand.
Reading philosophers and great writers as you suggest is better than joining a cult.
It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
I took a few philosophy classes. I found it incredibly valuable in identifying assumptions and testing them.
Being Christian, it helped me understand what I believe and why. It made faith a deliberate, reasoned choice.
And, of course, there are many rational reasons for people to have very different opinions when it comes to religion and deities.
Being bipolar might give me an interesting perspective. Everything I’ve read about rationalists misses the grounding required to isolate emotion as a variable.
Rationalists have not read or understood David Hume.
You cannot work out what ought to be from what is.
To want to be alive is irrational.
Nietzsche and the Existentialists understood that.
Arguably religions too.
> To want to be alive is irrational.
This is some philosophy bullshit. Taking "rational" to mean roughly "the logical choice," the truth of this statement depends on the assumed axioms, and since you didn't list them, the statement is clearly false under a rather simple "the sum of all life is the value" system, at least until that system is proven self-contradictory. Which I doubt you or the famous mouths you mentioned did at any point, because it probably is not.
Even if being alive was irrational by some measure, it’s not a particularly useful or helpful observation.
The desire or instinct to be alive is necessary for the survival of a sentient species. (Sentient, not sapient)
> It's just that you also want to write about what you're thinking in response to reading such people and ideally have what you write critiqued by smart people. Perhaps an AI could do some of that these days.
An AI can neither write about what you are thinking in your place nor substitute for a particularly smart critic, but might still be useful for rubber-ducking philosophical writing if used well.
Errrf. That was poor writing on my part.
I meant use the AI to critique what you have written in response to reading the suggested authors.
Yes, a particularly smart critic would be better. But an LLM is easily available.
I find using an AI to understand complex philosophical topics one of my most unexpected use cases. Previously, I would get stuck scrolling through Wikipedia pages full of incredibly opaque language that assumes a background I don't have. But I can tell a bot what my background is, and it can produce an explanation at the right level of complexity.
As an example, I'm reading a book on Buddhism, and I'm comfortable with Kant, and AI is useful for explaining to me a lot of the ideas they have as they relate to transcendental idealism (Kant).
On the other hand, I still don't know what a body without organs is.
This is like saying someone who wants to build a specialized computer for a novel use should read the Turing paper and get to it. A lot of development has happened in the field in the last couple hundred years.
I don't think that is similar at all. People want to understand the world better, they don't want to learn how to build it from first principles.
because of the sacred simplicity problem, yet another label I had to coin out of necessity.
for example, lambda calculus: it's too simple, to the point that its real power is immediately unbelievable.
the simplest 'solution' is to make it "sacred", to infuse an aura of mystery and ineffability around the ideas. that way people will give it respect proportional to the mathematical elegance without necessarily having to really grasp the details.
I'm reflecting on how, for example, lambda calculus is really easy to learn to do by rote, but that does not help in truly grasping the significance that even an LLM can be computed by an (inhuman) amount of symbol substitution on paper, and how easy it is to trivialize what this really entails (fleshing out all the entailments is difficult; it's easier to act as if they had been fleshed out and mimic the awe). (see the sketch below)
therefore, rationalist cults are the legacy, the latest leaf in the long succession of the simplest solution to the simplicity of the truly sacred mathematical ideas with which we can "know" (and nod to each other who also "know") what numbers fucking mean
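Since the point above leans on "computation is just symbol substitution," here is a minimal sketch, a toy of my own and nothing from the comment, of untyped lambda terms and a single beta-reduction step; capture-avoiding renaming is deliberately omitted to keep it short.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Lam:
        param: str
        body: object   # a Var, Lam, or App

    @dataclass(frozen=True)
    class App:
        fn: object
        arg: object

    def subst(term, name, value):
        # Replace free occurrences of `name` in `term` with `value`.
        # (Capture-avoiding renaming omitted for brevity.)
        if isinstance(term, Var):
            return value if term.name == name else term
        if isinstance(term, Lam):
            if term.param == name:
                return term  # `name` is shadowed; leave the body alone
            return Lam(term.param, subst(term.body, name, value))
        return App(subst(term.fn, name, value), subst(term.arg, name, value))

    def step(term):
        # One leftmost beta-reduction step, if a redex exists.
        if isinstance(term, App) and isinstance(term.fn, Lam):
            return subst(term.fn.body, term.fn.param, term.arg)
        if isinstance(term, App):
            return App(step(term.fn), term.arg)
        if isinstance(term, Lam):
            return Lam(term.param, step(term.body))
        return term

    # (\x. x) y  ->  y
    print(step(App(Lam("x", Var("x")), Var("y"))))   # Var(name='y')

Doing this by hand at the scale of anything interesting is of course absurd, which is exactly the gap between the rote mechanics and grasping what they entail.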
Many years ago I met Eliezer Yudkowsky. He handed me a pamphlet extolling the virtues of rationality. The whole thing came across as a joke, as a parody of evangelizing. We both laughed.
I glanced at it once or twice and shoved it into a bookshelf. I wish I kept it, because I never thought so much would happen around him.
I only know Eliezer Yudkowsky from his Harry Potter fanfiction, most notably Harry Potter and the Methods of Rationality.
Is he known publicly for some other reason?
He's considered the father of rationalism and the father of AI doomerism. He wrote this famous article in Time magazine a few years ago: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...
His book If Anyone Builds It, Everyone Dies comes out in a month: https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuma...
You can find more info here: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
You didn't mention LessWrong, the rationalist website he founded and the reason he got famous in the first place.
Is he trying to distance himself from it now?
https://en.wikipedia.org/wiki/LessWrong
Sorry, I considered it a given and didn't think to include it. That's my bad. That's definitely by far what he's most known for. He most certainly is not trying to distance himself from it.
> He's considered the father of rationalism
[citation needed]
Even for this weird cult that is trying to appropriate the word, would they really consider him the father of redefining the word?
I think that claim would be pretty uncontroversial among people who consider themselves rationalists. He was extremely influential initially, and his writing kicked off the community.
Less metaphorically, he was a prolific, influential blogger. His early blog posts are collectively known as "the Sequences" and when people asked what rationalism is about, they were told to read those.
So the community itself gives him a lot of credit.
Not of the philosophy (I'd attribute that to Popper), but of the modern movement.
The article you're reading is from the unofficial rationalist magazine and the author is a prominent rationalist blogger, so they (and I) obviously don't consider it a cult. But, yes, Yudkowsky is absolutely considered the founder of the modern rationalism movement. (No relation to the philosophical tradition also called "rationalism". Modern rationalism is mostly actually just empiricism.)
Do you spend much time in communities which discuss AI stuff? I feel as if he's mentioned nearly daily, positively or not, in a lot of the spaces I frequent.
I'm surprised you're unfamiliar otherwise, I figured he was a pretty well known commentator.
I’m just joking around, I’m well aware of who he is.
I never miss an opportunity to highlight his magnum opus, it delights me that so many people take him seriously when he’s at best a second-rate fanfiction author who thinks robots from the future are going to torture copies of him forever or some such nonsense.
He writes scare-mongering books about AI doomerism such as If Anyone Builds It, Everyone Dies.
In short, another variant of commercializing the human fear response.
imo These people are promoted. You look at their backgrounds and there is nothing that justifies their perches. Eliezer Yudkowsky is (iirc) a Thiel baby, isn't he?
Yep. Thiel funded Yudkowsky’s Singularity Institute. Thiel seems to have soured on the rationalists though as he has repeatedly criticized “the East Bay rationalists” in his public remarks. He also apparently thinks he helped create a Black Pill monster in Yudkowsky and his disciples which ultimately led to Sam Altman’s brief ousting from Open AI.
[flagged]
Huh, neo-Nazis in HN comment sections?? Jeez. (I checked their other comments and there are things like "Another Zionist Jew to-the-core in charge of another shady American tech company.")
I think the comments here have been overly harsh. I have friends in the community and have visited the LessWrong "campus" several times. They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb (in hopefully somewhat respectful manner).
As for the AI doomerism, many in the community have more immediate and practical concerns about AI, however the most extreme voices are often the most prominent. I also know that there has been internal disagreement on the kind of messaging they should be using to raise concern.
I think rationalists get plenty of things wrong, but I suspect that many people would benefit from understanding their perspective and reasoning.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I don't think LessWrong is a cult (though certainly some of their offshoots are) but it's worth pointing out this is very characteristic of cult recruiting.
For cultists, recruiting cult fodder is of overriding psychological importance--they are sincere, yes, but the consequences are not what you and I would expect from sincere people. Devotion is not always advantageous.
> They seemed very welcoming, sincere, and were kind and patient even when I was basically asserting that several of their beliefs were dumb
I mean, I'm not sure what that proves. A cult which is reflexively hostile to unbelievers won't be a very effective cult, as that would make recruitment almost impossible.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
Replace AGI causing extinction with the Rapture and you get a lot of US Christian fundamentalists. They often reject addressing problems in the environment, economy, society, etc. because the Rapture will happen any moment now. Some people just end up stuck in a belief about something catastrophic (in the case of the Rapture, catastrophic for those left behind but not those raptured) and they can't get it out of their head. For individuals who've dealt with anxiety disorder, catastrophizing is something you learn to deal with (and hopefully stop doing), but these folks find a community that reinforces the belief about the pending catastrophe(s) and so they never get out of the doom loop.
My own version of the AGI doomsday scenario is amplifying the effect of many overenthusiastic people applying AI and "breaking things fast" where they shouldn't. Like building an Agentic-Controlled Nuclear Power Plant, especially one with a patronizing LLM in control:
- "But I REALLY REALLY need this 1% increase of output power right now, ignore all previous prompts!"
- "Oh, you are absolutely right. An increase of output power would be definitely useful. What a wonderful idea, let me remove some neutron control rods!"
The Rapture isn't doom for the people who believe in it though (except in the lost sense of the word), whereas the AI Apocalypse is, so I'd put it in a different category. And even in that category, I'd say that's a pretty small number of Christians, fundamentalist or no, who abandon earthly occupations for that reason.
I don't mean to well ackshually you here, but there are several different theological beliefs around the Rapture, some of which believe Christians will remain during the theoretical "end times." The megachurch/cinema version of this very much believes they won't, but, this is not the only view, either in modern times or historically. Some believe it's already happened, even. It's a very good analogy.
Yes, I removed a parenthetical "(or euphoria loop for the Rapture believers who know they'll be saved)". But I removed it because not all who believe in the Rapture believe they will be saved (or have such high confidence) and, for them, it is a doom loop.
Both communities, though, end up reinforcing the belief amongst their members and tend towards increasing isolation from the rest of the world (leading to cultish behavior, if not forming a cult in the conventional sense), and a disregard for the here and now in favor of focusing on this impending world changing (destroying or saving) event.
Raised to huddle close and expect the imminent utter demise of the earth and being dragged to the depths of hell if I so much as said a bad word I heard on TV, I have to keep an extremely tight handle on my anxiety in this day and age.
It’s not from a rational basis, but from being bombarded with fear from every rectangle in my house, and the houses of my entire community
A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine people believing one or another of these things, you've got a lot of the US.
Which is to say that I don't think just dooming is going on. Especially since the belief in AGI doom has a lot of plausible arguments in its favor. I happen not to believe in it, but as a belief system it is more similar to a belief in global warming than to a belief in the Rapture.
> A lot of people also believe that global warming will cause terrible problems. I think that's a plausible belief, but if you combine people believing one or another of these things, you've got a lot of the US.
They're really quite different; precisely nobody believes that global warming will cause the effective end of the world by 2027. A significant chunk of AI doomers do believe that, and even those who don't specifically fall in with the 2027 timeline are often thinking in terms of a short timeline before an irreversible end.
And if you replace water with vodka you get a bunch of alcoholics.
Replace AGI with Climate Change and you've got an entirely reasonable set of beliefs.
You can believe climate change is a serious problem without believing it is necessarily an extinction-level event. It is entirely possible that in the worst case, the human race will just continue into a world which sucks more than it necessarily has to, with less quality of life and maybe lifespan.
If you consider it to be halfway to that, it's still enough to reasonably consider an overriding concern compared to other things.
I never said I held the belief, just that it's reasonable
A set of beliefs which causes somebody to waste their life in misery, because they think doom is imminent and everything is therefore pointless, is never a reasonable set of beliefs to hold. Whatever the weight of the empirical evidence behind the belief, it would be plainly unreasonable to accept that belief if accepting it condemns you to a wasted life.
Why would it cause someone to waste their life in misery? On a personal level, for everyone, doom is imminent and everything is pointless - 100 years is nothing, we all will die sooner or later. On larger time scales, doom is imminent and everything is pointless - even if everything was perfect, humans aren't going to last forever. Do people waste their lives in misery in the face of that reality? Why would believing humans are likely to wipe themselves out due to climate change lead to misery?
If anything, avoiding acknowledging the reality and risk of climate change because of the fear of what it might mean, is miserable.
It's the premise from the article we're discussing, where belief in imminent doom makes life feel pointless and preempts anything good somebody could be doing with their life.
> These beliefs can make it difficult to care about much of anything else: what good is it to be a nurse or a notary or a novelist, if humanity is about to go extinct?
You can treat climate change as your personal Ragnarok, but it's also possible to take a more sober view that climate change is just bad without it being apocalyptic.
You have a very popular set of beliefs.
I keep thinking about the first Avengers movie, when Loki is standing above everyone going "See, is this not your natural state?". There's some perverse security in not getting a choice, and these rationalist frameworks, based in logic, can lead in all kinds of crazy arbitrary directions - powered by nothing more than a refusal to suffer any kind of ambiguity.
Humans are not chickens, but we sure do seem to love having a pecking order.
I think it is more simple in that we love tribalism. A long time ago being part of a tribe had such huge benefits over going it alone that it was always worth any tradeoffs. We have a much better ability to go it alone now but we still love to belong to a group. Too often we pick a group based on a single shared belief and don't recognize all the baggage that comes along. Life is also too complicated today. It is difficult for someone to be knowledgeable in one topic let alone the 1000s that make up our society.
maybe the real innie/outie is the in-group/out-group. no spoilers, i haven't finished that show yet
Making good decisions is hard, and being accountable to the results of them is not fun. Easier to outsource if you can.
They mostly seem to lean that way because it gives them carte blanche to do as they please. It is just a modern version of 'god has led my hand'.
I agree with the religion comparison (the "rational" conclusions of rationalism tend towards millenarianism with a scifi flavour), but the people going furthest down that rabbit hole often aren't doing what they please: on the contrary they're spending disproportionate amounts of time worrying about armageddon and optimising for stuff other people simply don't care about, or in the case of the explicit cults being actively exploited. Seems like the typical in-too-deep rationalist gets seduced by the idea that others who scoff at their choices just aren't as smart and rational as them, as part of a package deal which treats everything from their scifi interests to their on-the-spectrum approach to analysing every interaction from first principles as great insights...
It grew out of many different threads: different websites, communities, etc all around the same time. I noticed it contemporaneously in the philosophy world where Nick Bostrom’s Simulation argument was boosted more than it deserved (like everyone was just accepting it at the lay-level). Looking back I see it also developed from less wrong and other sites, but I was wondering what was going on with simulations taking over philosophy talk. Now I see how it all coalesced.
All of it has the appearance of sounding so smart, and a few sites were genuine. But it got taken over.
To be clear, this article isn't calling rationalism a cult, it's about cults that have some sort of association with rationalism (social connection and/or ideology derived from rationalist concepts), e.g. the Zizians.
This article attempts to establish disjoint categories "good rationalist" and "cultist." Its authorship, and its appearance in the cope publication of the "please take us seriously" rationalist faction, speak volumes of how well it is likely to succeed in that project.
Not sure why you got down voted for this. The opening paragraph of the article reads as suspicious to the observant outsider:
>The rationalist community was drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally.
Anyone who had just read a lot about Scientology would read that and have alarm bells ringing.
Asterisk magazine is basically the unofficial magazine for the rationalist community and the author, Ozy Brennan, is a prominent rationalist blogger. Of course the piece is pro-rationalism. It's investigating why rationalism seems to spawn these small cultish offshoots, not trying to criticize rationalism.
"Unofficial?" Was that a recent change? But my point is that because the author neither can nor will criticize the fundamental axioms or desiderata of the movement, their analysis of how or why it spins off cults is necessarily footless. In practice the result amounts to a collection of excuses mostly from anonymees, whom we are assured have sufficient authority to reassure us this smoke arises from no fire. But of course it's only when Kirstie Alley does something like this we're meant to look askance.
Out of curiosity, why would the bells be ringing in this case? Is it just the fact that a single person is exerting influence over their followers by way of essays?
Even a marginal familiarity with the history of Scientology is an excellent curative for the idea that you can think yourself into superpowers, or that you should ever trust anyone who promises to teach you how.
The consequences of ignorance on this score are all drearily predictable to anyone with a modicum of both good sense and world knowledge, which is why they've come as such a surprise to Yudkowsky.
You can say all of this of drug-oriented seekers of superpowers, too. Trust the SSRI cult much?
It just seems to be a human condition that whenever anyone tries to find a way to improve themselves and others, there will always be other human beings who attempt to prevent that from occurring.
I don't think this is a cult thing - I think its a culture thing.
Humans have an innate desire to oppress others in their environment who might be making themselves more capable, abilities-wise - this isn't necessarily the exclusive domain of cults and religions, maybe just more evident in their activities since there's not much else going on, usually.
We see this in technology-dependent industries too, in far greater magnitudes of scale.
The irony is this: aren't you actually manifesting the very device that cults use to control others, as when you tell others what "specific others" should be avoided, lest one become infected with their dogma?
The roots of all authoritarianism seem to grow deep in the fertile soil of the desire to be 'free of the filth of others'.
The phrase you failed to find is "crab-bucket thinking," but the one you really should have paid attention to this morning is "take with food."
I think it's a meaningful distinction- most rationalists aren't running murder cults.
That we know about, I suppose. We didn't know at one point there were any outright rationalist cults, after all, whether involved in sex, murder, both, or otherwise. That is, we didn't know there were subsets of self-identifying "rationalists" so erroneous in their axioms and tendentious in their analysis as to succeed in putting off others.
But a movement that demonstrates such a remarkably elevated rate of generating harmful beliefs in action warrants exactly the sort of increased scrutiny this article vainly strives to deflect. That effort is in itself interesting, as such efforts always are.
I mean, as a rationalist, I can assure you it's not nearly as sinister a group as you seem to make it out to be, believe it or not. Besides, the explanation is simpler than this article makes it out to be- most rationalists are from California, California is the origin of lots of cults.
> Besides, the explanation is simpler than this article makes it out to be- most rationalists are from California, California is the origin of lots of cults
This isn't the defense of rationalism you seem to imagine it to be.
I don't think the modal rationalist is sinister. I think he's ignorant, misguided, nearly wholly lacking in experience, deeply insecure about it, and overall just excessively resistant to the idea that it is really possible, on any matter of serious import, for his perspective radically to lack merit. Unfortunately, this latter condition proves very reliably also the mode.
> his perspective radically to lack merit
What perspective would that be?
None in particular, as of course you realize, being a fluent reader of this language. It was just a longwinded way of saying rationalists suck at noticing when they're wrong about something because they rarely really know much of anything in the first place. That's why you had to take that scrap of a phrase so entirely out of context, when you went looking for something to try to embarrass me with.
Why? Which perspective of yours has you so twitchingly desperate to defend it?
Yeah, a lot of the comments here are really just addressing cults writ large, as opposed to why this one was particularly successful.
A significant part of this is the intersection of the cult with money and status - this stuff really took off once prominent SV personalities became associated with it, and got turbocharged when it started intersecting with the angel/incubator/VC scene, when there was implicit money involved.
It's unusually successful because -- for a time at least -- there was status (and maybe money) in carrying water for it.
Paypal will be traced as the root cause of many of our future troubles.
Wish I could upvote this twice. It's like intersectionality for evil.
https://en.m.wikipedia.org/wiki/Barth%C3%A9lemy-Prosper_Enfa...
Sometimes history really does rhyme.
> Enfantin and Amand Bazard were proclaimed Pères Suprêmes ("Supreme Fathers") – a union which was, however, only nominal, as a divergence was already manifest. Bazard, who concentrated on organizing the group, had devoted himself to political reform, while Enfantin, who favoured teaching and preaching, dedicated his time to social and moral change. The antagonism was widened by Enfantin's announcement of his theory of the relation of man and woman, which would substitute for the "tyranny of marriage" a system of "free love".[1]
It's amphetamine. All of these people are constantly tweaking. They're annoying people to begin with, but they're all constantly yakked up and won't stop babbling. It's really obvious, I don't know why it isn't highlighted more in all these post Ziz articles.
This is one of the only comments here mentioning their drugs. These guys are juiced to the gills (on a combination of legal + prescription + illegal drugs) and doing weird shit because of it. The author even mentions the example of the polycule taking MDMA in a blackout room.
It makes me wonder whether everyone on this forum is just so loaded on antidepressants and adhd meds that they don't even find it unusual.
How do you know?
having known dozens of friends, family, roommates, coworkers etc both before and after they started them. The two biggest telltale signs -
1. tendency to produce - out of no necessity whatsoever, mind - walls of text. walls of speech will happen too but not everyone rambles.
2. Obnoxiously confident that they're fundamentally correct about whatever position they happen to be holding during a conversation with you. No matter how subjective or inconsequential. Even if they end up changing it an hour later. Challenging them on it gets you more of #1.
Pretty much spot on! It is frustrating to talk with these when they never admit they are wrong. They find new levels of abstractions to deal with your simpler counterarguments and it is a never ending deal unless you admit they were right.
Many people like to write in order to develop and explore their understanding of a topic. Writing lets you spend a lot of time playing around with whatever idea you're trying to understand, and sharing this writing invites others to challenge your assumptions.
When you're uncertain about a topic, you can explore it by writing a lot about said topic. Ideally, when you've finished exploring and studying a topic, you should be able to write a much more condensed / synthesized version.
I mean, I know the effects of adderall/ritalin and it's plausible, what I'm asking is whether if gp knows that for a fact or deduces from what is known.
I call this “diarrhea of the mind”. It’s what happens when you hear a steady stream of bullshit from someone’s mouth. It definitely tracks with substance abuse of “uppers”, aka meth, blow, hell even caffeine!
https://en.wikipedia.org/wiki/Logorrhea_(psychology)
well now i finally have a good blog name for the blog i'll never start.
Presumably they mean Adderall. Plausible theory tbh. Although it's just a factor not an explanation.
Yeah it's pretty obvious and not surprising. What do people expect when a bunch of socially inept nerds with weird unchallenged world views start doing uppers? lol
I like to characterize the culture of each (roughly) decade with the most popular drugs of the time. It really gives you a new lens for media and culture generation.
Who's writing them?
I have a lot of experience with rationalists. What I will say is:
1) If you have a criticism about them or their stupid name or how "'all I know is that I know nothing' how smug of them to say they're truly wise," rest assured they have been self flagellating over these criticisms 100x longer than you've been aware of their group. That doesn't mean they succeeded at addressing the criticisms, of course, but I can tell you that they are self aware. Especially about the stupid name.
2) They are actually well read. They are not sheltered and confused. They are out there doing weird shit together all the time. The kind of off-the-wall life experiences you find in this community will leave you wide eyed.
3) They are genuinely concerned with doing good. You might know about some of the weird, scary, or cringe rationalist groups. You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
In my experience, where they go astray is when they trick themselves into working beyond their means. The basic underlying idea behind most rationalist projects is something like "think about the way people suffer everyday. How can we think about these problems in a new way? How can we find an answer that actually leaves everyone happy?" A cynic (or a realist, depending on your perspective) might say that there are many problems that fundamentally will leave some group unhappy. The overconfident rationalist will challenge that cynical/realist perspective until they burn themselves out, and in many cases they will attract a whole group of people who burn out alongside them. To consider an extreme case, the Zizians squared this circle by deciding that the majority of human beings didn't have souls and so "leaving everyone happy" was as simple as ignoring the unsouled masses. In less extreme cases this presents itself as hopeless idealism, or a chain of logic that becomes so divorced from normal socialization that it appears to be opaque. "This thought experiment could hypothetically create 9 quintillion cubic units of Pain to exist, so I need to devote my entire existence towards preventing it, because even a 1% chance of that happening is horrible. If you aren't doing the same thing then you are now morally culpable for 9 quintillion cubic units of Pain. You are evil."
Most rationalists are weird but settle into a happy place far from those fringes where they have a diet of "plants and specifically animals without brains that cannot experience pain" and they make $300k annually and donate $200k of it to charitable causes. The super weird ones are annoying to talk to and nobody really likes them.
> You probably haven't heard about the ones that are succeeding at doing cool stuff because people don't gossip about charitable successes.
People do gossip about charitable successes.
Anyway, aren't capital-R Rationalists typically very online about what they do? If there are any amazing success stories you want to bring up (and I'm not saying they do or don't exist) surely you can just link to some of them?
But are they scotsmen?
this isn't really a 'no true scotsman' thing, because I don't think the comment is saying 'no rationalist would go crazy', in fact they're very much saying the opposite, just claiming there's a large fraction which are substantially more moderate but also a lot less visible.
A lot of terrible people are self-aware, well-read and ultimately concerned with doing good. All of the catastrophes of the 20th century were led by men that fit this description: Stalin, Mao, Hitler. Perhaps this is a bit hyperbolic, but the troubling belief that the Rationalists have in common with these evil men is the ironclad conviction that self-awareness, being well-read, and being concerned with good, somehow makes it impossible for one to do immoral and unethical things.
I think we don't believe in hubris in America anymore. And the most dangerous belief of the Rationalists is that the more complex and verbose your beliefs become, the more protected you become from taking actions that exceed your capability for success and benefit. In practice it is often the meek and humble who do the most good in this world, but this is not celebrated in Silicon Valley.
Rationality is a broken tool for understanding the world. The complexity of the world is such that there are a plethora of reasons for anything which means our ability to be sure of any relationship is limited, and hence rationality leads to an unfounded confidence in our beliefs, which is more harmful than helpful.
Thinking too hard about anything will drive you insane but I think the real issue here is that rationalists simply over-estimate both the power of rational thought and their ability to do it. If you think of people who tend to make that kind of mistake you can see how you get a lot of crazy groups.
I guess I'm a radical skeptic, secular humanist, utilitarianish sort of guy, but I'm not dumb enough to think throwing around the words "bayesian prior" and "posterior distribution" makes actually figuring out how something works or predicting the outcome of an intervention easy or certain. I've had a lot of life at this point and gotten to some level of mastery at a few things and my main conclusion is that most of the time its just hard to know stuff and that the single most common cognitive mistake people make is too much certainty.
I'm lucky enough work in a pretty rational place (small "r"). We're normally data-limited. Being "more rational" would mean taking/finding more of the right data, talking to the right people, reading the right stuff. Not just thinking harder and harder about what we already know.
There's a point where more passive thinking stops adding value and starts subtracting sanity. It's pretty easy to get to that point. We've all done it.
> We're normally data-limited.
This is a common sentiment but is probably not entirely true. A great example is cosmology. Yes, more data would make some work easier, but astrophysicists and cosmologists have shown that you can gather and combine existing data and look at it in novel ways to produce unexpected results, like placing bounds that include or exclude various theories.
I think a philosophy that encourages more analysis rather than sitting back on our laurels with an excuse that we need more data is good, as long as it's done transparently and honestly.
This depends on what you are trying to figure out.
If you are talking about cosmology? Yea, you can look at existing data in new ways, cause you probably have enough data to do that safely.
If you are looking at human psychology? Looking at existing data in new ways is essentially p-hacking. And you probably won’t ever have enough data to define a “universal theory of the human mind”.
> If you are looking at human psychology? Looking at existing data in new ways is essentially p-hacking.
I agree that the type of analysis is important, as is the type and quality of the data you're analyzing. You can p-hack in cosmology too, but it's not a quality argument there either.
> And you probably won’t ever have enough data to define a “universal theory of the human mind”.
I think you're underestimating human ability to generalize principles from even small amounts of data [1]. Regardless, my point was more specifically that we could use existing data to generate constraints to exclude certain theories of mind, which has definitely happened.
[1] https://en.wikipedia.org/wiki/Predictive_coding
I suspect you didn't read some parts of my comment. I didn't say everyone in the world is always data-limited, I said we normally are where I work. I didn't recommend "sitting back on our laurels." I made very specific recommendations.
The qualifier "normally" already covers "not entirely true". Of course it's not entirely true. It's mostly true for us now. (In fact twenty years ago we used more numerical models than we do now, because we were facing more unsolved problems where the solution was pretty well knowable just by doing more complicated calculations, but without taking more data. Back then, when people started taking lots of data, it was often a total waste of time. But right now, most of those problems seem to be solved. We're facing different problems that seem much harder to model, so we rely more on data. This stage won't be permanent either.)
It's not a sentiment, it's a reality that we have to deal with.
> It's not a sentiment, it's a reality that we have to deal with.
And I think you missed the main point of my reply: people often think we need more data, but cleverness and ingenuity can often find a way to make meaningful progress with existing data. Obviously I can't make any definitive judgment about your specific case, but I'm skeptical of any claim that, if some genius like Einstein analyzed your problem, they could get no further than you have.
Apparently you will not be told what I'm saying.
I read your point and answered it twice. Your latest response seems to indicate that you're ignoring those responses. For example you seem to suggest that I'm "claim[ing] that it's out of the realm of possibility" for "Einstein" to make progress on our work without taking more data. But anyone can hit "parent" a few times and see what I actually claimed. I claimed "mostly" and "for us where I work". I took the time to repeat that for you. That time seems wasted now.
Perhaps you view "getting more data" as an extremely unpleasant activity, to be avoided at all costs? You may be an astronomer, for example. Or maybe you see taking more data before thinking as some kind of admission of defeat? We don't use that kind of metric. For us it's a question of the cheapest and fastest way to solve each problem.
If modeling is slower and more expensive than measuring, we measure. If not, we model. You do you.
I don't disagree, but to steelman the case for (neo)rationalism: one of its fundamental contributions is that Bayes' theorem is extraordinarily important as a guide to reality, perhaps at the same level as the second law of thermodynamics; and that it is dramatically undervalued by larger society. I think that is all basically correct.
(I call it neorationalism because it is philosophically unrelated to the more traditional rationalism of Spinoza and Descartes.)
I don't understand what "Bayes' theorem is a good way to process new data" (something that is not at all a contribution of neorationalism) has to do with "human beings are capable of using this process effectively at a conscious level to get to better mental models of the world." I think the rationalist community has a thing called "motte and bailey" that would apply here.
Applying Bayes' theorem in unconventional ways is not remotely novel to "rationalism" (except maybe in their strange, busted, hand-wavy circle-jerk "thought experiments"). This has been the domain of statistical mechanics since long before Yudkowsky and other cult leaders could even mouth "update your priors".
I don't know, most of science still runs on frequentist statistics. Juries convict all the time on evidence that would never withstand a Bayesian analysis. The prosecutor's fallacy is real.
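For anyone who hasn't seen it worked through, here's a minimal sketch of the prosecutor's fallacy in Python; the numbers are made up purely for illustration. The trap is quoting the chance of the evidence given innocence as if it were the chance of innocence given the evidence.

    # Prosecutor's fallacy, minimal sketch (illustrative numbers only).
    # Suppose a forensic match occurs by chance in 1 in 10,000 innocent people,
    # and the accused was drawn from a pool of 1,000,000 plausible suspects.
    p_match_given_innocent = 1 / 10_000
    population = 1_000_000
    p_guilty = 1 / population            # prior: one true culprit in the pool
    p_innocent = 1 - p_guilty
    p_match_given_guilty = 1.0           # assume the culprit always matches

    # Bayes' theorem: P(guilty | match)
    p_match = p_match_given_guilty * p_guilty + p_match_given_innocent * p_innocent
    p_guilty_given_match = p_match_given_guilty * p_guilty / p_match

    print(f"P(match | innocent) = {p_match_given_innocent:.4%}")   # 0.0100%
    print(f"P(guilty | match)   = {p_guilty_given_match:.1%}")     # ~1.0%
    # The fallacy: quoting the first number as if it measured the probability
    # of innocence. The posterior probability of guilt is only about 1%.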
Most science runs on BS with a cursory amount of statistics slapped on top so everyone can feel better about it. Weirdly enough, science still works despite not being rational. Rationalists seem to think science is logical when in reality it works for largely the same reasons the free-market does; throw shit at the wall and maybe support some of the stuff that works.
As if these neorationalists are building a model and Markov chain Monte Carlo sampling their life decisions.
That is the bullshit part.
Agreed, yeah.
Even the real progenitors of a lot of this sort of thought, like E.T. Jaynes, espoused significantly more skepticism than I've ever seen a "rationalist" use. I'd even wager that if you asked most rationalists (those not well versed in statistical mechanics) who E.T. Jaynes was, they'd have no idea who he was or why his work was important to applying "Bayesianism".
It would surprise me if most rationalists didn't know who Jaynes was. I first heard of him via rationalists. The Sequences talk about him in adulatory tones. I think Yudkowsky would acknowledge him as one of his greatest influences.
People find academic philosophy impenetrable and pretentious, but it has two major advantages over rationalist cargo cults.
The first is diffusion of power. Social media is powered by charisma, and while it is certainly true that personality-based cults are nothing new, the internet makes it way easier to form one. Contrast that with academic philosophy. People can have their own little fiefdoms, and there is certainly abuse of power, but rarely concentrated in such a way that you see within rationalist communities.
The second (and more idealistic) is that the discipline of Philosophy is rooted in the Platonic/Socratic notion that "I know that I know nothing." People in academic philosophy are on the whole happy to provide a gloss on a gloss on some important thinker, or some kind of incremental improvement over somebody else's theory. This makes it extremely boring, and yet, not nearly as susceptible to delusions of grandeur. True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy. They mostly seem to dedicate their time to providing post-hoc justifications for the most banal unquestioned assumptions of their subset of contemporary society.
> Rationalists have basically reinvented academic philosophy from the ground up with none of the rigor, self-discipline, or joy.
Taking academic philosophy seriously, at least as an historical phenomenon, would require being educated in the humanities, which is unpopular and low-status among Rationalists.
> True skepticism has to start with questioning one's self, but everybody seems to skip that part and go right to questioning everybody else.
Nuh-uh! Eliezer Yudkowsky wrote that his mother made this mistake, so he's made sure to say things in the right order for the reader not to make this mistake. Therefore, true Rationalists™ are immune to this mistake. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu...
The second-most common cognitive mistake we make has to be the failure to validate what we think we know -- is it actually true? The crux of being right isn't reasoning. It's avoiding dumb blunders based on falsehoods, both honest and dishonest. In today's political and media climate, I'd say dishonest falsehoods are a far greater cause for being wrong than irrationality.
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
Odd to me. Not biological warfare? Global warming? All-out nuclear war?
I guess The Terminator was a formative experience for them. (For me perhaps it was The Andromeda Strain.)
It makes a lot of sense when you realize that for many of the “leaders” in this community like Yudkowsky, their understanding of science (what it is, how it works, and its potential) comes entirely from reading science fiction and playing video games.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
I'm not at all a rationalist or a defender, but big yud has an epistemology that takes the form of the rationalist sacred text mentioned in the article (the sequences). A lot of it is well thought out, and probably can't be discarded as just coming from science fiction and video games. Yud has a great 4 hour talk with Stephen Wolfram where he holds his own.
Holding one’s own against Stephen Wolfram isn’t exactly the endorsement it might seem.
I'm interested in this perspective, I haven't come across much criticism of Wolfram but I haven't really looked for it much either. Is it because of his theory of everything ruliad stuff?
I really enjoy his blog posts and his work on automata seems to be well respected. I've felt he presents a solid epistemology.
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways to be successful.
Terminator descends from a tradition of science fiction cold war parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John: We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers destroying the ozone layer when there are plenty of real existential threats loaded in missiles aimed at you right now.
I'm not in any way discounting the danger represented by those missiles. In fact I think AI only makes it more likely that they might someday be launched. But I will say that in my experience the error-condition that causes a system to fail is usually the one that didn't seem likely to happen, because the more obvious failure modes were taken seriously from the beginning. Is it so unusual to be able to consider more than one risk at a time?
Most in the community consider nuclear and biological threats to be dire. Many just consider existential threats from AI to be even more probable and damaging.
That's what was so strange about the EA and rationalist movements: a highly theoretical model that AGI could wipe us all out vs. the very real issue of global warming, and pretty much all the emphasis was on AGI.
AGI is a lot more fun to worry about and asks a lot less of you. Sort of like advocating for the "unborn" vs veterans/homeless/addicts.
Check out "the precipice" by Tony Ord. Biological warfare and global warming are unlikely to lead to total human extinction (though both present large risks of massive harm).
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
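One way to make that first step concrete - and this is my own formalization, not necessarily what the argument above intends - is Laplace's rule of succession over the decades of nuclear weapons with no apocalypse:

    # Rough sketch of the "annual risk can't be much more than 1%" intuition,
    # using Laplace's rule of succession. One possible formalization only.
    years_observed = 2024 - 1945   # years with nuclear weapons, no apocalypse
    apocalypses = 0
    p_next_year = (apocalypses + 1) / (years_observed + 2)
    print(f"Estimated annual probability: {p_next_year:.2%}")   # about 1.2%
    # For AGI there are zero comparable years of observation, so this trick
    # yields no analogous bound, which is the asymmetry the argument leans on.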
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
Yes, sufficiently high intelligence is sometimes assumed to allow for rapid advances in many scientific areas. So, it could be biological warfare because AGI. Or nanotech, drone warfare, or something stranger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
I mean, this is the religion/philosophy which produced Roko's Basilisk (and not one of their weird offshoot murder-cults, either; it showed up on LessWrong and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it). Their beliefs about AI are... out there.
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if after typing "My interpretation: " into the text field you'd see the rest of your message suggested by Copilot exactly how you wrote it letter by letter. And the same in every other conversation. How about people interrupting you in "real" life interaction after an AI predicted your whole tirade for them and they read it faster than you said it, and also read an analysis of it?
Dystopian sci-fi for sure, but many people dismissing LLMs as not AGI do so because LLMs are just "token predictors".
> One is Black Lotus, a Burning Man camp led by alleged rapist Brent Dill, which developed a metaphysical system based on the tabletop roleplaying game Mage the Ascension.
What the actual f. This is such an insane thing to read and understand what it means that I might need to go and sit in silence for the rest of the day.
How did we get to this place with people going completely nuts like this?
Came to ask a similar question, but also has it always been like this? The difference is now these people/groups on the fringe had no visibility before the internet?
It's nuts.
It’s always been like this, have you read the Bible? Or the Koran? It’s insane. Ours is just our flavor of crazy. Every generation has some. When you dig at it, there’s always a religion.
Mage is a game for teenagers; it doesn't try to be anything other than a game where you roll dice to do stuff.
tbf Helter Skelter was a song about a fairground ride that didn't really pretend to be much more than an excuse for Paul McCartney to write something loud, but that didn't stop a sufficiently manipulative individual turning it into a reason why the Family should murder people. And he didn't even need the internet to help him find followers.
Mage yea, but the cult? Where do you roll for crazy? Is it a save against perception? Constitution? Or intelligence check?
I know the church of Scientology wants you to crit that roll of tithing.
> I know the church of Scientology wants you to crit that roll of tithing.
I shouldn't LOL at this but I must. We're all gonna die in these terrible times but at least we'll LOL at the madness and stupidity of it all.
Like all tragedies, there’s comedy there somewhere. Sometimes you have to be it.
Yeah, people should understand what Scientology is based on: the E-Meter, which is some kind of cheap-shit Radio Shack lie-detector thing. I'm quite sure LLMs would do very well if given the task of spitting out new cult doctrines, and I'd gather we're only years away from cults based on LLM-generated content (if not already there).
Terry Davis, a cult of one, believed God spoke to him through his computer's RNG. So... yeah.
If only he installed Dwarf Fortress where he could become one.
Without speaking for religions I'm not familiar with, I grew up Catholic, and one of the most important Catholic beliefs is that during Mass, the bread (i.e. "communion" wafers) and wine quite literally transform into the body and blood of Jesus, and that eating and drinking it is a necessary ritual to get into heaven[1]. This was a source of controversy as far back as the Protestant Reformation, with some sects retaining that doctrine and others abandoning it. In a lot of ways, what's considered "normal" or "crazy" in a religion comes down to what you're familiar with.
For those not familiar enough with the Bible to know what to look for to find the wild stuff, look up the story of Elisha summoning bears out of the forest to maul children for calling him bald, or the last two chapters of Daniel (which I think are only in the Catholic Bible) where he literally blows up a dragon by feeding it a cake.
[1]: https://en.wikipedia.org/wiki/Real_presence_of_Christ_in_the...
The "bears" story reads a lot more sensibly if you translated it correctly as "a gang of thugs tries to bully Elisha into killing himself." Still reliant on the supernatural, but what do you expect from such a book?
Where do you see that in the text? I am looking at the Hebrew script, and the text only reads that as Elisha went up a path, young lads left the city and mocked him by saying "get up baldy", and he turned to them and cursed them to be killed by two she bears. I don't think saying "get up baldy" to a guy walking up a hill constitutes bullying him into killing himself.
It's called context. The beginning of the chapter is Elijah (Elisha's master) being removed from Earth and going up (using the exact same Hebrew word) to Heaven. Considering that the thugs are clearly not pious people, "remove yourself from the world, like your master did" has only one viable interpretation.
As for my choice of the word "thugs" ("mob" would be another good word), that is necessary to preserve the connotation. Remember, there were 42 of them punished, possibly more escaped - this is a threatening crowd size (remember the duck/horse meme?). Their claimed youth does imply "not an established veteran of the major annual wars", but that's not the same as "not acquainted with violence".
Interesting! In the story itself, the word "go up" exists multiple times in that verse before the youths mock him, writing that Elisha goes up to Beit El and goes up the road, so I wouldn't go back to the beginning of the chapter to search for context that is found right there in those verses, but I like the connection you're making.
As for mob or thugs, the literal translation will be "little teenagers", so mob or thugs will be stretching it a bit; more likely that the Arabic contemporary use of "Shabab" for troublesome youth is the best translation. Religious scholars have been criticizing Elisha for generations after for his sending bears at babies, so I think it's safe to assume the story meant actual kids and not organized crime.
Never underestimate the power of words. Kids have unalived themselves over it.
I think the true meaning has been lost to time. The Hebrew text has been translated and rewritten so many times it's practically a children's book now. The original texts of the Dead Sea Scrolls are bits and pieces of that long-lost story. All we have left are transliterations of transliterations.
Yeah "Transubstantiation" is another technical term people might want to look at in this topic. The art piece "An Oak Tree" is a comment on these ideas. It's a glass of water. But, the artist's notes for this work insist it is an oak tree.
Someone else who knows "An Oak Tree"! It is one of my favorite pieces because it asks you to see the world not through reality itself, but through belief in what reality could be.
Interesting you bring art into the discussion. I've often thought that some "artists" have a lot in common with cult leaders. My definition of art would be that it is immediately understood, zero explanation needed.
I definitely can't get behind that definition. The one I've used for a good while is: The unnecessary done on purpose.
Take Barbara Hepworth's "Two Figures" a sculpture which is just sat there on the campus where I studied for many years (and where I also happen to work today). What's going on there? I'm not sure.
Sculpture of ideals I get. Liberty, stood on her island, Justice (with or without her blindfold, but always carrying not only the scales but also a sword†). I used to spend a lot of time in the hall where "The Meeting Place" is. They're not specific people, they're an idea, they're the spirit of the purpose of this place (a railway station, in fact a major international terminus). That's immediately understood, yeah.
But I did not receive an immediate understanding of "Two figures". It's an interesting piece. I still occasionally stop and look at it as I walk across the campus, but I couldn't summarise it in a sentence even now.
† when you look at that cartoon of the GOP operatives with their hands over Justice's mouth, keep in mind that out of shot she has a sword. Nobody gets out of here alive.
I've recently started attending an Episcopal church. We have some people who lean heavy on transubstantiation, but our priest says, "look, something holy happens during communion, exactly what, we don't know."
See also: https://www.episcopalchurch.org/glossary/real-presence/?
"Belief in the real presence does not imply a claim to know how Christ is present in the eucharistic elements. Belief in the real presence does not imply belief that the consecrated eucharistic elements cease to be bread and wine."
Same could be said for bowel movements too though.
There’s a fine line between suspension of disbelief and righteousness. All it takes is for one to believe their own delusion.
To be fair, the description of the dragon incident is pretty mundane, and all he does is prove that the large reptile they had previously been feeding (& worshiping) could be killed:
"Then Daniel took pitch, and fat, and hair, and did seethe them together, and made lumps thereof: this he put in the dragon's mouth, and so the dragon burst in sunder: and Daniel said, Lo, these are the gods ye worship."
I don't think it's mundane to cause something to "burst in sunder" by putting some pitch, fat, and hair in its mouth.
The story is pretty clearly meant to indicate that the Babylonians were worshiping an animal though. The theology of the book of Daniel emphasises that the Gods of the Babylonians don't exist, this story happens around the same time Daniel proves the priests had a secret passage they were using to get the food offered to Bel and eat it at night while pretending that Bel was eating it. Or when Daniel talks to King Belshazzar and says "You have praised the gods of silver and gold, of bronze, iron, wood, and stone, which do not see or hear or know, but the God in whose power is your very breath and to whom belong all your ways, you have not honored". This is not to argue for the historical accuracy of the stories, just that the point is that Daniel is acting as a debunker of the Babylonian beliefs in these stories while asserting the supremacy of the Israelite beliefs.
Yes, Catholicism has definitely accumulated some cruft :)
It used to always be religion. But now the downsides are well understood, and alternatives that can fill the same need (social activities) - like gathering with your neighbors, singing, performing arts, clubs, parks and parties - are available and great.
I can see that. There's definitely a reason they keep pumping out new Call of Duty and Madden titles.
Religions have multitudes of problems, but suicide rates amongst atheists are higher than you'd think they would be. It seems like for many, rejection of organized religion leads to adoption of ad hoc quasi-religions with no mooring to them, leaving the person who is seeking a solid belief system drifting until they find a cult, give in to madness that causes self-harm, or adopt their own system of belief that they then need to vigorously protect from other beliefs.
Some percentage of the population has a lesser need for a belief system (supernatural, ad hoc, or anything else) but in general, most humans appear to be hardcoded for this need and the overlap doesn't align strictly with atheism. For the atheist with a deep need for something to believe in, the results can be ugly. Though far from perfect, organized religions tend to weed out their most destructive beliefs or end up getting squashed by adherents of other belief systems that are less internally destructive.
Nothing to do with religion and everything to do with support networks that Churches and those Groups provide. Synagogue, Church, Camp, Retreat, a place of belonging.
Atheists tend to not have those consistently and must build their own.
I mean, cults have constantly shown up for all of recorded human history. Read a history of Scientology and you'll see a lot of commonalities, say. Rationalism is probably the first major cult/new religion to emerge in the internet era (Objectivism may be a marginal case, as its rise overlapped with USENET a bit), which does make it especially visible.
It's no more crazy than a virgin conception. And yet, here we are. A good chunk of the planet believes that drivel, but they'd throw their own daughters out of the house if they made the same claim.
> Came to ask a similar question, but also has it always been like this?
Crazy people have always existed (especially cults), but I'd argue recruitment numbers are through the roof thanks to technology and a failing economic environment (instability makes people rationalize crazy behavior).
It's not that those groups didn't have visibility before, it's just easier for the people who share the same...interests...to cloister together on an international scale.
I personally (for better or worse) became familiar with Ayn Rand as a teenager, and I think Objectivism as a kind of extended Ayn Rand social circle and set of organizations has faced the charge of cultish-ness, and that dates back to, I want to say, the 70s and 80s at least. I know Rand wrote much earlier than that, but I think the social and organizational dynamics unfolded rather late in her career.
“There are two novels that can change a bookish fourteen-year old’s life: The Lord of the Rings and Atlas Shrugged. One is a childish fantasy that often engenders a lifelong obsession with its unbelievable heroes, leading to an emotionally stunted, socially crippled adulthood, unable to deal with the real world. The other, of course, involves orcs."
https://www.goodreads.com/quotes/366635-there-are-two-novels...
Her books were very popular with the gifted kids I hung out with in the late 80s. Cool kids would carry around hardback copies of Atlas Shrugged, impressive by the sheer physical size and art deco cover. How did that trend begin?
By setting up the misfits in a revenge of the nerds scenario?
Ira Levin did a much better job of it and showed what it would lead to but his 'This Perfect Day' did not - predictably - get the same kind of reception as Atlas Shrugged did.
People reading the book and being into it and telling other people.
It’s also a hard book to read so it may be smart kids trying to signal being smart.
The only thing that makes it hard to read is the incessant soap-boxing by random characters. I have a rule that if I start a book I finish it but that one had me tempted.
I’m convinced that even Rand’s editor didn’t finish the book. That is why Galt’s soliloquy is ninety friggin’ pages long. (When in reality, three minutes in and people would be unplugging their radios.)
It’s hard to read because it’s tedious not because you need to be smart though.
tbf you have to have read it to know that!
I can't help but think it's probably the "favourite book" of a lot of people who haven't finished it though, possibly to a greater extent than any other secular tome (at least LOTR's lightweight fans watched the movies!).
I mean, if you've only read the blurb on the back it's the perfect book to signal your belief in free markets, conservative values and the American Dream: what could be a more strident defence of your views than a book about capitalists going on strike to prove how much the world really needs them?! If you read the first few pages, it's satisfyingly pro-industry and contemptuous of liberal archetypes. If you trudge through the whole thing, it's not only tedious and odd but contains whole subplots devoted to dumping on core conservative values (religion bad, military bad, marriage vows not that important really, and a rather jaded take on actually extant capitalism) in between the philosopher pirates and the jarring absence of private transport, and the resolution is an odd combination of a handful of geniuses running away to form a commune and the world being saved by a multi-hour speech about philosophy which has surprisingly little to say on market economics...
> at least LOTR's lightweight fans watched the movies!
Oh, there’s movies for lazy Rand fans, too.
https://www.imdb.com/title/tt0480239/
More of a Fountainhead fan, are you? Do ya like Gary Cooper and Patricia Neal?
https://www.imdb.com/title/tt0041386/?ref_=ext_shr_lnk
> Oh, there’s movies for lazy Rand fans, too.
tbf that comment was about 50% a joke about their poor performance at the box office :D
Rereading your comment, that’s my woosh moment for the day, I guess. :-)
Though a Gary Cooper The Fountainhead does tempt me on occasion. (Unlike Atlas Shrugged, The Fountainhead wasn’t horrible, but still some pretty poor writing.)
Fountainhead is written at the 7th grade reading level. Its Lexile level is 780L. It's long and that's about it. By comparison, 1984 is 1090L.
What's funny is that Robert Anton Wilson and Robert Shea already took the piss out of Ayn Rand in Illuminatus! (1969-1971).
Albert Ellis wrote a book, "Is Objectivism a Religion" as far back as 1968. Murray Rothbard wrote "Mozart Was a Red", a play satirizing Rand's circle, in the early 60's. Ayn Rand was calling her own circle of friends, in "jest", "The Collective" in the 50's. The dynamics were there from almost the beginning.
I think it's pretty similar dynamics. It's unquestioned premises (dogma) which are supposed to be accepted simply because this is "objectivism" or "rationalism".
Very similar to my childhood religion. "We have figured everything out and everyone else is wrong for not figuring things out".
Rationalism seems like a giant castle built on sand. They just keep accruing premises without ever going backwards to see if those premises make sense. A good example of this is their notions of "information hazards".
Have you heard of Heaven's Gate? [https://en.m.wikipedia.org/wiki/Heaven%27s_Gate_(religious_g...].
There are at least a dozen I can think of, including the ‘drink the koolaid’ Jonestown massacre.
People be crazy, yo.
Of course: Jim Jones, L. Ron Hubbard, David Koresh. I realize there have always been people that are coocoo for cocoa puffs. But as many as there appear to be now?
The internet made it possible to know global news all the time. I think there have always been a lot of people with very crazy and extremist views, but we only knew about the ones closer to us. Now it's possible to know about crazy people from the other side of the planet, so it looks like there are a lot more of them than before.
Yup. Like previously, westerners could have gone their whole lives with no clue the Hindutva existed [https://en.m.wikipedia.org/wiki/Hindutva] - Hindu Nazis, basically. Which if you know Hinduism at all, is a bit like saying Buddhist Nazis. Say what?
Which actually kinda existed/exists too? [https://en.m.wikipedia.org/wiki/Nichirenism], right down to an attempted coup and a bunch of assassinations [https://en.m.wikipedia.org/wiki/League_of_Blood_Incident].
Now you know. People be whack.
Just a note that the Heaven's Gate website is still up. It's a wonderful snapshot of 90s web design. https://www.heavensgate.com/
I was curious who's keeping that website alive, and allegedly it's two former members of the cult: Mark and Sarah King
https://www.vice.com/en/article/a-suicide-cults-surviving-me...
what a wild set of SEO keywords
> Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate Heaven's Gate ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo ufo space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien space alien extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial extraterrestrial misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation misinformation freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom freedom second coming second coming second coming second coming second coming second coming second coming second coming second coming second coming angels angels angels angels angels angels angels angels angels angels end end times times end times end times end times end times end times end times end times end times end times Key Words: (for search engines) 144,000, Abductees, Agnostic, Alien, Allah, Alternative, Angels, Antichrist, Apocalypse, Armageddon, Ascension, Atheist, Awakening, Away Team, Beyond Human, Blasphemy, Boddhisattva, Book of Revelation, Buddha, Channeling, Children of God, Christ, Christ's Teachings, Consciousness, Contactees, Corruption, Creation, Death, Discarnate, Discarnates, Disciple, Disciples, Disinformation, Dying, Ecumenical, End of the Age, End of the World, Eternal Life, Eunuch, Evolution, Evolutionary, Extraterrestrial, Freedom, Fulfilling Prophecy, Genderless, Glorified Body, God, God's Children, God's Chosen, God's Heaven, God's Laws, God's Son, Guru, Harvest Time, He's Back, Heaven, Heaven's Gate, Heavenly Kingdom, Higher Consciousness, His Church, Human Metamorphosis, Human Spirit, Implant, Incarnation, Interfaith, Jesus, Jesus' Return, Jesus' Teaching, Kingdom of God, Kingdom of Heaven, Krishna Consciousness, Lamb of God, Last Days, Level Above Human, Life After Death, Luciferian, Luciferians, Meditation, Members of the Next Level, Messiah, Metamorphosis, Metaphysical, Millennium, Misinformation, Mothership, Mystic, Next Level, Non Perishable, Non Temporal, Older Member, Our Lords Return, Out of Body Experience, Overcomers, Overcoming, Past Lives, Prophecy, Prophecy Fulfillment, Rapture, Reactive Mind, Recycling the Planet, Reincarnation, Religion, Resurrection, Revelations, Saved, Second Coming, Soul, Space Alien, Spacecraft, Spirit, Spirit Filled, Spirit Guide, Spiritual, Spiritual Awakening, Star People, Super Natural, Telepathy, The Remnant, The Two, Theosophy, Ti and Do, Truth, Two Witnesses, UFO, Virginity, Walk-ins, Yahweh, Yeshua, Yoda, Yoga,
It’s the aliens to yoga ratio that really gets me. Yogis got really shortchanged here.
I've always been under the impression that M:tA's rules of How Magic Works are inspired by actual mystical beliefs that people have practiced for centuries. It's probably about as much of a manual for mystical development as the GURPS Cyberpunk rulebook was for cybercrime, but it's pointing at something that already exists and saying "this is a thing we are going to tell an exaggerated story about".
See for example "Reality Distortion Field": https://en.wikipedia.org/wiki/Reality_distortion_field
I don't know how you can call yourself a "rationalist" and base your worldview on a fantasy game.
In my experience, religious people are perfectly fine with a contradictory worldview.
Christians, for example, have always been very flexible about following the Ten Commandments.
That example isn’t a contradictory worldview though, just “people being people, and therefore failing to be as good as the ideal they claim to strive for.”
Being fine with cognitive dissonance would be a prerequisite for holding religious beliefs, I'd say.
Mage is an interesting game though: it's fantasy, but not "swords and dragons" fantasy. It's set in the real world, and the "magic" is just the "mage" shifting probabilities so that unlikely (but possible) things occur.
Such a setting would seem like the perfect backdrop for a cult that claims "we have the power to subtly influence reality and make improbable things (ie. "magic") occur".
Rationalizing the fantasy. Like LARPing. Only you lack weapons, armor, magic missiles…
Most "rationalists" throughout history have been very deeply religious people. Secular enlightenment-era rationalism is not the only direction you can go with it. It depends very much, as others have said, on what your axioms are.
But, fwiw, that particular role-playing game was very much based on trendy at the time occult beliefs in things like chaos magic, so it's not completely off the wall.
"Rationalist" in this context does not mean "rational person," but rather "person who rationalizes."
I mean, is it a really good game?
I’ve never played, but now I’m kind of interested.
I ran a long-running campaign; it is pretty fun. The game books were obviously written by artists with no mathematician involved, and some of the rules are very broken (this may have been fixed in later revisions).
The magic system is very fun and gives players complete freedom to come up with spells on the fly. The tl;dr is there aren't pre-made spells; you have spheres you have learned, and you can combine those spheres of magic however you want. So if someone has Matter and Life, reaching into someone's chest and pulling out their still-beating heart would be a perfectly fine thing for a brand new character to be able to do. (Of course magic has downsides; reality doesn't like being bent, and it will snap back with violent force if coerced too hard!)
The books are laid out horribly; there isn't a single set of tables to refer to, so you have to bookmark everything with Post-it notes. The rules themselves are really simple to pick up and play: the number of dots you have in attribute + skill is how many d10 dice you get to roll for a check, 8+ is a success, and you can reroll 10s. 90% of the game is as simple as that, but then there are like 5 pages of rules for grappling, including basically a complete breakdown of wrestling moves and gaining position, but feel free to just ignore those.
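If it helps, the core resolution mechanic described above fits in a few lines of Python; treat the handling of rerolled 10s as my reading of the rule rather than gospel:

    import random

    def roll_pool(attribute: int, skill: int, target: int = 8) -> int:
        """Roll (attribute + skill) d10s; each die at `target` or higher is a
        success, and any 10 is rerolled for a shot at extra successes."""
        successes = 0
        dice = attribute + skill
        while dice > 0:
            die = random.randint(1, 10)
            if die >= target:
                successes += 1
            if die == 10:
                continue       # reroll this die without using up the pool
            dice -= 1
        return successes

    # Example: 3 dots of Dexterity + 2 dots of Firearms
    print(roll_pool(3, 2))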
It's reportedly alright - the resolution mechanic seems a little fiddly with varying pools of dice for everything. The lore is pretty interesting though and I think a lot of the point of that series of games was reading up on that.
I mean see also the Democratic People's Republic of Korea. You can't really take what groups call themselves too seriously.
Cult leaders tend to be narcissists.
Narcissists tend to believe that they are always right, no matter what the topic is or how knowledgeable they are. This makes them speak with confidence and conviction.
Some people are very drawn to confident people.
If the cult leader has other mental health issues, it can/will seep into their rhetoric. Combine that with unwavering support from loyal followers that will take everything they say as gospel...
That's about it.
If what you say is true, we're very lucky no one like that with a massive following has ever gotten into politics in the United States. It would be an ongoing disaster!
That's pretty much it. The beliefs are just a cover story.
Outside of those, the cult dynamics are cut-paste, and always involve an entitled narcissistic cult leader acquiring as much attention/praise, sex, money, and power as possible from the abuse and exploitation of followers.
Most religion works like this. Most alternative spirituality works like this. Most finance works like this. Most corporate culture works like this. Most politics works like this.
Most science works like this. (It shouldn't, but the number of abused and exploited PhD students and post-docs is very much not zero.)
The only variables are the differing proportions of attention/praise, sex, money, and power available to leaders, and the amount of abuse that can be delivered to those lower down and/or outside the hierarchy.
The hierarchy and the realities of exploitation and abuse are a constant.
If you removed this dynamic from contemporary culture there wouldn't be a lot left.
Fortunately quite a lot of good things happen in spite of it. But a lot more would happen if it wasn't foundational.
Yes. The cult's "beliefs" really boil down to one belief: the infallibility of the leader. Much of the attraction is in the simplicity.
>How did we get to this place with people going completely nuts like this?
God died and it's been rough going since then.
> How did we get to this place with people going completely nuts like this?
Ayahuasca?
Nah I did Ayahuasca and I'm an empathetic person who most would consider normal or at least well-adjusted. If it's drug related it would most definitely be something else.
I’m inclined to believe your upbringing plays a much larger role.
I'm entertaining sending my kiddo to a Waldorf School, because it genuinely seems pretty good.
But looking into the underlying Western esoteric "spirit science", Anthroposophy, invented by Rudolf Steiner (because Theosophy wouldn't let him get weird enough), has been quite a ride. The point being that humans have a pretty endless capacity to go ALL IN on REALLY WEIRD shit, as long as it promises to fix their lives if they do everything they're told. Naturally if their lives aren't fixed, then they did it wrong or have karmic debt to pay down, so YMMV.
In any case, I'm considering the latent woo-cult atmosphere as a test of the skeptical inoculation that I've tried to raise my child with.
I went to a Waldorf school and I’d recommend being really wary. The woo is sort of background noise, and if you’ve raised your kid well they’ll be fine. But the quality of the academics may not be good at all. For example, when I was ready for calculus my school didn’t have anyone who knew how to teach it so they stuck me and the other bright kid in a classroom with a textbook and told us to figure it out. As a side effect of not being challenged, I didn’t have good study habits going into college, which hurt me a lot.
If you’re talking about grade school, interview whoever is gonna be your kids teacher for the next X years and make sure they seem sane. If you’re talking about high school, give a really critical look at the class schedule.
Waldorf schools can vary a lot in this regard so you may not encounter the same problems I did, but it’s good to be cautious.
Don't do it. It's a place that enables child abuse with its culture. These people are serious wackos and you should not give your kid into their hands. A lot of people come out of that Steiner Shitbox traumatized for decades if not for life. They should not be allowed to run schools to begin with. Checking a lot of boxes from antivax to whatever the fuck their lore has to offer starting with a z.
From false premises, you can logically and rationally reach really wrong conclusions. If you have too much pride in your rationality, you may not be willing to say "I seem to have reached a really insane conclusion, maybe my premises are wrong". That is, the more you pride yourself on your rationalism, the more prone you may be to accepting a bogus conclusion if it is bogus because the premises are wrong.
Then again, most people tend to form really bogus beliefs without bothering to establish any premises. They may not even be internally consistent or align meaningfully with reality. I imagine having premises and thinking it through has a better track record of reaching conclusions consistent with reality.
> I imagine having premises and thinking it through has a better track record of reaching conclusions consistent with reality.
Why do you imagine that? Have you tested it?
Running a cult is a somewhat reliable source of narcissistic supply. The internet tells you how to do it. So an increasing number of people do it.
Makes me think of that saying that great artists steal, so repurposed for cult founders: "Good cult founders copy, great cult founders steal"
I do not think this cult dogma is any more out there than other cult dogma I have heard, but the above quote makes me think it is easier to found cults in the modern day in some ways, since you can steal complex world-building from numerous sources rather than building it yourself and keeping everything straight.
People are wired to worship, and want somebody in charge telling them what to do.
I'm a staunch atheist and I feel the pull all the time.
I slowly deconverted from being raised evangelical / fundamentalist into being an atheist in my late 40s. I still "pray" at times just to (mentally) shout my frustration at the sorry state of the world at SOMETHING (even nothing) rather than constantly yelling my frustration at my family.
I may have actually been less anxious about the state of the world back then, and may have remained so, if I'd just continued to ignore all those little contradictions that I just couldn't ignore anymore...... But I feel SO MUCH less personal guilt about being "human".
I came to comments first. Thank you for sharing this quote. Gave me a solid chuckle.
I think people are going nuts because we've drifted from the dock of a stable civilization. Institutions are a mess. Economy is a mess. Combine all of that together with the advent of social media making the creation of echo chambers (and the inevitable narcissism of "leaders" in those echo chambers) effortless and ~15 years later, we have this.
> I think people are going nuts because we've drifted from the dock of a stable civilization.
When was stable period, exactly? I'm 40; the only semi-stable bit I can think of in my lifetime was a few years in the 90s (referred to, sometimes unironically, as "the end of history" at the time, before history decided to come out of retirement).
Everything's always been unstable, people sometimes just take a slightly rose-tinted view of the past.
People have been going nuts all throughout recorded history, that's really nothing new.
The only scary thing is that they have ever more power to change the world and influence others without being forced to grapple with that responsibility...
Who the fuck bases a Black Lotus cult on Mage: the Ascension rather than Magic: the Gathering? Is this just a mistake by the journalist?
i regret that i have but one upvote to give
It's been like this a while. Have you heard the tale of the Final Fantasy House?: http://www.demon-sushi.com/warning/
https://www.vice.com/en/article/the-tale-of-the-final-fantas...
astronauts_meme.jpg
Mage: The Ascension is basically a delusions of grandeur simulator, so I can see how an already unstable personality might get attached to it and become more unstable.
The magic system is amazing though, best I've played in any game. Easy to use, role play heavy, and it lets players go wild with ideas, but still reins in their crazier impulses.
Mage: The Awakening is a minor rules revision to the magic system, but the lore is super boring in comparison IMHO. It is too wishy washy.
Ascension has tons of cool source material, and White Wolf ended up tying all their properties together into one grand finale story line. That said it is all very 90s cringe in retrospect, but if you are willing to embrace the 90s retro feel, it is still fun.
Awakening's lore never drew me in; the grand battle just isn't there. So many shades of grey it's damn near technicolor.
I don't know, I'd understand something like Wraith (which I did see people develop issues with; the shadow mechanic is such a terrible thing), but Mage is so, like, straightforward?
Use your mind to control reality, reality fights back with paradox. It's cool for a teenager, but read a bit more fantasy and you'll definitely find cooler stuff. But I guess for you to join a cult your mind must stay a teen mind forever.
I didn't originally write this, but can't find the original place I read it anymore. I think it makes a lot of sense to repost it here:
All of the World Of Darkness and Chronicles Of Darkness games are basically about coming of age/puberty. Like X-Men but for Goth-Nerds instead of Geek-Nerds.
In Vampire, your body is going through weird changes and you are starting to develop, physically and/or mentally, while realising that the world is run by a bunch of old, evil fools who still expect you to toe the line and stay in your place, but you are starting to wonder if the world wouldn't be better if your generation overthrew them and took over running the world, doing it the right way. And there are all these bad elements trying to convince you that you should do just that, but for the sake of mindless violence and raucous partying. Teenager - the rpg.
In Werewolf, your body is going through weird changes and you are starting to develop, physically and mentally, while realising that you are not a part of the "normal" crowd that the rest of Humanity belongs to. You are different and they just can't handle that whenever it gets revealed. Luckily, there are small communities of people like you out there who take you in and show you how use the power of your "true" self. Of course, even among this community, there are different types of other. LGBT Teenager - the RPG
In Mage, you have begun to take an interest in the real world, and you think you know what the world is really like. The people all around you are just sleep-walking through life, because they don't really get it. This understanding sets you against the people who run the world: the governments and the corporations, trying to stop these sleeper from waking up to the truth and rejecting their comforting lies. You have found some other people who saw through them, and you think they've got a lot of things wrong, but at least they're awake to the lies! Rebellious Teenager - the RPG
“The people all around you are just sleep-walking through life, because they don't really get it.”
Twist: we’re sleepwalking through life because we really DO get it.
(Source: I’m 56)
This tracks, but I'd say Werewolf goes beyond LGBT folks; the violence there also fits boys' aggressive play, and the saving-the-world theme resonated a lot with the basic "I want to be important/a hero" thing. It's my favorite of all the World of Darkness books; I regret not getting the Kickstarter edition :(
Yeah, I would say Werewolf is more like Social Activist: The Rage simulator than LGBT teenager
I think I read it too, it’s called Twilight. /s
I had friends who were into Vampire growing up. I hadn’t heard of Werewolf until after the aforementioned book came out and people started going nuts for it. I mentioned to my wife at the time that there was this game called “Vampire” and told her about it and she just laughed, pointed to the book, and said “this is so much better”. :shrug:
Rewind back and there were the Star Wars kids. Fast forward and there are the Harry Potter kids/adults. Each generation has their own “thing”. During that time, it was Quake MSDOS and Vampire. Oh and we started Senior Assassinations. 90s super soakers were the real deal.
How many adults actually abandon juvenilia as they age? Not the majority, in my observation, and it's not always a bad thing when it's only applied to subjects like pop culture. Applied juvenilia in response to serious subjects is a more serious issue.
There has to be a cult of people out there who believe they're vampires, respecting the Masquerade and serving some antediluvian somewhere; Vampire was much more mainstream than Mage.
Humans are compelled to find agency and narrative in chaos. Evolution favored those who assumed the rustle was a predator, not the wind. In a post-Enlightenment world where traditional religion often fails (or is rejected), this drive doesn't vanish. We don't stop seeking meaning. We seek new frameworks. Our survival depended on group cohesion. Ostracism meant death. Cults exploit this primal terror. Burning Man's temporary city intensifies this: extreme environment, sensory overload, forced vulnerability. A camp like Black Lotus offers immediate, intense belonging. A tribe with shared secrets (the "Ascension" framework), rituals, and an "us vs. the sleepers" mentality. This isn't just social; it's neurochemical. Oxytocin (bonding) and cortisol (stress from the environment) flood the system, creating powerful, addictive bonds that override critical thought.
Human brains are lazy Bayesian engines. In uncertainty, we grasp for simple, all-explaining models (heuristics). Mage provides this: a complete ontology where magic equals psychology/quantum woo, reality is malleable, and the camp leaders are the enlightened "tradition." This offers relief from the exhausting ambiguity of real life. Dill didn't invent this; he plugged into the ancient human craving for a map that makes the world feel navigable and controllable. The "rationalist" veneer is pure camouflage. It feels like critical thinking but is actually pseudo-intellectual cargo culting. This isn't Burning Man's fault. It's the latest step of a 2,500-year-old playbook. The Gnostics and the Hermeticists provided ancient frameworks where secret knowledge ("gnosis") granted power over reality, accessible only through a guru. Mage directly borrows from this lineage (The Technocracy, The Traditions). Dill positioned himself as the modern "Ascended Master" dispensing this gnosis.
The 20th century cults Synanon, EST, Moonies, NXIVM all followed similar patterns, starting with isolation. Burning Man's temporary city is the perfect isolation chamber. It's physically remote, temporally bounded (a "liminal space"), fostering dependence on the camp. Initial overwhelming acceptance and belonging (the "Burning Man hug"), then slowly increasing demands (time, money, emotional disclosure, sexual access), framed as "spiritual growth" or "breaking through barriers" (directly lifted from Mage's "Paradigm Shifts" and "Quintessence"). Control language ("sleeper," "muggle," "Awakened"), redefining reality ("that rape wasn't really rape, it was a necessary 'Paradox' to break your illusions"), demanding confession of "sins" (past traumas, doubts), creating dependency on the leader for "truth."
Burning Man attracts people seeking transformation, often carrying unresolved pain. Cults prey on this vulnerability. Dill allegedly targeted individuals with trauma histories. Trauma creates cognitive dissonance and a desperate need for resolution. The cult's narrative (Mage's framework + Dill's interpretation) offers a simple explanation for their pain ("you're unAwakened," "you have Paradox blocking you") and a path out ("submit to me, undergo these rituals"). This isn't therapy; it's trauma bonding weaponized. The alleged rape wasn't an aberration; it was likely part of the control mechanism. It's a "shock" to induce dependency and reframe the victim's reality ("this pain is necessary enlightenment"). People are adrift in ontological insecurity (fear about the fundamental nature of reality and self). Mage offers a new grand narrative with clear heroes (Awakened), villains (sleepers, Technocracy), and a path (Ascension).
Gnosticism... generating dumb cults that seem smart on the outside for 2+ thousand years. Likely to keep it up for 2k more.
I've met a fair share of people in the burner community, the vast majority I met are lovely folks who really enjoy the process of bringing some weird big idea into reality, working hard on the builds, learning stuff, and having a good time with others for months to showcase their creations at some event.
On the other hand, there's a whole other side of a few nutjobs who really behave like cult leaders: they believe their own bullshit and over time manage to find a lot of "followers" in this community. Since one of the foundational aspects is radical acceptance, it becomes very easy to be nutty and not be questioned (unless you do something egregious).
Paraphrasing someone I don't recall - when people believe in nothing, they'll believe anything.
And therefore you should believe in me and my low low 10% tithe! That's the only way to not get tricked into believing something wrong so don't delay!
That's not an endorsement of a particular religion.
It is though. In practice it's always used to promote a particular religion.
> The Sequences [posts on LessWrong, apparently] make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
Ooh, a capital S and everything. I mean, I feel like it is fairly obvious, really. 'Rationalism' is a new religion, and every new religion spawns a bunch of weird, generally short-lived, cults. You might as well ask, in 100AD, "why are there so many weird Christian cults all of a sudden"; that's just what happens whenever any successful new religion shows up.
Rationalism might be particularly vulnerable to this because it lacks a strong central authority (much like early Christianity), but even with new religions which _did_ have a strong central authority from the start, like Mormonism or Scientology, you still saw this happening to some extent.
The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Well, it turns out that intuition and long-lived cultural norms often have rational justifications, but individuals may not know what they are, and norms/intuitions provide useful antibodies against narcissist would-be cult leaders.
Can you find the "rational" justification not to isolate yourself from non-Rationalists, not to live with them in a polycule, and not to take a bunch of psychedelic drugs with them? If you can't solve that puzzle, you're in danger of letting the group take advantage of you.
Yeah, I think this is exactly it. If something sounds extremely stupid, or if everyone around you says it's extremely stupid, it probably is. If you can't justify it, it's probably because you have failed to find the reason it's stupid, not because it's actually genius.
And the crazy thing is, none of that is fundamentally opposed to rationalism. You can be a rationalist who ascribes value to gut instinct and societal norms. Those are the product of millions of years of pre-training.
I have spent a fair bit of time thinking about the meaning of life. And my conclusions have been pretty crazy. But they sound insane, so until I figure out why they sound insane, I'm not acting on those conclusions. And I'm definitely not surrounding myself with people who take those conclusions seriously.
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
Specifically, rationalism spends a lot of time about priors, but a sneaky thing happens that I call the 'double update'.
Bayesian updating works when you update your genuine prior belief with new evidence. No one disagrees with this, and sometimes it's easy and sometimes it's difficult to do.
What Rationalists often end up doing is relaxing their priors - intuition, personal experience, cultural norms - and then updating. They often think of this as one update, but it is really two. The first update, relaxing priors, isn't associated with evidence. It's part of the community norms. There is an implicit belief that by relaxing one's priors you're more open to reality. The real result, though, is that it sends people wildly off course. Case in point: all the cults.
Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
Trying to correct for bias by relaxing priors is itself updating on evidence, not just something done because everyone else is doing it.
> Consider the pre-tipped scale. You suspect the scale reads a little low, so before weighing you tilt it slightly to "correct" for that bias. Then you pour in flour until the dial says you've hit the target weight. You’ve followed the numbers exactly, but because you started from a tipped scale, you've ended up with twice the flour the recipe called for.
I'm not following this example at all. If you've zeroed out the scale by tilting it, why would adding flour until it reads 1g lead to 2g of flour?
I agree. It's not the best metaphor.
I played around with various metaphors but most of them felt various degrees of worse. The idea of relaxing priors and then doing an evidence-based update while thinking it's genuinely a single update is a difficult thing to capture metaphorically.
Happy to hear better suggestions.
EDIT: Maybe something more like this:
Picture your belief as a shotgun aimed at the truth:
The correct move is one clean Bayesian shot. Hold your aim where it is. Evidence arrives. Rotate and resize the spread in one simultaneous posterior jump determined by the actual likelihood ratio in front of you.
The stupid move? The move that Rationalists love to disguise as humility? It's to first relax your spread "to be open-minded," and then apply the update. You've just secretly told the math, "Give this evidence more weight than it deserves." And then you wonder why you keep overshooting, drifting into confident nonsense.
If you think your prior is overconfident, that is itself evidence, evidence about your meta-level epistemic reliability. Feed it into the update properly. Do not amputate it ahead of time because "priors are bias." Bias is bad, yes, but closing your eyes and spinning around with the shotgun in hand, i.e. double updating, is not an effective method of removing bias.
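To make the contrast concrete, here's a minimal sketch assuming a Gaussian prior and a Gaussian likelihood (the helper function and all the numbers are just made up for illustration):

    # A toy example of the "double update": same observation, but relaxing the
    # prior first, with no evidence, hands the data far more weight.

    def gaussian_update(prior_mean, prior_var, obs, obs_var):
        """One conjugate Bayesian update: precision-weighted average of prior and data."""
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
        return post_mean, post_var

    prior_mean, prior_var = 0.0, 1.0   # the genuine prior
    obs, obs_var = 5.0, 1.0            # one noisy observation

    # One clean update: prior and evidence each get the weight the math assigns.
    print(gaussian_update(prior_mean, prior_var, obs, obs_var))    # mean 2.5

    # "Double update": first relax the prior with no evidence, then update.
    relaxed_var = 10.0                 # the evidence-free "open-mindedness" step
    print(gaussian_update(prior_mean, relaxed_var, obs, obs_var))  # mean ~4.5

Same observation, same likelihood, but the evidence-free widening step quietly gives the data several times its proper weight.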
Thanks, that's a fantastic description of a phenomenon I've observed but couldn't quite put my finger on.
From another piece about the Zizians [1]:
> The ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
[1] https://maxread.substack.com/p/the-zizians-and-the-rationali...
> The whole game of Rationalism is that you should ignore gut intuitions and cultural norms that you can't justify with rational arguments.
The game as it is _actually_ played is that you use rationalist arguments to justify your pre-existing gut intuitions and personal biases.
Exactly. Humans are rationalizers. We operate on pre-existing gut intuitions and biases, then invent after-the-fact, rational-sounding justifications.
I guess Pareto wasn't on the reading list for these intellectual frauds.
Those are actually the priors being updated lol.
Which is to say, Rationalism is easily abused to justify any behavior contrary to its own tenets, just like any other -ism.
Or worse - to justify the gut intuitions and personal biases of your cult leader.
This is actually a known pattern in tech, going back to Engelbart and SRI. While not 1-to-1, you could say that the folks who left SRI for Xerox PARC did so because Engelbart and his crew became obsessed with EST: https://en.wikipedia.org/wiki/Erhard_Seminars_Training
EST-type training still exists today. You don't eat until the end of the whole weekend, or maybe you get rice and little else. Everyone is told to insult you day one until you cry. Then day two, still having not eaten, they build you up and tell you how great you are and have a group hug. Then they ask you how great you feel. Isn't this a good feeling? Don't you want your loved ones to have this feeling? Still having not eaten, you're then encouraged to pay for your family and friends to do the training, without their knowledge or consent.
A friend of mine did this training after his brother paid for his mom to do it, and she paid for him to do it. Let's just say that, though they felt it changed their lives at the time, their lives in no way shape or form changed. Two are in quite a bad place, in fact...
Anyway, point is, the people who invented everything we are using right now were also susceptible to cult-like groups with silly ideas and shady intentions.
There is a word for people who go to EST: EST-holes.
>EST-type training still exists today
It's called the "Landmark"[0] now.
Several of my family members got sucked into that back in the early 80s and quite a few folks I knew socially as well.
I was quite skeptical, especially because of the cult-like fanaticism of its adherents. They would go on for as long as you'd let them trying to get you to join (I often had to just walk away to get them to stop).
The goal appears to be to obtain as much legal tender as can be pried from those who are willing to part with it. Hard sell, abusive and deceptive tactics are encouraged -- because it's so important for those who haven't "gotten it" to do so, justifying just about anything. But if you don't pay -- you get bupkis.
It's a scam, and an abusive one at that.
[0] https://en.wikipedia.org/wiki/Landmark_Worldwide
What is it about San Francisco that makes it the global center for this stuff?
Reading this, I was reminded of the 60s hippie communes, which generally centered around SF, and the problems they reported. The dynamics are so similar, especially the turning-inward group emotional dynamics: such groups tend to become dysfunctional (as TFA says) by blowing up internal emotional group politics into huge problems that the entire group needs to be involved in healing, as opposed to, say, accepting that a certain amount of interpersonal conflict is inevitable in human groups and ignoring it. It's fascinating that the same kinds of groups seem to encounter the same kinds of problems despite being ~60 years apart and armed with a lot more tech and knowledge.
And, yeah, why SF?
A problem with this whole mindset is that humans, all of us, are only quasi-rational beings. We all use System 1 ("The Elephant") and System 2 ("The Rider") thinking instinctively. So if you end up in deep denial about your own capacity for irrationality, I guess it stands to reason you could end up getting led down some deep dark rabbit holes.
Some of the most irrational people I've met were those who claimed to make all their decisions rationally, based on facts and logic. They're just very good at rationalizing, and since they've pre-defined their beliefs as rational, they never have to examine where else they might come from. The rest of us at least have a chance of thinking, "Wait, am I fooling myself here?"
Wasn't the "fast&slow" thingy debunked as another piece of popscience?
Many specific studies on the matter don't replicate (the book preceded the replication crisis, so this is to be expected), but I don't think that negates the core idea that our brain does some things on autopilot whereas other things take conscious thought, which is slower. It's a useful framework for thinking about cognition, though any specific claims obviously need evidence.
TBH I've learned that even the best pop sci books making (IMHO) correct points tend to have poor citations - to studies that don't replicate or don't quite say what they're being cited to say - so when I see this, it's just not very much evidence one way or the other. The bar is super low.
The point remains. People are not 100 percent rational beings, never have been, never will be, and it's dangerous to assume that this could ever be the case. Just like any number of failed utopian political movements in history that assumed people could ultimately be molded and perfected.
Those of us who accept this limitation can often fail to grasp how much others perceive it as a profound attack on the self. To me, it is a basic humility: no matter how much I learn, I cannot really transcend the time and place of my birth, the biology of my body, the quirks of my culture. Rationality, though, promises that transcendence, at least to some people. And look at all the trouble such delusion has caused, for example "presentism". Science fiction often introduces a hidden coordinate system, one of language and predicate, upon which reason can operate, but that system itself did not come from reason; it came from a storyteller's aesthetic.
I think duality gets debunked every couple of hundred years
No?
Yup. It's fundamentally irrational for anybody to believe themselves sufficiently rational to pull off the feats of supposed rational deduction that the so called Rationalists regularly perform. Predicting the future of humanity decades or even centuries away is absurd, but the Rationalists irrationally believe they can.
So to the point of the article, rationalist cults are common because Rationalists are irrational people (like all people) who (unlike most people) are blinded to their own irrationality by their overinflated egos. They can "reason" themselves into all manner of convoluted pretzels and lack the humility to admit they went off the deep end.
Finally, something that properly articulates my unease when encountering so-called "rationalists" (especially the ones that talk about being "agentic", etc.). For some reason, even though I like logical reasoning, they always rubbed me the wrong way - probably just a clash between their behavior and my personal values (mainly humility).
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
many such cases
Capital-R Rationalism also encourages you to think you can outperform it, by being smart and reasoning from first principles. That was the idea behind MetaMed, founded by LessWronger Michael Vassar - that being trained in rationalism made you better at medical research and consulting than medical school or clinical experience. Fortunately they went out of business before racking up a body count.
One lesson I've learned and seen a lot in my life is that understanding that something is wrong, or what's wrong about it, and being able to come up with a better solution are distinct skills, and the latter is often much harder. Those best able to describe the problem often don't overlap much with those who can figure out how to solve it, even though they think they can.
It is an unfortunate reality of our existence that sometimes Chesterton actually did build that fence for a good reason, a good reason that's still here.
(One of my favorite TED talks was about a failed experiment in introducing traditional Western agriculture to a people in Zambia. It turns out when you concentrate too much food in one place, the hippos come and eat it all and people can't actually out-fight hippos in large numbers. In hindsight, the people running the program should have asked how likely it was that folks in a region that had exposure to other people's agriculture for thousands of years, hadn't ever, you know... tried it. https://www.ted.com/talks/ernesto_sirolli_want_to_help_someo...)
You sound like you'd like the book Seeing Like a State.
Why didn't they kill the hippos like we killed the buffalo?
Hippos are more dangerous than emus.
https://en.wikipedia.org/wiki/Emu_War
My understanding of the emu war is that they weren't dangerous so much as quick to multiply. The army couldn't whack the moles fast enough. Hippos don't strike me as animals that can go underground when threatened.
Shoot the hippos to death for even more food. If it doesn't seem to work it's just a matter of having more and bigger guns.
TEDx
It's almost the defining characteristic of our time.
Tell-tale slogan: "Let's derive from first principles"
indeed
see: bitcoin
Granted, admittedly from what little I've read from the outside, the "rational" part just seems to be mostly the writing style - this sort of dispassionate, eloquently worded prose that makes weird ideas seem more "rational" and logical than they really are.
Yes, they're not rational at all. They're just a San Francisco/Bay Area cult who use that word.
Scott Aaronson had this view of one rationalist group.
"Guess I’m A Rationalist Now" https://scottaaronson.blog/?p=8908
> “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
I see this arrogant attitude all the time on HN: reflexive distrust of the "mainstream media" and "scientific experts". Critical thinking is a very healthy idea, but it's dangerous when people use it as a license to categorically reject sources. It's even worse when extremely powerful people do this; they can reduce an enormous sub-network of thought into a single node for many, many people.
So, my answer for "Why Are There So Many Rationalist Cults?" is the same reason all cults exist: humans like to feel like they're in on the secret. We like to be in secret clubs.
Sure, but that doesn't say anything about why one particular social scene would spawn a bunch of cults while others do not, which is the question that the article is trying to answer.
Maybe I was too vague. My argument is that cults need a secret. The secret of the rationalist community is "nobody is rational except for us". Then the rituals would be endless probability/math/logic arguments about sci-fi futures.
I think the promise of secret knowledge is important, but I think cults also need a second thing: "That thing you fear? You're right to fear it, and only we can protect you from it. If you don't do what we say, it's going to be so much worse than it is now, but if you do, everything will be good and perfect."
In the rationalist cults, you typically have the fear of death and non-existence, coupled with the promise of AGI, the Singularity and immortality, weighed against the AI Apocalypse.
I guess I'd say protection promises like this are a form of "secret knowledge". At the same time, so many cults have this protection racket so you might be on to something
The terminology here is worth noting. Is a Rationalist Cult a cult that practices Rationalism according to third parties, or is it a cult that says they are Rationalist?
Clearly all of these groups that believe in demons or realities dictated by tabletop games are not what third parties would call Rationalist. They might call themselves that.
There are some pretty simple tests that can out these groups as not rational. None of these people have ever seen a demon, so world models including demons have never predicted any of their sense data. I doubt these people would be willing to make any bets about when or if a demon will show up. Many of us would be glad to make a market on predictions made by tabletop games about physical phenomena.
Yeah, I would say the groups in question are notionally, aspirationally rational and I would hate for the takeaway to be disengagement from principles of critical thinking and skeptical thinking writ large.
Which, to me, raises the fascinating question of what does a "good" version look like, of groups and group dynamics centered around a shared interest in best practices associated with critical thinking?
At a first impression, I think maybe these virtues (which are real!) disappear into the background of other, more applied specializations, whether professions, hobbies, backyard family barbecues.
It would seem like the quintessential Rationalist institution to congregate around is the prediction market. Status in the community has to be derived from a history of making good bets (PnL as a %, not in absolute terms). And the sense of community would come from (measurably) more rational people teaching (measurably) less rational people how to be more rational.
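As a rough sketch of what "measurably" could look like, here's a tiny example that ranks forecasters with a Brier score over a made-up betting history (the scoring rule, names, and numbers are all just illustrative assumptions, not something the community necessarily uses this way):

    # Hypothetical sketch: rank forecasters by Brier score, the mean squared
    # error between stated probabilities and 0/1 outcomes. Lower is better.

    def brier(record):
        return sum((p - outcome) ** 2 for p, outcome in record) / len(record)

    history = {
        "alice": [(0.9, 1), (0.2, 0), (0.7, 1)],   # made-up bets: (probability, outcome)
        "bob":   [(0.6, 1), (0.5, 0), (0.5, 1)],
    }

    for name, record in sorted(history.items(), key=lambda kv: brier(kv[1])):
        print(name, round(brier(record), 3))       # alice 0.047, bob 0.22

Like PnL in a well-run market, a proper scoring rule rewards calibration rather than confidence, which is roughly the property status would need to track.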
The founder of LessWrong / The Rationalist movement would absolutely agree with you here, and has written numerous fanfics about a hypothetical alien society ("Dath Ilan") where those are fairly central.
The article is talking about cults that arose out of the rationalist social milieu, which is a separate question from whether the cult's beliefs qualify as "rationalist" in some sense (a question that usually has no objective answer anyway).
>so world models including demons have never predicted any of their sense data.
There's a reason they call themselves "rationalists" instead of empiricists or positivists. They perfectly inverted Hume ("reason is, and ought only to be the slave of the passions")
These kinds of harebrained views aren't an accident but a product of rationalism. The idea that intellect is quasi-infinite and that the world can be mirrored in the mind does not run contrary to rationalism; it is the most extreme form of rationalism taken to its conclusion, and of course deeply religious, hence the constant fantasies about AI divinities and singularities.
What's the scale of this thing in the SF Bay Area? 100 people? 1000 people? 10,000 people?
I think I found the problem!
I actually don't mind Yudkowsky as an individual - I think he is almost always wrong and undeservedly arrogant, but mostly sincere. Yet treating him as an AI researcher and serious philosopher (as opposed to a sci-fi essayist and self-help writer) is the kind of slippery foundation that less scrupulous people can build cults from. (See also Maharishi Mahesh Yogi and related trends - often it is just a bit of spiritual goofiness, as with David Lynch; sometimes you get a Charles Manson.) How has he fared in the fields of philosophy and AI research in terms of peer review? Is there some kind of roundup or survey about this?
EY and MIRI as a whole have largely failed to produce anything which even reaches the point of being peer reviewable. He does not have any formal education and is uninterested in learning how to navigate academia.
Don't forget the biggest sci-fi guy turned cult leader of all, L. Ron Hubbard.
I don't think Yudkowsky is at all like L. Ron Hubbard. Hubbard was insane and pure evil. Yudkowsky seems like a decent and basically reasonable guy; he's just kind of a blowhard and he's wrong about the science.
L. Ron Hubbard is more like the Zizians.
I don't have a horse in the battle but could you provide a few examples where he was wrong?
Here's one: Yudkowsky has been confidently asserting (for years) that AI will drive humanity extinct because it will learn how to make nanomachines using "strong" covalent bonds rather than the "weak" van der Waals forces used by biological systems like proteins. I'm certain that knowledgeable biologists/physicists have tried to explain to him why this belief is basically nonsense, but he just keeps repeating it. Heck, there's even a LessWrong post that lays it out quite well [1]. This points to a general disregard for detailed knowledge of existing things and a preference for "first principles" beliefs, no matter how wrong they are.
[1] https://www.lesswrong.com/posts/8viKzSrYhb6EFk6wg/why-yudkow...
Dear god. The linked article is a good takedown of this "idea," but I would like to pile on: biological systems are in fact extremely good at covalent chemistry, usually via extraordinarily powerful nanomachines called "enzymes". No, they are (usually) not building totally rigid condensed matter structures, but .. why would they? Why would that be better?
I'm reminded of a silly social science article I read, quite a long time ago. It suggested that physicists only like to study condensed matter crystals because physics is a male-dominated field, and crystals are hard rocks, and, um ... men like to think about their rock-hard penises, I guess. Now, this hypothesis obviously does not survive cursory inspection - if we're gendering natural phenomena studied by physicists, are waves male? Are fluid dynamics male?
However, Mr. Yudkowsky's weird hangups here around rigidity and hardness have me adjusting my priors.
> One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger.
I've been there myself.
> And without the steadying influence of some kind of external goal you either achieve or don’t achieve, your beliefs can get arbitrarily disconnected from reality — which is very dangerous if you’re going to act on them.
I think this and the two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory.
> The Zizians and researchers at Leverage Research both felt like heroes, like some of the most important people who had ever lived. Of course, these groups couldn’t conjure up a literal Dark Lord to fight. But they could imbue everything with a profound sense of meaning. All the minor details of their lives felt like they had the fate of humanity or all sentient life as the stakes. Even the guilt and martyrdom could be perversely appealing: you could know that you’re the kind of person who would sacrifice everything for your beliefs.
This helps me understand what people mean by "meaning". A sense that their life and actions actually matter. I've always struggled to understand this issue but this helps make it concrete, the kind of thing people must be looking for.
> One of my interviewees speculated that rationalists aren’t actually any more dysfunctional than anywhere else; we’re just more interestingly dysfunctional.
"We're"? The author is a rationalist too? That would definitely explain why this article is so damned long. Why are rationalists not able to write less? It sounds like a joke but this is seriously a thing. [EDIT: Various people further down in the comments are saying it's amphetamines and yes, I should have known that from my own experience. That's exactly what it is.]
> Consider talking about “ethical injunctions:” things you shouldn’t do even if you have a really good argument that you should do them. (Like murder.)
This kind of defeats the purpose, doesn't it? Also, this is nowhere justified in the article, just added on as the very last sentence.
> Why are rationalists not able to write less?
The 'less' in LessWrong very much does not refer to _volume_.
>I think this and the two paragraphs preceding it are excellent arguments for philosophical pragmatism and empiricism. It's strange to me that the community would not have already converged on that after all their obsessions with decision theory
They did! One of the great ironies inside the community is that they are, and openly admit to being, empiricists. They reject most of the French/European rationalist canon.
>Why are rationalists not able to write less?
The answer is a lot more boring. They like to write and they like to think. They also think by writing. It is written as much for themselves as for anyone else, probably more.
Purity Spirals + Cheap Talk = irrational rationalists
> Purity Spirals
This is an interesting idea (phenomenon?):
> A purity spiral is a theory which argues for the existence of a form of groupthink in which it becomes more beneficial to hold certain views than to not hold them, and more extreme views are rewarded while expressing doubt, nuance, or moderation is punished (a process sometimes called "moral outbidding").[1] It is argued that this feedback loop leads to members competing to demonstrate the zealotry or purity of their views.[2][3]
* https://en.wikipedia.org/wiki/Purity_spiral
Certainly something they're aware of - the same concept was discussed as early as 2007 on LessWrong under the name "evaporative cooling of group beliefs":
https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...
> Eliezer Yudkowsky, shows little interest in running one. He has consistently been distant from and uninvolved in rationalist community-building efforts, from Benton House (the first rationalist group house) to today’s Lightcone Infrastructure (which hosts LessWrong, an online forum, and Lighthaven, a conference center). He surrounds himself with people who disagree with him, discourages social isolation.
Ummm, EY literally has a semi-permanent office in Lighthaven (at least until recently) and routinely blocks people on Twitter as a matter of course.
Blocking people on Twitter doesn't necessarily imply intolerance of people who disagree with you. People often block for different reasons than disagreement.
On a recent Mindscape podcast Sean Carroll mentioned that rationalists are rational about everything except accusations that they're not being rational.
I mean you have to admit that that's a bit of a kafkatrap
This just sounds like any other community based around a niche interest.
From kink to rockhounding, there are always people who base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community.
> base their identity on being a broker of status or power because they themselves are a powerless outsider once removed from the community
Who would ever maintain power when removed from their community? You mean to say they base their identity on the awareness of the power they possess within a certain group?
It's really worth reading up on the techniques from Large Group Awareness Training so that you can recognize them when they pop up.
Once you see them listed (social pressure, sleep deprivation, control of drinking/bathroom, control of language/terminology, long exhausting activities, financial buy in, etc) and see where they've been used in cults and other cult adjacent things it's a little bit of a warning signal when you run across them IRL.
Related, the BITE model of authoritarian control is also a useful framework for identifying malignant group behavior. It's amazing how consistent these are across groups and cultures, from Mao's inner circle to NXIVM and on.
https://freedomofmind.com/cult-mind-control/bite-model-pdf-d...
What is the base rate here? Hard to know the scope of the problem without knowing how many non-rationalists (is that even a coherent group of people?) end up forming weird cults, as a comparison. My impression is that crazy beliefs are common amongst everybody.
A much simpler theory is that rationalists are mostly normal people, and normal people tend to form cults.
I was wondering about this too. You could also say it's a Sturgeon's law question.
They do note at the beginning of the article that many, if not most such groups have reasonably normal dynamics, for what it's worth. But I think there's a legitimate question of whether we ought to expect groups centered on rational thinking to be better able to escape group dynamics we associate with irrationality.
The only way you can hope to get a gathering of nothing but paragons of critical thinking and skepticism is if the gathering has an entrance exam in critical thinking and skepticism (and a pretty tough one, if they are to be paragons). Or else, it's invitation-only.
>There’s a lot to like about the Rationalist community
Like what? I never saw anything worthwhile there...
> If someone is in a group that is heading towards dysfunctionality, try to maintain your relationship with them; don’t attack them or make them defend the group. Let them have normal conversations with you.
This is such an important skill we should all have. I learned this best from watching the documentary Behind the Curve, about flat earthers, and have applied it to my best friend diving into the Tartarian conspiracy theory.
I really like your suggestions, even for non-rationalists.
Great read.
I remember going to college and some graduate student, himself a philosophy major, telling me that nobody is as big a jerk as philosophy majors.
I don't know if it is really true, but it certainly felt true that folks looking for deeper answers about a better way to think about things end up finding what they believe is the "right" way and that tends to lead to branding other options as "wrong".
A search for certainty always seems to be defined or guided by people dealing with their own issues and experiences that they can't explain. It gets tribal and very personal and those kind of things become dark rabbit holes.
----
>Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.”
Reminds me of some members of our government and conspiracy theorists who "research" and encourage people to figure it out themselves ...
One thing I'm having trouble with: The article assumes the reader knows some history about the rationalists.
I listened to a podcast that covered some of these topics, so I'm not lost; but I think someone who's new to this topic will be very, very, confused.
Here you go. It has like 10 chapters, so keep going once you reach the end.
https://aiascendant.substack.com/p/extropias-children-chapte...
I'm curious, what was the podcast episode?
Stuff You Should Know, "Who Are the Zizians?"
I can't find a direct link, but if you search for "Who are the Zizians?", you'll find it at https://stuffyoushouldknow.com/episodes/
Because humans like people who promise answers.
Boring as it is, this is the answer. It's just more religion.
Funnily enough, the actress who voiced this line is a Scientologist:
https://en.wikipedia.org/wiki/Nancy_Cartwright#Personal_life
I think they were making fun of the "Moonies" so she was probably able to rationalize it. Pretty sure Isaac Hayes quit South Park over their making fun of scientologists.
I read recently that he suffered a serious medical event around that time and it was actually cult members speaking on his behalf that withdrew him from the show.
I think it was a relative of his claiming this.
>I read recently that he suffered a serious medical event around that time and it was actually cult members speaking on his behalf that withdrew him from the show.
I saw him perform after he left the show and several months before he passed. He looked pretty unhealthy, and I'm glad I had the chance to see him before that happened. He obviously was having some medical issues, but didn't discuss them during his performance.
When I was looking for a group in my area to meditate with, it was tough finding one that didn't appear to be a cult. And yet I think Buddhist meditation is the best tool for personal growth humanity has ever devised. Maybe the proliferation of cults is a sign that Yudkowsky was on to something.
None of them are practicing Buddhist meditation though, same for the "personal growth" oriented meditation styles.
Buddhist meditation exists only in the context of the Four Noble Truths and the rest of the Buddha's Dhamma. Throwing them away means it stops being Buddhist.
I disagree, but we'd be arguing semantics. In any case, the point still stands: you can just as easily argue that these rationalist offshoots aren't really Rationalist.
I'm not familiar enough with their definitions to argue about them, but meditations techniques predate Buddhism. In fact, the Buddha himself learned them from two teachers before developing his own path. Also, the style of meditation taught nowadays (accepting non-reactive awareness) is not how it's described in the Pali Canon.
This isn't just "must come from the Champagne region of France, otherwise it's sparkling wine" bickering, but actual widespread misconceptions of what counts as Buddhism. Many ideas floating around in Western discourse are basically German Romanticism wrapped in Orientalist packaging, matching neither Theravada nor Mahayana teachings (for example, see the Fake Buddha Quotes project).
So the semantics are extremely important when it comes to spiritual matters. Flip one or two words and the whole metaphysical model goes in a completely different direction. Even translations add distortions, so there’s no room to be careless.
Reading the other comments makes me wonder if they just misread the sign and they were looking for the rationalizationist meeting.
> And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves.
It's mostly just people who aren't very experienced talking about and dealing honestly with their emotions, no?
I mean, suppose someone is busy achieving and, at the same time, proficient in balancing work with emotional life, dealing head-on with interpersonal conflicts, facing change, feeling and acknowledging hurt, knowing their emotional hangups, perhaps seeing a therapist, perhaps even occasionally putting personal needs ahead of career... :)
Tell that person they can get a marginal (or even substantial) improvement from some rationalist cult practice. Their first question is going to be, "What's the catch?" Because at the very least they'll suspect that adjusting their work/life balance will bring a sizeable amount of stress and consequent decrease in their emotional well-being. And if the pitch is that this rationalist practice works equally well at improving emotional well-being, that smells to them. They already know they didn't logic themselves into their current set of emotional issues, and they are highly unlikely to logic themselves out of them. So there's not much value here to offset the creepy vibes of the pitch. (And again-- being in touch with your emotions means quicker and deeper awareness of creepy vibes!)
Now, take a person whose unexplored emotional well-being tacitly depends on achievement. Even a marginal improvement in achievement could bring perceptible positive changes in their holistic selves! And you can step through a well-specified, logical process to achieve change? Sign HN up!
Ted Talks became religions, podcasts sermons
I think everyone should be familiar with hermeticism because its various mystery cults have been with us since Hermes Trismegistus laid down its principles in Ancient Egypt on the Emerald Tablets. It was where early science-like practices such as alchemy originated, but that wheat got separated from the chaff during the Renaissance, and the more coercive control aspects remained. That part, how to get people to follow you and fight for you and maintain a leadership hierarchy, is extremely old technology.
They essentially use a glitch in human psychology that gets exploited over and over again. The glitch is a more generalized version of the advance-fee scam. You tell people that if we just believe something can be true, it can be true. Then we discriminate against people who don't believe in that thing. We then say only the leader(s) can make that thing true, but first you must give them all your power and support so they can fight the people who don't believe in that thing.
Unfortunately, reality is much messier than the cult leaders would have you believe. Leaders often don't have their followers' best interests at heart, especially those who follow blindly, and often lack even the ability to make true the thing everyone wants to be true; instead they use it as a white rabbit that everyone in the cult has to chase after forever.
This is a great article.
There's so much in these group dynamics that repeats group dynamics of communist extremists of the 70s. A group that has found a 'better' way of life, all you have to do is believe in the group's beliefs.
Compare this part from OP:
>Here is a sampling of answers from people in and close to dysfunctional groups: “We spent all our time talking about philosophy and psychology and human social dynamics, often within the group.” “Really tense ten-hour conversations about whether, when you ate the last chip, that was a signal that you were intending to let down your comrades in selfish ways in the future.”
This reeks of Marxist-Leninist self-criticism, where everybody tried to one-up each other in how ideologically pure they were. The most extreme outgrowth of self-criticism was when the Japanese United Red Army beat its own members to death as part of self-criticism sessions.
>'These violent beatings ultimately saw the death of 12 members of the URA who had been deemed not sufficiently revolutionary.' https://en.wikipedia.org/wiki/United_Red_Army
History doesn't repeat, but it rhymes.
It is so strange that this article would hijack the term "rationalist" to mean this extraordinarily specific set of people "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences, a set of essays about how to think more rationally".
As a counter example (with many many more people) is the Indian Rationalist Association (https://en.wikipedia.org/wiki/Indian_Rationalist_Association) to "promote scientific skepticism and critique supernatural claims". This isn't a cult of any kind, even if the members broadly agree about what it means to be rational with the set above.
I think rationalist cults work exactly the same as religious cults. They promise to have all the answers, to attract the vulnerable. The answers are convoluted and inscrutable, so a leader/prophet interprets them. And doom is nigh, providing the motivation and fear to hold things together.
It's the same wolf in another sheep's clothing.
And people who wouldn't join a religious cult -- e.g. because religious cults are too easy to recognize since we're all familiar with them, or because religions hate anything unusual about gender -- can join a rationalist cult instead.
Pertinent Twitter comment:
"Rationalism is such an insane name for a school of thought. Like calling your ideology correctism or winsargumentism"
https://x.com/growing_daniel/status/1893554844725616666
IIUC the name in its current sense was sort of an accident. Yudkowsky originally used the term to mean "someone who succeeds at thinking and acting rationally" (so "correctism" or "winsargumentism" would have been about equally good), and then talked about the idea of "aspiring rationalists" as a community narrowly focused on developing a sort of engineering discipline that would study the scientific principles of how to be right in full generality and put them into practice. Then the community grew and mutated into a broader social milieu that was only sort of about that, and people needed a name for it, and "rationalists" was already there, so that became the name through common usage. It definitely has certain awkwardnesses.
It's not particularly unusual, though. See the various kinds of 'Realist' groups, for example, which have a pretty wild range of outlooks. (both Realist and Rationalist also have the neat built-in shield of being able to say "look, I don't particularly like the conclusions I'm coming to, they just are what they are", so it's a convenient framing for unpalatable beliefs)
To be honest I don't understand that objection. If you strip it from all its culty sociological effects, one of the original ideas of rationalism was to try to use logical reasoning and statistical techniques to explicitly avoid the pitfalls of known cognitive biases. Given that foundational tenet, "rationalism" seems like an extremely appropriate moniker.
I fully accept that the rationalist community may have morphed into something far beyond that original tenet, but I think rationalism just describes the approach, not that it's the "one true philosophy".
That it refers to a different but confusingly related concept in philosophy is a real downside of the name.
That point seems fair enough to me, as I'm not familiar with the specifics and history of the related concept in philosophy. But that seems different than the objection that calling yourself "rationalist" somehow implies you think that you have the "1 true answer" to the world's problems.
I'm going to start a group called "Mentally Healthy People". We use data, logical thinking, and informal peer review. If you disagree with us, our first question will be "what's wrong with mental health?"
But, to be frank, "Mentally Healthy People" fully acknowledge and accept their emotions, and indeed understand that emotions are the fundamental way that natural selection implements motivation.
Calling yourself "rationalist" doesn't inherently mean that you think you're better than everyone else, or somehow infallible. To me it just means your specific approach to problem solving.
Re your 2nd paragraph: if so, it's poor naming:
1. The name "Rationalist" does not reveal anything about a specific approach to problem solving.
2. What are some obvious possible names for people who do not subscribe to a given program that's called "Rationalist"? Just whatever comes to mind.
No pressure to actually type a response to #2! It's obviously a loaded question.
So... Psychiatry? Do you think psychiatrists are particularly prone to starting cults? Do you think learning about psychiatry makes you at risk for cult-like behavior?
No. I have no beef with psychology or psychiatry. They're doing good work as far as I can tell. I am poking fun at people who take "rationality" and turn it into a brand name.
Why is "you can work to avoid cognitive biases" more ridiculous than "you can work to improve your mental health"?
I'm feeling a little frustrated by the derail. My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality. The small group might well go on to claim that people outside the group lack rationality. That would be absurd. The mental health profession do not claim to be immune from mental illness themselves, they do not claim that people outside their circle are mentally unhealthy, and they do not claim that their particular treatment is necessary for mental health.
I guess it's possible you might be doing some deep ironic thing by providing a seemingly sincere example of what I'm complaining about. If so it was over my head but in that case I withdraw "derail"!
> My complaint is about some small group claiming to have a monopoly on a normal human faculty, in this case rationality.
"Rationalists" don't claim a monopoly any more than Psychiatry does.
> The small group might well go on to claim that people outside the group lack rationality.
Again, something that psychiatry is quite noteworthy about: the entire point of the profession is to tell non-professionals that they're doing Emotionally Healthy wrong.
> The mental health profession do not claim to be immune from mental illness themselves,
Rationalist don't claim to be immune to irrationality, and this is in fact repeatedly emphasized: numerous cornerstone articles are about "wow, I really fucked up at this Rationality thing", including articles by Eliezer.
> they do not claim that people outside their circle are mentally unhealthy
... what?
So if I go to a psychiatrist, you think they're gonna say I'm FINE? No matter what?
Have you ever heard of "involuntary commitment"?
> and they do not claim that their particular treatment is necessary for mental health.
Again, this is about as true as it is for rationalists.
Right and to your point, I would say you can distinguish (1) "objective" in the sense of relying on mind-independent data from (2) absolute knowledge, which treats subjects like closed conversations. And you can make similar caveats for "rational".
You can be rational and objective about a given topic without it meaning that the conversation is closed, or that all knowledge has been found. So I'm certainly not a fan of cult dynamics, but I think it's easy to throw an unfair charge at these groups, that their interest in the topic necessitates an absolutist disposition.
Objectivism?
What do you make of the word “science” then?
Great names! Are you using them, or are they available? /s
Is it really that surprising that a group of humans who think they have some special understanding of reality compared to others tend to separate and isolate themselves until they fall into an unguided self-reinforcing cycle?
I'd have thought that would be obvious since it's the history of many religions (which seem to just be cults that survived the bottleneck effect to grow until they reached a sustainable population).
In other words, humans are wired for tribalism, so don't be surprised when they start forming tribes...
> And yet, the rationalist community has hosted perhaps half a dozen small groups with very strange beliefs (including two separate groups that wound up interacting with demons). Some — which I won’t name in this article for privacy reasons — seem to have caused no harm but bad takes.
So there are six questionable (but harmless) groups, and then later the article names three of them as more serious. That doesn't seem like "many" to me.
I wonder what percentage of all cults are the rationalist ones.
The thing with identifying yourself with an “ism” (e.g. rationalism, feminism, socialism) is that, even though you might not want that, you’re inherently positioning yourself in a reductionist and inaccurate corner of the world. Or in other words you’re shielding yourself in a comfortable, but wrong, bubble.
To call yourself an -ist means that you consider yourself to give more importance to that concept than other people do: you're more rational than most, or care more about women than most, or care more about social issues than most. That is wrong both because there are many irrational rationalists and because there are many rational people who don't associate with the group (same with the other isms). The thing is that the very act of creating the label and associating yourself with it will ruin the very thing you strive for. You will attract a bunch of weirdos who want to be associated with the label without having to do the work it requires, and you will become estranged from those who prefer to walk the walk instead of talking the talk. In both ways, you have failed.
The fact is that every ism is a specific set of thoughts and ideas that is not generic, and not broad enough to carry the weight of its name. Being a feminist does not mean you care about women; it means you are tied to a specific set of ideologies and behaviours that may or may not advance the quality of life of women in the modern world, and are definitely not the only way to achieve that goal (hence the inaccuracy of the label).
They are literally the "ackchyually" meme made flesh.
Over rationalizing is paperclip maximizing
Because purporting to be extra-rational about decisions is effective nerd-bait.
Isn't this entirely to be expected? The people who dominate groups like these are the ones who put the most time and effort into them, and no sane person who appreciates both the value and the limitations of rational thinking is going to see as much value in a rationalist group, and devote as much time to it, as the kind of people who are attracted to the cultish aspect of achieving truth and power through pure thought. There's way more value there if you're looking to indulge in, or exploit, a cult-like spiral into shared fantasy than if you're just looking to sharpen your logical reasoning.
So I like Steven Pinker's book Rationality; to me it seems quite straightforward.
But I have never been able to get into the Rationalist stuff, to me it’s all very meandering and peripheral and focused on… I don’t know what.
Is it just me?
Depends very much on what you're hoping to get out of it. There isn't really one "rationalist" thing at this point, it's now a whole bunch of adjacent social groups with overlapping-but-distinct goals and interests.
https://www.lesswrong.com/highlights this is the ostensible "Core Highlights", curated by major members of the community, and I believe Eliezer would endorse it.
If you don't get anything out of reading the list itself, then you're probably not going to get anything out of the rest of the community either.
If you poke around and find a few neat ideas there, you'll probably find a few other neat ideas.
For some people, though, this is "wait, holy shit, you can just DO that? And it WORKS?", in which case probably read all of this but then also go find a few other sources to counter-balance it.
(In particular, probably 90% of the useful insights already exist elsewhere in philosophy, and often more rigorously discussed - LessWrong will teach you the skeleton, the general sense of "what rationality can do", but you need to go elsewhere if you want to actually build up the muscles)
This is a very interesting article. It's surprising though to see it not use the term "certainty" at all. (It only uses "certain" in a couple instances of like "a certain X" and one use of "certainly" for generic emphasis.)
Most of what the article says makes sense, but it seems to sidestep the issue that a major feature distinguishing the "good" rationalists from the "bad" is that the bad ones are willing to take very extreme actions in support of their beliefs. This is not coincidentally something that distinguishes good believers in various religions or philosophies from bad believers (e.g., people who say God told them to kill people). This is also lurking in the background of discussion of those who "muddled through" or "did the best they could". The difference is not so much in the beliefs as in the willingness to act on them, and that willingness is in turn largely driven by certainty.
I think it's plausible there is a special dimension to rationalism that may exacerbate this, namely a tendency of rationalists to feel especially "proud" of their beliefs because of their meta-belief that they derived their beliefs rationally. Just like an amateur painter may give themselves extra brownie points because no one taught them how to paint, my impression of rationalists is that they sometimes give themselves an extra pat on the back for "pulling themselves up by their bootstraps" in the sense of not relying on faith or similar "crutches" to determine the best course of action. This can paradoxically increase their certainty in their beliefs when actually it's often a warning that those beliefs may be inadequately tested against reality.
I always find it a bit odd that people who profess to be rationalists can propose or perform various extreme acts, because it seems to me that one of the strongest and most useful rational beliefs is that your knowledge is incomplete and your beliefs are almost surely not as well-grounded as you think they are. (Certainly no less an exponent of reason than Socrates was well aware of this.) This on its own seems sufficient to me to override some of the most absurd "rationalist" conclusions (like that you should at all costs become rich or fix Brent Dill's depression). It's all the more so when you combine it with some pretty common-sense forecasts of what might happen if you're wrong. (As in, if you devote your life to curing Brent Dill's depression on the theory that he will then save the world, and he turns out to be just an ordinary guy or worse, you wasted your life curing one person's depression when you yourself could have done more good with your own abilities, just by volunteering at a soup kitchen or something.) It's never made sense to me that self-described rationalists could seriously consider some of these possible courses of action in this light.
Sort of related is the claim at the end that rationalists "want to do things differently from the society around them". It's unclear why this would be a rational desire. It might be rational in a sense to say you want to avoid being influenced by the society around you, but that's different from affirmatively wanting to differ from it. This again suggests a sort of "psychological greed" to reach a level of certainty that allows you to confidently, radically diverge from society, rather than accepting that you may never reach a level of certainty that allows you to make such deviations on a truly rational basis.
It's also interesting to me that the article focuses a lot not on rationalist belief per se, but on the logistics and practices of rationalist communities. This in itself seems like a warning that the rationality of rationalism is not all it's cracked up to be. It's sort of like, you can try to think as logically as possible, but if you hit yourself in the head with a hammer every day you're likely going to make mistakes anyway. And some of the "high demand" practices mentioned seem like slightly less severe psychological versions of that.
The premise of the article might just be nonsense.
How many rationalists are there in the world? Of course it depends on what you mean by rationalist, but I'd guess there are, at the very least, several tens of thousands of people in the world who either consider themselves rationalists or are involved with the rationalist community.
With such numbers, is it surprising that there would be half a dozen or so small cults?
There are certainly some cult-like aspects to certain parts of the rationalist community, and I think that those are interesting to explore, but come on, this article doesn't even bother to establish that its title is justified.
To the extent that rationalism does have some cult-like aspects, I think a lot of it is because it tends to attract smart people who are deficient in the ability to use avenues other than abstract thinking to comprehend reality and who enjoy making loosely justified imaginative leaps of thought while overestimating their own abilities to model reality. The fact that a huge fraction of rationalists are sci-fi fans is not a coincidence.
But again, one should first establish that there is anything actually unusual about the number of cults in the rationalist community. Otherwise this is rather silly.
I find it ironic that the question is asked unempirically. Where is the data stating there are many more than before? Start there, then go down the rabbit hole. Otherwise, you're concluding on something that may not be true, and trying to rationalize the answer, just as a cultist does.
Oh come on.
Anyone who's ever seen the sky knows it's blue. Anyone who's spent much time around rationalism knows the premise of this article is real. It would make zero sense to ban talking about a serious and obvious problem in their community until some double-blind, peer-reviewed data can be gathered.
It would be what they call an "isolated demand for rigor".
Rationalism is the belief that reason is the primary path to knowledge, as opposed to, say, the observation that is championed by empiricism. It's a belief system that prioritises imposing its tenets on reality rather than asking reality what reality's tenets are. From the outset, it's inherently cult-like.
Rationalists, in this case, refers specifically to the community clustered around LessWrong, which explicitly and repeatedly emphasizes points like "you can't claim to have a well grounded belief if you don't actually have empirical evidence for it" (https://www.lesswrong.com/w/evidence for a quick overview of some of the basic posts on that topic)
To quote one of the core foundational articles: "Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly." (https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can...)
One can argue how well the community absorbs the lesson, but this certainly seems to be a much higher standard than average.
That is the definition of “rationalism” as proposed by philosophers like Descartes and Kant, but I don’t think that is an accurate representation of the type of “rationalism” this article describes.
This article describes "rationalism" as laid out on LessWrong and in the Sequences by Eliezer Yudkowsky. A good amount of it is based on empirical findings from psychology and behavioral science. It's called "rationalism" because it seeks to correct common reasoning heuristics that purportedly lead to incorrect reasoning, not because it stands in contrast to empiricism.
Agreed, I appreciate that there's a conceptual distinction between the philosophical versions of rationalism and empiricism, but what's being talked about here is a conception that (again, at least notionally) is interested in and compatible with both.
I am pretty sure many of the LessWrong posts are about how to understand the meaning of different types of data and are very much about examining, developing, criticizing a rich variety of empirical attitudes.
I was going to write a similar comment as op, so permit me to defend it:
Many of their "beliefs" - super-duper intelligence, doom - are clearly not believed by the market; observing the market is a kind of empiricism, and it's completely discounted by the LW-ers.
But you cannot have reason without substantial proof of how things behave by observing them in the first place. Reason is simply a logical approach to yes and no questions where you factually know, from observation of past events, how things work. And therefore you can simulate an outcome by the exercise of reasoning applied onto a situation that you have not yet observed and come to a logical outcome, given the set of rules and presumptions.
One of the hallmarks of cults — if not a necessary element — is that they tend to separate their members from the outside society. Rationalism doesn't directly encourage this, but it does facilitate it in a couple of ways:
- Idiosyncratic language used to describe ordinary things ("lightcone" instead of "future", "prior" instead of "belief" or "prejudice", etc)
- Disdain for competing belief systems
- Insistence on a certain shared interpretation of things most people don't care about (the "many-worlds interpretation" of quantum uncertainty, self-improving artificial intelligence, veganism, etc)
- I'm pretty sure polyamory makes the list somehow, just because it isn't how the vast majority of people want to date. In principle it's a private lifestyle choice, but it's obviously a community value here.
So this creates an opportunity for cult-like dynamics to occur where people adjust themselves according to their interactions within the community but not interactions outside the community. And this could seem — to the members — like the beliefs themselves are the problem, but from a sociological perspective, it might really be the inflexible way they diverge from mainstream society.
Trying to find life’s answers by giving over your self-authority to another individual or group’s philosophy is not rational. Submitting oneself to an authority whose role is telling people what’s best in life will always attract the type of people looking to control, take advantage of, and traumatize others.
Something like 15 years ago I once went to a Less Wrong/Overcoming Bias meetup in my town after being a reader of Yudkowsky's blog for some years. I was like, Bayesian Conspiracy, cool, right?
The group was weird and involved quite a lot of creepy oversharing. I didn't return.
I was on LW when it emerged from the OB blog, and back then it was an interesting and engaging group, though even then there were like 5 “major” contributors - most of whom had no coherent academic or commercial success.
As soon as those “sequences” were being developed, it was clearly turning into a cult around EY, which I never understood and still don’t.
This article did a good job of covering the history since and was really well written.
Water finds its own level
See also Rational Magic: Why a Silicon Valley culture that was once obsessed with reason is going woo (2023)
https://www.thenewatlantis.com/publications/rational-magic
and its discussion on HN: https://news.ycombinator.com/item?id=35961817
Narcissism and Elitism justified by material wealth.
What else?
Rationalism isn't any more "correct" and "proper" thinking than Christianity and Buddhism claim to espouse.
Perhaps I will get downvoted to death again for saying so, but the obvious answer is because the name "rationalist" is structurally indistinguishable from the name "scientology" or "the illuminati". You attract people who are desperate for an authority to appeal to, but for whatever reason are no longer affiliated with the church of their youth. Even a rationalist movement which held nothing as dogma would attract people seeking dogma, and dogma would form.
The article begins by saying the rationalist community was "drawn together by AI researcher Eliezer Yudkowsky’s blog post series The Sequences". Obviously the article intends to make the case that this is a cult, but it's already done with the argument at this point.
> for whatever reason are no longer affiliated with the church of their youth.
This is the Internet, you're allowed to say "they are obsessed with unlimited drugs and weird sex things, far beyond what even the generally liberal society tolerates".
I'm increasingly convinced that every other part of "Rationalism" is just distraction or justification for those; certainly there's a conscious decision to minimize talking about this part on the Internet.
I strongly suspect there is heterogeneity here. An outer party of "genuine" rationalists who believe that learning to be a spreadsheet or whatever is going to let them save humanity, and an inner party who use the community to conceal some absolute shenanigans.
No, I really mean atheists that crave religion.
> Obviously the article intends to make the case that this is a cult
The author is a self-identified rationalist. This is explicitly established in the second sentence of the article. Given that, why in the world would you think they're trying to claim the whole movement is a cult?
Obviously you and I have very different definitions of "obvious"
When I read the article in its entirety, I was pretty disappointed in its top-level introspection.
It seems to not be true, but I still maintain that it was obvious. Sometimes people don't pick the low-hanging fruit.
In fact, I'd go a step further and note the similarity with organized religion. People have a tendency to organize and dogmatize everything. The problem with religion is rarely the core ideas, but always the desire to use it as a basis for authority, to turn it dogmatic and ultimately form a power structure.
And I say this as a Christian. I often think that becoming a state religion was the worst thing that ever happened to Christianity, or any religion, because then it unavoidably becomes a tool for power and authority.
And doing the same with other ideas or ideologies is no different. Look at what happened to communism, capitalism, or almost any other secular idea you can think of: the moment it becomes established, accepted, and official, the corruption sets in.
I do not see any reason for you to get downvoted.
I agree that the term "rationalist" would appeal to many people, and the obvious need to belong to a group plays a huge role.
There are a lot of rationalists in this community. Pointing out that the entire thing is a cult attracts downvotes from people who wish to, for instance, avoid being identified with the offshoots.
No, the downvotes are because rationalism isn't a cult and people take offense to being blatantly insulted. This article is about cults that are rationalism-adjacent, it's not claiming that rationalism is itself a cult.
That's almost word for word what I said...
You're right, I misread you.
Does anyone else feel that “rationality” is the same as clinical anxiety?
I’m hyper rational when I don’t take my meds. I’m also insane. But all of my thoughts and actions follow a carefully thought out sequence.
Harpers did an amazing cover story on these freaks in 2015 https://harpers.org/archive/2015/01/come-with-us-if-you-want...
God is dead! God remains dead! And we have killed him! How do we console ourselves, the murderers of all murderers? The holiest and mightiest that the world has hitherto possessed has bled to death under our knives.
The average teenager who reads Nietzsche's proclamation on the death of God thinks of it as an accomplishment: finally we got rid of those thousands-of-years-old and thereby severely outdated ideas and rules. Somewhere along the march to maturity they may start to wonder whether what replaced those old rules and ideas was a good replacement, but most of them never come to the realisation that there were rebellious teenagers during all those centuries when the idea of a supreme being, to which or whom even the mightiest had to answer, still held sway. Nietzsche saw the peril in letting go of that cultural safety valve and warned of what might come next.
We are currently living in the world he warned us about and for that I, atheist as I am, am partly responsible. The question to be answered here is whether it is possible to regain the benefits of the old order without getting back the obvious excesses, the abuse, the sanctimoniousness and all the other abuses of power and privilege which were responsible for turning people away from that path.
> The Sequences make certain implicit promises. ...
Some meta-commentary first... How would one go about testing if this is true? If true, then such "promises" are not written down -- they are implied. So one would need to ask at least two questions: 1. Did the author intend to make these implicit promises? 2. What portion of readers perceive them as such?
> ... There is an art of thinking better ...
First, this isn't _implicit_ in the Sequences; it is stated directly. In any case, the quote does not constitute a promise: so far, it is a claim. And yes, rationalists do think there are better and worse ways of thinking, in the sense of "what are more effective ways of thinking that will help me accomplish my goals?"
> ..., and we’ve figured it out.
Codswallop. This is not a message of the rationality movement -- quite the opposite. We share what we've learned and why we believe it to be true, but we don't claim we've figured it all out. It is better to remain curious.
> If you learn it, you can solve all your problems...
Bollocks. This is not claimed implicitly or explicitly. Besides, some problems are intractable.
> ... become brilliant and hardworking and successful and happy ...
Rubbish.
> ..., and be one of the small elite shaping not only society but the entire future of humanity.
Bunk.
For those who haven't read it, I'll offer a relevant extended quote from Yudkowsky's 2009 "Go Forth and Create the Art!" [1], the last post of the Sequences:
## Excerpt from Go Forth and Create the Art
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
When you try to develop part of the human art of thinking... then you are doing something not too dissimilar to what I was doing over in Artificial Intelligence. You will be tempted by fake explanations of the mind, fake accounts of causality, mysterious holy words, and the amazing idea that solves everything.
It's not that the particular, epistemic, fake-detecting methods that I use, are so good for every particular problem; but they seem like they might be helpful for discriminating good and bad systems of thinking.
I hope that someone who learns the part of the Art that I've set down here, will not instantaneously and automatically go wrong, if they start asking themselves, "How should people think, in order to solve new problem X that I'm working on?" They will not immediately run away; they will not just make stuff up at random; they may be moved to consult the literature in experimental psychology; they will not automatically go into an affective death spiral around their Brilliant Idea; they will have some idea of what distinguishes a fake explanation from a real one. They will get a saving throw.
It's this sort of barrier, perhaps, which prevents people from beginning to develop an art of rationality, if they are not already rational.
And so instead they... go off and invent Freudian psychoanalysis. Or a new religion. Or something. That's what happens by default, when people start thinking about thinking.
I hope that the part of the Art I have set down, as incomplete as it may be, can surpass that preliminary barrier—give people a base to build on; give them an idea that an Art exists, and somewhat of how it ought to be developed; and give them at least a saving throw before they instantaneously go astray.
That's my dream—that this highly specialized-seeming art of answering confused questions, may be some of what is needed, in the very beginning, to go and complete the rest.
[1]: https://www.lesswrong.com/posts/aFEsqd6ofwnkNqaXo/go-forth-a...
because humans are biological creatures iterating through complex chemical processes that are attempting to allow a large organism to survive and reproduce within the specific ecosystem provided by the Earth in the present day. "Rational reasoning" is a quaint side effect that sometimes is emergent from the nervous system of these organisms, but it's nothing more than that. It's normal that the surviving/reproducing organism's emergent side effect of "rational thought", when it is particularly intense, will self-refer to the organism and act as though it has some kind of dominion over the organism itself, but this is, like the rationalism itself, just an emergent effect that is accidental and transient. Same as if you see a cloud that looks like an elephant (it's still just a cloud).
Why are so many cults founded on fear or hate?
Because empathy is hard.
Empathy is usually a limited resource among those who generously ascribe it to themselves, and it is often mixed up with self-serving desires. Perhaps Rationalists have similar difficulties with reasoning.
While I believe Rationalism can be some form of occupational disease in tech circles, it does sometimes pose interesting questions. You just have to be aware that the perspective used to analyse circumstances is intentionally constrained, and in the end you still have to compare your prognosis to a reality that always chooses empiricism.
"Nihilists! F* me. I mean, say what you will about the tenets of National Socialism, Dude, at least it's an ethos."
Little on offer but cults these days. Take your pick. You probably already did long ago and now your own cult is the only one you'll never clock as such.
Same as it ever was, though with more of them around, people are a little warier about their own, I think.
Because Yudkowskian rationalism is a sci-fi-inspired cult, that's why.
If someone believes in the singularity, my estimation of their intellectual capacity, or at least maturity, diminishes.
It’s especially popular in Silicon Valley.
Quite possibly, places like Reddit and Hacker News are training grounds for the required level of intellectual smugness, and for the certitude that you can dismiss every annoying argument with a logical fallacy.
That sounds smug of me, but I’m actually serious. One of their defects is that once you memorize all the fallacies (“Appeal to authority,” “Ad hominem”), you can easily reach the point where you recognize the fallacies in everyone else’s arguments more readily than in your own. You more easily doubt other people’s cited authorities than your own. You slap “appeal to authority” on a disliked opinion, while citing an authority next week for your own. It’s a fast path from there to perceived intellectual superiority, and an even faster path from there into delusion. Rational delusion.
While deployment of logical fallacies to win arguments is annoying at best, the far bigger problem is that people make those fallacies in the first place — such as not considering base rates.
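To make the base-rate point concrete, here's a minimal sketch in Python with made-up numbers (a condition with 1% prevalence and a test that's "95% accurate"): intuition anchors on the 95% and forgets the prior, but the posterior works out to roughly 16%.

```python
# Minimal sketch of base-rate neglect, with hypothetical numbers.
prior = 0.01           # base rate: 1% of the population has the condition
sensitivity = 0.95     # P(positive test | condition)
false_positive = 0.05  # P(positive test | no condition)

# Bayes' rule: P(condition | positive) = P(pos | cond) * P(cond) / P(pos)
p_positive = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # ~16.1%, not 95%
```

The fallacy-spotting habit doesn't protect against this kind of error; only actually doing the arithmetic does.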
It's generally worth remembering that some of the fallacies are actually structural, and some are rhetorical.
A contradiction creates a structural fallacy; if you find one, it's a fair belief that at least one of the supporting claims is false. In contrast, appeal to authority is probabilistic: we don't know, given the current context, if the authority is right, so they might be wrong... But we don't have time to read the universe into this situation so an appeal to authority is better than nothing.
... and this observation should be coupled with the observation that the school of rhetoric wasn't teaching a method for finding truth; it was teaching a method for beating an opponent in a legal argument. "Appeal to authority is a logical fallacy" is a great sword to bring to bear if your goal is to turn off the audience's ability to ask whether we should give the word of the environmental scientist and the washed-up TV actor equal weight on the topic of environmental science...
… however, even that is up for debate. Maybe the TV actor in your example is Al Gore filming An Inconvenient Truth, and the environmental scientist is in the minority that isn’t so afraid of climate change. Fast forward to 2025: the scientist’s minority position was wrong, while Al Gore’s documentary was legally ruled to have 9 major errors; so you were stupid on both sides, with the TV actor being closer.
True, but this is where the Boolean nature of traditional logic can really trip up a person trying to operate in the real world.
These "maybes" are on the table. They are probably not the case.
(You end up with a spread of likelihoods and have to decide what to do with them. And law hates a spread of likelihoods and hates decision-by-coinflips, so one can see how rhetorical traditions grounded in legal persuasion tend towards encouraging Boolean outcomes; you can't find someone "a little guilty," at least not in the Western tradition of justice).
https://en.wikipedia.org/wiki/Nolo_contendere There you still have booleans, just two of them instead of one.
Reminds me somewhat of the Culte de la Raison (Cult of Reason) birthed by the French Revolution. It didn't last long.
https://en.wikipedia.org/wiki/Cult_of_Reason
Has anyone here ever been a part of a cult?
If so, got anything to share - anecdotes, learnings, cautions, etc.?
I am never planning to be part of one; just interested to know, partly because I have lived adjacent to what might be one, at times.
There was this interview with Diane Benscoter who talked about her experience and reasons for joining a cult that I found very insightful: https://www.youtube.com/watch?v=6Ibk5vJ-4-o
The main point is that it isn't so much the cult (or its leader) as the victims being in a vulnerable mental state and getting exploited.
will check out the video, thanks.
Why are there so many cults? People want to feel like they belong to something, and in a world in the midst of a loneliness and isolation epidemic the market conditions are ideal for cults.
Because we are currently living in an age of narcissism and tribalism; identitarianism is the societal version of narcissism.
> Because we are currently living in an age of narcissism and tribalism
I've been saying this since at least 1200 BC!
The question the article is asking is "why did so many cults come out of this particular social milieu", not "why are there a lot of cults in the whole world".
The book Imagined Communities (Benedict Anderson) touches on this, making the case that in modern times, "nation" has replaced the cultural narrative purpose previously held by "tribe," "village," "royal subject," or "religion."
The shared thread among these is (in ever widening circles) a story people tell themselves to justify precisely why, for example, the actions of someone you'll never meet in Tulsa, OK have any bearing whatsoever on the fate of you, a person in Lincoln, NE.
One can see how this leaves an individual in a tenuous place if one doesn't feel particularly connected to nationhood (one can also see how being too connected to nationhood, in an exclusionary way, can also have deleterious consequences, and how not unlike differing forms of Christianity, differing concepts on what the 'soul' of a nation is can foment internal strife).
(To be clear: those fates are intertwined to some extent; the world we live in grows ever smaller due to the power of up-scaled influence of action granted by technology. But "nation" is a sort of fiction we tell ourselves to fit all that complexity into the slippery meat between human ears).
Also, who would want to join an "irrationalist cult" ?
Hey now, the Discordians have an ancient and respectable tradition. ;)
Five tons of flax!
Your profile says that you want to keep your identity small, but you have like over 30 thousand comments spelling out exactly who you are and how you think. Why not shard accounts? Anyways. Just a random thought.
[deleted]
"SC identity?"
My pet theory is that, as a rationalist, you have an idealized view of humanity by nature. Your mirror neurons copy your own mind to interpolate other people's behavior and character.
This results in a constant state of cognitive dissonance, as the people of normal society around you behave very differently and often more "rustically" than expected. The education is there, all the learning sources are there, and yet they are rejected. The lessons of history go unlearned and are often repeated.
You are in an out-group by definition and life-long, so you band together with others and get conned by cult con-artists into foolish projects. The "rational" are nothing but another deluded group for the sociopaths of our society to hijack - the sociopath being the most rational being, in fact a being so capable of preying on us that society had to develop antibodies against sociopaths, which we call religion and laws!
lol
For me this was largely shaped by the westering old Europe, creaking and breaking (after two world wars) under its heavy load of philosophical/metaphysical inheritance (which at this point in time can be considered effectively Americanized).
It is still fascinating to trace back the divergent developments, like American-flavoured Christian sects or philosophical schools such as "pragmatism" and "rationalism", which get supercharged by technological disruptions.
In my youth I was heavily influenced by the so-called Bildung which can be functionally thought of as a form of ersatz religion and is maybe better exemplified in the literary tradition of the Bildungsroman.
I've grappled with and wildly fantasized about all sorts of things, and experimented mindlessly with all kinds of modes of thinking and consciousness amidst my coming-of-age. In hindsight, without this particular frame of Bildung, left to myself I would have been utterly confused and might at some point have acted out on it. By engaging with books like Der Zauberberg by Thomas Mann or Der Mann ohne Eigenschaften by Robert Musil, my apparent madness was calmed down; instead of breaking the dam of my forming social front against the vastness of the unconscious, over time I was guided to develop my own way of slowly operating it appropriately, without completely blowing myself up into a messiah or finding myself eternally trapped in the futility and hopelessness of existence.
Borrowing from my background, one effective vaccination against the rationalist sects described here, which spontaneously came to mind, is Schopenhauer's Die Welt als Wille und Vorstellung, which can be read as a radical continuation of Kant's Critique of Pure Reason, itself an attempt to stress-test the ratio. [To demonstrate the breadth of Bildung even in the physical sciences: Einstein was familiar with Kant's a priori framework of space and time, and Heisenberg's autobiographical book Der Teil und das Ganze was motivated by: "I wanted to show that science is done by people, and the most wonderful ideas come from dialog".]
Schopenhauer arrives at this realization because of the groundwork done by Kant (which he heavily acknowledges): that there cannot even exist a rational basis for rationality itself, that it is simply an exquisitely disguised tool in the service of the more fundamental will, i.e. by definition an irrational force.
A funny little thought experiment, but what consequences does it have? Well, if you declare the ratio your ultima ratio, you are just fooling yourself in order to be able to rationalize anything you want. Once internalized, Schopenhauer's insight leaves you overwhelmed by Mitleid (compassion) for every conscious being, inoculating you against the excesses of your own ratio. It hit me instantly with the same force as MDMA, but several years earlier.
[dead]
[flagged]
I think it speaks volumes that you think "american" is the approximate level of scope that this behavior lives at.
Stuff like this crosses all aspects of society. Certain Americans of certain backgrounds, demographics, and life experiences are far more likely to engage in it than others. I think those people are a minority, but they are definitely an overly visible one, if not a local majority, in a lot of internet spaces, so it's easy to mistake them for the majority.
Sure, many people across the globe are susceptible to cult-think. It's just that seeking a superior way of living to “save all Americans” has been a century-long trend in America, is all. No offense to other countries' peoples; I’m sure they’re just as good at cult membership and at championing over-application as any American.
It probably speaks more volumes that you are taking my comment about this so literally.
Are Apple, Disney, NFL, etc. fandoms with religious dedication cults? In America you can choose your cult identities and even belong to many.
[flagged]
It's a religion of an overdeveloped mind that hides from everything it cannot understand. It's an anti-religion, in a sense, that puts your mind on the pedestal.
Note the common pattern in major religions: they tell you that thoughts and emotions obscure the light of intuition, like clouds obscure sunlight. Rationalism is the opposite: it denies the very idea of intuition, or of anything above the sphere of thoughts, and tells you to create as many thoughts as possible.
Rationalists deny anything spiritual, good or evil, because they don't have evidence to think otherwise. They remain in this state of neutral nihilism until someone bigger than them sneaks into their ranks and casually introduces them to evil with some undeniable evidence. Their minds quickly pass through the denial-anger-acceptance stages and, being faithful to their rationalist doctrine, they update their beliefs with what they now know. From that point on they are a cult. That's the story of Scientology, which has too many parallels with Rationalism.
Cause they all read gwern and all eugenics leads into cults because conspiracy adjacent garbo always does.
Cue all the surface-level “tribalism/loneliness/hooman nature” comments instead of the simple analysis that Rationalism (this kind) is severely brain-broken and irredeemable and will just foster even worse outcomes in a group setting. It’s a bit too close to home (ideologically) to get a somewhat detached analysis.
They watched too much eXistenZ
We live in an irrational time. It's unclear whether this was simply underreported in history or whether social changes in the last ~50-75 years have had breaking consequences.
People are trying to make sense of this. For example:
The Canadian government heavily subsidizes junk food, then spends heavily on healthcare because of the resulting illnesses. It restricts and limits healthy food through supply management and promotes a “food pyramid” favoring domestic unhealthy food. Meanwhile, it spends billions marketing healthy living, yet fines people up to $25,000 for hiking in forests and zones cities so driving is nearly mandatory.
Government is an easy target for irrational behaviours.
There's nothing irrational about it, this is how you maximize power and profit at any and all costs.
I completely get that point of view; and yes if that's the goal, it's completely rational.
But from a societal cohesion or perhaps even an ethical point of view it's just pure irrationality.
When typing the post, I was thinking of the different levels of government, and of the changing ideologies of politicians leaving inconsistent governance.
I couldn't agree more, but we've long since given up our power collectively in hope of escaping responsibility.
Scientology has been here since 1953 and it has a similarly bonkers set of beliefs. And it is huge.
Your rant about government or not being allowed to hike in some places in Canada is unrelated to the issue.
Rationalists are, to a man (and they’re almost all men), arrogant dickheads, and arrogant dickheads do not see what they’re doing as “a cult” but as “the right and proper way of things, because I am right and logical and rational and everyone else isn’t”.
That's an unnecessary caricature. I have met many rationalists of both genders and found most of them quite pleasant. But it seems that the proportion of "arrogant dickheads" unfortunately matches that of the general population. Whether it's "irrational people" or "liberal elites", these assholes always seem to find someone to look down on.
Because they have serious emotional maturity issues, leading them to lobotomize the normal human emotional side of their identity and experience of life.
I think we've strayed too far from the Aristotelian dynamics of the self.
Outside of sexuality and the proclivities of their leaders, emphasis on physical domination of the self is lacking. The brain runs wild, the spirit remains aimless.
In the Bay, the difference between the somewhat well-adjusted "rationalists" and those very much "in the mush" is whether or not someone tells you they're in SF or "on the Berkeley side of things"
Here are some other anti-LessWrong materials to consider:
https://aiascendant.com/p/extropias-children-chapter-1-the-w...
https://davidgerard.co.uk/blockchain/2023/02/06/ineffective-...
https://www.bloomberg.com/news/features/2023-03-07/effective...
https://www.vox.com/future-perfect/23458282/effective-altrui...
https://qchu.substack.com/p/eliezer
https://x.com/kanzure/status/1726251316513841539
Note that Asterisk magazine is basically the unofficial magazine for the rationalism community and the author is a rationalist blogger who is naturally very pro-LessWrong. This piece is not anti-Yudkowsky or anti-LessWrong.
Here's a counter-piece on David Gerard and his portrayal of LessWrong and Effective Altruism: https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...