It would be better if the author could write in plain English.
Coldbrew's law states that if an article is written in praise of AI and abstrusely written, then it's probably AI slop, written by someone who holds their own ideas in high enough regard to publish but holds their audience in low enough regard that they won't bother to edit it.
Edit: there it is. `vibe-coded and deployed with Claude Code`
Y'know, you (the general 'you', not you specifically, coldbrewed) feel bad about your writing or blog because of the odd spelling error, or grammar issue, or repeated language, or maybe your points aren't clear enough, or maybe you're talking in the wrong tone...
...and then you read something like this, and realize, "Yeah, no, I have room for improvement, sure, but thank f*** I'm not like this."
The entire post reads like someone high on their own supply. Just when I think they're getting to a point, they pull out every fifty-dollar word and concept they possibly can (explaining none of them, nor linking to any Wikipedia articles to help readers understand), ostensibly to sound smarter than you and therefore entitled to authority.
I'm sure there's a law/rule/principle for this concept somewhere, but if you can't explain your point simply, you don't understand the topic you're trying to communicate. This one-off, vibe-coded (RETCH) slop-slinger is a prime example.
Pay no attention to the charlatan cosplaying as tenured academia.
The article is not written by Claude Code... only the website. And the article is not praising AI.
It repeatedly describes a fantasy as if it were reality, both directly and by implication.
I got so annoyed with the “trying to be smart” writing that I summarized it with ChatGPT.
The article “The World Is Ending. Welcome to the Spooner Revolution” from Aethn delves into the transformative impact of advanced AI models on the global socio-economic landscape.
The author critiques the belief in a static “end of history,” suggesting that recent advancements in AI, particularly large language models (LLMs), are catalyzing a profound shift in how work and economic structures function. These AI models have evolved beyond previous limitations, enabling automation of tasks across various knowledge-based professions without extensive fine-tuning.
This technological leap is diminishing the traditional value of wages, as individuals can now leverage AI to perform tasks that previously required entire teams. Consequently, there’s a growing trend toward self-employment and entrepreneurial ventures, echoing the ideals of 19th-century thinker Lysander Spooner, who advocated for a society where individuals operate independently of wage-based systems.
The article posits that this “Spooner Revolution” will lead to a surge in small enterprises, a decline in traditional corporate structures, and a reevaluation of educational and economic institutions to accommodate this new paradigm.
Despite the criticisms, I do see the self-enterprising case... or maybe I merely hope for it.
> The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few
"Replaces". Uh-huh.
This reads like an example from Orwell's "Politics and the English Language", which on its face leads me to wonder what sort of semantic shell game the author is up to.
>Even with that, there are obvious limitations described by Amdahl's law, which states there is a logarithmic maximum potential improvement by increasing hardware provisions.
I don't know why so many people are obsessed with Amdahl's law as some universal argument. The quoted section is not only 100% incorrect, it sweeps the blatantly obvious energy problem under the rug.
Imagine going to a local forest and pointing at a crow and shouting "penguin!", while there are squirrels running around.
What Amdahl's law says is that, given a fixed problem size, as you add processors the parallel section ceases to be a bottleneck, so speedup is capped by the serial fraction. This is irrelevant for AI, because people throw more hardware at bigger problems. It's also irrelevant for a whole bunch of other problems. Self-driving cars aren't all connected to a supercomputer. They have local processors that don't even communicate with each other.
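To make the fixed-problem-size assumption concrete, here's a minimal sketch of the textbook formula (plain Amdahl, nothing taken from the article):

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# is parallelizable; the remaining (1 - p) is inherently serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% parallel work, infinite hardware caps out at 1/(1-p) = 20x.
for n in (1, 10, 100, 1000, 10**6):
    print(f"n={n:>7}: speedup = {amdahl_speedup(0.95, n):.2f}")
```

The 1/(1-p) ceiling only bites if the problem size stays fixed, which is exactly the assumption that fails when people scale the workload along with the hardware.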
>The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few.
>And yet these perinatal automatons are totally eviscerating all knowledge based work as the relaxation of the original hysterics arrives.
These two sentences contradict each other. You can't eviscerate something and only mostly "replace" it.
This is a very disappointing blog post that focuses on wankery over substance.
We would see neither squirrels nor crows, since these criticisms miss the forest for the trees. But we can address them.
> This is irrelevant for AI, because people throw more hardware at bigger problems
GAI is a fixed problem, namely Solomonoff induction. Further, Amdahl's law is a limitation on neither software alone nor supercomputers alone.
Both inference and training rely on parallelization, and LLM inference has multiple serialization points per layer. Végh (2019) quantifies how Amdahl's law limits the performance of neural networks [1]. He further states:
"A general misconception (introduced by successors of Amdahl) is to assume that Amdahl’s law is valid for software only". It applies to a neural network just as it does to the problem of self-driving cars.
> These two sentences contradict each other
There is no contradiction, only a misunderstanding of what "eviscerates" means; and even with that incorrect definition and the threshold test it produces, the claim still remains applicable.
1. https://pmc.ncbi.nlm.nih.gov/articles/PMC6458202/
Further reading on Amdahl's law w.r.t. LLMs:
2. https://medium.com/@TitanML/harmonizing-multi-gpus-efficient...
3. https://pages.cs.wisc.edu/~sinclair/papers/spati-iiswc23-tot...
I am new to Amdahl's law, but wouldn't a rearchitecture make it less relevant? For example, instead of growing an LLM that has more to do in parallel, separate it into agents (maybe a bit like areas of the brain?). Is Amdahl's law just a limit for the classic LLM architecture?
This is GPT’s take after I prompted it for its opinion on the crux of the piece:
It’s a bold, sharp take on AI’s coming shake-up.
• Core claim: New GPT-powered tools let one person match a small team’s output. That erodes big firms’ value and shifts us toward solo, AI-driven gigs.
• Strength: It sees how low coding, design, or analysis costs collapse entry barriers (e.g. an indie writer using GPT to research, draft, and polish entire articles).
But it skips some real-world frictions.
• Go-to-market still needs sales, trust and networks. AI can build a prototype, not always sell it.
• Risk and capital haven’t vanished: legal, data, infrastructure, marketing still demand teams or funding.
My take: AI will empower many more solo ventures and cut rote jobs. Yet wages and firms won’t crumble overnight. We’ll get hybrids—small outfits plus AI—before a full “Spooner” world arrives.