Stubsack: weekly thread for sneers not worth an entire post, week ending 1st February 2026
-
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)
we gotta dunk on documenting agi more around these parts
fearmongers over AI bullshit, and posts shitty memes when there’s no news to fearmonger about
-
I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.
Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It’s the equivalent of someone saying, “Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked.” This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, “in lots of particular cases … it may be easy enough to tell what’s going to happen.” That is not reassuring.
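To make the analogy concrete, here’s roughly what “working out all the numbers from 1 to 30” amounts to in code (a throwaway Python sketch of my own, not anything from Wolfram’s post). Note the step budget: even this toy check has to guess how long to wait before giving up, which is the halting problem in miniature.

```python
# "Verify" Collatz for 1..30 by brute force. This is exhaustive checking,
# not a proof -- it says nothing about any integer we didn't test.

def collatz_reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Follow the Collatz map until we hit 1 or exhaust the step budget."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False  # never reached 1 within the budget (or never halts?)

assert all(collatz_reaches_one(n) for n in range(1, 31))
print("Collatz 'verified' for 1..30. Conjecture status: still open.")
```

Passing for 1 through 30, or for any finite bound, proves nothing about the infinitely many integers beyond it; that is the entire difficulty.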
I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like “find me a solution to this set of linear equations” or “figure out how to pack these boxes in a bin.” (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don’t care about the “arbitrary Turing machines ‘in the wild’” that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.
Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze its behavior on all (infinitely many) inputs. He doesn’t even seem to grasp that a machine needs to take input; none of his examples consider one.
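Here’s a tiny sketch (again mine, hypothetical) of why finitely many observations can’t pin down what a machine computes: two functions that agree on every input you’d tabulate by hand, yet are not the same function.

```python
# Two "machines" that agree on every input below 1000 yet compute different
# functions: no finite table of observations can distinguish them.

def f(n: int) -> int:
    return n % 1000

def g(n: int) -> int:
    return n if n < 1000 else (n % 1000) + 1

assert all(f(n) == g(n) for n in range(1000))  # identical on 0..999
assert f(1000) != g(1000)                      # but different functions
```

Any claim about what a machine does has to quantify over all inputs, and Wolfram’s tables of evolution steps don’t even have a slot for one.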
Finally, here are some quibbles about the strange terminology he uses. He talks about “ruliology” as some kind of field of science or math; it seems to mean the study of how systems evolve under simple rules. Any field of study can be summarized that way, but in the end a field needs theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about “computational irreducibility”, which apparently amounts to asking for the smallest Turing machine that computes a given function. Not only does this not really help his project, there is already a legitimate subfield of complexity theory, called meta-complexity, that is productively investigating this idea!
If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)
What TF is his notation for Turing machines?
-
the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps
So it’s the complete graph on the set of strings? Stephen, how the fuck is this going to help with anything?
-
Copy-pasting my tentative doomerist theory of generalised “AI” psychosis here:
I’m getting convinced that in addition to the irreversible pollution of humanity’s knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there’s one insidious damage from LLMs that is still underestimated.
I will make the following claims without argument:
Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.
The Cloudflare person who blog-posted self-congratulations about their “Matrix implementation” that was mere placeholder comments is on the same continuum as the people the chatbot convinced they’re Machine Jesus. The difference is one of degree, not kind.
Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.
Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the “follower” role.
Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots deliberately exploit this by acting as an artificial replacement for having friends. It is not enough for the bots to generate code; they are made to feel like someone you talk to, as if the chatbot were a person. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.
n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.
Corollary #1: Every “legitimate” use of an LLM would be better done by another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By “better” I mean: it would create more quality, more reliably, with more prosocial costs, while making everybody happier. But LLMs do it faster, in larger quantities, and with more convenience, while atrophying empathy.
Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.
Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.
Relevant:
BBC journalist on breaking up with her AI companion
AI companion break-up made BBC journalist 'surprisingly nervous'
When it was time for Nicola to let George know she wouldn't be calling again, she felt surprisingly nervous.
(www.bbc.com)
-
$81.25 is an astonishingly cheap price for selling one’s soul.
You gotta understand that it was a really good bowl of soup
–Esau, probably
-
The Ruliad sounds like an empire in a third-rate SF show
-
LW ghoul does the math and concludes: letting measles rip unhindered through the population isn’t that bad, actually
robo's Shortform — LessWrong
Comment by robo - In the 1950s, with 0% vaccination rate, measles caused about 400-500 deaths per year in the US. Flu causes about 20,000 deaths per year in the US, and smoking perhaps 200,000. If US measles vaccination rates fell to 90%, and we had 100-200 deaths per year, that would be pointless and stupid, but for public health effects the anti-smoking political controversies of the 1990s were >10 times more impactful.
(www.lesswrong.com)
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
eagerly awaiting the multi-page denial thread
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
Somehow, I registered a total lack of surprise as this loaded onto my screen
-
eagerly awaiting the multi-page denial thread
“im saving the world from AI! me talking to epstein doesn’t matter!!!”
-
None of these words are in the Star Trek Encyclopedia
at least Khan Noonien Singh had some fucking charisma
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end
-
it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end
‘We have certain things in common Jeffrey’
-
I think that’s more about Wolfram giving a clickbait headline to some dicking around he did in the name of “the ruliad”, a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.
The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad—like the “everything machine”—is maximally nondeterministic, so that it in effect includes all possible computational paths.
Unrelated William James quote from 1907:
The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.
that is best studied using the Wolfram Language,
isn’t this just a particularly weird lisp </troll>
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
no fucking way
-
“im saving the world from AI! me talking to epstein doesn’t matter!!!”
€5 says they’ll claim he was talking to jeffrey in an effort to stop the horrors.
no, not the abuse of minors; he was asking epstein for donations to stop AGI, and it’s morally ethical to let rich abusers get off scot-free if that’s the cost of them donating money to charitable causes such as the alignment problem /s
-
€5 says they’ll claim he was talking to jeffrey in an effort to stop the horrors.
no, not the abuse of minors; he was asking epstein for donations to stop AGI, and it’s morally ethical to let rich abusers get off scot-free if that’s the cost of them donating money to charitable causes such as the alignment problem /s
I don’t like how I can envision this and find it perfectly plausible
-

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)
You know, it makes the exact word choices Eliezer made in this post (https://awful.systems/post/6297291) much more suspicious: “To the best of my knowledge, I have never in my life had sex with anyone under the age of 18.” So maybe he didn’t know they were underage at the time?
-
You know, it makes the exact word choices Eliezer made in this post (https://awful.systems/post/6297291) much more suspicious: “To the best of my knowledge, I have never in my life had sex with anyone under the age of 18.” So maybe he didn’t know they were underage at the time?
aka the Minsky defense