The Purpose of POSIWID Is What It Does
Or, What Is the Purpose of Scott Alexander’s Blogpost?
The phrase POSIWID—“The Purpose Of a System Is What It Does”—has been making the rounds on Twitter.
In response, Scott Alexander just published a Substack post arguing the opposite: that the purpose of a system is “obviously not” what it does.
Scott’s post boils down to this: POSIWID is a dumb meme because systems often produce outcomes no one intended—glitches, inefficiencies, accidents. Therefore, inferring purpose from outcome is misguided.
He concludes that POSIWID serves no real function beyond making people paranoid and hateful.
So, it’s up to me, a paranoid hater, to defend it.
POSIWID is not about always being right. It’s about surviving in a specific kind of environment. I’ll expand on which kind in a bit. For now, let me state my thesis directly:
Obviously, the purpose of POSIWID is to make lying harder.
What Is the Purpose of a System?
In his blogpost, Scott offers four examples of systems doing things they weren’t intended to do. And, you know, fair enough. Systems do fail, some outputs are accidental, and some results are side-effects.
But into “some outcomes are unintended,” he smuggles a much stronger claim: that we should never infer hidden purpose from observed behavior.
Come on.
Of course systems sometimes succeed at goals they’re deliberately concealing. Often, concealment is the strategy: the only way to achieve certain goals is by hiding them.
What makes Scott’s position even stranger is his involvement in the Effective Altruism movement, which exists only because so many charities are ineffective.
Does Scott really believe all charity inefficiency is just well-meaning incompetence?
That none of it is ill-intentioned competence?
That none of it is grift?
Because if he does, he’s wrong.
Four Counter-Examples
Scott presents four examples to argue POSIWID is a dumb meme. So here are four quick counter-examples to show it’s a useful one. None curated, all found while randomly scrolling Twitter.
You’ll notice that POSIWID shines when a system’s real function is concealed—and that concealment takes many forms. Everything from grift in fake charities, to geopolitical fronting through fake “NGOs”, to soft-power interference and narrative laundering by international and national institutes.
None of these are “well-intentioned failures.” They’re deliberate and succeed only insofar as their true purpose is hidden.
1. Charities
In 2015, the FTC shut down a network of cancer charities (the Cancer Fund of America and its affiliates) that raised $187 million and spent it on luxury cars, salaries, and professional fundraisers. They claimed to help patients. They didn’t.
That’s not a glitch. That was the actual purpose—concealed behind a noble-sounding mission. That’s why the government shut them down. If they’d been honest about their goals, they never would have succeeded.
2. GONGOs
Charities are often NGOs—non-governmental organizations. A category that, by definition, isn’t supposed to be government-linked. That’s what the name says.
But look at USAID. In many cases, NGOs are government-funded and used as a tool of statecraft—doing things that would be politically toxic to do directly.
These are so common it was necessary to come up with a name for them in the 1980s(!)—GONGOs: Government-Organized NGOs.
Maybe, even though it’s nearly half a century old, you’ve never heard of the term. It’s clunky. You know what else is clunky? Asking, “Are non-governmental organizations linked to the government?” It sounds like a contradiction.
It’s not. It’s just a system doing something other than what it claims, structured to mislead you down to its very name.
3. The Confucius Institutes
“Chyna.”
The Confucius Institutes exist worldwide with the stated purpose of “promoting Chinese language and culture”.
Now maybe I am just a paranoid hater. Very well. But how come so many institutions shut their Confucius Institutes down?
Osaka Sangyo University in 2010, McMaster University and Université de Sherbrooke in 2013, University of Chicago, Penn State, and the Toronto District School Board in 2014, and dozens more since.
Are they all just paranoid haters? Or was the Confucius Institute behaving not as described, but as designed—for propaganda and political interference?
4. The BBC
The BBC’s stated mission is to “serve all audiences through impartial content that informs, educates, and entertains.”
Yet here they recently lambasted the UK Conservative leader for not watching Adolescence—a fictional drama about a “13-year-old incel” shown free in schools. Which public interest is served here, exactly, and to whose benefit?
Is the purpose of peer review to “ensure scientific rigor”?
Is the purpose of HR to “protect employees”?
Is the purpose of university DEI offices to “promote inclusion” and “support marginalized students”?
Does Scott truly believe all this? Does he truly believe that every divergence from stated goals is just a well-meaning failure? That every single system is run by bumbling but good-hearted idiots?
Maybe. But enough about Scott. Let me talk to you now.
Perhaps you used to say “POSIWID”, then Scott’s post convinced you to stop, and now you’re starting to find yourself convinced to use it again. I can empathise with the epistemic whiplash, and I don’t wish to add to it. So let’s go one level up.
What Is the Purpose of POSIWID?
The POSIWID meme—and Scott’s anti-POSIWID blogpost—are both trying to thread a needle between two epistemic errors:
False negative: Refusing to see that a system is enacting a concealed purpose it denies (the POSIWID worry).
False positive: Attributing to a system a purpose it doesn’t have (Scott’s worry).
Avoiding false negatives means leaning toward assuming competence (treating outcomes as deliberate), at the risk of attributing intent where there is none.
Avoiding false positives means leaning toward assuming benevolence (treating bad outcomes as honest mistakes), at the risk of missing intentionality that is there.
Used exclusively, neither is tenable. So, since you have to err one way or the other, the question becomes: which mistake is worse? Which should you work hardest to avoid?
And the answer—which will satisfy no one—is: it depends.
Whether you need POSIWID or anti-POSIWID depends on which way the errors are skewing: are you in an environment where people are too quick to assume malice? Or are you in an environment where people are too quick to assume good intentions?
There’s no fixed answer—it depends on the kind of environment you’re in, and how it’s changing over time.
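If you like toy models, here is a minimal sketch of that dependence. The base rates and error costs below are numbers I made up purely for illustration (they come from neither Scott’s post nor mine), but they show the trade-off in action: which default policy is cheaper flips as the share of genuinely deceptive systems around you rises.

```python
# Toy model: two fixed default policies for judging systems.
# All numbers are illustrative assumptions, not data.

# Cost of a false positive: wrongly accusing an honest-but-failing system.
COST_FALSE_POSITIVE = 1.0
# Cost of a false negative: extending trust to a system concealing its real
# purpose. Assumed higher here; change it and the break-even point moves.
COST_FALSE_NEGATIVE = 3.0

def expected_cost(p_deceptive: float, use_posiwid: bool) -> float:
    """Expected cost per system encountered, under a fixed default policy."""
    if use_posiwid:
        # POSIWID default: read outcomes as purpose.
        # You only pay when the system was actually honest.
        return (1 - p_deceptive) * COST_FALSE_POSITIVE
    # Good-faith default: take stated goals at face value.
    # You only pay when the system was actually deceptive.
    return p_deceptive * COST_FALSE_NEGATIVE

for p in (0.05, 0.15, 0.25, 0.50):
    posiwid = expected_cost(p, use_posiwid=True)
    good_faith = expected_cost(p, use_posiwid=False)
    winner = "POSIWID" if posiwid < good_faith else "good faith"
    print(f"{p:.0%} deceptive: POSIWID={posiwid:.2f}, "
          f"good faith={good_faith:.2f} -> cheaper default: {winner}")
```

With these made-up costs the crossover sits at a 25% deception rate: below it, defaulting to good faith is the cheaper error; above it, POSIWID is. The exact threshold is not the point. The point is that there is one, and that it moves with your environment.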
If the system as a whole is erring too far in one direction, you want to push in the other, whichever direction that is. Speaking of…
Ok, I Lied—Back to Scott
What was the purpose of Scott’s post?
He’s smart enough to be aware of all the above. Perhaps his purpose was to deliberately push Twitter/X away from what he sees as an excess of paranoia.
If that’s the case, I’d prefer he come out and say it outright, because his audience is not just Twitter users: EAs read his blog, and they’re famously quokkas—too quick to assume good intent where there is none. Personally, I worry far more about well-intentioned kids giving too much leeway to bad actors than about well-intentioned systems taking more pushback than they should. They can take it.
But maybe that has to do with the environment that *I* am in.
And maybe Scott couldn’t come out and say it outright. Maybe that would undermine the purpose of his blog post 😉
In Conclusion
That’s the joke—but also the point.
POSIWID, Scott’s blog post against it, and this blog post defending it are all memetic interventions—initiatives in a live contest over the dynamics of discourse.
POSIWID emerged because too much was slipping through: too many systems doing one thing while claiming another. It offered a heuristic: outcomes over slogans. Naturally, it became a slogan.
But it did make it harder for things to slip by. It detonated the plausible deniability that was acting as cover. "I’m just a little birthday boy!"
POSIWID is not a law; it’s a heuristic. Of course it’s not infallible. But it is adaptive in low-trust environments, because it shifts the burden of proof away from institutional PR and toward observable behavior.
It isn’t always right. But when trust has broken down, it’s often… less wrong 😎
Amusingly, Scott’s blog post itself makes for a worthy target for POSIWID-style analysis.
His stated intent was to examine the limits of the meme and, failing to find a good use case, to “prove”—by his own standard—that POSIWID isn’t useful.
But whatever his stated intent, the effect of his post is memetic countersteering: steering discourse away from a specific heuristic. In this case, one designed to surface hidden functions.
And in doing so, it became an example of exactly the kind of strategic behavior his post tries to deny—and that POSIWID helps us notice.
To conclude:
The purpose of Scott’s blog post is what it does.
The purpose of POSIWID is what it does: making lying harder.
P.S.: You liked this one? Excellent. Follow-up here.
The thing POSIWID misses is that you can't understand even a corrupt or malevolent system without thinking about its nominal idealistic purpose. Fake cancer charities would make no sense unless there were someone who thought they were real cancer charities. And keeping that in mind, you can ask important (and hard) questions like "who's consciously scamming, who's in denial, and who's sincerely ill-informed?" POSIWID says we don't have to think about intent at all, only outcomes. But for some purposes (including criminal convictions! and moral blame IMO, and predicting future behavior) intent does matter, even if it's hard to deduce from the outside. If it's genuinely hard to tell who intends what, then it's also genuinely hard to tell what to do about the problem! This isn't naivete -- it's not assuming there are no bad guys -- it's acknowledging that different motives actually work differently. It's not like p-zombies or something. A Machiavellian schemer and a dupe *behave differently*, if you observe them in enough contexts.
Maybe part of the confusion here is that most “systems” (hospitals, charities, governments) are an agglomeration of different people, often with competing interests. Averaging out their outputs creates an orientation, but each individual actor within the system is n-degrees misaligned with that orientation. So they produce a range of outcomes, some of which are more closely linked to the “purpose” of the system than others.