21 Comments
Sarah Constantin:

The thing POSIWID misses is that you can't understand even a corrupt or malevolent system without thinking about its nominal idealistic purpose. Fake cancer charities would make no sense unless there were someone who thought they were real cancer charities. And keeping that in mind, you can ask important (and hard) questions like "who's consciously scamming, who's in denial, and who's sincerely ill-informed?" POSIWID says we don't have to think about intent at all, only outcomes. But for some purposes (including criminal convictions! and moral blame IMO, and predicting future behavior) intent does matter, even if it's hard to deduce from the outside. If it's genuinely hard to tell who intends what, then it's also genuinely hard to tell what to do about the problem! This isn't naivete -- it's not assuming there are no bad guys -- it's acknowledging that different motives actually work differently. It's not like p-zombies or something. A Machiavellian schemer and a dupe *behave differently*, if you observe them in enough contexts.

Guy:

I think this is right. Scott got to agreement here: https://x.com/slatestarcodex/status/1912050973603930407

Guy:

Also see here: https://rivalvoices.substack.com/p/the-purpose-of-posiwid-is-what-it/comment/109934687

I think both are trying to push the equilibrium the way they deem necessary, and there's a sense in which there isn't a categorical "right answer", only a contextual, bounded one.

But I fully agree with you that POSIWID is trying to do a relative thing (push people further towards an extreme) by passing itself off as categorical. It is wrong (literally) but it works (relatively).

JenniferRM:

The POSIWID vs non-POSIWID stuff is, in some sense, about "whether to ever assume good-or-bad faith" in this or that social system.

When do you begin a search for evidence of bad faith that is likely to engender a sense of adversity in the investigated person? When do you tolerate "writing off a good person"?

Likewise with an institution. Investigating an organization may trigger it to hire PR people who lie for a living (whether or not the organization is essentially decent; lying PR people are just part of the meta now), while cutting funding during a confused social media controversy can lead to the destruction of institutional capital and much regret.

Part of the challenge, however, is that western civilization is *so deep* into games like this that the entire existence of "apparent adversity in two apparently hostile systems" is itself often fake.

For example, by the time Trump was elected, most of the Republican Party establishment had gotten used to never acting on their abortion rhetoric, just using it to rile up the base and get elected. But an entire generation of young Republicans didn't realize it was a LARP, and the laws they proposed were terrible and stupid, not even something they would endorse once they saw the details. Now those terrible new laws are a forgotten memory of vaguely recent controversy... not repealed... just forgotten, as new horrors sweep across the government in many new waves of "confusedly taking OTHER old stupid claims seriously that were supposed to just be a LARP for getting elected".

Looking at the general pattern of "stupid arguments by stupid groups", Ben Hoffman's position seems to be that often the lies are used, in practice, as a shibboleth for detecting people who will lie in service of a faction, and can therefore safely be admitted to an institution without risk to the institution's internally load bearing canards.

https://benjaminrosshoffman.com/the-drama-of-the-hegelian-dialectic/

He does not mention the reason to expect memeplexes like this to arise and spread, securing votes at low cost by adopting this or that random sound bite into one's platform...

https://www.youtube.com/watch?v=rE3j_RHkqJc

I tend to presume that everyone in media, in leadership, etc is "stupid (not compared to other humans, but compared to how smart they would have to be to understand the true circumstances and the consequences of their choices)", and doing almost everything they are doing as a kind of self-protective habitual sleep-walking routine... Puppeted by memes. Running on inertia. Etc.

In a deep sense, I'm searching for ANYONE who is (1) awake and (2) basically ethical and (3) interested in taking responsibility for literally everything and then applying first principles triage to all the various ongoing fires and tragedies.

I think maybe the MIRI people are doing this for AGI and the imminent likely extinction of humans? But I think MIRI is currently working on solution plans that do not grapple with the catastrophically broken state of global governance in general.

Like: there IS a world government, and the way it makes decisions right now is that part of the world government threatens other parts of the world government with nuclear weapons. A book about AI doesn't solve that barrier, which is a barrier to many solutions that could be applied to many problems.

Almost no one endorses the current state of world governance, that I'm aware of. It is just "one of those things" that people are too embarrassed to talk about... But it appears to me to be the central barrier to coordinating efficiently and without lies to solve real and vast problems (like Imminent Unfriendly AGI and non-existent Climate Engineering and "war at all, ever", and so on), for the good of humanity, in good faith.

I have no strong position pro- or anti-POSIWID. I am in favor of global goodness, and opposed to global sadness.

Noah:

Maybe part of the confusion here is that most “systems” (hospitals, charities, governments) are an agglomeration of different people, often with competing interests. Averaging out their outputs creates an orientation, but each individual actor within the system is n-degrees misaligned with that orientation. So they produce a range of outcomes, some of which are more closely linked to the “purpose” of the system than others.

Unverified Revelations:

There's a particular tendency among first-principles, high-decoupler thinkers to assume problems are engineering problems that would emerge even under conditions of good-faith cooperation. This is true; there are lots of such problems.

But then there's an additional layer of problems on top of that, which are the ones that arise from bad faith, self interested actors.

Normies often don't understand the first kind; high-IQ autists often don't understand the second.

Jessie Ewesmont:

New proposal: rename the heuristic to "Fuck Hanlon's Razor". It's particularly easy to convince people that Hanlon's Razor is wrong these days, and it skips past all the nitpicking about systems and purposes.

Guy:

This is a good proposal and I think it highlights the dynamic aspect of this equilibrium: Hanlon's Razor appeared in a context where too much bad-intent was being incorrectly attributed. POSIWID appears in a context where Hanlon's Razor won *so hard* that too much good intent was being incorrectly attributed. Any diagnostic (including mine) that gets any distribution/is believed pushes the equilibrium again.

Jack:

Interesting stuff. I always understood POSIWID to be a shorthand way of being honest about the outcomes of certain complex systems. Also it can be useful to understand how emergent, incidental, or injected purposes might be found and why people are for and against them. Also it shows that purposes might be more multifaceted than what the creators or commentators might declare.

Kay:

Liked this! Hope you share this in the comments on his post.

Guy:

Thank you. I believe I did put it there.

Simon Ohler:

Very fun argument, great writing and great read!!

unremarkable guy:

I didn't read this carefully or fully and I'm not really responding on point at all here.

All those caveats in mind.

I just want to underscore your points about non-profits. I spent 15 years of my life in the non-profit world, and here's the conclusion I came to:

The mission of all nonprofits is to keep all their employees employed, and, if possible, to employ more.

The *strategy* for achieving this, for each one, is its stated mission.

That's why none of them are ever actually out to solve any problems, but to keep people concerned about problems.

I left extremely jaded. At a minimum, I'd like to see tax exemption go away so there would be much harder competition to keep them all going and maybe the better ones would survive.

Yes, not really your larger point. I know, I know. Just needed to say that tho.

Guy:

No. I appreciate this. This is one of the best replies I could get since it goes right to my point. Thank you!

orpheas:

The Purpose of A Scott Alexander Post Is What It Does (PASAPIWID)

Ari Nielsen:

Tight, streamlined argument.

Note that, not only was it originally presented as a heuristic, but also as a starting point, not a final conclusion:

Stafford Beer: "It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances."

In other words, we might elaborate POSIWID as:

The provisional premise should be that the purpose of a system is what it does, bracketing assumptions about the circumstances in which the system is operating. It is indeed possible to demonstrate that it has some other purpose, but this must be convincingly substantiated.

Guy:

This is perfect substantiation, thank you! It makes perfect sense to me *as a heuristic*. People are overcorrecting too far, and Scott is throwing out the baby with the bathwater.

nope:

I do think Scott is sorta right to complain about people with a conspiratorial bent like yourself adopting the phrase. I argue that it doesn't matter why the system doesn't do what it says on the box; the point is simply that it doesn't. And once you see what it actually does, you can make your decisions about its quality. The conspiracy aspect is orthogonal to this: you don't need conspiracies to make dysfunctional systems. Just making a complex system work at all is hard enough.

unremarkable guy:

so now I’ve read your post, and I’m going to actually try to reply to the point of it, since this is something I feel pretty strongly about. I don’t know why it matters what the purpose of a system is anyway.

what matters about a system is what it does.

why is purpose even in the discussion?

it’s irrelevant. it’s distracting.

what was intended by its creators, or what is now intended by its insiders, is not important. What’s important is the actual outcomes.

And the thing that I think people often fail to understand, or deny even when an example is put in front of them, is the fact that very often complex incentives and herding behavior cause systems to do very strange, emergent things that no one planned. But that’s still all that actually matters… what the actual outcome is. Who cares if someone intended it or not?

Take the nonprofit world, something I know a great deal about. Many of the people there are wasting money for their own comfort and security… The example of the luxury cars is definitely an edge case, but still only a difference of degree…

many of them really believe they are honoring the purpose!

honoring the mission!

There is also competent, well-intentioned avarice.

Eugine Nier:

Unfortunately, the people talking about POSIWID are the people most in need of the anti-POSIWID heuristic.

Also, most quokkas have a relatively high IQ, so they could use much better heuristics than POSIWID, e.g., learning how to actually analyze the incentives in a system.

f_d:

Come on, the purpose of Scott's blogpost is *obviously* not to make lying easier.

The point is simply to point out that TPOASIWID is not true. Scott's not making the opposite claim (that one can never infer motives from outcomes), which also wouldn't be true.

It's difficult to be neither paranoid nor naive but sane, but a solid first step is to not carelessly use slogans that *sanction* one error or the other.
