What if legibility/formalism is not the driver Scott and I think it is?

In episodes 17 and 18, I discuss James C. Scott's Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Scott puts a lot of the blame on people's love of formalized systems (and the closely related concept of "legibility"). Fair enough. But I recently read this bit of text from a New Yorker interview with Cory Doctorow (Note: don't follow the link if you're prone to seizures from flashing lights - pretty irresponsible, New Yorker):

I think that the problems of A.I. are not its ability to do things well but its ability to do things badly, and our reliance on it nevertheless. So the problem isn’t that A.I. is going to displace all of our truck drivers. The fact that we’re using A.I. decision-making at scale to do things like lending, and deciding who is picked for child-protective services, and deciding where police patrols go, and deciding whether or not to use a drone strike to kill someone, because we think they’re a probable terrorist based on a machine-learning algorithm—the fact that A.I. algorithms don’t work doesn’t make that not dangerous. In fact, it arguably makes it more dangerous. The reason we stick A.I. in there is not just to lower our wage bill so that, rather than having child-protective-services workers go out and check on all the children who are thought to be in danger, you lay them all off and replace them with an algorithm. That’s part of the impetus. The other impetus is to do it faster—to do it so fast that there isn’t time to have a human in the loop. With no humans in the loop, then you have these systems that are often perceived to be neutral and empirical.
Patrick Ball is a statistician who does good statistical work on human-rights abuses. He’s got a nonprofit called the Human Rights Data Analysis Group. And he calls this “empiricism-washing”—where you take something that is a purely subjective, deeply troubling process, and just encode it in math and declare it to be empirical. If you are someone who wants to discriminate against dark-complexioned people, you can write an algorithm that looks for dark skin. It is math, but it’s practicing racial discrimination.

This is worse than the schemes in Seeing Like a State. In Scott's examples, the State has a "theory of the matter". Lenin had scientific Marxism. Le Corbusier had what he considered an exhaustive list of what people needed and a method to deliver it most efficiently. Agricultural reformers had a theory of agriculture that applied in the temperate zones and assumed it didn't need to be altered for the tropics. They could all explain why they were right, and in some detail. They were wrong because (a) their theories were incomplete and (b) they were formalizing processes that are very hard to formalize.

The sort of "AI" that Doctorow is talking about is still formal, in what I think is a fairly strong sense, but it is completely missing a theory of why its decisions are right. People don't seem to care that the decisions can't be explained or justified. They just are.

Scott seems to think (and I know I think) that formalism and legibility are "attractive nuisances" that trick people into doubling down on inadequate schemes. But here we see a similar doubling-down without even that excuse.

I'm not sure what to make of that.

Discuss on Mastodon.