The Muse

The sheer variety of symbols and artefacts in use across ages and geographies does not necessarily point to an equal multitude of assumptions and values from which they spring. The study of mythology and folklore, then, is a reverse approach to anthropology. This blog is dedicated to my favourite symbols, tales and artefacts, both ancient and contemporary.

Tuesday, June 6, 2023

The banality of Big Tech’s evil

A Tumblr post referred me to an article by Cory Doctorow entitled Ayyyyyy Eyeeeee - The lie that raced around the world before the truth got its boots on.

It pertains to one particular lie propagated by those who shill machine learning under the brand of AI, an instance of ‘criti-hype’, i.e. criticism that incorporates a self-serving commercial boast. To use the article’s example case:

But there’s another aspect to Hamilton’s fantasy about the blood-lusting, operator-killing drone: this may be a dangerous weapon, but it is also a powerful one.

A drone that has the “smarts” to “realize” that its primary objective of killing enemies is being foiled by a human operator in a specific air-traffic control tower is a very smart drone. File the rough edges off that bad boy and send it into battle and it’ll figure out angles that human operators might miss, or lack the situational awareness to derive. Put that algorithm in charge of space-based nukes and tell the world that even if your country bombs America into radioactive rubble, the drones will figure out who’s responsible and nuke ’em ’till they glow! Yee-haw, I’m the Louis Pasteur of Mutually Assured Destruction!

The genius of this tactic is described as follows:

By focusing on Facebook’s own claims about behavior modification, these critics shifted attention away from Facebook’s real source of power: evading labor and tax law, using predatory pricing and killer acquisitions to neutralize competitors, showering lawmakers in dark money to forestall the passage and/or enforcement of privacy law, defrauding advertisers and publishers, illegally colluding with Google to rig ad markets, and using legal threats to silence critics.

These are very boring sins, the same tactics that every monopolist has used since time immemorial. Framing Facebook as merely the latest clutch of mediocre sociopaths to bribe the authorities to look the other way while it broke ordinary laws suggests a pretty ordinary solution: enforce those laws, round up the miscreants, break up the company.

However, if Facebook is run by evil sorcerers, then we need to create entirely novel anti-sorcery measures, the likes of which society has never seen. That’ll take a while, during which time, Facebook can go on committing the same crimes as Rockefeller and Carnegie, but faster, with computers.

And best of all, Facebook can take “evil sorcerer” to the bank. There are plenty of advertisers, publishers, candidates for high office, and other sweaty, desperate types who would love to have an evil sorcerer on their team, and they’ll pay for it.

So long as Congress is focused on preventing our robot overlords from emerging, they won’t be forcing these companies to halt discriminatory hiring and rampant spying.

Best of all, the people who get rich off this stuff get to claim to be evil sorcerers, rather than boring old crooks.

The article pointed me to a book by Doctorow, How to Destroy Surveillance Capitalism, which is available online in its entirety.

The book is an in-depth examination of the phenomenon of Surveillance Capitalism, one aspect of which the article discusses. Here is a non-exhaustive summary of the book.

Doctorow starts by exploring why conspiracy theories, anti-intellectualism and misinformation are on the rise.

What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?

As in the article, he points out that Big Tech does not have mind-control beams, and that persuasion itself is nowhere near as powerful as monopolism.

influence campaigns that seek to displace existing, correct beliefs with false ones have an effect that is small and temporary while monopolistic dominance over informational systems has massive, enduring effects. Controlling the results to the world’s search queries means controlling access both to arguments and their rebuttals and, thus, control over much of the world’s beliefs. If our concern is how corporations are foreclosing on our ability to make up our own minds and determine our own futures, the impact of dominance far exceeds the impact of manipulation and should be central to our analysis and any remedies we seek.

Doctorow debunks the idea that collecting and hoarding data is in itself a source of power. Rather, it is a Ponzi scheme, an application of the greater fool theory:

Pick-up artists assume they fail to entice women because they are bad at being pick-up artists, not because pick-up artistry is bullshit. Pick-up artists are bad at selling themselves to women, but they’re much better at selling themselves to men who pay to learn the secrets of pick-up artistry.

Even if you never figure out how to profit from the data, someone else will eventually offer to buy it from you to give it a try.

The real danger of such data hoards is from identity theft and related crimes. And of course, government surveillance.

any hard limits on surveillance capitalism would hamstring the state’s own surveillance capability. 

Again he points out how tenuous the basis for Big Tech’s so-called mind control, or even its influence, really is.

For example, the reliance on the “Big Five” personality traits as a primary means of influencing people even though the “Big Five” theory is unsupported by any large-scale, peer-reviewed studies and is mostly the realm of marketing hucksters and pop psych.

The antidote, he says, lies in the poison itself. Facebook got its start by letting people import data from other social networks and upload their contacts; that is, it began by exploiting the interoperability of technology. And it can be ended the same way.

Today, incumbency is seen as an unassailable advantage. Facebook is where all of your friends are, so no one can start a Facebook competitor. But adversarial compatibility reverses the competitive advantage: If you were allowed to compete with Facebook by providing a tool that imported all your users’ waiting Facebook messages into an environment that competed on lines that Facebook couldn’t cross, like eliminating surveillance and ads, then Facebook would be at a huge disadvantage. 
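To make the idea concrete, here is a minimal sketch of what such adversarial compatibility could look like: a hypothetical third-party client that ingests a user’s exported messages and re-presents them with the ads and tracking stripped out. Everything in it (the export format, the field names, the file path) is invented for illustration and does not reflect any real Facebook export or API.

# A purely illustrative sketch of "adversarial compatibility":
# a hypothetical third-party client that reads a user's exported
# social-media data and shows only the messages, discarding ad and
# tracking records. The record layout is invented, not a real API.

import json
from dataclasses import dataclass
from typing import Iterator


@dataclass
class Message:
    sender: str
    body: str
    timestamp: str


def import_messages(export_path: str) -> Iterator[Message]:
    """Read a (hypothetical) JSON export and yield only the fields
    a user actually cares about."""
    with open(export_path) as f:
        records = json.load(f)
    for record in records:
        # Compete "on lines Facebook couldn't cross": carry no ads
        # and no surveillance payloads.
        if record.get("type") in ("sponsored", "tracking"):
            continue
        yield Message(
            sender=record["sender"],
            body=record["body"],
            timestamp=record["timestamp"],
        )


if __name__ == "__main__":
    for msg in import_messages("facebook_export.json"):
        print(f"[{msg.timestamp}] {msg.sender}: {msg.body}")

The design choice mirrors Doctorow’s point: the competitor interoperates with the incumbent’s data while refusing to carry the surveillance and advertising that fund the incumbent.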

The biggest danger of all is corruption, which leads to an epistemological crisis.

This concentration of both wealth and industries means that our political outcomes are increasingly beholden to the parochial interests of the people and companies with all the money.

In a world as complex as this one, we have to defer to authorities, and we keep them honest by making those authorities accountable to us and binding them with rules to prevent conflicts of interest. We can’t possibly acquire the expertise to adjudicate conflicting claims about the best way to make the world safe and prosperous, but we can determine whether the adjudication process itself is trustworthy.

You’re left with a kind of inchoate constellation of rules of thumb about which experts you trust to fact-check controversial claims and then to explain how all those respectable doctors with their peer-reviewed research on opioid safety were an aberration and then how you know that the doctors writing about vaccine safety are not an aberration.

No one can say for certain why this has happened, but the two dominant camps are idealism (the belief that the people who argue for these conspiracies have gotten better at explaining them, maybe with the help of machine-learning tools) or materialism (the ideas have become more attractive because of material conditions in the world).

I’m a materialist. I’ve been exposed to the arguments of conspiracy theorists all my life, and I have not experienced any qualitative leap in the quality of those arguments.

The major difference is in the world, not the arguments. In a time where actual conspiracies are commonplace, conspiracy theories acquire a ring of plausibility.

We have always had disagreements about what’s true, but today, we have a disagreement over how we know whether something is true. This is an epistemological crisis, not a crisis over belief. It’s a crisis over the credibility of our truth-seeking exercises, from scientific journals (in an era where the biggest journal publishers have been caught producing pay-to-play journals for junk science) to regulations (in an era where regulators are routinely cycling in and out of business) to education (in an era where universities are dependent on corporate donations to keep their lights on).

Throughout the book, Doctorow notes that both Big Tech and its critics agree that tech, its powers and its attendant troubles are somehow unique: tech exceptionalism, basically. Doctorow refutes this belief. But he does grant tech one exceptional power: coordination.

The hard problem of our species is coordination. Everything from climate change to social change to running a business to making a family work can be viewed as a collective action problem.

The upshot of this is that our best hope of solving the big coordination problems — climate change, inequality, etc. — is with free, fair, and open tech. Our best hope of keeping tech free, fair, and open is to exercise caution in how we regulate tech and to attend closely to the ways in which interventions to solve one problem might create problems in other domains.

Doctorow notes that just as anti-whaling and anti-pollution activists were united under the term ‘ecology’ and the mandate of protecting it, activists working against Big Tech must unite under the umbrella of trustbusting (enacting and enforcing antitrust legislation and policy).

But there is a catch: governments worldwide prefer that Big Tech clean up its own messes by policing its users. That means you cannot rely on Big Tech to curb online abuse and crime in the short term while working to break it up in the long run; the two goals pull in opposite directions.

That’s because any move to break up Big Tech and cut it down to size will have to cope with the hard limit of not making these companies so small that they can no longer afford to perform these duties — and it’s expensive to invest in those automated filters and outsource content moderation. It’s already going to be hard to unwind these deeply concentrated, chimeric behemoths that have been welded together in the pursuit of monopoly profits. Doing so while simultaneously finding some way to fill the regulatory void that will be left behind if these self-policing rulers were forced to suddenly abdicate will be much, much harder.

The ultimate solution:

As cyber lawyer Lawrence Lessig wrote in his 1999 book, Code and Other Laws of Cyberspace, our lives are regulated by four forces: law (what’s legal), code (what’s technologically possible), norms (what’s socially acceptable), and markets (what’s profitable).

Getting people to care about monopolies will take technological interventions that help them to see what a world free from Big Tech might look like. Imagine if someone could make a beloved (but unauthorized) third-party Facebook or Twitter client that dampens the anxiety-producing algorithmic drumbeat and still lets you talk to your friends without being spied upon — something that made social media more sociable and less toxic. Now imagine that it gets shut down in a brutal legal battle. It’s always easier to convince people that something must be done to save a thing they love than it is to excite them about something that doesn’t even exist yet.