Large language models will upend human rituals


Summary

  • The results could be disturbing, argue Marion Fourcade and Henry Farrell

ARTHUR C. CLARKE wrote a story in which the entire universe was created so that monks could ritually write out the nine billion names of God. The monks buy a computer to do this faster and better, with unfortunate consequences for the rest of us. The story’s last sentence: “Overhead, without any fuss, the stars were going out.”

Rituals aren’t just about God, but about people’s relations with each other. Everyday life depends on ritual performances such as being polite, dressing appropriately, following proper procedure and observing the law. The particulars vary, often mightily, across time, space and societies. But they are the foundation of all formal and informal institutions, making co-ordination between people feel effortless. They seem invisible only because we take them so much for granted.

Organisations couldn’t work without rituals. When you write a reference letter for a former colleague or give or get a tchotchke on Employee Appreciation Day, you are enacting a ceremony, reinforcing the foundations of a world in which everyone knows the rules and expects them to be observed—even if you sometimes secretly roll your eyes. Rituals also lay the paper and electronic trails through which organisations keep track of things.

Like Clarke’s monks, we have recently discovered much better engines for efficiently performing rituals: large language models (LLMs). Their main use is within organisations, where they are being applied to make internal processes more efficient. People already use them to produce boilerplate language, write mandatory statements and end-of-year reports, or craft routine emails. External uses directed at organisations—such as composing personal statements for college applications—are rising fast, too. Even if LLMs don’t improve further, they will transform these aspects of institutional life.

Serious religion involves soul-searching and doubt, but for many ritual observances, the dreary repetition of the cliché is the point. Much organisational language is static rather than dynamic, intended not to spur original thought but to align everyone on a shared understanding of internal rules and norms. When prospective Republican National Committee employees were asked whether the American presidential election in 2020 was stolen, they weren’t being invited to consider the question but to performatively affirm their loyalty to the presumptive nominee, Donald Trump.

Because LLMs have no internal mental processes, they are aptly suited to answering such ritualised prompts, spinning out the required clichés with slight variations. As Dan Davies, a writer, puts it, they tend to regurgitate “maximally unsurprising outcomes”. For the first time, we have non-human, non-intelligent processes that can generatively enact ritual at high speed and industrial scale, varying it as needed to fit the particular circumstances.

Organisational ceremonies, such as the annual performance evaluations that can lead to employees being promoted or fired, can be carried out far more quickly and easily with LLMs. All the manager has to do is fire up ChatGPT, enter a brief prompt with some cut-and-pasted data, and voilà! Tweak it a little, and an hour’s work is done in seconds. The efficiency gains could be remarkable.

And perhaps, sometimes, efficiency is all we care about. If a ritual is performed just to affirm an organisational shibboleth, then a machine’s words may suit just as well, or even better.

Still, things might get awkward if everyone suspects that everyone else is inauthentically using an LLM. As Erving Goffman, a sociologist, argued, belief in the sincerity of others—and the ritualistic performance of that belief—is one of the bedrocks of social life. What happens when people lose their faith? A bad performance evaluation is one thing if you think the manager has sweated over it, but quite another if you suspect he farmed it out to an algorithm. Some managers might feel ashamed, but will that really stop them for long?

What may hurt even more is the “decoupling” of organisational rituals from the generation of real knowledge. Scientific knowledge may seem impersonal, but it depends on a human-run infrastructure of evaluation and replication. Institutions like peer review are shot through with irrationality, jealousy and sloppy behaviour, but they are essential to scientific progress. Even AI optimists, such as Ethan Mollick, worry that they will not bear the strain of LLMs. Letters of recommendation, peer reviews and even scientific papers themselves will become less trustworthy. Plausibly, they already are.

Exactly because LLMs are mindless, they might enact organisational rituals more efficiently, and sometimes more compellingly, than curious and probing humans ever could. For just the same reason, they can divorce ceremony from thoughtfulness, and judgment from knowledge. Look overhead. The stars are not all going out. But without any fuss, some are guttering and starting to fade.

Marion Fourcade is a professor of sociology at the University of California, Berkeley, and co-author of “The Ordinal Society”. Henry Farrell is a professor of democracy and international affairs at Johns Hopkins University and co-author of “Underground Empire: How America Weaponized the World Economy”.
