I have often been asked how I manage to find a new topic to write on every week. Truth be told, it is hard work. It helps that I read widely across a diverse range of topics, and that I have worked at the intersection of law and society for over two decades. So there are a fair number of experiences I can draw upon to help place current legislative developments in historical context. But even so, finding new perspectives to write about every week is a struggle, and I wish there were a reliable technological solution I could use to help me do it.
Earlier this year, I, along with the rest of the world, was blown away by a new artificial intelligence (AI) system that seemed to have an almost human-like facility with the English language. Called GPT-3, it was the latest iteration of the machine-learning language model developed by OpenAI, capable of generating text so coherent that it was indistinguishable from human prose. So impressive was this technology that some of the early articles describing GPT-3 were written by the AI engine itself, and it was only at the end, when readers were informed that they had been reading the words of a machine, that it dawned on them the text was computer-generated.
Almost at once, people began describing the many uses to which such technology could be put. Some argued that once it becomes possible for us to converse with computers at a conceptual level, we could find answers to philosophical questions about the existence of God and the meaning of life. Others pointed out that GPT-3 could be used to provide a medical diagnosis or serve as a therapist. Still others began to conceptualize a future without keyboards, one in which we would use a touch interface to massage prose into a form of our choosing, just as we manipulate images today with photo-editing software.
Despite its promise, GPT-3 is not the solution I’ve been looking for. Don’t get me wrong. It is an impressive step forward, showing us just how far computers have come in their ability to work with words. But as impressive as it is, GPT-3 is little more than a glorified auto-complete program that generates essay-length suggestions of appropriate text in much the same way that our mobile operating systems suggest short responses to the text messages we receive. The text it produces makes coherent sense not because the system understands the meaning of the words it uses, or their conceptual context, any better, but because it is really good at identifying sentence patterns and using them to generate other sentences that fit the context. This is not intelligence, just an impressive parlour trick.
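To see how far pattern-matching alone can take you, consider a toy version of the idea: a program that suggests the next word purely from the word pairs it has seen before. This is a minimal sketch in Python under my own assumptions (a tiny corpus and simple pair counts, with names like suggest_next invented for illustration); GPT-3’s neural network is vastly more sophisticated, but the principle of prediction without understanding is the same.

```python
# A toy illustration of prediction-without-understanding: suggest the
# next word purely from word-pair frequencies. The corpus and function
# names are my own invention; GPT-3 uses a neural network, not counts.
from collections import Counter, defaultdict

corpus = "the law shapes society and society shapes the law in turn".split()

# Count which word tends to follow which -- the crudest sentence "pattern".
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def suggest_next(word):
    """Return the continuation most often seen after this word, if any."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest_next("the"))  # 'law' -- fluent-looking, zero comprehension
```

The program produces plausible continuations without any notion of what the words mean, which is, in caricature, the column’s point about pattern prediction.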
The internet has brought all the world’s knowledge within our grasp. But access to information is just the first step. What we need next are tools to reveal connections hidden deep within that knowledge. As good as machines have become at predicting patterns, they are still hopeless at connecting the dots. That is why a tool like GPT-3 is not really useful for what I want to do. What I need instead is technology that helps me think.
Niklas Luhmann was a German scholar who in the course of his career wrote 70 books and over 400 scholarly articles. The secret behind this prodigious output was a technique he called Zettelkasten: a systematic note-taking workflow that has suddenly become all the rage among some of the world’s top knowledge workers.
Described simply, Zettelkasten is a process designed to surface connections between disparate pieces of knowledge by taking precise, atomic notes and systematically indexing and tagging them so that they remain relevant in all the different intellectual contexts in which they might be put to use.
Luhmann made notes of everything he read, but, unlike the rest of us, also linked each note with those he had made before. This way, every item of new information appropriately re-surfaced all the connected pieces of past knowledge that he had accumulated—even if that past knowledge had been gathered in a completely different context.
It is this ability to access undiscovered connections that was the secret of his prolific literary output. It is this magic that some of the latest, most cutting-edge personal knowledge management tools are trying to replicate in digital code.
For nearly a year now, I have been using one of them, an application called Roam Research, for all my knowledge management needs. Roam allows me to take free-form notes and then, using a technique called back-linking, lets me link each note to every related note I have made before.
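For readers curious what back-linking looks like under the hood, here is a minimal sketch in Python under my own assumptions. The [[bracket]] link syntax mirrors Roam’s notation, but the note texts and the code are my illustration, not Roam Research’s actual implementation.

```python
# A minimal sketch of back-linking: every [[wiki-style link]] inside a
# note is inverted, so opening a page also shows every older note that
# points at it. Notes and titles here are illustrative examples only.
import re
from collections import defaultdict

notes = {
    "Classical music": "Notation let orchestras scale: see [[Standardization]].",
    "Data structures": "Shared schemas are a form of [[Standardization]].",
    "Standardization": "Common formats that let strangers cooperate.",
}

# Invert the links: for every [[target]], record which note mentioned it.
backlinks = defaultdict(list)
for title, body in notes.items():
    for target in re.findall(r"\[\[(.+?)\]\]", body):
        backlinks[target].append(title)

# Opening one note now re-surfaces every note that references it,
# even if those notes were written in completely different contexts.
print(backlinks["Standardization"])  # ['Classical music', 'Data structures']
```

The trick is the inversion: each link is recorded once, in whichever note I happen to be writing, yet it surfaces automatically from the other end, which is what makes old notes reappear in new contexts.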
As with all such technologies, it takes a while before the effort you put in starts to yield results. But once the volume of notes in your personal knowledge graph crosses a critical threshold, the connections begin to magically surface on their own. This is how I realized that an article I had read a year ago on the history of classical music contained examples I could use to describe the notion of standardization in relation to data structures. And how the observations that Thomas Edison made in the context of battery-operated cars could be used to think a bit differently about the impact of fossil fuels on climate change.
It might be a while before computers can actually think for themselves. Until then, we can—and should—use them as tools for human thought.
Rahul Matthan is a partner at Trilegal and hosts a podcast called Ex Machina. His Twitter handle is @matthan