Starting with the opening ceremony at the London Olympics, I have spent the last few weeks considering history and attending meetings with people trying to predict—or invent—the future in mass media/social media, health and healthcare, and space travel. Then, by chance, I was invited to fill in a survey about the future. When will we have learnt how to cure cancer? When will the last person die of malaria? When will self-driving cars become mainstream?
Without intending to (or at least without highlighting it), the survey’s authors were illuminating the distinction between knowing how to do something and actually doing it in and for the world. For example, obesity and poor dental health (just Google “dental health impact on general health”) are not only treatable, they are also generally preventable. And yet they persist and affect many more people than cancer, malaria, or other more “fashionable” ailments do.
If we actually thought with a view to the long term, we would focus attention and resources on preventive health measures, education, and public services that raise productivity. The problem is that technology's use is not just a question of capability; it also involves taxes and public spending, trade-offs, and long-term thinking.
Consider self-driving cars or remotely piloted passenger airplanes. Will scientific data showing that self-driving cars and remotely piloted planes (once perfected) are safer than human-operated cars and planes prevail over culture, old laws, and other inhibiting factors? For starters, when the first self-driving-car crash occurs (and it will), whom could we blame?
In short, though we depend on scientists and engineers to invent things, a much broader slice of society will determine whether those things are widely adopted and how they are paid for. And, more and more, technological progress depends on this social willingness.
Part of our reluctance is that we are still uncomfortable with machines making decisions—yet there are already too many decisions for us to make ourselves. In our human world, knowledge and capability imply some responsibility: If you are able-bodied and you see a person in front of you walk into the path of an oncoming car, you have a responsibility to pull that person to safety. But in a world in which we can know what is happening everywhere all the time, what responsibility will we feel—or be burdened with? Already, many people are withdrawing. There are just too many problems in the world.
Thus, for individuals at least, personal constraints can be helpful, not merely limiting—if we use them properly. For example, which problems are you best equipped to address?
Your money may be worth as much as anyone else’s, but your advice or involvement will be worth more when focused on a problem or location in which you have expertise or a unique concern, such as a disease that afflicted your mother, or the lack of training opportunities in your industry. In a constrained world, such considerations could guide your choice of career; in an unconstrained world, they can guide your mission.
People should be free to the maximum extent to pursue their own mission—much like the milling crowd of performers in the recent Olympic opening ceremony (versus the perfectly synchronized Chinese ceremony in 2008). Ultimately, these individuals “self-organized” to create entire new industries, the National Health Service, and great music, among other things.
But, for society as a whole, there are ever fewer technological constraints to guide what we can do—or what we should prevent. Countries will find that their priorities must reflect broad public sentiment rather than that of a ruling elite. But all kinds of people may be tempted to make harmful trade-offs, whether for short-term pleasure or for "sex appeal" over true value.
For example, what are the trade-offs between clean energy sources and economic development—a big issue in India and many other nations? Rich people may want to keep cars expensive and unshared, playing into the hands of others who may be scared of automated vehicles.
More generally, how can we achieve consensus around public missions, and how will we make decisions if we cannot? Leaving decisions to a self-interested elite is not a good answer, but neither (currently) is broad voting: People are easily swayed and might not understand the issues, and they might be too oriented towards the short term. In many cases, this causes countries and groups to focus narrowly, for example, by limiting foreign aid—or by focusing on aid when investment would yield better results.
The primary solution is better education, so that a broader swath of the population is informed enough to make fact-based judgements in both their personal and public lives. Just as we are beginning to understand how to model the climate (not without some political interference), so will we begin to understand how to model the economy—and with it, the trade-offs that we face. It is not that everything can be reduced to a price, but that even decisions based on nonmonetary values have real costs and consequences. Most important of all, as we become better at modelling, we will discover how much value we can create (and how many costs we can avoid) by spending now to create a better future.
Indeed, perhaps the biggest culture/value challenge of all is short-term thinking. Around the entire planet, we are approaching some kind of singularity, with the market pandering to our fundamental short-term natures by offering us instant gratification and long-term destruction.
Education does the opposite. It enables us to improve our lot by building things—using first fire and wood, and now computers and machines—to overcome physical limitations and to create technology that extends and enhances our lives. Will technology and learning prevail, or will our long-evolved weaknesses overcome us?
Esther Dyson, CEO of EDventure Holdings, is an active investor in a variety of start-ups around the world. Her interests include information technology, health care, private aviation, and space travel. Comment at firstname.lastname@example.org