As Science Advances, A Fear Of The Future

James A Prince / Getty Images/Science Source

You read it everywhere, and you watch it on TV and in sci-fi movies: Science is dangerous. It can create terrible weapons, control our lives, unleash new diseases and build machines that will take over the world, wipe out the human race and redefine life as we know it.

The concern is such that governments, universities and think tanks have pondered what these risks are and what, if anything, can be done about them. One example is the University of Cambridge's Centre for the Study of Existential Risk; there are many others. A short list of terrifying scientific by-products (sometimes called "existential risks") might include:

  • Nanobots that would multiply into a sort of "gray goo," consuming everything in their path. This could happen if the microscopic machines reached a kind of runaway self-replication, or if humans deliberately used them as weapons.
  • Artificial Intelligence (AI) machines, computers capable not only of obeying the commands in a program but also of acting autonomously, of their own will, could quickly surpass us, becoming the next great contenders for world domination: no more power-crazed dictators, but machines immune to human frailties and morals that could easily dispense with us if they wanted to. Even if true AI machines lie far in the future (if they are possible at all), the specter of such an invention haunts many of us, including prominent thinkers such as Oxford University philosopher Nick Bostrom.
  • Biotechnology, in particular the genetic manipulation of living creatures, is already a reality. We eat what it creates, and we use it to treat a growing number of diseases, from cystic fibrosis and AIDS to various forms of cancer. However, bad uses are easily imaginable, such as the creation of incurable pathogens deployed by terrorist groups or as weapons of war. This is not news, of course, but as biotechnology advances, the possibility that an unbeatable pathogen could be created edges closer to reality.
  • Nuclear materials or weapons in the wrong hands would have devastating long-term effects. And there is no need for explosions: contaminating the water reservoirs of big cities, for example, would be cataclysmic.
  • As technology advances, threats will increase in number and severity. We can see this by retracing the past 200 years or so of warfare, where the killing potential and diversity of weapons increased dramatically, from the many uses of explosive gunpowder to nerve gas to Agent Orange to nuclear weapons. We invented these weapons and used them against one another.

When such topics are raised, the most obvious answer is regulation and surveillance: Governments should make sure that such technologies aren't developed, or bought, by groups with evil intent. Even if this sounds right at first, it hits roadblocks almost immediately. Which governments have the moral high ground here? And who is to decide what the moral high ground is? Regulation and surveillance also raise suspicion, since governments can quickly abuse their power, impinging on research progress and social freedom. The specter of Big Brother hovers.

The crucial change, compared with, say, 40 years ago, is that the rules of the game are different. Before, small rogue groups or countries could do only localized harm. Now, small rogue groups with distorted morals can get hold of one or more deadly technologies and cause global, or at least widespread, damage.

Apart from strict regulation and surveillance, it's hard to imagine what else can be done. One possibility, though, is widespread open debate of such threats, coupled with an effort to understand the reasoning of rogue groups and what drives their potentially destructive actions.

Given the option, people prefer not to be scared or to think about such horrendous possibilities. We have lives to live and can't let such craziness bother us. However, it is precisely this oblivious behavior, which offended groups often interpret as contempt, that feeds their hatred. It may be hard, perhaps impossible, to see the world from their perspective. But we will only know if we try.

There are research accidents, and there are deliberate misuses of technology. Both can have lethal consequences, but they differ in essential ways. In the end, it's really all about us: what we can create and what we do with what we create. Fearing science is not the same as fearing what we are capable of. We create so that we can reinvent our future. What kind of future that will be is up to us.

Marcelo Gleiser is a theoretical physicist and cosmologist, the Appleton Professor of Natural Philosophy and a professor of physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and an active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning. You can keep up with Marcelo on Facebook and Twitter: @mgleiser.

Copyright 2021 NPR. To see more, visit https://www.npr.org.
