The Basic Problem, revisited

Week 4’s readings take the exploration of the ‘Basic Problem’ to new depths. These readings are markedly different from Freeman Dyson’s “The Darwinian Interlude” and Michio Kaku’s “Visions: How Science Will Revolutionize the 21st Century,” in that they are much more pessimistic. Dyson and Kaku predict a bright future in which children will play with biotech games, gardeners will use gene transfer on their plants,[1] and genetic diseases will be eliminated.[2] Moreover, while Prof. Chaloupka’s writing defines and examines the possible risks of the ‘Basic Problem,’ his analysis seemed to be one of a kind. Bill Joy’s article, however, is evidence that understanding of the ‘Basic Problem’ is spreading to other prominent scientists. Joy’s writing strikes a balance between the blind optimism of Dyson and Kaku on the one hand and the stark realism of Chaloupka on the other. While by no means naïve about the risks of scientific progress, Joy also expresses his belief in humanity’s “great capacity for caring”[3] and his hope that “discussion of these issues… with people from many different backgrounds, in settings not predisposed to fear or favor [of] technology”[4] will prevent catastrophes.

I would like to examine Bill Joy’s article in conjunction with the ‘Basic Problem’ as defined in class. In doing so, I hope to explain how these four weeks have changed my understanding of biology and physics.

Joy begins by examining the possibility of sentient robots becoming mainstream technology. Using the dystopian vision of the Unabomber, Ted Kaczynski, as an example, Joy ponders the causes of technology’s unintended consequences. The answer seems clear – due to the complexity of technology, changes may “cascade in ways that are difficult to predict,… especially [when] human actions are involved.”[5] I agree with Joy that changes in technology may cause unpredictable outcomes because of the nature of the humans using it, but I disagree with his choice of example. To start with, I do not share Joy’s conviction that sentient robots are feasible in the near future. Intelligent machines have been predicted to be on the horizon of science for decades – witness the robotic servants promised at the World’s Fairs of the early 1900s and in The Jetsons cartoons of the 1960s. None of these predictions, however, has come true. Indeed, fourteen years after Bill Joy’s article was published, sentient robots remain expensive feats of engineering labs, not part of mainstream life. Regardless of whether sentient robots are possible, I think Joy’s argument would have been stronger had he focused on the unintended consequences that small changes in technology can have. The invention of radio and television, the move from corded to wireless telephones, and the invention of contact lenses have all made significant impacts on society. Radio and television started the communications revolution; they have also led to unintended consequences, such as making sedentary lifestyles more common and contributing to a spike in obesity. These consequences are smaller in magnitude and less dramatic than humanity’s inability to make decisions without machines, but they are more realistic, and would thus strengthen Joy’s argument.

Our class’s exploration of the ‘Basic Problem’ has changed the way I understand my biology and physics classes. After reading Prof. Chaloupka’s and Bill Joy’s writings, I now ask myself how new technologies might be used and what their implications for society could be. This week, for example, I learned about a novel gene pyro-sequencing technique allowing for greater precision in the analysis of penguin GI microbiota. Before taking 216, I would have found this new technology interesting, but focused only on understanding how it works. Now, I also ask myself: does this technique have potential commercial uses? Would it be dangerous in the wrong hands? What might it allow biologists to do that they have not yet thought of? Similarly, I have realized that the involvement of physicists in the Manhattan Project was never discussed in my year of physics classes at the UW. My classes focused instead on the mathematics behind basic physics phenomena and on problem solving. While physics classes have, of course, only limited time to teach students to analyze and solve physics problems, spending even one minute on the Manhattan Project and other real-world applications of physics would have improved my understanding. Indeed, it is alarming that prior to this course, in my 4th year of university, I had not been exposed to these issues in my courses. The material taught in JSIS 216 should become part of the regular curriculum in schools. Increasing awareness and understanding of the Basic Problem is the first, and most important, step in preventing future disasters.


[1] Freeman Dyson, “The Darwinian Interlude,” Technology Review (2005), 27.

[2] Michio Kaku, “Visions: How Science Will Revolutionize the 21st Century.”

[3] Bill Joy, “Why the future doesn’t need us,” Wired 8.04 (2000), 16.

[4] Ibid., 16.

[5] Ibid., 2.

Caution, Culture, and Race for Progress

The Basic Problem of scientific progress – its ironic propensity to increase risks even as it heals societal ills – was introduced in Weeks 1 & 2 of the course and is the focus of this post. To reiterate for readers, the Basic Problem as defined by Prof. Chaloupka, both in class and in transcripts of speeches given in Bristol and Vienna, is as follows:

“In our understanding of nature (science), and in the application of that understanding (technology), we are acquiring powers that will soon become truly god-like…. However, our ability to use this power wisely has not increased correspondingly. For the first time in human history, the capability of causing extreme harm is, or will soon be, in the hands of individuals or small groups. This is the ‘Basic Problem’.”[i]

As a class, we have seen how Richard Feynman dealt with the dilemma of the Basic Problem himself. A participant in the Manhattan Project,[ii] he found himself asking the same questions that we continue to examine today.

This leads me to the topic of this post: the relation between innovation and progress in science and the resulting inherent increase in risk to humanity. Specifically, I would like to consider selected readings from Weeks 3 & 4 on the Ukraine Crisis’ possible impact on nuclear non-proliferation and on global warming’s role today. I began thinking about the relation between the two during Week 3’s quiz section, to which both physics and social science students contributed.

The Ukraine Crisis

Current events in what has become known as the ‘Ukraine Crisis’ have been dominating headlines for almost a full month. They remind the world of a time when Russia and the West were on less than stellar terms – a time of fear, especially for western countries bordering the Iron Curtain. The Ukraine Crisis does not, however, immediately bring to mind its effects on global nuclear non-proliferation (or at least, this was the case for me). Finding the link between the two was a challenge set by Prof. Chaloupka – and it led to some very interesting learning.

At the dissolution of the USSR in 1991, Ukraine held the 3rd largest nuclear arsenal in the world. Much of the USSR’s nuclear weaponry had some element of its production in Ukraine. This information was most definitely new to me. I knew, of course, of the famed Chernobyl disaster. My parents themselves have stories of Chernobyl, and of how it was covered by Bulgarian media at the time. My aunt visited Ukraine on a business trip not far from Chernobyl just one week after the accident. She recalls a group tour of a zoo, where many of the animals appeared weak and sick – the tour guides did not, of course, talk about Chernobyl. Less than a year later, she had a benign tumor removed from her thyroid. Yet Chernobyl was just one of many nuclear projects in Ukraine.

On December 5, 1994, Russia signed the Budapest Memorandum on Security Assurances. This agreement, also signed by the US and UK, stipulated security assurances against threats or use of force against Ukraine’s territorial integrity. The Memorandum was spurred by Ukraine’s accession to the Treaty on the Non-Proliferation of Nuclear Weapons. As a condition of the Memorandum, Ukraine agreed to relinquish its nuclear arsenal to Russia. Its stockpiled nuclear weapons were sent to Russia by 1996, and Ukraine was declared nuclear-weapon-free.

How does this relate to the Basic Problem? It is a direct example of how “our ability to use [great] power wisely has not increased correspondingly” with scientific advancement. The Ukraine Crisis, in particular, brings with it the spectre of MAD. If large, powerful nations such as Russia do not hold to their assurances on nuclear power, and agreements such as the Budapest Memorandum lose validity, then what alternatives are available as preventative measures against nuclear war and devastation?

The most obvious answer to come to mind is the reinstallation of Cold War era MAD policies. If countries considering signing the Treaty on Non-Proliferation believe that doing so will risk their territorial integrity, then they will be far less willing to join. If agreements such as the Budapest Memorandum are disregarded by larger countries, they are no longer of any political value. In such a scenario, it seems logical that small countries will refuse to join the Treaty on Non-Proliferation, believing instead that the ability to threaten nuclear retaliation may discourage large powers from breaching their sovereignty. Countries such as Iran and North Korea are particularly likely to use the Ukraine Crisis as justification for intensifying their nuclear programs.

On MAD & Global Warming

MAD policies are, by definition, based on fear tactics. As long as all actors in MAD are equally deterred by the possibility of nuclear retaliation to their actions, then nobody should step out of line.

It is interesting, then, to compare the fear tactics of MAD and the Global Warming movement.

As documented in the NY Times article “Global Warming Scare Tactics” by Ted Nordhaus and Michael Shellenberger, fear-based tactics have been used in attempts to raise public concern about climate change. Such tactics include linking the increasing frequency and severity of natural disasters to human-caused climate change. Al Gore’s 2006 documentary, An Inconvenient Truth, also used this method. Strangely, however, the share of Americans who believe that global warming has been exaggerated in the media has increased since 2006 – from 34% to 42%.

Why should fear tactics work in MAD, then, if they are falling short for climate change? The answer lies in the perceived immediacy of each threat.

MAD is a construct left over from the Cold War. In the USA, the adults of today remember being instructed to “duck and cover” in response to the flash of nuclear detonations. Photos and accounts of the human suffering in Hiroshima and Nagasaki create immediate emotional responses. The public can, therefore, clearly visualize the potential risks of nuclear disaster and how they would affect everyday life. As a result, the public perceives the risks of MAD as realistic and possible.

By contrast, the effects of global warming are easy to envision for future generations, but much more difficult to envision within our own lifetimes. Despite being accelerated by human pollution, climate change is nevertheless an extremely slow process. The harmful consequences of our actions today will not come to be for generations. It is not our current society that will have to deal with the result of rising seas, intense storms, and changes in regional climates. It will be our grand- and great-grand children who will suffer the consequences of inaction today. There are no disastrous events concretely linked to climate change, so the public cannot visualize the consequences of global warming as it can for MAD. The result is that the public’s incentive for responding to fear-based global warming tactics is greatly reduced.

Implications

MAD and the global warming movement both use fear-based tactics in attempting to prevent nuclear warfare and further environmental destruction, respectively. The successes of MAD and the shortcomings of the global warming movement each hold implications for the other.

MAD is in part effective because of the immediacy of the negative consequences should it fail to prevent nuclear warfare. It may be possible, then, for the global warming movement to increase its effectiveness in changing people’s behavior by finding new, more immediate ways to portray the consequences of climate change. Crucial to this will be clearly and definitively linking natural disasters and similar events to global warming. If the increasing frequency of extreme storms (e.g., Hurricane Katrina) or rising water levels in Venice can be linked to climate change by basic science, skepticism about the problem should decrease.

Similarly, the failure of the global warming movement’s fear tactics to motivate change holds an important lesson for MAD supporters. Overusing catastrophic rhetoric may breed skepticism about the risks of nuclear conflict. If this happens, MAD loses much of its effectiveness as a deterrent. It may be in society’s best interests to have a ‘back-up plan’ should MAD fail.

Concluding Thoughts

The readings, lectures, and discussions of Weeks 3 & 4 have made me think, in more depth than ever before, about how science can both create and solve Human Security problems. Scientific advancement is a double-edged sword. It is somewhat alarming that prior to this course, in my 4th year of university, I had not been exposed to these issues in my courses. The material taught in JSIS 216 should become part of the regular curriculum in schools. Increasing awareness and understanding of the Basic Problem is the first, and most important, step in preventing future disasters.


[i] Vladimir Chaloupka, “Science, the Basic Problem and Human Security: or What is To Be Done?” (2008).

[ii] Vladimir Chaloupka, “Common Book 2011: UW and Meaning of It All Study and Teaching Guide” (2011).

Welcome!

Today, I completed the first Short Response Paper of the term. While this is mandatory for everyone in the class, and therefore doesn’t count towards this Ad-Hoc Honors project, I’m uploading it because I believe it will be helpful to my readers as they read my first in-depth post.

So, without further ado, here it is!

Two of the assigned readings this week were Prof. Michio Kaku’s “Visions: How Science Will Revolutionize the 21st Century” and the transcript of Prof. Chaloupka’s speech “Science, the Basic Problem and Human Security: or What is To Be Done?” Both authors present their interpretations of the consequences science will have for the future of humanity, yet they have fundamental – and thus important – distinctions. I would like to examine these distinctions, as they are significant to our understanding of how science impacts progress and the field of Human Security. In addition, I will highlight certain contentions in both Kaku’s and Chaloupka’s articles that I disagree with. In all cases, my disagreement stems from my exposure to differing views in other courses. I will attempt to explain why I find these views more probable than those of Kaku and Chaloupka, using my academic and personal background as a framework.

In “Visions,” Kaku advances the thesis that the era of scientific discovery is ending, as advances in technology allow us to mature from passive observers unravelling the secrets of Nature to masters of Nature. Kaku begins by examining the three scientific revolutions of the 20th century: the quantum, computer, and DNA revolutions. The discoveries and knowledge gained from each of these, Kaku argues, are central to our debut into the “Age of Mastery.” The computer revolution gives us the skills to create artificial intelligence. The DNA revolution will give us “nearly god-like ability[ies] to manipulate life almost at will.” The quantum revolution has contributed, and continues to contribute, to advancement in the other two revolutions, in addition to providing us with deeper insight into the physics of the universe.

Kaku’s predictions are intriguing. I find his predictions of artificial intelligence, and of a shift from wealth based on natural resources to wealth based on knowledge and skill, entirely possible and believable. I do, however, believe that Kaku takes his predictions too far in some cases. For example, he states that by 2020 cheap microprocessors will allow us to place intelligent systems everywhere. This is all well and good – but he fails to acknowledge that this depends on the percentage of people with access to the technology, their education about its use and consequences, and the cultural acceptance of intelligent systems in every part of life. Education and cultural acceptance require time, and they are important for the adoption of any new technology. Another premature prediction of Kaku’s is that “many genetic diseases will be eliminated by injecting people’s cells with the correct gene.” As a biology major, I am admittedly quick to disagree with this statement. I think, however, that even a non-biology major would understand that Kaku’s prediction is only feasible if the injection of genes is done early in the development and differentiation of an embryo. In adults, gene therapy would need to reach all target cells. This is a huge obstacle – delivering a correct gene to every single target cell is a strategic nightmare, and one that is the focus of much current research.

Prof. Chaloupka’s speech on “Science, the Basic Problem, and Human Security” is similar to Kaku’s chapter in that it looks to the future impacts of science on society. It has, however, a crucial distinction. While Chaloupka agrees with Kaku that we are acquiring god-like powers, he goes further to recognize the possible negative outcomes of scientific progress. Where Kaku is singularly optimistic, Chaloupka also examines the costs of science. This is, in my opinion, very important. If we are blindly optimistic, we risk being caught unprepared should the relationship between science and society go wrong. This is the problem of foresight at the heart of the Basic Problem that Chaloupka outlines. While Kaku sees increasing access to scientific discoveries as leading to the dissemination of intelligent systems around the world, Chaloupka is correct in pointing out how such access can lead to individuals or small groups becoming capable of causing serious harm. His reasoning that reactions to catastrophes will probably not be rational also seems sound – in fact, it is all the more believable for Chaloupka’s use of past examples like the public response to 9/11.

By taking the leap and acknowledging the potential downsides of scientific advancement, Prof. Chaloupka ultimately does something very important that Kaku does not: he makes possible the consideration of defensive, preventative, and reactive measures to minimize the risks of science. It is counter-intuitive and, in my opinion, somewhat ironic that only by acknowledging the potential risks and detriments of scientific advancement can we hope to prevent and mitigate them. The difference between Chaloupka’s and Kaku’s conclusions is itself reminiscent of a scientific discovery for the good.