Today, I completed the first Short Response Paper of the term. While this is mandatory for everyone in the class, and therefore doesn’t count towards this Ad-Hoc Honors project, I’m uploading it because I believe it will be helpful to my readers as they read my first in-depth.
So, without further ado, here it is!
Two of the assigned readings this week were Prof. Michio Kaku’s “Visions: How science will revolutionize the 21st century” and the transcript of Prof. Chaloupka’s speech, “Science, the Basic Problem and Human Security: or What is To Be Done?” Both authors present their interpretations of the consequences science will have for the future of humanity, yet their views contain fundamental – and thus important – distinctions. I would like to examine these distinctions, as they are significant to our understanding of how science impacts progress and the field of Human Security. In addition, I will highlight certain contentions in both Kaku’s and Chaloupka’s articles with which I disagree. In each case, my disagreement stems from my exposure to differing views in other courses. I will attempt to explain why I find these views more probable than those of Kaku and Chaloupka, using my academic and personal background as a framework.
In “Visions,” Kaku advances the thesis that the era of scientific discovery is ending, as advances in technology allow us to mature from passive observers unravelling the secrets of Nature into masters of Nature. Kaku begins by examining the three scientific revolutions of the 20th century: the quantum, computer, and DNA revolutions. The discoveries and knowledge gained from each of these, Kaku argues, are central to our debut into the “Age of Mastery.” The computer revolution gives us the skills to create artificial intelligence. The DNA revolution will give us “nearly god-like ability[ies] to manipulate life almost at will.” The quantum revolution has contributed, and continues to contribute, to advancement in the other two revolutions, in addition to providing us with deeper insight into the physics of the universe.
Kaku’s predictions are intriguing. I find his predictions of artificial intelligence, or of a shift from wealth based on natural resources to wealth based on knowledge and skill, entirely possible and believable. I do, however, believe that Kaku takes his predictions too far in some cases. For example, he states that by 2020 cheap microprocessors will allow us to place intelligent systems everywhere. This is all well and good – but he fails to acknowledge that this depends on the percentage of people with access to the technology, their education about its use and consequences, and the cultural acceptance of intelligent systems everywhere in life. Education and cultural acceptance require time, and they are important for the adoption of any new technology. Another premature prediction of Kaku’s is that “many genetic diseases will be eliminated by injecting people’s cells with the correct gene.” As a biology major, I am admittedly quick to disagree with this statement. I think, however, that even a non-biology major would understand that Kaku’s prediction is only feasible if the injection of genes is done early in the development and differentiation of an embryo. In adults, gene therapy would need to reach all target cells. This is a huge obstacle – delivering a correct gene to every single target cell is a logistical nightmare, and one that is the focus of much current research.
Prof. Chaloupka’s speech on “Science, the Basic Problem, and Human Security” is similar to Kaku’s chapter in that it looks to the future impacts of science on society. It has, however, a crucial distinction. While Chaloupka agrees with Kaku that we are acquiring god-like powers, he goes further and recognizes the possible negative outcomes of scientific progress. Where Kaku is singularly optimistic, Chaloupka also examines the costs of science. This is, in my opinion, very important. If we are blindly optimistic, we risk being caught unprepared if the relationship between science and society does go wrong. This is the Basic Problem of foresight that Chaloupka outlines. While Kaku sees increasing access to scientific discoveries as leading to the dissemination of intelligent systems around the world, Chaloupka is correct in pointing out that such access can also leave individuals or small groups capable of causing serious harm. His argument that reactions to catastrophes will probably not be rational also rings true – indeed, it is all the more believable for his use of past examples like the public response to 9/11.
By taking the leap and acknowledging the potential downsides of scientific advancement, Prof. Chaloupka ultimately does something very important that Kaku does not: he makes it possible to consider defensive, preventative, and reactive measures that minimize the risks of science as much as possible. It is counter-intuitive and, in my opinion, somewhat ironic that only by acknowledging the potential risks and detriments of scientific advancement can we hope to prevent and mitigate them. And the difference between Chaloupka’s and Kaku’s conclusions is itself reminiscent of a scientific discovery for the good.