At the forefront of philosophical and academic inquiry into the future of humanity, as machines and technology play ever larger roles in our society, is Nick Bostrom. A founder of the Future of Humanity Institute at Oxford, Bostrom is confident that some form of self-aware machine superintelligence will probably be developed, whether through machines improving themselves or through humans building machines for the myriad purposes we already do. He is far less confident that superintelligent machines can be controlled.
In his introduction, Bostrom outlines the ground his book will cover. He concludes, “We glance at some recent expert opinion surveys, and contemplate our ignorance about the timeline of future advances.”
Bostrom directs our attention to a quotation from I. J. Good’s “Speculations Concerning the First Ultraintelligent Machine” to draw us further into his book about superintelligence: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Click below to check out Bostrom’s book. Even if you’re not into reading academic works, “Superintelligence: Paths, Dangers, Strategies” has plenty of relevance for the layman, especially since we don’t know how soon the intelligence explosion may occur.