Author's blog

On this page, observations of various kinds will appear related to big history teaching, as well as to big history in general.
 
Earlier blogs
- Big history and web site design
- Did Galileo overstate the magnification of his telescope? 
- Did Columbus falsify his latitude measurements?
- Reflections on observations in big history
- What about questions in learning goals?
 
 
 
WHAT TO THINK ABOUT MACHINES THAT THINK?
 
I was invited to join a panel at De Balie, Amsterdam, on April 4, 2016, to discuss the question "What do you think about machines that think?," which had been posed earlier by John Brockman of www.edge.org to a number of 'leading thinkers.' Their answers were reproduced in a book, which was translated into Dutch. The meeting at De Balie can be watched here.
 
I am not in any way an expert on computers or their development, and I was not asked to contribute to the book. Yet many of those who did contribute do not seem to be computer wizards either, while, to my surprise, one of the greatest experts on these matters that I know, the U.S. inventor Ray Kurzweil, was not included in the book, even though his profile figures prominently on edge.org. This makes one wonder how the contributors were chosen.
 
I may not be a computer expert, but I am a big historian. That background may lead to a slightly different tack than the one taken by many others. So without claiming primacy, but claiming originality in the sense that I am not aware of others having said this before, here are some of my current thoughts on the question.
 
First of all, we need to question what we mean by thinking. Is thinking the same as solving problems or seeking to answer questions? Or does it also involve recognizing and defining problems and questions? Computers are very good at solving predefined problems. But can they recognize and define problems themselves in original ways? Can computers ask fresh questions and then seek to answer them? To the best of my knowledge, they cannot, at least not yet.
 
What makes our brains so special that they are able to do all these things? To answer this question, we need to look at how brains emerged during history. This is a most difficult field, and right now we only possess preliminary models of how this may have happened.
 
The best model known to me of the emergence of brains and consciousness was proposed in 1986 by French-Canadian astrophysicist Hubert Reeves in his book L'heure de s'enivrer, and, about 15 years later, also independently by Dutch biochemist Karel van Dam. This model is explained in chapter 5 of my book (p.163 ff) as follows:
 
The model starts with the generally accepted idea that at a certain point in time, single cells emerged equipped with a sensor that was able to detect food or danger. These cells would also have sported one or more little tails, with the aid of which they could either swim away from, or move toward, the detected source, depending on whether they liked it.
 
As soon as sensor and tail became interconnected, a novel mechanism was in place for the microorganisms that possessed such organs to undergo a specific process of non-random elimination, for there must have been a survival premium for organisms that were able to do such things better.
 
Furthermore, such microorganisms were able to learn (defined as the modification of behavior based on experience) by storing information about past events and using it for determined action.
 
Now what would happen if such microorganisms evolved two sensors that were both connected to one tail, especially if these two sensors gave off different signals about where to go? One would expect that a more elaborate connection would emerge between the sensors and the tail, one that was able to make decisions about what action to take.
 
To do so effectively, an image would need to be created of the situation as perceived by the sensors, with the aid of which such decisions could be taken. In other words, the question of which sensory data should be considered the most important may well have been the first question ever asked, and answered.
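To make this mechanism concrete, here is a minimal sketch in Python. It is entirely my own illustration, not part of the Reeves / van Dam model; the sensor names, weights, and readings are invented for the example.

```python
# A toy illustration of the two-sensors-one-tail mechanism described above.
# All names and weights are invented for this sketch; they are not part of
# the Reeves / van Dam model itself.

def form_image(food_signal: float, danger_signal: float) -> dict:
    """Combine the two raw sensor readings into a tiny internal 'image'."""
    # Assumption: danger is weighted more heavily than food, because
    # fleeing tends to trump feeding for survival.
    return {"food": food_signal, "danger": 2.0 * danger_signal}

def decide(image: dict) -> str:
    """Answer the 'first question': which sensory datum matters most?"""
    if image["danger"] > image["food"]:
        return "swim away"    # flee the detected source
    return "move toward"      # approach the detected food

if __name__ == "__main__":
    # Conflicting signals: strong food reading, moderate danger reading.
    image = form_image(food_signal=0.8, danger_signal=0.5)
    print(decide(image))  # weighted danger (1.0) outweighs food (0.8): "swim away"
```

The point of the sketch is only that an intermediate representation (the 'image') sits between stimulus and reaction, which is where the weighing of alternatives, and hence something like a question, first becomes possible.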
 
As soon as all of that was in place, living things were able to form a more detached image of the surrounding world for the first time in biological history. It was more detached, because there would have been some time for questioning, and reflecting on, what course of action to take between the incoming stimulus and the subsequent reaction. This image would have been the first form of consciousness.
 
Ever since that time, any change in such an image-forming regime that improved the harvesting of matter and energy favored the long-term survival of that species. This would have included the storing of data in a rudimentary memory bank, as well as better control over organs that made the organism move in the desired direction.
 
Multicellular organisms may have developed along similar lines. A few cells that served as sensors would have become connected to other cells that were able to process information and send commands to a tail. As soon as such a situation was in place, multicellular complexes would have evolved brains, map-making, and consciousness, as well as controlled behavior - ultimately leading to organisms such as you and me.
 
In this process, over many millions of years, some of these organisms developed more, and more refined, sensors; larger and more intricate brains; and more limbs that could produce effective action. In doing so they became able to ask, and answer, an increasing number of questions, and to turn the answers into effective action.
 
As long as such images and their effects on the organism's emerging behavior improved its chances of survival and reproduction, there must have been a positive reward for achieving reasonably reality-congruent images of the outside world.
 
Evidence found in 2013 CE indicating that the lateral habenula, a decision-making body within the brain, is in evolutionary terms among the oldest parts of the brain, if not the oldest, very much supports this argument.
 
Furthermore, nerve cells connect modern brains with the intestines, where a mostly hidden but remarkably powerful nervous system is located. This much smaller, so-called second brain helps to regulate the intake of matter and energy by monitoring conditions in the intestines.
 
It would not be surprising if the first and second brains had originally emerged as one single tiny brain, and if both of them grew over the course of evolution while becoming physically separated. Yet they remained connected while organisms grew in size and the distance between food intake and digestion increased.
 
The most important sensors for imaging the outside world (eyes, ears, nose) evolved close to the first brain, which became much larger as a result. It is, of course, very inefficient to have these sensors in one's belly, while it is favorable to have them close to the food intake area, because there they can help the organism steer itself toward the desired matter and energy, as well as flee from danger, or fight it.
 
As a result, the evolution of brains led to heads as we know them today: movable and loaded with sensors that can detect signals from as many directions as possible. These important sensors are all situated close to the brain and closely connected to it, while they are also very close to the mouth.
 
The intestinal brain, by contrast, remained much smaller, so that it escaped the attention of scientists until very recently, even though its function has long been recognized in daily life.
 
This little intestinal brain still fulfills vital functions, most notably making decisions about matter and energy flows, such as when, and when not, to eat. It therefore makes perfect sense to pay attention to one's gut feeling when taking decisions, and it is not surprising that tension concerning one's well-being can be expressed as intestinal stress.
 
If this model is reasonably correct, our brains have acquired their ability to recognize and define problems (in other words, to ask questions and answer them) because, during their emergence and development, they have always been connected to a number of sensors that help them detect and define the outside world.
 
This is what most computers lack. And those that are connected to sensors, such as self-driving cars or computers that manage complex systems like factories or rocket launches, are preprogrammed to execute certain tasks in response to the sensory input. But these computers are not yet able to recognize and define new problems and formulate efficient answers to them.
 
In other words, current computers can think like humans only to a very limited extent. They excel at managing well-defined systems, but they are very poor at recognizing and defining new problems. That is why, I think, human brains will keep outperforming computers for some time to come, even though in daily life humans will have to deal ever more with such 'thinking' machines.
 
In sum, the evolutionary trajectory of computers has been very different from that of living beings, and may remain very different. These thinking machines are extraordinarily good at executing certain tasks, which is why they were built in the first place. But unless their evolutionary trajectory becomes similar to that of brainy animals, they will not become very similar to such living things either, or so it seems to me.
 
Another aspect of this discussion that may not yet have received the attention it deserves is the question of who will be liable for any damage caused by computers. I am inspired to think along these lines by the work of my brother Jaap Spier, who is an internationally recognized legal expert in such matters.
 
As long as the answer to the question of who is liable for damage caused by computers is: “humans,” then humans will want to keep control over computers, simply because if they do not do so while they still own them, it may cost them a lot of money. This will pose clear limits to the autonomy that computers may achieve.
 
Surely, computers can become dangerous to humans, in the hands of other humans, as a result of their powerful problem-solving abilities, depending on how the problems are defined and on the amount of control humans want to exercise over these machines.
 
This applies particularly, but not exclusively, to wartime situations. But in peacetime, too, problem solving by computers may well lead to unintended consequences. Yet as long as humans are held liable for the consequences by other humans, they will seek to maintain control over their thinking machines.
 
The recent adventures of the Tay bot launched by Microsoft provide a case in point. But even the less adventurous virtual assistant Siri from Apple may not yet have reached the thinking prowess that some of us may fear.
 
This became evident (if evidence were needed) when my son Louis and I recently asked Siri: “What do you think about machines that think?” This was Siri's answer: “I think, therefore I am. But let’s not put Descartes before the horse.”