Opinion

Defining Life and Learning in the Age of Artificial Intelligence

By Marc Tucker — January 04, 2018

Over the holiday I found a new book I’d been looking for, Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence. Tegmark is a professor of physics at MIT and president of the Future of Life Institute. He looks into the future as it might be shaped by developments in artificial intelligence, and he finds the possibilities just as intriguing as the optimists do and just as frightening as the pessimists do. The future, he says, is malleable. We need to understand the possibilities, make some choices and then start fashioning the world we want, because if we don’t, someone else will, and we might not like the result.

It is all very interesting, but the part that really grabbed me was the introductory material in which Tegmark sets the stage by giving the reader a very short course in computing, learning, and the computational basis of learning. It is not the course in learning you might have gotten in your school of education.

My aim here is to give you just enough of a whiff of what he does to get you interested in reading his book. Don’t worry. This is not some massive tome that will put you to sleep. He has a very informal, engaging style. And he is a great teacher.

Before Tegmark tells us what he thinks learning is, he tells us what he thinks life is. Life, he says, is a “self-replicating information processing system whose information (software) determines both its behavior and the blueprints for its hardware.” Tegmark knows, of course, that all definitions of life are controversial and knows as well that his is, to say the least, unconventional, but there is a method to his madness.

The simplest version of such a system is what Tegmark labels Life 1.0. He goes on to describe Life 2.0 and 3.0. When you see what he means by 2.0 and 3.0, you will see why he defined life the way he did.

Consider any elementary organism that all would agree is alive, say an amoeba. It is a cellular structure that contains DNA, which encodes the information required to duplicate the organism. Those instructions not only enable it to produce a copy of itself; they also determine the behavior of the amoeba. Over time, the DNA will be altered by radiation and other factors, so the behavior of the amoeba as encoded in its DNA will change. As with any other simple life form, some of these changes will result in beneficial adaptations and others in less beneficial ones. As time goes on, the amoebas with the beneficial adaptations will survive and prosper; those with the less beneficial adaptations, not so much. This is a very slow process, taking many generations to play out.
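
To see that mechanism in miniature, here is a toy sketch of mutation plus selection. It is entirely my own illustration, not Tegmark’s: behavior is a fixed “genome,” and the only improvement happens across generations, never within one lifetime.

```python
import random

# Toy illustration of Life 1.0: behavior is a fixed genome, and the only
# "learning" is mutation plus selection playing out across generations.
TARGET = [1, 1, 1, 1, 1, 1, 1, 1]  # stand-in for a well-adapted behavior

def fitness(genome):
    # How many positions match the behavior the environment rewards.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Copying errors: each bit occasionally flips, like radiation damage.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]          # the better adapted prosper
    population = survivors + [mutate(g) for g in survivors]

print(max(fitness(g) for g in population))  # approaches 8 over generations
```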

As the eons pass by, life grows more complex. Why? Because, Tegmark says, evolution rewards life forms that are complex enough to predict and exploit regularities in the environment. As these life forms become more complex and varied, they make the environment even more complex, providing both stimulus and support for even more complex life forms to evolve.

Very early in this process, evolution rewarded those life forms that were able to sense their environment and respond to it and to changes in it. Artificial intelligence researchers refer to entities that can do this as “intelligent agents.” Intelligent agents can collect information from the environment, process it, and then decide how to act back on the environment.
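
In code, that loop looks roughly like the sketch below. The toy environment and all the names in it are my own illustration, not anything from the book:

```python
# A minimal "intelligent agent" loop: sense the environment, process the
# reading, act back on the environment.
def sense(environment):
    return environment["temperature"]

def decide(reading):
    # In Life 1.0 this mapping is fixed at birth: pure stimulus-response.
    return "move_away" if reading > 30 else "stay"

def act(environment, action):
    if action == "move_away":
        environment["temperature"] -= 5  # moving changes the agent's situation

environment = {"temperature": 42}
for _ in range(5):
    act(environment, decide(sense(environment)))
print(environment["temperature"])  # 27: the agent settled somewhere tolerable
```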

In simple organisms, this whole loop is encoded in the DNA. The physical properties of the organism are all controlled by the DNA it inherits, and the processes it uses to convert the information coming from its sensors into a reaction are likewise determined entirely by that DNA. No thought or consideration is required; it is automatic. One can think of the physical properties of the amoeba or the blue crab as its hardware, and of the instincts that program its behavior as its software. Both are entirely inherited.

What I have just described is what Tegmark calls Life 1.0. Learning—of a sort—and adaptation take place here, but only in the changes that unfold over long periods as the DNA of these creatures changes. No single creature learns anything.

And then we have you and me and other creatures like us, in which the hardware—our physical selves—evolves over long stretches of time, but the software is constantly developing and is not limited by the structure of our DNA. The difference is that the organism can learn within its own lifetime, without requiring any change to its DNA.
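
A minimal sketch of the difference, again my own illustration rather than Tegmark’s: the same sense-and-act loop as before, but now the rule itself is rewritten by experience within a single lifetime.

```python
# Life 2.0 style: the response rule is data the organism can rewrite
# from experience within one lifetime; no change to any "DNA" required.
class LearningAgent:
    def __init__(self):
        self.threshold = 50.0  # the inherited, factory-default setting

    def decide(self, reading):
        return "move_away" if reading > self.threshold else "stay"

    def learn(self, reading, got_hurt):
        if got_hurt:
            # A painful lesson: treat this reading as dangerous from now on.
            self.threshold = min(self.threshold, reading - 1)

agent = LearningAgent()
print(agent.decide(35))         # "stay": the inherited rule sees no danger
agent.learn(35, got_hurt=True)
print(agent.decide(35))         # "move_away": the software updated itself
```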

This, of course, is not a characteristic limited to Homo sapiens. Other creatures, also possessed of brains of various capacities, can learn, too.

But, as Tegmark points out, human brains are in a class by themselves because they continue to develop well into the teenage years. The capacity of our software is not limited by the size of our brain at birth. “The synaptic connections that link the neurons in my brain can store about 100,000 times more information,” says Tegmark, “than the brain I was born with.”

This continued development of the brain after birth confers an enormous advantage on human beings. “Your synapses store all your knowledge and skills as roughly 100 terabytes of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download.”
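
Those two figures line up with the 100,000-fold claim above; a quick back-of-the-envelope check (assuming decimal units, which is my reading of the numbers):

```python
# Check that Tegmark's two figures agree, using decimal units
# (1 TB = 1e12 bytes, 1 GB = 1e9 bytes).
synapse_storage = 100e12  # ~100 terabytes of knowledge and skills
dna_storage = 1e9         # ~1 gigabyte encoded in DNA
print(synapse_storage / dna_storage)  # 100000.0, the ~100,000x factor
```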

This, in Tegmark’s computational theory of the stages of life, is Life 2.0. It makes life not only smarter than Life 1.0, but much more flexible. Our capacity to adapt to changing environments and the challenges they bring is not limited to the information that came with us at birth. We can develop far more software than we are born with, which lets us store and process far more information than any other creature on the planet.

But it does not end there, at the level of the individual. Our brains come with an information-processing module for natural language that gives each of us access to the data and the processing capacity of other humans. That enormous advantage was compounded many times over by books, computers and computer networks, enabling an explosion of information and information-processing capacity for our kind as a collective. The information stored in one brain can be copied to another, outliving any one person. The software module that enables us to read and write makes it possible to store and transfer vastly more information than any one person could memorize.

“That flexibility has enabled Life 2.0 to dominate Earth. Freed from its genetic shackles, humanity’s combined knowledge has kept growing at an accelerating pace as each breakthrough enabled the next.”

One could argue, Tegmark points out, that what we really have now is Life 2.1, inasmuch as our software has enabled us to redesign some of our hardware in the form of such things as artificial limbs, joints and pacemakers.

But all that is nothing compared to what Tegmark describes as Life 3.0. That is the stage in which the software becomes smart enough to completely redesign its hardware and, just for kicks, smart enough to build it, too. In that future, Tegmark says, life is finally the “...master of its own destiny, free from its evolutionary shackles.”

Ah, you say, this is nuts; Tegmark has gone off the deep end. Do you really think so? Artificial intelligence programs have now been developed that are capable of writing artificial intelligence programs. These programs are given a goal and the capability of learning whatever they must learn to accomplish it. Programs of this sort are coming up with designs for achieving these goals that no human has ever thought of. This is not the future; this is now.
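
Real systems of this kind, such as neural architecture search, are vastly more sophisticated, but the shape of the idea fits in a few lines. The toy random search over tiny programs below is purely my own illustration, not a description of any actual system:

```python
import random

# Goal-directed design search in miniature: the search discovers a small
# program matching a target behavior that no human wrote directly.
TERMS = ["x", "1", "2", "3"]
OPS = ["+", "-", "*"]

def random_program():
    parts = [random.choice(TERMS)]
    for _ in range(2):
        parts += [random.choice(OPS), random.choice(TERMS)]
    return " ".join(parts)  # e.g. "x * x + 2"

def goal(x):
    return x * x + 2  # the behavior the search is asked to achieve

def error(program):
    return sum(abs(eval(program, {"x": x}) - goal(x)) for x in range(10))

random.seed(0)
candidates = [random_program() for _ in range(5000)]
best = min(candidates, key=error)
print(best, error(best))  # typically finds "x * x + 2" exactly (error 0)
```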

These musings lead Tegmark to an interesting question: What, exactly, is human consciousness? This is not a new question; it is as old as philosophy. But in the context of intelligent systems design, it takes on a new twist. Military robots are being built today that are capable of making their own decisions about whom to kill and whom not to kill. Is it ethical to kill a robot and, if so, under what circumstances? What an absurd question, you say. The thing is only metal, plastic and electrical circuits. Of course it is OK to kill a robot, whether or not it is chasing you down with a murderous glint in its eye.

But what if that robot has emotions? That’s ridiculous, you say. Robots don’t have emotions.

But artificial intelligence researchers are working right now to enable robots to detect and respond to human emotions, and, in time, it is quite likely that it will be impossible to distinguish their reactions from those of humans who have emotions. It may be up to us to determine whether Life 3.0 has emotions. Or maybe not. Maybe the machines that design Life 3.0 will make that decision.

Do I have you yet? Will you buy the book? I have only given you a little taste of what is in it.


The opinions expressed in Top Performers are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.