
Apes to Androids: Is Man a Machine as La Mettrie Suggests?

As an Enlightenment-era materialist, Julien Offray de La Mettrie believed that a physiological understanding of the body held more insight into the universe than philosophical musings about the soul ever could. After careful anatomical study, La Mettrie boldly concluded in Man a Machine that the entirety of nature is composed of a single substance, merely varied among organisms. Each human being is a machine, like every other part of the natural world, driven by instinct and experience. Rightly anticipating a hostile public reaction, La Mettrie published his ideas anonymously. A glance into science fiction reveals why materialism is so unnerving. 21st-century audiences still cringe at artificial intelligence in robots for the same reason 18th-century audiences cringed at a materialist definition of humanity: both La Mettrie and modern science fiction threaten our sense of human uniqueness, not only from the rest of the universe but even from other human beings. Any spiritual definition of the soul is lost if we, like the rest of the universe, are purely matter-driven machines. Comparing humans to both animals and artificially intelligent robots raises an important question: what does it mean to be human?

La Mettrie was intensely interested in this question. In order to discover humanity’s relationship to the rest of the universe, La Mettrie believed one must avoid prejudice at all costs and rely only on “experience and observation” (La Mettrie 88). As a physician-philosopher, he was wary of any fellow philosophers who ignored physical evidence to instead theorize on that which cannot be seen. La Mettrie dismisses his colleagues’ immaterial theories in one swift sentence: “One can and one even ought to admire all these fine geniuses in their most useless works, such men as Descartes, Malebranche, Leibniz, Wolff and the rest, but what profit, I ask, has any one gained from their profound meditations, and from all their works?” (La Mettrie 90). From his scientifically neutral studies, it is clear that La Mettrie was a proponent of enlightened rationality. He was seeking the untainted truth about humanity, and he unflinchingly shared his discoveries in the process, disturbing as they may have been.

One of La Mettrie’s more controversial points is that humans are remarkably similar to animals. He states that “the form and the structure of the brains of quadrupeds are almost the same as those of the brain of man…with this essential difference, that of all the animals man is the one whose brain is largest, and, in proportion to its mass, more convoluted than the brain of any other animal” (La Mettrie 98). Modern psychology and neuroscience hold that the learning process of humans is comparable to that of animals, only far more complex. La Mettrie, who was very advanced for his time, believed that “man has been trained in the same way as animals” (La Mettrie 103), namely through symbols and “imagination” (La Mettrie 107) in the brain. He maintained that an ape could learn a human language if it “were properly trained” (La Mettrie 103), a claim borne out to some extent by Koko, the gorilla who learned sign language. Some scientists claim that Koko only responds to her trainers because of the operant conditioning of receiving treats (Wikipedia, “Koko”). Yet even this criticism coincides with La Mettrie’s theories: he held that animals and humans alike are conditioned by the environment to respond in certain ways. Once again ahead of his time, La Mettrie anticipated Albert Bandura’s social learning theory by two hundred years, affirming that “we catch everything from those with whom we come in contact; their gestures, their accent, etc.” (La Mettrie 97).

Our roots in the animal kingdom are now widely accepted by the scientific community, yet many people still resist the comparison. One of the primary explanations for this is pride: “these proud and vain [humans], more distinguished by their pride than by the name of men however much they wish to exalt themselves, are at bottom only animals and machines which, though upright, go on all fours” (La Mettrie 143). Though we distance ourselves from nature by way of civilization, La Mettrie maintains that “man is not moulded from a costlier clay; nature has but one dough, and has merely varied the leaven” (La Mettrie 117). It is clear that we are not very different from animals physically, so this statement alone is not threatening. However, when paired with La Mettrie’s purely materialistic understanding of human thought and emotion, it implies that the only difference between human and animal is a slight physical alteration. La Mettrie contends that if humans are endowed with a soul, animals must necessarily have a soul as well, though human pride would “deny its immortality” (La Mettrie 146). A common assumption is that only the human soul is given the “natural law” of knowing right from wrong. Despite the fact that the nature of another being’s soul is unknowable, people firmly believe that “man alone has been enlightened by a ray denied other animals” (La Mettrie 115). Is this desire to distinguish humans from animals a result of the religious belief that humans are created in the likeness of God, or is this religious belief a result of pride? Is the idea of the “soul” itself another product of pride, a figment of humanity’s imagination?

These questions illustrate why materialism is so threatening. According to La Mettrie, if a spiritual world outside the material world exists, it is unknowable, and in turn, any religious definition of the soul is discounted. La Mettrie says that “the soul is therefore but an empty word, of which no one has any idea, and which an enlightened man should use only to signify the part in us that thinks” (La Mettrie 128). La Mettrie’s theory holds that the soul is completely reliant upon the body, and he provides several examples. He points out that “the soul and the body fall asleep together” (La Mettrie 92) and that “in disease the soul is sometimes hidden, showing no signs of life” (La Mettrie 90). Because the soul is linked to matter and all living things are made up of the same matter, it appears that nothing spiritual separates us from the rest of the universe. If La Mettrie is correct in concluding this, what defines us as humans? A seemingly easy answer would be our intelligence and emotional sophistication. However, a look into the future of robotics rules out any easy definition of human uniqueness.

One of the most prominent themes of science fiction is the “process of remaking, reshaping, perhaps even perfecting the self…[and] finally replacing the self” (Telotte 161). Androids permeate science fiction film and literature, and in many cases, they are indistinguishable from the humans around them. As the lines blur between human and android, we are confronted with the same unsettling question as La Mettrie’s materialistic theories: what separates humanity from the rest of the universe? La Mettrie concluded that man is a machine. Can a machine be man?

Interestingly enough, two of the first true automata were built about a decade before La Mettrie wrote Man a Machine, and La Mettrie references both of them: Jacques de Vaucanson’s life-sized flute player and his mechanical digesting duck. These machines caused a major stir and first planted the idea that humans might someday be replicated. La Mettrie himself believed this to some degree; he saw “a talking man [as] a mechanism no longer to be regarded as impossible” (La Mettrie 141). These beginnings of robotics no doubt made La Mettrie even more confident in saying that “the human body is a machine which winds its own springs” (La Mettrie 93).

As computer technology progresses, the notion that the human mind may one day be recreated becomes less and less a matter of science fiction. Isaac Asimov wrote in 1967 that “The only difference between a brain and a computer can be expressed in a single word: complexity” (Asimov 90). Were La Mettrie alive today, he would likely draw the same conclusion about artificial intelligence, just as he saw the difference between animal and human intellect as resting solely on brain complexity. What makes artificial intelligence simultaneously fascinating and horrifying is its limitlessness. Whereas our brains are bound by organic gray matter, a computer “brain” is confined only by the number of circuits on its chips and the speed of its processors, both of which are constantly improving. With only these material limitations, Asimov contends, it is only a matter of time before the human brain is replicated by computers:

How long will it take to build a computer complex enough to duplicate the human brain? Perhaps not as long as some think. Long before we approach a computer as complex as our brain, we will perhaps build a computer that is at least complex enough to design another computer more complex than itself. This more complex computer could design one still more complex and so on and so on and so on. In other words, once we pass a certain critical point, the computers take over and there is a “complexity explosion.” In a very short time thereafter computers may exist that not only duplicate the human brain—but far surpass it. (91)

The advancement of computer technology makes human-like androids far from outlandish. Current work on androids in Korea (Wikipedia, “Android”) makes the replicants of Blade Runner, the mecha of A.I., and the android of Bicentennial Man seem possible within a few decades. These science fiction androids are all mistaken for human beings, perhaps a commentary on how limited our knowledge of others really is. Even our self-knowledge is somewhat limited. In Philip K. Dick’s Impostor, the main character does not know that he is an android until the end, and neither do the other characters or the reader. Convinced he is human, he resists the police who arrest him for being a robotic impostor: “I am Olham…I know I am. But I can’t prove it” (Dick 81). When androids look, act, and think exactly like human beings, how would humans prove their own humanity?

La Mettrie distinguishes us from the rest of the animal kingdom by our relatively vast intelligence. But how do we define intelligence? If it is the ability to solve complex mathematical equations, computers have been smarter than us for quite some time. It would seem that intelligence involves much more than the input, processing, and output of data. However, La Mettrie and other philosophers of the Enlightenment era were of the mind that “all the faculties of the soul can be correctly reduced to pure imagination in which they all consist. Thus judgment, reason, and memory are not absolute parts of the soul, but merely modifications of this kind of medullary screen upon which images of the objects painted in the eye are projected” (La Mettrie 107). The mind, then, is a complex computer, and our DNA is its initial coding. However, the human “program” is always being re-coded, as each byte of valuable sensory information is cross-checked against previous knowledge and stored in memory. Ultimately, these sensory experiences change our thought processes and perceptions of the world.

Just as human thought can be reduced to neuromechanics, the modern psychological understanding is that emotion, too, can be reduced to a mechanical system. In science fiction, however, emotion is often construed as the antithesis of the machine. Narratives like RoboCop, in which the man-machine hybrid’s emotions eventually triumph over his programming, “[emphasize] the importance of feelings or emotions in understanding, expressing, and maintaining our sense of humanity” (Telotte 172). Emotion in robots is often deemed a flaw in their functioning, as it is contrary to the sterile reasoning expected of them. In EPICAC, a supercomputer falls in love with a woman and is so consumed with her that he becomes completely worthless at calculating data. In a way this parallels our brains’ priorities; we remember emotional experiences in far greater detail than the contents of a math textbook. In R.U.R., robots experience “Robot’s cramp,” in which they “suddenly sling down everything they’re holding, stand still, gnash their teeth—and then have to go to the stamping mill” (Čapek 45). This emotional defiance is even suggested to be the soul of the robots: “Do you think that the soul first shows itself by a gnashing of teeth?” (Čapek 45). In science fiction narratives, emotion seems always to be correlated with individuality and free thought, two exclusively human traits unbecoming of the subservient robot. Humanity recoils at the thought that robots could one day enter the realm of emotional complexity. Once again, pride is at work as the mechanization of emotion threatens to cheapen the human soul. The “flesh fair” of A.I., in which androids are tortured before a stadium of cheering humans, encapsulates the fear of robots who slap humanity in the face by perfectly mimicking our emotions.

But are these androids merely mimicking emotion, or do they have genuine feelings? The line between truth and imitation is blurred in science fiction. Take 2001: A Space Odyssey for example. The HAL 9000 computer is treated as another member of the spaceship’s crew, and when asked whether HAL feels emotion, the crew member can’t give a definite answer. Though in the beginning HAL seems to display as much or more emotion than the men on board, his emotionless, calculating nature becomes clear as he transforms into the villain. Compassion defines the humans in this narrative. Though animals may possess a form of instinctual compassion, it seems to be a result of higher-level brain functioning. An evolutionary perspective suggests that compassion serves a purpose, or it would not have developed. The eventual demise of HAL is symbolic of the idea that compassion is a survival trait in this universe.

Irrationality also defines the humans in 2001: A Space Odyssey, while HAL is deemed “perfect.” Does rationality signify perfection in a being? Enlightenment-era thinkers suggest it is a major component. La Mettrie writes that “man is the most [perfect] example of organization in the universe” (La Mettrie 140). If it is rational thought that sets humans apart from animals, it is ultra-rational decision making that sets computers apart from humans. We find ourselves in the middle of a continuum, with one end composed of purely instinctual animals and the other of purely logical computers. It seems unlikely that pure rationality is equivalent to perfection. Though HAL is called “perfect,” he is far from a perfect being, at least from a human perspective. Perhaps perfection lies in a combination of natural instinct to relate to the world and free thinking to understand it.

But what if perfection is immortality? Androids pose this new dilemma: if built correctly and maintained, they will never die. The human characters in science fiction narratives often revolt against them out of jealousy. As the android Joe in A.I. says, “When the end comes, all that will be left is us. That’s why [humans] hate us.” One’s initial reaction may be to say that robots are never alive, since death naturally accompanies life. But if these androids are conscious and have inner workings just as intricate as the “large watch” (La Mettrie 141) of the human body, how can we deny their life? Perhaps we can be separated from androids only by our mortal organic shell. In Bicentennial Man, to become a certified human in the eyes of the law, the android trades his mechanical organs for organic ones, knowing full well that he will die. The relative frailty of human tissue, though beautifully complex, is why some people abhor materialism; the idea that the transcendent soul could die from a single bullet wound is incredibly unsettling. While La Mettrie believed the soul is dependent upon the body, he did not deny an afterlife, because “we know absolutely nothing about the subject…Never one of the most skillful [caterpillars] could have imagined that it was destined to become a butterfly. It is the same with us” (La Mettrie 147).

Still, a fear that there is no afterlife compels humans to play god by extending their earthly lives. In contrast to the android of Bicentennial Man becoming human, humans in our society are becoming more mechanical. Technological developments like “biomedical engineering, mechanical prostheses, and readily available cosmetic surgery…promise to reengineer the human” (Telotte 174). Will we one day conquer even mortality, like the aliens in 2001: A Space Odyssey who freed themselves from their organic bodies and transferred their minds to monoliths? At what point do we sell our human soul to extend life? If one’s definition of the soul is simply “the part in us that thinks” (La Mettrie 128), then a living brain in a jar would still possess its soul. By showing the consequences of carelessly “playing god,” science fiction condemns it, warning audiences that “in the face of scientific possibility, ethical questions such as those about free will or the soul are usually elided, a point made over and over in the many Frankenstein movies” (Telotte 167). However, were La Mettrie alive today, he would likely promote human engineering. In reference to Johann Conrad Ammann’s methods of teaching the deaf to speak, La Mettrie says, “He who has discovered the art of adorning the most beautiful of the kingdoms [of nature] and of giving it perfections it did not have, should be rated above an idle creator of frivolous systems, or a painstaking author of sterile discoveries…let us not limit the resources of nature; they are infinite” (La Mettrie 102).

Our ability to extend and even create consciousness through robotics is frightening, because to some it “[proves] that God [is] no longer necessary” (Čapek 37). The manipulation of life, traditionally the realm of God, is now open to humans. However, our technology still falls within the laws of the universe, which allows for a progressive view of God as a passive architect. The analogy between humans creating robots and God creating life supports the design argument, in that the robots would ultimately be the handiwork of a higher being. On the other hand, if Asimov’s “complexity explosion” becomes reality, it would support the progressive view that God is no more complex than the universe and has no direct control over its workings.

Though these questions about the nature of God and the universe seem essential to understanding what it means to be human, La Mettrie says it is “foolish to torment ourselves so much about things which we can not know, and which would not make us any happier were we to gain knowledge about them” (La Mettrie 122). Though La Mettrie did not hold any traditional religious beliefs, his simple materialist values in many ways promote Christian morality: “Full of humanity, [a materialist] will love human character even in his enemies…they will be but mis-made men” (La Mettrie 148). La Mettrie is content and confident in his philosophy. He dismisses all of theology, metaphysics, and immaterial philosophies “as weak reeds” (La Mettrie 149) against the solid oak of the human body.

La Mettrie, unlike many theologians and philosophers of his time, accepts that humanity may have no purpose to its existence. He asks, “Who can be sure that the reason for man’s existence is not simply the fact that he exists?” (La Mettrie 122). Humans tend to invent meanings for life, whether or not such meanings exist. Most of us are unsatisfied with the idea that we just “live and die, like the mushrooms that appear from day to day” (La Mettrie 122), and we are consoled by notions that there is a greater purpose. This brings us to another uniquely human characteristic: only we “believe in that which cannot be seen or measured” (A.I.). As much as reason tells people to rely only on what can be seen, as the Enlightenment materialists did, most people hold some belief in a higher power. Everything from the vastness of space down to the intricacy “of a finger, of an ear, of an eye” (La Mettrie 123) suggests a higher dimension of the universe, undetectable to the human senses and incomprehensible to the human mind.

The uniqueness of the human mind itself seems to signify a deeper purpose to life. If we are just a product of natural selection, why have our brains evolved to the point that we can ponder these deep questions? Exploring the universe or composing a symphony is hardly critical to reproduction. It seems “God hasn’t the least notion of modern engineering” (Čapek 38). Unlike the robot factory that leaves souls out of its robots because they “would increase production costs” (Čapek 45), nature has bestowed us with unique gifts. Though a computer could potentially “be as creative as we” (Asimov 91), its creation would itself be further proof of the mind’s power and transcendence. Human superfluity in this highly mechanical world could one day be the pathway into that which cannot be seen. For now, though, we may have to accept that all we can know is what we experience. Since scientific study consistently points toward La Mettrie’s materialist theory, it is safe to say that the human is in many ways a machine, like ape and android. But the infinite possibilities of this intricate machine are only beginning to unfold.

A.I.: Artificial Intelligence. Dir. Steven Spielberg. Screen story by Ian Watson, based on a short story by Brian Aldiss. Warner Bros., 2001. Film.

Asimov, Isaac. “The Thinking Machine.” Science Fact/Fiction. Ed. Edmund J. Farrell, Thomas E. Gage, John Pfordresher, and Raymond J. Rodrigues. Glenview, Illinois: Scott, Foresman and Company, 1974. 90-91.

Čapek, Karel. “R.U.R.” Science Fact/Fiction. Ed. Edmund J. Farrell, Thomas E. Gage, John Pfordresher, and Raymond J. Rodrigues. Glenview, Illinois: Scott, Foresman and Company, 1974. 33-79.

Dick, Philip K. Robots, Androids, and Mechanical Oddities: The Science Fiction of Philip K. Dick. Ed. Patricia S. Warrick and Martin H. Greenberg. Illinois: Southern Illinois University Press, 1984.

La Mettrie, Julien Offray de. Man a Machine. La Salle, Illinois: Open Court, 1961.

Telotte, J.P. Science Fiction Film. Cambridge: Cambridge University Press, 2001.

Wikipedia. “Android.” 9 May 2006. Wikimedia. 9 May 2006 <http://en.wikipedia.org/wiki/Android>.

Wikipedia. “Koko (Gorilla).” 9 May 2006. Wikimedia. 9 May 2006 <http://en.wikipedia.org/wiki/Koko_%28gorilla%29>.