Philosophy

Artificial Intelligence cannot avoid philosophy.
If a computer program is to behave intelligently in the real world, it must be provided with some kind of framework into which to fit particular facts it is told or discovers. This amounts to at least a fragment of some kind of philosophy, however naive.
- John McCarthy
Mathematical Logic in Artificial Intelligence. Daedalus 117(1): 297-310, Winter 1988

photo of John McCarthy

Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programming? Do human beings occupy a privileged place in the universe? "Is it reasonable to ascribe consciousness to a droll and well-mannered aunt, yet deny it in a robot that behaves like one?" How do we acquire knowledge of the world? What do our languages tell us about our minds or the world? What is knowledge? What is a proof? What is art? Most of the sources listed below discuss issues in the philosophy of mind, epistemology, and the philosophy of language. Social and ethical considerations are certainly related, but these are listed separately, on the Social and Ethical Implications page. Similarly, many works of science fiction deal with philosophical and social issues, and these are listed separately, on the Science Fiction page.


Good Places to Start

Q&A With Zain Verjee. Transcript of show that aired February 4, 2002 on CNN International with participants Rodney Brooks, Rolf Pfeifer, John Searle, Doug Lenat, and Dick Stottler. "Searle: ... And I have no objections to artificial intelligence technology. Where I draw the line is when they say, well, now we have created a thinking machine, or we've created a conscious machine. Now, I'm glad to see Rolf doesn't say that, but an awful lot of people in AI do."

The Ethics of Creating Consciousness. The Connection radio program hosted by Dick Gordon, with guests: Marvin Minsky, Brian Cantwell Smith, and Paul Davies. From WBUR Boston and NPR. June 13, 2005. "Next month, IBM is set to activate the most ambitious simulation of a human brain yet conceived. It's a model they say is accurate down to the molecule. No one claims the 'Blue Brain' project will be self-aware. But this project, and others like it, use electrical patterns in a silicon brain to simulate the electrical patterns in the human brain -- patterns which are intimately linked to thought. But if computer programs start generating these patterns -- these electrical 'thoughts' -- then what separates us from them? Traditionally human beings have reserved words like 'reasoning,' 'self-awareness,' and 'soul' as their exclusive property. But with the stirring of something akin to electronic consciousness -- some argue that human beings need to give up the ghost, and embrace the machine in all of us." Links to the broadcast are provided.

"It's a three-part question. What is consciousness? Can you put it in a machine? And if you did, how could you ever know for sure?"
- from Kenneth Chang's Can Robots Become Conscious?

Philosophical Encounter. A symposium organized by Aaron Sloman at IJCAI-95, with speakers John McCarthy and Marvin Minsky. Two papers are available online:

  • A Philosophical Encounter. By Aaron Sloman, School of Computer Science and Cognitive Science Research Centre, University of Birmingham, UK. In Proceedings of the 14th International Joint Conference on AI, Montreal, August 1995.
  • Artificial Intelligence and Philosophy. By John McCarthy, Computer Science Department, Stanford University, Stanford, CA. ("The present version is somewhat improved. I would like to give better references to work by philosophers that I consider to have positively influenced AI research, but it may take some time to formulate this.")
  • Also see: Aaron Sloman interviewed by Patrice Terrier for EACE Quarterly. August 1999; updated 11 July 2002. "Those who are ignorant of philosophy are doomed to reinvent it badly."

Spiritual Robots Symposium: Will Spiritual Robots Replace Humanity by 2100? A series in eleven parts. Made available online by Stanford's Symbolic Systems Program and TechNetCast. "In 1999, two distinguished computer scientists, Ray Kurzweil and Hans Moravec, came out independently with serious books that proclaimed that in the coming century, our own computational technology, marching to the exponential drum of Moore's Law and more general laws of bootstrapping, leapfrogging, positive-feedback progress, will outstrip us intellectually and spiritually, becoming not only deeply creative but deeply emotive, thus usurping from us humans our self-appointed position as 'the highest product of evolution'. Reasonable fact or complete fiction? Expert panel assembled by Doug Hofstadter explores the issue. With presentations by Frank Drake, Doug Hofstadter, John Holland, Bill Joy, Kevin Kelly, John Koza, Ray Kurzweil, Ralph Merkle and Hans Moravec." See/hear/read what they have to say via video, audio and text.

  • Also see our Ethics page for the articles related to the Bill Joy - Ray Kurzweil "dialogue".
"Growing impatient with me as I pressed [Cynthia Breazeal] for a definition of 'alive,' she said: 'Do you have to go to the bathroom and eat to be alive?'"
- from Programming the Post-Human [p.67]

Humans and their Machines. NPR Science Friday (April 26, 2002). "Researchers at the MIT Artificial Intelligence Lab are working to create robots as intelligent and sociable as humans. At the same time, medical advances are making humans more robot-like, with mechanical hearts and working artificial limbs. In this hour, we'll talk with the participants of the First Utah Symposium in Science and Literature about the relationship between humans and machines - and just what it means to be human." Listen to Ira Flatow, anchor of Talk Of The Nation: Science Friday, interview Rodney Brooks, Anne Foerst, and Richard Powers.

Sentience: The next moral dilemma. By Richard Barry. ZDNet UK. (January 24, 2001). "If they are right, one day man will give life to a new race of intelligent sentient beings powered by artificial means. If we can, for argument's sake, agree that this is possible we should consider how a sentient artificial being would be received by man and by society. Would it be forced to exist like its automaton predecessors who have effectively been our slaves, or would it enjoy the same rights as the humans who created it, simply because of its intellect?"

AI and Philosophy. From Chapter One (available online) of George F. Luger's textbook, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th Edition (Addison-Wesley; 2005). "In Section 1.1 we presented the philosophical, mathematical, and sociological roots of artificial intelligence. It is important to realize that modern AI is not just a product of this rich intellectual tradition but also contributes to it. For example, the questions that Turing posed about intelligent programs reflect back on our understanding of intelligence itself. What is intelligence, and how is it described? What is the nature of knowledge? Can knowledge be represented? How does knowledge in an application area relate to problem-solving skill in that domain? How does knowing what is true, Aristotle's theoria, relate to knowing how to perform, his praxis? Answers proposed to these questions make up an important part of what AI researchers and designers do."

"I think many passionate researchers in artificial intelligence are fundamentally interested in the question of Who am I? Who are people? What are we? There's a sense of almost astonishment at the prospect that information processing or computation, if you take that perspective, could lead to this. Coupled with that is the possibility of the prospect of creating consciousnesses with computer programs, computing systems some day. ... Is it possible - is it possible that parts turning upon parts could generate this?" - Eric Horvitz on The Charlie Rose Show

What is Consciousness? This video program is part of the USC Presents...Closer To Truth series available from the ResearchChannel ("a non-profit organization founded in 1996 by a consortium of leading research universities, institutions and corporate research centers dedicated to creating a widely accessible voice for research through video and Internet channels"). Panelists for this August 8, 2004 program include "David Chalmers, professor of philosophy, co-head, Center for Consciousness, University of Arizona, [and] John Searle, professor of philosophy, University of California, Berkeley."

Readings Online

Daniel Dennett. Interviewed by Harvey Blume. The Atlantic Unbound (December 9, 1998). "As posed by Alan Turing, the question of machine intelligence has become a central theme of our time -- and here, as elsewhere, Dennett brings analytic rigor to bear. To the question of whether machines can attain high-order intelligence, Dennett makes this provocative answer: 'The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves.' This is part of Dennett's campaign to overcome the mind-body split bequeathed to us by Descartes, who identified his existence with his self-consciousness (his Cogito) and believed that the thinking portion of the self was attached almost accidentally to the body. Like many in cognitive science, Dennett wants to show that mind and matter are not necessarily opposed."

  • Be sure to see our collection of Interviews for what others have to say about this.
  • Also see: The semantic engineer - Profile: Daniel Dennett. By Andrew Brown. The Guardian (April 17, 2004). "It was at Oxford, too, that he first became interested in computers and the brain. The Oxford philosopher John Lucas had published a paper - still famous - arguing that Gödel's theorem disproved any theory that humans must be machines, and that human thought could be completely simulated on a computer. This is the position Dennett became famous for attacking. ... He's famous among philosophers as an extreme proponent of robot consciousness, who will argue that even thermostats have beliefs about the world. ... 'Conscious robot is not an oxymoron - or maybe it was, but it's not going to be for much longer. How much longer? I don't know. Turing [50 years ago] said 50 years, and he was slightly wrong, but the popular imagination is already full with conscious robots.'"

Being Real. By Judith S. Donath, MIT Media Lab. [To appear in Goldberg, K. (ed.) The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet, MIT Press.] "This essay approaches these issues by focusing on a question with special resonance for both technologists and philosophers: can one tell if the person at the other end of an online discussion is indeed a person? The problem of 'other minds', while of perennial philosophical interest, is not one that normally intrudes upon everyday life. One concludes either that others do indeed have minds (the pragmatic approach) or that the state of others' minds is unknowable (the skeptical approach) and then goes about one's daily business. The advent of computer-mediated communication - and, particularly, the advent of communication between man and machine - has changed this dramatically. Suddenly the question of other minds, as in 'is the being with whom I am speaking in any way conscious or intelligent?' is no longer a rhetorical question asked only in ironic exasperation, but a pressing problem addressed with increasing frequency by ordinary people (i.e. non-philosophers)."

It's the thought that counts. By Dylan Evans. Guardian (October 6, 2001). Will machines ever be able to think for themselves? "There are those, however, who argue that the Turing test is, in fact, too difficult: not only does a machine have to be able to think, they say, but it also has to be able to think like a human. Unless we assume, chauvinistically, that human thought is the only kind there is, we shall have to admit that a machine might be able to think and yet still fail the test - it might simply be thinking in a non-human-like way. To illustrate this point, the philosopher Robert French tells the following story. ..."

Robot: Child of God. By Anne Foerst. "Sometimes computers act as if they are possessed -- does that mean they may have souls? Probably not right now, but Anne Foerst explores the possibility of soulful robots. Originally published March 2000 as a chapter in the book "God for the 21st Century." Published on KurzweilAI.net May 9, 2001." Excerpt: "In the light of this understanding of human specialness, I would have a hard time not to assign personhood to a creature possessing the appropriate degree of complexity. If a being is understood as a partner and friend, it seems hard to take this attribute of value, assigned to it by its friends, away."

Constructions of the Mind--Artificial Intelligence and the Humanities. A special issue of the Stanford Humanities Review 4(2): Spring 1995. Stefano Franchi and Guven Guzeldere, editors. From the Table of Contents, you may link to several full-text articles.

Review of The Philosophy of Artificial Intelligence, edited by Margaret A. Boden (1990). Reviewed by Lee A. Gladwin. AI Magazine 14 (2): 67-68.

Talking Heads...A Review of Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists. By Patrick J. Hayes and Kenneth M. Ford. 1997. AI Magazine 18 (2): 123-125.

Kiss me, you human. Robot Kismet can walk, talk, and make logical decisions. What's the next step in the quest for artificial intelligence? By Stephen Humphries. The Christian Science Monitor. (June 28, 2001) "It's the astonishing growth in real-world artificial-intelligence technology that is forcing thinkers, theologians, philosophers, and the public to reexamine some age-old fundamental philosophical questions with a new vigor and urgency. Is it possible to replicate human consciousness in machines? If so, then what does that tell us about consciousness? What does it mean to be human?"

Philosophy of AI. From Mark Humphrys, Lecturer, School of Computing, Dublin City University. "Philosophy of AI is a history of "big names". The debates are great fun to watch. Here are some big names and my take on them. You don't have to agree with me of course."

Are You There, God? (It's Me, HAL) - Science Meets Spirituality. Techies and theologians are talking about the spiritual implications of the Web, robots and virtual reality -- and they think business leaders should too. By Sari Kalin. Darwin Magazine (December 2001).

  • Sidebars to the article include:
    • Q&A with Anne Foerst. Questions include: How do you start a dialogue between AI and theology? ... Are AI researchers trying to play God? ... Will humanoid robots ever be conscious, and will they ever have souls? ... What are the business implications of your research?
    • Ray Kurzweil Speaks His Mind. Questions include: Will robots ever become conscious? ... How would we ever prove that a machine is -- or isn't -- conscious?

Do Artificial Intelligence Systems Incorporate Intrinsic Meaning? Review by Kendrick Kay. The Harvard Brain (Volume 8; Spring 2001). "Current computer systems can perform seemingly intelligent tasks (e.g., solve problems, play games), but whether these systems possess 'true' intelligence is debatable. Fred Dretske, in his essay 'Machines and the Mental,' argues that because the semantics of the symbols manipulated by machines are defined by humans and can change irrespective of the machine, there is no 'meaning to the machine'. Thus, machines are not mental and do not have 'true' intelligence. In light of this, however, Dretske posits specific requirements, the fulfillment of which would justify us in attributing intelligence to a system."

God Is the Machine. In the beginning there was 0. And then there was 1. A mind-bending meditation on the transcendent power of digital computation. By Kevin Kelly. Wired Magazine (December 2002).

Philosophical Roots. By Raymond Kurzweil (1990). Chapter Two of the book: The Age of the Intelligent Machine, ed. Kurzweil, Raymond, 23-100. Cambridge, MA: The MIT Press.

  • Also see the Will Machines Become Conscious? collection of articles at KurzweilAI.net: "'Suppose we scan someone's brain and reinstate the resulting 'mind file' into a suitable computing medium,' asks Raymond Kurzweil. 'Will the entity that emerges from such an operation be conscious?' Asking that question is a good way to start an argument, which is exactly what we intend to do right here."

The Soul of the Ultimate Machine. By John Markoff. The New York Times, December 10, 2000: Section 3, Page 1. "The astrophysicist Larry Smarr talks about what he calls 'the emerging planetary supercomputer.' The Internet, he explains, is evolving into a single vast computer. The big question is 'Will it become self-aware?'"

Some Philosophical Problems from the Standpoint of Artificial Intelligence. By John McCarthy and Patrick J. Hayes. 1969. In Machine Intelligence 4, ed. Meltzer, B., D. Michie and M. Swann, 463-502. Edinburgh, Scotland: Edinburgh University Press. An online version is available at John McCarthy's web site.

What has AI in Common with Philosophy? By John McCarthy. "AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not."

Programs of the Mind. Review by Gary Marcus. Science Magazine (June 4, 2004; subscription required). "Eric Baum's What Is Thought? [MIT Press, Cambridge, MA, 2004], consciously patterned after [Erwin] Schrödinger's book [What Is Life?], represents a computer scientist's look at the mind. Baum is an unrepentant physicalist. He announces from the outset that he believes that the mind can be understood as a computer program. Much as Schrödinger aimed to ground the understanding of life in well-understood principles of physics, Baum aims to ground the understanding of thought in well-understood principles of computation. In a book that is admirable as much for its candor as its ambition, Baum lays out much of what is special about the mind by taking readers on a guided tour of the successes and failures in the two fields closest to his own research: artificial intelligence and neural networks. ... Advocates of what the philosopher John Haugeland famously characterized as GOFAI (good old-fashioned artificial intelligence) create hand-crafted intricate models that are often powerful yet too brittle to be used in the real world. ... At the opposite extreme are researchers working within the field of neural networks, most of whom eschew built-in structure almost entirely and rely instead on statistical techniques that extract regularities from the world on the basis of massive experience."

The rise of 'Digital People' - Tales about artificial beings have sparked fascination and fear for centuries; now the tales are turning into reality. Excerpt from "Digital People: From Bionic Humans to Androids" by Sidney Perkowitz, the Charles Howard Candler professor of physics at Emory University. MSNBC Science News (July 13, 2004). "There is, however, considerable debate about the possibility of achieving the centerpiece of a complete artificial being, artificial intelligence arising from a humanly constructed brain that functions like a natural human one. Could such a creation operate intelligently in the real world? Could it be truly self-directed? And could it be consciously aware of its own internal state, as we are? These deep questions might never be entirely settled. We hardly know ourselves if we are creatures of free will, and consciousness remains a complex phenomenon, remarkably resistant to scientific definition and analysis. One attraction of the study of artificial creatures is the light it focuses on us: To create artificial minds and bodies, we must first better understand ourselves. While consciousness in a robot is intriguing to discuss, many researchers believe it is not a prerequisite for an effective artificial being. In his 'Behavior-Based Robotics,' roboticist Ronald Arkin of the Georgia Institute of Technology argues that 'consciousness may be overrated,' and notes that 'most roboticists are more than happy to leave these debates on consciousness to those with more philosophical leanings.' For many applications, it is enough that the being seems alive or seems human, and irrelevant whether it feels so. ... And yet ... there is the dream and the breathtaking possibility that humanity can actually develop the technology to create qualitatively new kinds of beings. These might take the form of fully artificial, yet fully living, intelligent, and conscious creatures -- perhaps humanlike, perhaps not. 
Or they might take the form of a race of 'new humans'; that is, bionic or cyborgian people who have been enormously augmented and extended physically, mentally, and emotionally."

Jeff Hawkins: Q&A. Interviewed by Jason Pontin. Technology Review (October 13, 2005). "Jeff Hawkins, the chief technology officer of Palm, was the founder of Palm Computing, where he invented the PalmPilot, and also the founder of HandSpring, where he invented the Treo. But Palm and creating mobile devices are only a part-time job for Hawkins. His true passion is neuroscience. Now, after many years of research and meditation, he has proposed an all-encompassing theory of the mammalian neocortex. 'Hierarchical Temporal Memory' (HTM) claims to explain how our brains discover, infer, and predict patterns in the phenomenal world. JP: Is the higher consciousness -- what philosophers sometimes call 'self-consciousness' -- a byproduct of HTM? JH: Yes. I think I understand what consciousness is now. There are two elements to consciousness. First, there is the element of consciousness where we can say, 'I am here now.' This is akin to a declarative memory where you can actively recall doing something. Riding a bike cannot be recalled by declarative memory, because I can't remember how I balanced on a bike. But if I ask, 'Am I talking to Jason?' I can answer 'Yes.' So I like to propose a thought experiment: if I erase declarative memory, what happens to consciousness? I think it vanishes. But there is another element to consciousness: what philosophers and neuroscientists call 'qualia': the feeling of being alive. ..."

Chinese Room Argument. Entry by John R. Searle in the MIT Encyclopedia of Cognitive Science. "The Chinese room argument is a refutation of strong artificial intelligence. 'Strong AI' is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind."

  • Also see:
    • Chinese room - An argument forwarded by John Searle intended to show that the mind is not a computer and that the Turing Test is inadequate. By Chris Eliasmith. Dictionary of Philosophy of Mind.
    • The Chinese Room Argument. By Larry Hauser. The Internet Encyclopedia of Philosophy.
    • Two interviews with John Searle and this panel discussion.

Alert to alarmed robots. By Luke Slattery. The Weekend Australian (August 7, 2004; subscription req'd.). "[I, Robot] may be loosely indebted to Isaac Asimov's short story cycle of the same name, with a few obvious bows and scrapes to Mary Shelley's Frankenstein, but it is primarily skirting around a very lively contemporary controversy in the philosophy of the mind. Can consciousness be simulated through artificial intelligence or is it a distinctively biological process? The two most vocal antagonists in the debate are the director of the Centre for Cognitive Studies at Tufts University, Daniel C. Dennett, author of Darwin's Dangerous Idea and Consciousness Explained; and John R. Searle, professor of philosophy at the University of California, author of The Rediscovery of the Mind and The Construction of Social Reality. Dennett believes, in essence, that in the foreseeable future computer engineers will fashion robots able to feel pain and experience emotions. They might legitimately claim the same civic rights as those of us with a mortal casing. What's more, in essence we are sophisticated robots, or zombies. ... Searle has attacked Dennett vigorously, describing his view as a form of 'intellectual pathology' because it denies the existence of consciousness; consciousness for Searle is a state of sentience and awareness resulting from neurobiological process -- it cannot be artificially engineered. ... I, Robot groans somewhat under the load but it ultimately delivers a timely and relevant pop cultural expression of an argument that occupies some of the best minds in science and philosophy. The I, Robot story, briefly, concerns a new generation of household super-robots that run amok; they not only refuse to yield to human authority, they covet power. They give all the appearance, in other words, of exercising free will. ... Unanticipated, these free radicals engender questions of free will, creativity and even the nature of what we might call the soul."

Artificial Intelligence and Philosophy. From Aaron Sloman. "This was a lecture to first year AI students at Birmingham, Dec 11th 2001, on AI and Philosophy, explaining how AI relates to philosophy and in some ways improves on philosophy. It was repeated December 2002, December 2003, October 2004, each time changing a little. It introduces ideas about ontology, architectures, virtual machines and how these can help transform some old philosophical debates."

The Computer Revolution in Philosophy. Philosophy, science and models of mind. By Aaron Sloman. [Originally published in 1978 by Harvester Press and Humanities Press. Though the book is now out of print, it has been made available online by the author.] From the Preface: "And computing is more important than computers: programming languages, computational theories and concepts - these are what computing is about, not transistors, logic gates or flashing lights. Computers are pieces of machinery which permit the development of computing as pencil and paper permit the development of writing. In both cases the physical form of the medium used is not very important, provided that it can perform the required functions. Computing can change our ways of thinking about many things: mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper, insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions."

At one with the universe - Do androids dream of electric sheep? Colin Tudge in London examines definitions of consciousness and artificial intelligence. The Age (February 10, 2003). "There are three points of view. The first, which can be traced back to the founder of modern computing, Alan Turing, and is embraced by the Oxford physiologist Colin Blakemore, is pragmatic. Turing pointed out that it is impossible to know whether other human beings are conscious. Because we feel conscious, we assume other people must be like us. But this can only be an inference. But suppose we made a computer - a robot - that could make whimsical jokes and pass the sandwiches without being asked.... [U]ntil now, three main views have prevailed. One is the 'dualism' of Rene Descartes, which says the universe has two components - matter and mind. The second is the modern orthodox idea - that only matter 'exists', and that mind (including consciousness) is just an 'epiphenomenon'; something that seems to emerge when matter is suitably organised. The third is reflected most starkly in the idealist philosophy of Bishop Berkeley; that only thought is real, and matter is an illusion. But the emerging modern view says that matter and consciousness are not separate entities, as Descartes supposed, but complementary aspects of the universe. Both exist, but neither is primary. Each is the obverse of the other, like two sides of a coin." (It is also this article from which the question toward the top of this page was excerpted.)

Growing Up in the Age of Intelligent Machines: Reconstructions of the Psychological and Reconsiderations of the Human. By Sherry Turkle. From Ray Kurzweil's book, The Age of Intelligent Machines (1990). "Thus, the presence of intelligent machines in the culture provokes a new philosophy in everyday life. Its questions are not so different than the ones posed by professionals: If the mind is (at least in some ways) a machine, who is the actor? Where is intention when there is program? Where is responsibility, spirit, soul? In my research on popular attitudes toward artificial intelligence I have found that the answers being proposed are not very different either. Faced with smart objects, both professional and lay philosophers are moved to catalog principles of human uniqueness."

A Mind for Consciousness. "Somewhere in the brain, Christof Koch believes, there are certain clusters of neurons that will explain why you're you and not someone else." By Julie Wakefield. Scientific American (July 2001).

The Age of Intelligent Machines: Can Computers Think? By Mitchell Waldrop. From Ray Kurzweil's book, The Age of Intelligent Machines (1990). "The complexities of the mind mirror the challenges of Artificial Intelligence. This article discusses the nature of thought itself -- can it be replicated in a machine?" Among the topics covered are: Can a Machine Be Aware?, The Chinese Room, and Science as a Message of Hope.

Edison's Eve - A Magical History of the Quest for Mechanical Life. By Gaby Wood. Anchor Trade Paperback (July 2003). Author Q & A: "Q: You begin your 'Magical History of the Quest for Mechanical Life' at a very specific place and time: with the story of the philosopher Rene Descartes sailing to Sweden in the mid-17th-century, in the company of an android. Why this moment? A: Although people have tried to construct mechanical simulations of human and animal life for millennia (from Plato’s contemporary, Archytas of Tarentum, to Albertus Magnus, a 13th-century Dominican monk), I wanted to show that it was only really during the Enlightenment that these attempts became more than practical enterprises: they were philosophical experiments as well. Descartes was an immediate precursor to the philosophers of the 18th century who were preoccupied with the question of whether humans were born with a soul, or were merely very complex machines. In their quest for an answer to this question, they built machines in the image of men and women, thinking: if men are just machines, then does a mechanically-constructed man amount to a human being? Rather than being a craft, in other words, the art of mechanics became, in that period, a way of thought. The objects made by the mechanicians of the Enlightenment were puzzles, riddles, concrete attempts to answer conceptual problems: Who are we? What are we made of? What makes us human? Can we be replicated artificially? These questions, which we are still trying to answer today -- at MIT’s Artificial Intelligence lab, at the cloning clinic of Severino Antinori -- were first crystallized by Descartes and his followers."

AI and Philosophy: How Can You Know the Dancer from the Dance? By Linda World. IEEE Intelligent Systems (July/August 2005; Vol. 20 (4): 84 - 85). Excerpt from the abstract: "Aaron Sloman was teaching philosophy at the University of Sussex in 1969, when he met Max Clowes. Clowes had done pioneering work in computer image interpretation. Now, he was asking Sloman to drop the way he learned to do philosophy at Oxford and to start studying artificial intelligence instead. Nine years later, Sloman published The Computer Revolution in Philosophy...."

  • The full text of the article is available for a limited promotional period. Here's an excerpt: "Sloman sees 'a deep continuity' between AI and very old problems in philosophy. Philosophy needs AI to progress in its study of difficult questions about the nature of mind. AI needs philosophy to clarify its requirements analyses."

Mind and Body: Rene Descartes to William James. By Robert H. Wozniak. "The common sense view of mind and body is that they interact. Our perceptions, thoughts, intentions, volitions, and anxieties directly affect our bodies and our actions. States of the brain and nervous system, in turn, generate our states of mind. Unfortunately, the common sense notion appears to involve a contradiction."

Related Web Sites

AI on the Web: Philosophy and the Future. A resource companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach."

Center for Philosophy of Science at the University of Pittsburgh. Resources include the PhilSci Archive and a collection of links to related sites.

Essays on the Philosophy of Technology. Maintained by Dr. Frank Edler, Metropolitan Community College, Omaha, Nebraska. A well-presented and wide-ranging list of links to full-text online papers and other websites.

Models of Consciousness Workshop - In Search for a Unified Theory. September 1-3, 2003. Organized by Ricardo Sanz (Universidad Politécnica de Madrid, Spain), Aaron Sloman (University of Birmingham, U.K.), and Ron Chrisley (University of Sussex, U.K.). "Objectives: The Context for the Workshop - The objective of building conscious machines was already a research topic in the early years of artificial intelligence, but the extreme difficulties encountered at that time in developing implementable models of even the simplest features of human intelligence halted the research and put machine consciousness into the bin of Utopian research topics (more or less like time-travel, immortality or hair-restoring). But the case for consciousness is a little bit different because consciousness does exist now. Consequently, we know a priori that the construction of a conscious entity is possible. Research in artificial consciousness is not any longer Utopian research...."

Newsletter on Philosophy and Computers. Online articles from the most recent issue of this newsletter, published by the American Philosophical Association.

Online Papers on Consciousness. Compiled by David Chalmers, Professor of Philosophy and Associate Director of the Center for Consciousness Studies at the University of Arizona, this well organized site offers links to 698 online papers. WOW!

Philosophy in Cyberspace: Philosophy of Mind, AI, and Cognitive Science. Maintained by Dey Alexander, Monash University, Melbourne, Australia. Extensive and annotated list of links to established web sites.

Philosophy of Artificial Intelligence. Part of Contemporary Philosophy of Mind: An Annotated Bibliography. Compiled by David Chalmers, Professor of Philosophy and Associate Director of the Center for Consciousness Studies at the University of Arizona.

The Blurring Test [In Development]. "The Blurring Test playfully explores the increasingly blurred lines between humans and machines. For decades, the Turing test for Artificial Intelligence has forced computers to mimic humans. But why let humans off the hook? This project turns the test on its head by creating various challenges for humans to prove their humanity... to computers and to themselves." Visit Web Lab's site and converse with the chatterbot, MR MIND.

  • "Can you claim that your 'human' attributes will forever be exclusively human? MR MIND asks you to take a close look at the changing boundaries between humans and machines; his cause is your understanding. The Blurring Test is about human progress: Someday it might be important to convince our computers (and each other) that we are human." -from the Introduction
  • Also see the article, Being Real, by Judith Donath

Related Pages:

More Readings:

Articles from Newspapers, Journals & Magazines

Readings & Chapters from Books

Books

Articles from Newspapers, Journals and Magazines

Abrahamson, Joseph R. 1994. Mind, Evolution and Computers. AI Magazine 15 (1): 19-22. "Science deals with knowledge of the material world based on objective reality. It is under constant attack by those who need magic, that is, concepts based on imagination and desire, with no basis in objective reality. A convenient target for such people is speculation on the machinery and method of operation of the human mind, questions that are still obscure in 1994. In The Emperor's New Mind, Roger Penrose attempts to look beyond objective reality for possible answers, using, in his argument, the theory that computers will never be able to duplicate the human experience. This article attempts to show where Penrose is in error by reviewing the evolution of men and computers and, based on this review, speculates about where computers might and might not imitate human perception. It then warns against the dangers of passive acceptance when respected scientists venture into the occult."

Aleksander, Igor. 2003. I, computer. New Scientist (July 19, 2003). "Will there come a day when a machine declares itself to be conscious? An increasing number of laboratories around the world are trying to design such a machine. Their efforts are not only revealing how to build artificial beings, they are also illuminating how consciousness arises in living beings too. At least, that's how those of us doing this research see it. Others are not convinced. Generally speaking, people believe that consciousness has to do with life, evolution and humanity, whereas a machine is a lifeless thing designed by a limited mind and has no inherent feeling or humanity. So it is hardly surprising that the idea of a conscious machine strikes some people as an oxymoron." At the end of the article, he lists his five axioms of consciousness: a sense of place, imagination, directed attention, planning, and decision/emotion.

Brean, Joseph. Scientist says you can be a person without being human - Sussing out a 'partner species.' National Post (October 11, 2002). "Watching this scene on video in a conference hall at the University of Waterloo, Canada's top engineering school, it is easy to believe robots are the way of the future. It involves a far greater leap of faith to believe Anne Foerst, who is trying to convince the audience that robots are the people of the future. Dr. Foerst, a Lutheran minister and computer scientist who helped build Kismet, believes it is only a matter of time before robots have souls. ... In developing a theory of personhood that includes robots, Dr. Foerst is slowly reconciling her religious beliefs with her scientific theories, and teasing out the religious implications of playing God with science. She believes building robots in our image will transfer to them the gift we received by being built in God's image. They won't be human, she says, but they will be persons. After all, she says, 'God was not intending to build gods.' ... Among the computer scientists and religious scholars who came to hear Dr. Foerst's talks at the University of Waterloo, there was a clear consensus that what sets us apart from robots is the nature of our intelligence. Whereas today's robots run through their 'mental' operations with brute force, the human brain is more intuitive and adept at taking logical shortcuts. This supposed difference clouds a key similarity, Dr. Foerst says, and this similarity is at the heart of her work. She argues that intelligence depends on the body; the mind does not exist, nor did it evolve, separately from the limbs and muscles it controls. This kind of thinking puts her in a camp that broke away from the Cartesian idea that we are minds that have bodies, and replaced it with the notion that we are simply thinking bodies. The insight had a profound effect on robotics."

Chang, Kenneth. Can Robots Become Conscious? #14 of the 25 most provocative questions facing science. The New York Times (November 11, 2003; no fee reg. req'd.). "It's a three-part question. What is consciousness? Can you put it in a machine? And if you did, how could you ever know for sure? ... The field of artificial intelligence started out with dreams of making thinking -- and possibly conscious -- machines, but to date, its achievements have been modest. No one has yet produced a computer program that can pass the Turing test. ... But with the continuing gains in computing power, many believe that the original goals of artificial intelligence will be attainable within a few decades. ... To Dr. [Hans] Moravec, if it acts conscious, it is. To ask more is pointless. Dr. [David] Chalmers regards consciousness as an ineffable trait, and it may be useless to try to pin it down."

Dennett, Daniel C. 1988. When Philosophers Encounter Artificial Intelligence. Daedalus 117 (1): 283-296. *NOTE: All articles in this section listed from the journal Daedalus 117(1) are reprinted in the book The Artificial Intelligence Debate: False Starts, Real Foundations, ed. Stephen R. Graubard. Cambridge, MA: MIT Press, 1990.

Doyle, Jon. 1983. What is Rational Psychology? Toward a Modern Mental Philosophy. AI Magazine 4 (3): 50-53. "Rational psychology is the conceptual investigation of psychology by means of the most fit mathematical concepts. Several practical benefits should accrue from its recognition."

Dreifus, Claudia. 2000. A Conversation with Anne Foerst [Director of MIT's God and Computers project]. The New York Times. Science, page D3. November 7, 2000.

Gelernter, David. 1997. How Hard is Chess? Time Magazine (May 19, 1997): 72.

Kirsch, D. 1991a. Foundations of AI: The Big Issues. Artificial Intelligence 47: 3-30.

Kirsch, D. 1991b. Today the Earwig, Tomorrow Man? Artificial Intelligence 47: 161-184.

LaForte, Geoffrey, Patrick J. Hayes, and Kenneth M. Ford. 1998. Why Gödel's Theorem Cannot Refute Computationalism. Artificial Intelligence 104 (1/2): 211-264. The authors find flaws in Roger Penrose's claim that Gödel's theorem implies that human thought cannot be mechanized.

McCorduck, Pamela. 1988. Artificial Intelligence: An Aperçu. Daedalus 117 (1): 65-84.

Papert, Seymour. 1988. One AI or Many? Daedalus 117 (1): 1-14.

Pinker, Steven. Can a computer be conscious? U.S.News & World Report (August 18, 1997).

Putnam, Hilary. 1988. Much Ado About Not Very Much. Daedalus 117 (1): 269-282.

The Charlie Rose Show: A Conversation About Artificial Intelligence (December 21, 2004), with Rodney Brooks (Director, MIT Artificial Intelligence Laboratory & Fujitsu Professor of Computer Science & Engineering, MIT), Eric Horvitz (Senior Researcher and Group Manager, Adaptive Systems & Interaction Group, Microsoft Research), and Ron Brachman (Director, Information Processing Technology Office, Defense Advanced Research Projects Agency, and President, American Association for Artificial Intelligence). "Rose: What do you think has been the most important advance so far? Brachman: A lot of people will vary on that and I'm sure we all have different opinions. In some respects one of the - - - I think the elemental insights that was had at the very beginning of the field still holds up very strongly which is that you can take a computing machine that normally, you know, back in the old days we think of as crunching numbers, and put inside it a set of symbols that stand in representation for things out in the world, as if we were doing sort of mental images in our own heads, and actually with computation, starting with something that's very much like formal logic, you know, if-then-else kinds of things, but ultimately getting to be softer and fuzzier kinds of rules, and actually do computation inside, if you will, the mind of the machine, that begins to allow intelligent behavior. I think that crucial insight, which is pretty old in the field, is really in some respects one of the linchpins to where we've gotten. ... Horvitz: I think many passionate researchers in artificial intelligence are fundamentally interested in the question of Who am I? Who are people? What are we? There's a sense of almost astonishment at the prospect that information processing or computation, if you take that perspective, could lead to this. Coupled with that is the possibility of the prospect of creating consciousnesses with computer programs, computing systems some day.
It's not talked about very much at formal AI conferences, but it's something that drives some of us in terms of our curiosity and intrigue. I know personally speaking, this has been a core question in the back of my mind, if not the foreground, not on my lips typically, since I've been very young. This is this question about who am I. Rose: ... can we create it? Horvitz: Is it possible - - - is it possible that parts turning upon parts could generate this?"

Sokolowski, Robert. 1988. Natural and Artificial Intelligence. Daedalus 117 (1): 45-64.

Tolson, Jay. 2000. Who am I? U.S.News & World Report (June 12, 2000). "Introspective scientists are probing the mystery of human consciousness."

Ullman, Ellen. 2002. Programming the Post-Human: Computer science redefines "life." Harper's, Vol. 305, No. 1929: 60-70. "Growing impatient with me as I pressed [Cynthia Breazeal] for a definition of 'alive,' she said: 'Do you have to go to the bathroom and eat to be alive?'" [p. 67]

Readings and Chapters from Books

Glymour, Clark, Kenneth Ford, and Patrick Hayes. 1995. The Prehistory of Android Epistemology. In Computation and Intelligence: Collected Readings, ed. Luger, George F., 3-21. Menlo Park/Cambridge/London: AAAI Press/The MIT Press. Going back to the ancient Greeks, the authors put the philosophical questions posed by AI into the context of Western philosophical tradition.

McCarthy, John. 1977. Epistemological Problems in Artificial Intelligence. In Readings in Artificial Intelligence, ed. Webber, Bonnie Lynn and Nils J. Nilsson, 459-465. Palo Alto, CA: Tioga Publishing Co., 1977. (Originally published in Proceedings of the Fifth International Joint Conference on Artificial Intelligence [IJCAI-77].)

Minsky, Marvin. 1961. Steps Toward Artificial Intelligence. In Computers and Thought, ed. Feigenbaum, Edward A. and Julian Feldman, Cambridge, MA: MIT Press, 1995.

Russell, Stuart, and Peter Norvig. 1995. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. Chapter 26 (pages 817-841) takes an accessible approach to the question "Can machines think?" by clearly analyzing the question, describing positions taken by various contributors to the discussion, and simply defining much of the jargon related to the philosophical issues.

Searle, John R. 1992. The Rediscovery of the Mind. Cambridge, MA: MIT Press.

Waltz, David L. 1988. The Prospects for Building Truly Intelligent Machines. In The Artificial Intelligence Debate, ed. Graubard, Stephen R., Cambridge, MA: The MIT Press.

Winograd, Terry. 1990. Thinking Machines: Can there be? Are we? In Foundations of Artificial Intelligence: A Sourcebook, ed. Partridge, D. and Y. Wilks, 167-189. Cambridge, England: Cambridge University Press.

Books

Anderson, Alan R., editor. 1964. Minds and Machines. Englewood Cliffs, NJ: Prentice-Hall.

Boden, Margaret, editor. 1990. The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.

Bynum, Terrell Ward, and James H. Moor, editors. 1998. The Digital Phoenix: How Computers are Changing Philosophy. Cambridge, MA: Blackwell Publishers. A collection of readings.

Churchland, Paul M. 1992. Matter and Consciousness. Cambridge and London: MIT Press. Written expressly for readers who are not professionals in philosophy or artificial intelligence, this book examines the nature of conscious intelligence with an eye toward the progress science is making in understanding it.

Churchland, Paul M. 1995. The Engine of Reason, the Seat of the Soul. Cambridge, MA: MIT Press/Bradford Books. Explanations of recent scientific discoveries about the mind by a philosopher who examines not only the science, but also social and ethical implications of ascribing consciousness to all but the simplest of animal life.

Churchland, P. S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press.

Clark, Andy. 1997. Being There: Putting Brain, Body, and World Together Again. Cambridge, MA and London: The MIT Press.

Copeland, Jack. 1993. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell.

Crane, Tim. 1991. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. New York and London: Penguin Books.

Cummins, Robert, and John Pollock, editors. 1991. Philosophy and AI: Essays at the Interface. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge, MA: MIT Press/Bradford Books. A multidisciplinary look at the mind -- biological, social, philosophical. Reprinted from scholarly journal articles appearing 1984-1996.

Dennett, Daniel. 1978. Brainstorms: Philosophical Essays on Mind and Psychology. Montgomery, VT: Bradford Books.

Denning, Peter, and Bob Metcalfe, editors. 1997. Beyond Calculation: The Next 50 Years of Computing. New York: Springer Verlag. Essays by Terry Winograd, Sherry Turkle, Donald Norman and many others.

Dreyfus, Hubert. 1992. What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.

Dreyfus, Hubert. 1979. What Computers Can't Do: The Limits of Artificial Intelligence. Revised edition. New York: Harper and Row.

Dreyfus, H., S. Dreyfus, and T. Athanasiou. 1986. Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford: Blackwell.

Ford, Kenneth, Clark Glymour, and Patrick Hayes, editors. 1995. Android Epistemology. Menlo Park, CA: AAAI Press. Approaches artificial intelligence and cognitive psychology as a unified endeavor, with AI focused on possible ways of engineering intelligence and cognitive science on reverse engineering a particular intelligent system. Sixteen essays by computer scientists and philosophers.

Gershenfeld, Neil. 1998. When Things Start to Think. New York: Henry Holt and Co. Philosophical discussion and lots of information about new inventions at MIT's Media Lab.

Gelernter, David. 1994. The Muse in the Machine: Computerizing the Poetry of Human Thought. New York: Free Press of Macmillan, Inc.

Graubard, Stephen, editor. 1988. The Artificial Intelligence Debate: False Starts, Real Foundations. Cambridge, MA: MIT Press. Reprinted 1990. Essays that examine fundamental conceptual issues in AI. This book reprints a collection of articles from the journal Daedalus 117(1). Contributors include Dennett, Dreyfus, McCarthy, McCorduck, Papert, Waltz, and others. For individual annotations, see the "Articles" section, above.

Haugeland, John, editor. 1997. Mind Design II: Philosophy, Psychology, Artificial Intelligence. 2nd edition. Cambridge, MA: MIT Press. With contributions from both scientists and philosophers, this book retains a few classic essays from the first edition and expands with articles on connectionism, dynamical systems, and symbolic versus nonsymbolic models.

Haugeland, John. 1985. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.

Hofstadter, Douglas R., and Daniel C. Dennett. 1981. The Mind's I: Fantasies and Reflections on Self and Soul. New York: Basic Books. Philosophical essays on the self, the intellect, and consciousness.

Kurzweil, Ray. 1998. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking. Speculations on how society will be influenced and affected as intelligent machines become more powerful and prevalent. "This is a book for computer enthusiasts, science fiction writers in search of cutting-edge themes and anyone who wonders where technology is going next." (New York Times Book Review, Jan. 3, 1999.)

Penrose, Roger. 1989. The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford: Oxford University Press.

Ringle, M. 1979. Philosophical Perspectives in Artificial Intelligence. Atlantic Highlands, NJ: Humanities Press.

Sloman, Aaron. 1978. The Computer Revolution in Philosophy. Hassocks, Sussex, UK: Harvester Press. [Out of print, but available online from the author.]

Smith, Brian Cantwell. 1996. On the Origin of Objects. Cambridge, MA: MIT Press/Bradford Books. The author offers his conclusions about the philosophical and metaphysical underpinnings of artificial intelligence, cognitive science, and computation.

Thagard, Paul. 1993. Computational Philosophy of Science. Cambridge, MA: MIT Press.