Interviews & Oral Histories
It is the province of knowledge to speak,

and it is the privilege of wisdom to listen.

- Oliver Wendell Holmes

Interviews

Colin Angle:

  • The leader of the robot pack. By Michael Kanellos. CNET News.com (July 7, 2005). Colin Angle, co-founder and CEO of iRobot "met recently with News.com to demonstrate the next version of the Roomba and talk about the future of the robotics market."
  • Robots: Today, Roomba. Tomorrow... iRobot CEO Colin Angle says the robotic vacuum cleaner "is insanely cool because it retails for $200" -- and more products like it are on the way. BusinessWeek Online (May 6, 2004). "Angle recently talked to Adam Aston, BusinessWeek's Industries editor, about what iRobot has learned from the Roomba and what the future holds for its descendants."
  • The robots are coming. By Larry Dignan. (October 8, 2002). "CNET News.com recently spoke to Colin Angle, co-founder and CEO of iRobot, to talk about the future of robotics and how robots will infiltrate people's lifestyles."

Michael Arbib. USC's Michael Arbib. By Eric Smalley. Technology Research News (October 3, 2005). "Technology Research News Editor Eric Smalley carried out an email conversation with Michael Arbib, the Fletcher Jones Professor of Computer Science and a Professor of Biological Sciences, Biomedical Engineering, Electrical Engineering, and Neuroscience and Psychology at the University of Southern California (USC) in September 2005. ... Throughout his career Arbib has encouraged an interdisciplinary environment where computer scientists and engineers can talk to neuroscientists and cognitive scientists. ... TRN: Context -- the body, the physical environment, society -- seems to play a critical role in shaping consciousness and intelligence. What does this mean for building artificial intelligences? Will we be able to relate to truly intelligent machines? Arbib: ... I do think that there will be future robots that indeed have emotions -- as high-level indicators of process state that set an overall bias on decision making and condition patterns of communication with others. However, I also think that emotions that are useful (but sometimes harmful) for robots interacting with other robots (imagine a team of autonomous robots responsible for spaceship maintenance on a decades long mission, or a team of agents monitoring the whole Earth for ecosystem evaluation) need not necessarily be similar to the "mammalian humans" that are so much part of human life. TRN: One of the big challenges in robotics is simply giving machines the ability to accurately perceive their surroundings. What will it take to build machines that can operate effectively in unfamiliar, dynamic environments? Arbib: One part of the answer, clearly, is that learning will be necessary. ... TRN: Is there a particular image (or images) related to science or technology that you find particularly compelling or instructive? Why do you like it; why do you find it compelling or instructive? ... "

Ronald Arkin. Georgia Tech's Ronald Arkin (September 12, 2005). "Technology Research News Editor Eric Smalley carried out an email conversation with Georgia Institute of Technology professor Ronald C. Arkin in August of 2005 that covered the economics of labor, making robots as reliable as cars, getting robots to trust people, biorobotics, finding the boundaries of intimate relationships with robots, how much to let robots manipulate people, giving robots a conscience, robots as humane soldiers and The Butlerian Jihad. ..."

Ruth Aylett. An Interview with Artificial Intelligence expert Ruth Aylett. The Science Teacher (the National Science Teachers Association's journal for high school science teachers). January 2003; page 52. "In this month's special issue on math and science, a particular article, Field Trips Online, describes the use of solar-powered robots to sample and analyze water in lakes. This example of the increasingly important role robots have in our lives led us to sit down with Ruth Aylett, Professor of Intelligent Virtual Environments at the University of Salford in the U.K. She has been involved in the vast field of Artificial Intelligence (AI) -- the study of how computer systems can simulate intelligent processes -- for 20 years now, and robotics specifically for the past 14 years. She recently published Robots: Bringing Machines to Life." Interview questions include: What inspired you to become involved in AI?; What educational background is needed to design robots?; and, What advice would you give to high school students interested in AI?

Benjamin B. Bederson. Checking in with Ben Bederson. Ubiquity (October 13 - 19, 2004; Volume 5, Issue 32). "Benjamin B. Bederson is an Associate Professor of Computer Science and director of the Human-Computer Interaction Lab at the Institute for Advanced Computer Studies at the University of Maryland, College Park. His work is on information visualization, interaction strategies, and digital libraries. UBIQUITY: Why don't we start by talking a little about the Human-Computer Interaction Lab. Tell us something about its history. BEDERSON: I believe we're the oldest center in the country focusing on research in Human Computer Interaction. We were started just over 21 years ago by Ben Shneiderman. He's still happily continuing to work here, but about four years ago, he asked me to take over as Director. We've chosen to remain a relatively small group, with a half-dozen faculty, about ten full-time researchers, and about thirty students, mostly working towards their PhDs. Our focus is thinking about the user experience: how can we improve people's lives using computers. I see our lab goals being to design, implement and evaluate novel interaction technologies that are universally usable, useful, efficient and appealing."

Tim Berners-Lee:

  • The Web's Father Expects a Grandchild - Tim Berners-Lee is working on the "Semantic Web," with its richer information links that unlock the power of "unplanned reuse of data." Interviewed by Andy Reinhardt. BusinessWeek online (October 22, 2004). "Q: You're working now on the Semantic Web, which will allow richer associations among data and, as the name implies, start to create a sense of "meaning" in online information. Where are things heading? A: The impact of the Semantic Web will be different from [today's] hypermedia Web. ... The Semantic Web is different. It's a space of data. It's all the information which is now in databases, spreadsheets, and application-specific files, like calendar files or photo metadata. What's exciting about the Semantic Web is its potential for serendipity, the unplanned reuse of data. The effect will be even more powerful for the Semantic Web because you won't have to be a person following the links. A machine will be able to follow links. Q: Can you give me an example? ..."
  • Net guru peers into web's future - The inventor of the web, Tim Berners-Lee, outlines his ideas for a more "intelligent" web in an interview with the BBC programme, Go Digital (September 25, 2003).

Albert Borgmann. An interview/dialogue with Albert Borgmann [Holding On to Reality: The Nature of Information at the Turn of the Millennium (1999)] and N. Katherine Hayles [How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999)] on humans and machines. From the University of Chicago Press. "Q: It sounds like you two disagree about the extent to which artificial intelligence could mimic human intelligence. But you both seem to be saying that's not the central issue anyway. The real issue is not whether a machine will be built that can replicate human behavior, but whether humans will begin (or continue) to think of themselves as machines. Is that right?"

Cynthia Breazeal:

  • A Conversation with Cynthia Breazeal (March 1, 2005; available as a web feature of Scientific American Frontiers' Robot Pals). "The elderly are often reticent about picking up a new technology, so it can't be something too confusing or esoteric. It probably has to be something that they see as genuinely helpful, but in the big picture people should actually really enjoy having these robots around as well. In many ways I think about a blind person's relationship with a seeing eye dog. The seeing eye dog performs a very critical function for that person, a very pragmatic, useful function. But on the other hand, people adore having their dog! So my vision was to use this social form of interaction to really address the needs of a person on a holistic level, not just about helping them with their cognitive and physical abilities, but also appreciate that people are social and emotional creatures and they have pleasure in interacting with things in this way. ... There are still a lot of social barriers to women pursuing math and science. We're not encouraged as much at an early age as boys, that's just a fact. ... I think it's important to appreciate that there are outstanding women scientists and there are outstanding men scientists, this isn't a gender thing. It has much more to do with trying to encourage and foster a person to do the best work they can in their chosen field of study." Other topics covered in this interview include: "her first interest in robots," "how she got started on her career path," and "recommendations for those interested in following in her footsteps."
  • A Conversation with Cynthia Breazeal - A Passion to Build a Better Robot, One With Social Skills and a Smile. By Claudia Dreifus. The New York Times (June 10, 2003; no fee reg. req'd.) / also available from CNET. "Dr. Cynthia L. Breazeal of the Massachusetts Institute of Technology is famous for her robots, not just because they are programmed to perform specific tasks, but because they seem to have emotional as well as physical reactions to the world around them. They are 'embodied,' she says, even 'sociable' robots -- experimental machines that act like living creatures. ... Q. What is the root of your passion for robots? ... Q. How did you get into robot building?"

Rodney Brooks:

  • Read his responses to questions posed by viewers after the airing of the Scientific American Frontiers special Robots Alive! Among the questions asked are: I'm curious to know what ideas you have for the future - and - I am interested in a career like yours, designing and building robots. What courses would I have to take in college? Do you have any other helpful information to help me get started in the field of robotics?
  • Robot risk 'is worth it.' HARDtalk's Lyse Doucet interviews Rodney Brooks. BBC (August 19, 2002). Visit the site and watch the television interview.
  • Designed for life. Duncan Graham-Rowe interviews Rodney Brooks. New Scientist (June 1, 2002). Among the questions posed are: Some critics might accuse you of getting religious when you talk about this mystical 'stuff' out there; Will these robots still be driven by conventional computing; Can we have these machines without creating a new slave trade; and, AI and robotics have a long history of military funding. Are you worried about what happens to your research?
  • Rodney Brooks. Interviewed by Terry Gross for Fresh Air (radio program). WHYY-FM / available from NPR (March 4, 2002). "His new book is called Flesh and Machines: How Robots Will Change Us. Brooks offers a vision of the future of humans and robots."
  • Two interviews from EDGE/Third Culture:
    • Beyond Computation - A talk with Rodney Brooks. (June 5, 2002). "Maybe there's something beyond computation in the sense that we don't understand and we can't describe what's going on inside living systems using computation only. When we build computational models of living systems—such as a self-evolving system or an artificial immunology system—they're not as robust or rich as real living systems. Maybe we're missing something, but what could that something be?"
    • The Deep Question - A talk with Rodney Brooks. Interviewed by John Brockman. (November 19, 1997). "He was going to become a pure mathematician, and then discovered that research assistantships were available in American universities. He received a Ph.D. at Stanford in computer science, in John McCarthy's artificial intelligence lab, and then came to MIT where he thinks about biological systems and their interaction with the world. Rod Brooks is director of the AI Lab at MIT."

Bruce Buchanan. Interviewed by John Aronis for Links, the newsletter of The Department of Computer Science at the University of Pittsburgh (Spring 2003; pages 2 - 4). "While working in the Stanford Artificial Intelligence Laboratory, Bruce and his collaborators made important contributions to artificial intelligence. Their assertion -- obvious in retrospect like most great ideas -- was that knowledge is important for intelligent behavior. They drove this point home with a series of programs that embodied the knowledge of scientific and medical experts -- sometimes rivaling or surpassing their abilities -- and the creation of an industry centered around expert systems."

  • Also see: Text of a 1999 e-mail interview in which a college student asked questions such as: Q: What is your definition of Artificial Intelligence? and, Do you believe that AI is morally correct?

David Cavallo. 'Hard fun' yields lessons on nature of intelligence. By Chappell Brown. EE Times Online (July 11, 2005). "It's what [MIT's] Future of Learning co-director David Cavallo calls 'hard fun' -- creative yet disciplined and purposeful uses for technology." Interview questions include: What was your first encounter with computers and digital technology, and how did it influence your intellectual development? ... The computer and AI have been compared to the mind in some ways, but they are also very different from how the mind works. Is the computer the appropriate instrument for that type of work? ... What would you say is a seminal idea that has come out of this that was not known before? ... So what is the future of learning?

Harold Cohen. "Watch a video clip from The Age of Intelligent Machines, Ray Kurzweil's award-winning 1987 film, where Harold and Ray discuss AARON's abilities and explore machine creativity."

Mark Cutkosky. Is it a cockroach? A robot? Artificial intelligence takes a new form when Stanford researchers mix robotics with biology. By Jessica Lin. The Stanford Daily Online Edition (January 11, 2004). "Stanford researchers in the Engineering Department are looking at other creatures to model in their artificial intelligence projects, specifically insects. ... This sprawl project is led by Engineering Prof. Mark Cutkosky. The Daily took an opportunity to chat with this innovative robotics researcher to find out more. ... The Daily: Why are you designing robots that imitate animals as opposed to humans? Mark Cutkosky: There are some things that animals can do much better than humans.... TD: Where did the idea of biomimetic robots originate --- and when did you get involved? MC: I think that robots have always, to some extent, been inspired by animals or humans. That’s part of what the historical dream behind having robots is all about. What is new is that we can start to build and control them more as nature does. The days of 'tin men' robots are over."

The Deep Blue Team. The Deep Blue Team Plots Its Next Move. Scientific American (1996). "It was a classic match of man versus machine: In February 1996, world champion chess player Garry Kasparov pitted his wits against Deep Blue, a computer designed by a team of computer scientists from IBM. Deep Blue took the first game, but Kasparov recovered and ended up winning the match 4 games to 2. Three weeks later, John Horgan of Scientific American interviewed the Deep Blue team...." (And then check out our CHESS page for information about the rematch.)

Johan de Kleer. Inside PARC. Ubiquity; Volume 3, Issue 34 (October 8-14, 2002). "UBIQUITY: You've been in the artificial intelligence field for 25 years now. What changes have you seen over that period of time? DE KLEER: Twenty-five years ago, we thought that we would have an artificial mind by now. It turned out to be harder and further beyond our reach than we ever imagined. One of the biggest changes in artificial intelligence has been the realization of how hard and how long-term this project is going to be."

Daniel Dennett. Interviewed by Harvey Blume. The Atlantic Unbound (December 9, 1998). "Can robotics shed light on the human mind? On evolution? Daniel Dennett -- whose work unites neuroscience, computer science, and evolutionary biology -- has some provocative answers. Is he on to something, or just chasing the zeitgeist?"

Anne Foerst:

  • Why robots are scary--and cool. By Jonathan Skillings. CNET News.com (April 12, 2005). "For early researchers in artificial intelligence who were out to play God, it turned out the devil was in the details. ... The newer generation of AI researchers is taking a more humble approach to the cognitive conundrum, according to Anne Foerst, who's a rare combination of computer scientist and theologian--two types that don't always see eye to eye. ... In her new book, 'God in the Machine: What Robots Teach Us About God and Humanity,' Foerst draws on her experience at MIT's Artificial Intelligence Laboratory to paint a picture of how people and robots can and should interact--and whether, at some point down the road from today's Aibo and Asimo contraptions, the human community might confer 'personhood' on robots. ... She spoke recently with CNET News.com about changes in the field of AI, social learning for robots and the need for embodied intelligence--that is, the ability for thinking creatures, and machines, to interact with and survive in the real world. Q: How does a theologian end up at the MIT AI Labs?... What did you find out about the people who study AI--what makes somebody want to study AI?... Are there classes of robots? ... What's the distinction between computers and robots? ... I want to ask you about the ethics of people working with robots, using robots. Should we build robots to do our dirty work? If we're going to think about according them personhood, are we ready to send them into combat to do mine sweeping and things like that?"
  • The theological robot. By Joshua Glenn. The Boston Globe (February 6, 2005). "[S]elf-described robotics theologian Anne Foerst ... seeks to bridge the divide between religion and AI research--by arguing that robots have much to teach us about ourselves and our relationship with God. Foerst spoke with me from St. Bonaventure University in upstate New York, where she teaches theology and computer science. ... FOERST: What I learned from the AI Lab's robots, which were designed to trigger emotional and social responses, is that we can bond with them. So although they can't be human--to be human, I think, means needing to participate in the mutual process of telling stories that make sense of the world and who we are--humanoid robots can still be considered persons. Personhood simply means playing a role, if only a passive one, in that mutual narrative process. Like babies, or Alzheimer's patients, humanoid robots don't tell their own stories, but they play a role in our lives so we include them in our narrative structures. This suggests that perhaps we ought to think about treating robots right."
  • Q&A with Anne Foerst. Sidebar to Sari Kalin's article, Are You There, God? (It's Me, HAL) - Science Meets Spirituality. Darwin Magazine (December 2001). Questions include: How do you start a dialogue between AI and theology? ... Are AI researchers trying to play God? ... Will humanoid robots ever be conscious, and will they ever have souls? ... What are the business implications of your research?
  • Baptism by Wire - Bringing religion to the Artificial Intelligence lab. Anne Foerst, MIT professor of theology. Interview by John Zollinger. Networker@USC (Summer 2000; Volume 10, Number 3). "Networker: What do you do as the resident theologian at MIT's Artificial Intelligence Laboratory? Anne Foerst: I'm working in three different directions. The first direction is to bring theological insight into the AI community and that has two aspects. The first aspect is to analyze religion's underpinnings and existential questions underlying the people's research -- in AI this is particularly the desire to build artificial humans and secondly, the desire to analyze everything that's going on in us and therefore to get rid of a lot of our problems. It is also in many ways that many researchers wish to have eternal life through technology to avoid death by building yourself artificially and rebuilding yourself artificially. So this is one aspect of the work. The other aspect of the work is to bring concepts like the dignity of a person into a completely mechanized, functionalistic understanding of what it is to be human. ... The second thing that I'm doing is I'm working to fight against prejudices and fears of these technologies. ... The third thing in my work -- and this is actually the one which is most dear to me -- I want to go back to theology and bring back all of my insights about what society is about and what technology is all about back into theology."
  • Do Androids Dream? M.I.T. Working on It. By Claudia Dreifus. The New York Times (Science, page D3; November 7, 2000). "Dr. Foerst, a Lutheran minister who supported herself by repairing computers during eight years of higher education in Germany, serves as the theological advisor to the scientists building Kismet and the robot's brother, Cog." She's also the director of MIT's God and Computers project.

Ken Ford. Amplified Intelligence. Astrobiology Magazine (July 28, 2004). "Astrobiology Magazine (AM): The IHMC [Interdisciplinary Study of Human & Machine Cognition] research agenda broadly seems to cover robotics, cognition and simulations. Are there parts of machine intelligence that your research institute doesn't cover today, but that you see as growth areas? Ken Ford (KF): Don't forget that second letter is 'H'. Although a lot of our research could be categorized as AI, and five of our researchers are AAAI (American Association for Artificial Intelligence) Fellows, IHMC is not a traditional machine intelligence laboratory. The focus and theme of our research is what has become known as human-centered computing which, in a nutshell, is about fitting technology to people instead of fitting people to technology. The human is part of the system, and it is the performance of the whole system, including the human, that we are interested in. This requires that machines should be designed to fit us physically, cognitively, and perhaps even socially. We think of AI as meaning 'Amplified Intelligence.' The interesting thing is that many traditional AI technologies in fact are being used in just this way."

Chris Forsythe. Interviewed by BusinessWeek Online Reporter Olga Kharif (The Ghost in Your Machine, August 25, 2003). "At their most benign, smart computers seem like executive secretaries for those of us who can't afford one -- offering tremendous advances in productivity. Yet some fear that the concept suggests an ominous encroachment out of a sci-fi movie. Cognitive psychologist Chris Forsythe, who leads the Sandia team, insists that the machines are designed to augment -- not replace -- human activity. ... Q: How would you characterize the current state of human-machine interaction? A: The biggest problem is that if you're the user, for the most part the technology doesn't know anything about you. The onus is on the user to learn and understand how the technology works. What we would like to do is reverse that equation so that it becomes the responsibility of the computer to learn about the user. The computer would have to learn what the user knows, what the user doesn't know, how the user performs everyday, common functions. It would also recognize when the user makes a mistake or doesn't understand something."

Ernest J. Friedman-Hill. Rule Engines and Java: Jess in Action: Interview with Dr. Ernest J. Friedman-Hill of Sandia National Laboratory [excerpt]. By Jason Morris. PC AI (17.3). "JM: Considering your background in chemistry, how did you become involved with artificial intelligence and expert systems? EJF: My Ph.D. is in physical chemistry - very mathematical, very computational - so I've always been around computers. I've been interested in AI since I read Hofstadter's Gödel, Escher, Bach in college...."

John Funge. AI - the smart way to go. By Paul Hyman. HollywoodReporter.com (August 26, 2005). "Artificial intelligence -- or 'AI' -- is the Rodney Dangerfield of video game design. It gets no respect when it's working great, as when it contributed to 'Halo 2' and 'Half-Life 2' becoming the hugely successful games that they are. But when game characters start walking into walls, everyone knows to blame the AI. According to John Funge, high-quality graphics may be what attracts a player to a game, but it's the AI and the gameplay that holds their attention. ... In a chat with Hollywood Reporter columnist Paul Hyman, Funge talks about why designers ought to think about AI when turning their IP into games, and how AI has the potential to become the new driving force behind video game innovation."

Bill Gates. Talking to Bill. Interview by Gary Stix. Scientific American (May 24, 2004). "On the occasion of the fourth TechFest at Microsoft Research--an event at which researchers demonstrate their work to the company's product developers--Bill Gates talked with Scientific American's Gary Stix on topics ranging from artificial intelligence to cosmology to the innate immune system. A slightly edited version of the conversation follows. ... SA: One of the things that some critics have said is that while there is an unbelievable collection of talent here, there have not been achievements on the order of things like the transistor or some other major breakthrough. Do you see any validity in that? ... SA: Do you see a continued relevance to the idea of artificial intelligence [AI]? The term is not used very much anymore. Some people say that's because it's ubiquitous, it's incorporated in lots of products. There are plenty of neuroscientists who say that computers are still clueless. BG: And so are neuroscientists, too. No, seriously, we don't understand the plasticity of the neurons. How does that work? There's even this recent proposal that there is, you know, prion-type shaping as part of that plasticity. We don't understand why a neuron behaves differently a day later than before. What is it that the accumulation of signals on it causes? So whenever somebody says to me, 'Oh, this is like a neural network,' well, how can someone say that? We don't really understand exactly what the state function is and even at a given point in time what the input-to-output equation looks like. So there is a part of AI that we're still in the early stages of, which is true learning. Now, there's all these peripheral problems--vision, speech, things like that--that we're making huge progress in. If you just take Microsoft Research alone in those areas, those used to be defined as part of AI. Playing games used to be defined as part of AI.
For particular games, it's going pretty well, but we did it without a general theory of learning. And the reason we worked on chess was really not because we needed somebody to play chess with other than humans; it was because we thought it might tell us about general learning. But instead we just did this minimax, high-speed static evaluation, a minimax search on trees. Fine. I am an AI optimist. We've got a lot of work in machine learning, which is sort of the polite term for AI nowadays because it got so broad that it's not that well defined. But the real core piece is this machine-learning work. We have people who do Bayesian models, Support Vector Machines, lots of things that we think will be the foundation of true general-purpose AI. ... SA: Why is it the most exciting time to be in computer science? BG: ... it's not clear whether we're getting the best and brightest in the U.S. to go into these programs and contribute to solving these problems. SA: Why is that? BG: Oh, it's partly that the bubble burst. It's partly articulating the benefits of the field and the variety of jobs. People have to know that these are social jobs, not just sitting in cubicles programming at night. Our field is still not doing a good job drawing in minorities or women, so you're giving up over half the potential entrants just right there. Carnegie-Mellon has done probably the most on some of these areas, where they do outreach programs down to the high school where they show people what the computer sciences do, they show women and it's actually women who often go out and give these talks. ..."

David Gelernter. Interviewed by Harvey Blume. The Atlantic Unbound (January 29, 1998). "[Q:] You have written, 'The drive to make a machine-person' is irresistible; you say it's the 'culminating tour de force of the history of technology and the history of art, simultaneously.' [A:] That's true, and it's the motivating force behind artificial intelligence. Artificial intelligence has already come up with a lot of powerful and valuable work and will come up with a lot more. But there's a difference between saying that and saying ultimately software and culture will run together. Physical stuff is too important."

Steve Grand. The emotional machine. By Suzy Hansen. Salon.com (January 2, 2002). "Steve Grand, designer of the artificial life program Creatures, talks about the stupidity of computers, the role of desire in intelligence and the coming revolution in what it means to be 'alive.'"

Helen Greiner. Conversation with iRobot Founder. Radio broadcast of Talk of the Nation - Science Friday, hosted by Ira Flatow (February 4, 2005). "FLATOW: How did you get interested in this? Have you always been interested in robots? GREINER: I saw Star Wars when I was 11. ... FLATOW: And if somebody wants to get into robotics, what would you tell them? GREINER: I would say, study engineering or sciences, and one of the things we look for when we interview people ... people who have built robots before, like whether as a hobbyist ... because then you can tell it's their passion."

Jeff Hawkins: Q&A. Interviewed by Jason Pontin. Technology Review (October 13, 2005). "Jeff Hawkins, the chief technology officer of Palm, was the founder of Palm Computing, where he invented the PalmPilot, and also the founder of HandSpring, where he invented the Treo. But Palm and creating mobile devices are only a part-time job for Hawkins. His true passion is neuroscience. Now, after many years of research and meditation, he has proposed an all-encompassing theory of the mammalian neocortex. 'Hierarchical Temporal Memory' (HTM) claims to explain how our brains discover, infer, and predict patterns in the phenomenal world. JP: Is the higher consciousness -- what philosophers sometimes call 'self-consciousness' -- a byproduct of HTM? JH: Yes. I think I understand what consciousness is now. There are two elements to consciousness. First, there is the element of consciousness where we can say, 'I am here now.' This is akin to a declarative memory where you can actively recall doing something. Riding a bike cannot be recalled by declarative memory, because I can't remember how I balanced on a bike. But if I ask, 'Am I talking to Jason?' I can answer 'Yes.' So I like to propose a thought experiment: if I erase declarative memory, what happens to consciousness? I think it vanishes. But there is another element to consciousness: what philosophers and neuroscientists call 'qualia:' the feeling of being alive. ..."

Michael Hawley: In His Own Words. A scientist at MIT's Media Lab reveals the true nature of a college of arts and sciences. As told to Calvin Fussman. Discover Magazine (September 2003; Vol. 24, No. 9). "I'm kind of a perfect mix of my parents. My dad was an electrical engineer at Bell Laboratories in Murray Hill, New Jersey. ... My mom was into English literature and music. I was really lucky to have the yin and the yang. .... I wound up at MIT as a protégé of Marvin Minsky at the Media Lab. I lived in Marvin's attic for a year. It was wonderful, almost indescribable. Marvin's house was like F.A.O. Schwarz after the bomb went off. There was a trapeze hanging in the living room. You'd open the refrigerator and find seal meat stored on a shelf for the dogs of a visiting Iditarod champion. ... When your job is to invent new possibilities for computers...."

N. Katherine Hayles. Author of How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999). See: An interview/dialogue with Albert Borgmann and N. Katherine Hayles on humans and machines. From the University of Chicago Press.

James Hendler: A Chat about the Future of Artificial Intelligence with Professor James Hendler. Provided by CNN. Interview date: December 16, 1999. "Dr. James Hendler, Program Manager at the Defense Advanced Research Projects Agency (DARPA), and Professor of Computer Science at the University of Maryland, joined CNN.com to chat about artificial intelligence as part of our @2000 chat series with leading authors, historians and experts to contemplate life at the turn of the century."

Danny Hillis: Intelligent machines evolve - Great tinker Danny Hillis explains tomorrow's computing. By Mark Williams. Red Herring (April 3, 2001). "So what interests you these days? What I've always been interested in: making intelligent machines. I used to think we'd do it by engineering. Now I believe we'll evolve them. We're likely to make thinking machines before we understand how the mind works, which is kind of backwards."

Douglas Hofstadter: By Analogy - A talk with the most remarkable researcher in artificial intelligence today, Douglas Hofstadter, the author of Gödel, Escher, Bach. By Kevin Kelly. In WIRED (3.11 - Nov 1995). "In 1979, Douglas Hofstadter burst into public consciousness with a book so out of the ordinary it won a Pulitzer Prize for its young first-time author. Titled Gödel, Escher, Bach: An Eternal Golden Braid ... Now, after 15 years, Hofstadter has published another major book."

Feng-Hsiung Hsu: Chess, China, and Education - An interview with Feng-Hsiung Hsu. Ubiquity (July 27 - August 2, 2005; Volume 6, Issue 27). "Feng-Hsiung Hsu, whose book 'Behind Deep Blue' told the story of how world chess champion Garry Kasparov was defeated by the IBM computer known as Deep Blue, is now a senior manager and researcher at Microsoft Research Asia."

Bill Joy. "Hope Is a Lousy Defense." Sun refugee Bill Joy talks about greedy markets, reckless science, and runaway technology. On the plus side, there's still some good software out there. By Spencer Reiss. Wired Magazine (December 2003; Issue 11.12).

Takeo Kanade. Toward a More Human Robot - Carnegie Mellon's Takeo Kanade explains why making smarter systems requires better understanding about how people really act. Interview by Cliff Edwards. BusinessWeek Online (November 24, 2004). "Q: What's ripe for innovation? A: Certainly, I'd like to comment on my own area, that is robotics, artificial intelligence [AI], and the like. My own thinking today is that I think we should understand how humans act and use that [insight] to develop a better system that serves for human. You can call it AI. I'm more interested in, and I believe it's useful and enormously valuable to understand, how humans function, not necessarily how humans are made. ... Q: What are the hurdles that robotics and AI need to overcome? A: The hurdle is we do not know ourselves, how we are doing. In general, I call it an invisible robotics -- environmental robotics. The environment as a whole is a robot, not the human individual humanoid or arm or mobile robot. ... Q: Is there a problem in the U.S. of underfunding areas of research? A: I'm less familiar about that area. I'm mostly dealing with places like DARPA [the Defense Advanced Research Projects Agency]. My concern is that we may be reducing what I call playfulness. In research, a large part of it is based on results. We're too result-oriented. The hallmark of the U.S., and I came from Japan and was very impressed with the difference I found, was what I call this playfulness -- people willing to pay money for those things which appeared to be somewhat ridiculous ideas. ..."

Garry Kasparov. Every move you make. New Scientist (July 12, 2003; page 40). "Do you fear that computer intelligence will come to challenge humans in the long term? Machines use 95 per cent calculation and 5 per cent so-called 'positional understanding', which a machine inherits from its creators. Humans use 99 per cent intuition and 1 per cent calculation, but very often we come to the same conclusion. So does it mean that the machine's process is an imitation of human intelligence? Here, the game of chess raises an important issue: should we judge artificial intelligence by the machine's performance or by the result?"

Joseph Konstan on Human-Computer Interaction, Recommender Systems, Collaboration, and Social Good. Ubiquity (March 24 - April 1, 2005; Volume 6, Issue 10). "My takeaway message for the computer scientists here is there are some very interesting opportunities to collaborate with people solving big problems in the world, whether you're interested in AIDS and medical problems, or the kind of work that Negroponte was talking about with the hundred dollar computers for the developing world, or dozens of other things. There are a lot of opportunities there where you can make a difference."

Kraftwerk. Man-Machines of Loving Grace - Kraftwerk return! By Jay Babcock. LA Weekly (June 3 - 9, 2005). "Next Tuesday, German electronic-music pioneers Kraftwerk will perform in Los Angeles for the first time since their now-legendary show at the Hollywood Palladium in 1996. ...We all know Kraftwerk songs -- odes to transportation like 'Autobahn' and 'Trans-Europe Express,' future/now manifestoes like 'Man/Machine' and 'The Robots' -- but it’s in the live context, where the songs are joined to specially designed graphics, that Kraftwerk achieves a purity of all-encompassing vision that secular music rarely touches. It’s all about rapture, and an interaction with -- or longing for -- a relationship with something other than human. On the telephone, Ralf Hutter -- co-founder of Kraftwerk with Florian Schneider, and now approaching 60 years of age -- is helpful and deliberate, like a professor pleased to have a visitor who’s interested in his research on an obscure subject. L.A. WEEKLY: There's a bumper sticker that says 'Drum machines have no soul.' Do you think that is true? ... Would you consider the Kraftwerk concept to be basically optimistic about the relationship between man and machine? ... There’s an almost universal fascination with machines and computers, but at the same time, isn't there a cultural fear of the future, of machines taking over? A fear of cyborgs? ... What do you think about artificial intelligence? Do you think it's possible that a machine can become sentient? ... When you let machines play at concerts -- especially when there are actual robot versions of Kraftwerk onstage in place of the humans -- when you do that, and the audience applauds at the end of the song, what are the people applauding for?..."

Benjamin Kuipers. Making Sense of Common Sense Knowledge. Ubiquity; Volume 4, Number 45 (January 13 - 19, 2004). "A long time ago, I wrote the following definition: 'Commonsense knowledge is knowledge about the structure of the external world that is acquired and applied without concentrated effort by any normal human that allows him or her to meet the everyday demands of the physical, spatial, temporal and social environment with a reasonable degree of success.' I still think this is a pretty good definition (though I might remove the restriction to the "external" world)."

Ray Kurzweil:

  • Nanobots Will Help Battle Ills In Future. By Brian Deagon. Investor's Business Daily & Investors.com (October 21, 2005). "Ray Kurzweil wears many hats. He's a prolific inventor and businessman. He wrote a book on how to live forever. He speaks eloquently on technology, artificial intelligence, genetics and robotics. But he is best known as a futurist. His new book, 'The Singularity Is Near,' is a bold view of what the world could be like in 30 years and beyond. And just how might that world be? Well, your best friend might be one you build yourself. Kurzweil's view of the future includes computers that function just like a human brain, with emotions. ... Kurzweil recently spoke with IBD about this brave new world. IBD: How did you become a futurist? ... IBD: Couldn't a computer go bad, just like some humans do? Kurzweil: I discuss this promise vs. peril of technology in my book. One of the most daunting is pathological artificial intelligence. How do you protect yourself from an intelligent entity that's destructive? My response is that AI will not be off in one corner. It will be deeply integrated into our civilization, society, our bodies and brains. And we will have conflicts with our enhanced intelligence. The way to counteract that is to keep our values of openness, freedom, civil liberties and democracy alive in our civilization. Because we are going to merge with the machines. IBD: So we shouldn't worry about machines taking control? ..."
  • Deciphering a brave new world. By Declan McCullagh. CNET News.com (September 29, 2005). "Ray Kurzweil was one of the most remarkable and prolific inventors of the late 20th century. Now Kurzweil, who can claim credit for developing the first text-to-speech synthesizer and the first CCD flat-bed scanner, is busy inventing a future in which humans merge with machines and the pace of technological development accelerates beyond recognition. ... CNET News.com spoke with Kurzweil on Wednesday about his book tour, his views of the melding of man and machine and the political ramifications of having hyper-intelligence initially available to the very wealthy." [audio available]
  • Robot wars - Technology guru Ray Kurzweil offers a vision of future fighting machines. By Philip Ball. news @ nature.com (February 8, 2005). "BALL: How will warfare change in the next 50 years? KURZWEIL: ... Already, our abilities benefit from close collaboration with machines. Within 50 years, the non-biological portion of the intelligence of our civilization will predominate. Applying non-biological intelligence to areas such as strategy, decision-making and intelligent weapons will characterize military power."
  • An Adventurous Thinker. Interview with Ray Kurzweil. DevSource (December 12, 2004). "Ray Kurzweil was the principal developer of the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first CCD flat-bed scanner and the first commercially marketed large-vocabulary speech recognition. He's a big name in artificial intelligence, nanotechnology, and --- what's this?! --- advances in extended, healthy lifetimes."
  • Machine visionary - Author and inventor Ray Kurzweil is an authority on artificial intelligence. Interviewed by Hamish Mackintosh. The Guardian (February 6, 2003).
  • Ray Kurzweil Speaks His Mind. Sidebar to Sari Kalin's article, Are You There, God? (It's Me, HAL) - Science Meets Spirituality. Darwin Magazine (December 2001). Questions include: Will robots ever become conscious? ... How would we ever prove that a machine is -- or isn't -- conscious? ... Are these computer-induced changes you predict a threat to human civilization as we know it?
  • The Story of the 21st Century. Interviewed by Rebecca Zacks in the January/February 2000 issue of Technology Review.
  • Q&A with Kurzweil's Ray Kurzweil. Interviewed by Paul C. Judge. BusinessWeek (updated February 12, 1998). "Q: How did you first get involved in speech-recognition technology? A: I started with an interest in pattern recognition, which was the science project that I developed to win the Westinghouse Science Award as a high school student. From there, I moved into optical character recognition. That was a solution in search of a problem. That's what led me into reading machines for the blind. It combined optical character recognition and a speech synthesizer, which took the text from a page that was scanned in and read it out loud in a synthesized voice."

Doug Lenat. The Brain Behind Cyc. By Sid Moody. The Austin Chronicle (December 24, 1999).

Tod Machover. Interview. The composer and electronic instrument maker from MIT's Media Lab discusses his latest project: the Brain Opera. Scientific American (August 1996).

Pattie Maes:

  • Pattie Maes on Software Agents: Humanizing the Global Computer. Interviewed by Charles Petrie and Meredith Wiggins. IEEE Internet Computing Online; Vol. 1, No. 4 (1997). "Traditional AI approaches, which use symbolic knowledge representations that embody fundamental "rules of thought," have been turned upside down by the new school, who write simple, small programs that are designed to let intelligence evolve as the programs interact. ... Maes and her Software Agents Group at MIT have taken this principle of interaction and married it to the Internet with the development of software agents that interact with other agents or humans to provide useful services, usually using a Web interface."
  • Intelligence Augmentation - A Talk with Pattie Maes. Introduction by John Brockman. Edge / Third Culture (January 20, 1998). "I started out doing artificial intelligence, basically trying to study intelligence and intelligent behavior by synthesizing intelligent machines, I realized that what I've been doing in the last seven years could better be referred to as intelligence augmentation, so it's IA as opposed to AI. I'm not trying to understand intelligence and build this stand-alone intelligent machine that is as intelligent as a human and that hopefully teaches us something about how intelligence in humans may work, but instead what I'm doing is building integrated forms of man and machine, and even multiple men and multiple machines, that have as a result that one individual can be super-intelligent, so it's more about making people more intelligent and allowing people to be able to deal with more stuff, more problems, more tasks, more information. Rather than copying ourselves, I'm building machines that can do that."

John Markoff: What the Dormouse Said: An interview with John Markoff. Ubiquity (August 10 - 16, 2005; Volume 6, Issue 29). "UBIQUITY: Congratulations on 'What the Dormouse Said' --- it's a fascinating book. Tell us about it. MARKOFF: Well, I guess I'd call it a revisionist history. It's about things that happened around Stanford University between roughly 1960 and 1975, and is a kind of pre-history of personal computing and the personal computer industry. What I was trying to do was to get at some of the culture through which the technology was developed. UBIQUITY: Why the cultural emphasis? MARKOFF: Because technology never happens in a vacuum. The book was an effort to try to pin down how personal computing first emerged around the Stanford campus at two laboratories in the 1960's: one was run by John McCarthy, and was called the Stanford Artificial Intelligence Laboratory; and the other was run by Doug Engelbart and known as the Augmentation Research Center or the Augmented Human Intellect Research Center. ..."

Maja Mataric. From Scientific American Frontiers' Cool Careers in Science. "Maja is working on developing the next generation of intelligent robots! How cool is that?!"

John McCarthy. Interviewed April 1983, by Philip J. Hilts. Omni Magazine.

  • For another perspective, see the abstract of John McCarthy's oral history interview

Pamela McCorduck. Q & A with the author of Machines Who Think: 25th anniversary edition. Natick, MA: A K Peters, Ltd., 2004. Questions include: How long has the human race dreamed about thinking machines? What does it mean that a machine beat Garry Kasparov, the world's chess champion? Artificial intelligence - is it real? What so-called smart computers do -- is that really thinking? Shouldn't we just say no to intelligent machines? Aren't the risks too scary? And what's ahead as AI succeeds even more?

Drew McDermott. Interviewed by Kentaro Toyama. In ACM Crossroads. "He's well-known in AI circles not only for his extensive work in logic, planning, and robotics, but also for his blunt public appraisals of the state of AI research." Question #1 is "What is AI?"

James McLurkin. Almost Human - Robotics in the 21st Century. Watch this interview from the WGBH Thinking Big series. (Broadcast date: October 5, 2005). "James McLurkin, a robotics engineer at the Computer Science & Artificial Intelligence Lab at the Massachusetts Institute of Technology, imagines a world filled with robots, where man-made intelligent machines do the work deemed too dangerous for people -- such as searching for survivors in the rubble of collapsed buildings or exploring the farthest reaches of space. McLurkin acknowledges that such sophisticated robots are a long way off, but he hopes to have a fun-filled career trying to make it happen."

Donald Michie: The very early days. Interviewed by Michael Bain for the Computer Conservation Society's seminar, Artificial Intelligence - Recollections of the Pioneers (October 2002). "Q: What was your earliest contact with the idea of intelligent machinery? A: Arriving at Bletchley Park in 1942 I formed a friendship with Alan Turing, and in April 1943 with Jack Good. The three of us formed a sort of discussion club focused around Turing's astonishing 'child machine' concept. His proposal was to use our knowledge of how the brain acquires its intelligence as a model for designing a teachable intelligent machine." You can read the interview (PDF), or watch it (Quicktime, Realmedia) via links from the seminar page.

Marvin Minsky:

  • Why A.I. Is Brain-Dead. Marvin Minsky sits in the "hot seat" and responds to a series of questions from Josh McHugh. Wired Magazine (August 2003).
  • Marvin Minsky Wants Machines To Get Emotional. By Tom Steinert-Threlkeld, ZDNet/Interactive Week. (February 25, 2001). "Because the main point of the book [The Emotion Machine] is that it's trying to make theories of how thinking works. Our traditional idea is that there is something called 'thinking' and that it is contaminated, modulated or affected by emotions. What I am saying is that emotions aren't separate."
  • Consciousness is a Big Suitcase. A Talk with Marvin Minsky. Introduction by John Brockman. Edge / Third Culture (February 27, 1998). "Marvin Minsky is the leading light of AI, that is, artificial intelligence. He sees the brain as a myriad of structures. Scientists who, like Minsky, take the strong AI view believe that a computer model of the brain will be able to explain what we know of the brain's cognitive abilities. Minsky identifies consciousness with high-level, abstract thought, and believes that in principle machines can do everything a conscious human being can do."
  • Marvin Minsky: Scientist on the Set - An Interview with Marvin Minsky. A chapter from Hal's Legacy (MIT Press). "David G. Stork: You -- along with John McCarthy, Claude Shannon, Nathaniel Rochester, and others -- are credited with founding the field of artificial intelligence (AI) at the famous Dartmouth conference in 1956. A decade later, in the mid-sixties, when Clarke and Kubrick began work on 2001 where was the field of AI? What were you trying to do?"
  • Understanding Musical Activities. A 1991 interview with Marvin Minsky, edited by Otto Laske. Also available as: A Conversation with Marvin Minsky. AI Magazine 13(3): Fall 1992, 31-45.
  • Also see this 1989 oral history interview from the Charles Babbage Institute.

Tom Mitchell. Interview from the Carnegie Mellon School of Computer Science's Look Who's Talking series. "Tom Mitchell is the director of the Carnegie Mellon University Center for Automated Learning and Discovery (CALD) and Fredkin Professor of AI and Learning. ... [Q:] Learning the brain's algorithms for doing things is very difficult, and is not very well understood as yet. Do you ever find it frustrating trying to get computers to learn things that we ourselves don't know the inner workings of? [A:] That's actually a very interesting observation -- I actually don't get frustrated by that -- why? [Laughs] I don't know! Maybe it's odd, but it's true that much of the work in machine learning -- how to get computers to learn -- has been kind of unguided by anything we know about human learning. It just grew up on its own -- 'ok, how would we engineer this system to look at a lot of data and discover regularities?' -- so people engineered those instead of looking at how humans do it and then trying to duplicate it. But recently, because I've been looking at the brain, I've been starting to learn more about what people know about human learning -- and it's very different. For example, when we humans learn, a big part of what determines whether we succeed or not is all about motivation. And there's nothing in machine learning algorithms that even remotely corresponds to motivation. So it's just a very different phenomenon ... maybe in 10 years we'll understand it better, but right now, the two are very different."

Hans Moravec. Interview. From Robot Books.com. November 28, 1998. "Hans Moravec is Director of the Mobile Robot Laboratory at Carnegie Mellon University. His latest book [Robot: Mere Machine to Transcendent Mind] considers the history and future of intelligent machines."

Brad Myers. CMU's Brad Myers. Technology Research News Editor Eric Smalley carried out an email conversation with Carnegie Mellon University professor Brad Myers. (August 22, 2005). "Myers: Another area that I think is going to take off is intelligent interfaces, where the system actively tries to be helpful and learns from the user."

Allen Newell. By Philip E. Agre. Artificial Intelligence 59(1-2): 415-449 (1993).

  • Also see this 1991 oral history interview from the Charles Babbage Institute.
  • For another perspective, search through Carnegie Mellon's Allen Newell Collection and see his original papers, complete with handwritten notations!

Seymour Papert. Sunday Profile presented by Geraldine Doogue. ABC Online (July 11, 2004). "Seymour Papert, a mathematician and pioneer in artificial intelligence, has radical ideas about how the education system should be overhauled. ... Geraldine Doogue: You were involved in the cutting edge of artificial intelligence in the 1960s, what were your ideas then about how far computers could go in replicating human intelligence? Seymour Papert: There’s a huge difference between the way people thought about artificial intelligence then and now. In those sixties, people in AI really thought in sort of galactic cosmic terms. We were interested in the possibility of some kind of artificial entity that would be as intelligent as a person and/or more intelligent. It was obvious, it still is obvious to me though, if you could make something as intelligent as a human it would be much more intelligent because there are many limitations that we have that a machine wouldn’t have. And if it could have all the things that we have it would have much more. ... Geraldine Doogue: Well, do you now think that as an elder of the tribe? Do you look back now and think ‘goodness that was the folly of youth’? Seymour Papert: Oh, I don’t think it’s the folly of youth; I think it will come. What I think has become clearer is that we need some great new insights… Geraldine Doogue: Into artificial intelligence? Seymour Papert: John McCarthy, who is one of the other people involved in this, proposed a measure of greatness of idea, like one Einstein, is one of these ideas that happens once or twice a century. And the idea that you could use computers to do some things that the brain does -- that the mind does -- is maybe an Einstein’s worth of insight. And McCarthy guessed we need, at least, maybe one Einstein’s worth or maybe two Einstein’s. … Seymour Papert: Here’s a little curious thing that I’ve recently become intrigued by. 
I worked during the 80s developing a way of children doing robotics using LEGO and eventually LEGO made this thing that they marketed under the name of my book Mindstorms which is build LEGO but instead of LEGO just being an architectural passive thing you make things it can do that can act to have behaviour. So you’ve got motors and gears and sensors and a little computer in it, so you can program it to do things. LEGO marketed this for pre-teen boys, which annoyed me a lot. ... Interesting thing that we stumbled on was whenever we get a group of these kids working with this technology, there’s always some, a kid or two who drifts up as the expert. The one that everybody looks to for more knowledge -- it’s always a girl."

Alex 'Sandy' Pentland. Perspectives from the field. Interviewed by Gail Repsher Emery. Washington Technology (June 21, 2004; Vol. 19 No. 6). "The work of MIT's Alex 'Sandy' Pentland encompasses areas such as wearable computing, human-machine interfaces and artificial intelligence. ... WT: Your group pioneered the idea of wearable computers about 15 years ago. How has the field evolved? Pentland: About 15 years ago, the idea of putting computers and sensors on the body sounded quite crazy. But we won, it's here. All of you carry little computers, called cell phones, that are Internet connected and have some sort of sensors. ... WT: Technology can connect people, but it can also watch them without their knowledge. How do we make sure it's used for good purposes? ... WT: When will the technology be capable of knowing what I'm doing and when to take a message or interrupt me? Pentland: We can do that today. ..."

Rosalind Picard. Interviewed in First Monday. "Rosalind Picard is NEC Development Professor of Computers and Communications and Associate Professor of Media Technology at MIT. ... Her research interests include affective computing; texture and pattern modeling; and browsing and retrieval in video and image libraries. Her most recent book, Affective Computing, was just published by MIT Press...."

Kanna Rajan. ‘We Must Continue The Quest To Outer Planets To Discover Our Origins.’ Interviewed by Nivedita Mookerji. The Financial Express (January 19, 2004). "Principal investigator and project lead for the Mars Mission’s on-ground software effort, Kanna Rajan, in an interview to eFE, talks about the IT initiatives in MER, role of artificial intelligence in it, relevance of such missions and more." Also see the related article.

Raj Reddy. Look Who's Talking. Herbert A. Simon University Professor, Carnegie Mellon University. "I came from Stanford where I was an Assistant Professor in the 1960s. I came here in 1969, and I've been here ever since. Most of what I do is in the area of artificial intelligence. In particular, computers that can speak, hear, see, and walk and so on. 10 years after I got here, we started the Robotics Institute. ... What's a day in your life like? ... [W]e are sending out a proposal to NSF [National Science Foundation]. This is being sent by Mel Siegel, Chuck Thorpe, Robotics Grad student M. Bernardine Dias, and I'm kind of part of it. So what we're sending out is what we call 'Technology Peace Corps'. And if it gets funded, it will be an option for undergraduates and graduates in a technical field to go to a third world country and live in a village for 2-3 months to find a socially relevant problem which could be solved through the use of technology. But you're not just acting like a peace-corps person; instead, you are looking at the problems and asking which of these problems can I solve with technology? I'm also always trying to think about research problems that are solvable. For example, with all the concerns about terrorists, there is this security issue, and it turns out we have built this thing called an 'Autonomous Land Vehicle'--a car that drives itself...."

Stuart Russell on the Future of Artificial Intelligence. Ubiquity; Volume 4, Issue 43 (December 24 - January 6, 2004). "UBIQUITY: The original grand vision of artificial intelligence (AI) in the 1950s and '60s seemed to dissipate into many small, disparate projects. Should this fragmentation be written off as an inevitable Humpty-Dumpty problem or is it possible to bring the fragments back together into a single field? RUSSELL: I think we can put it back together in the sense of being able to join the pieces. Of course, the pieces won't be subsumed under one Über theory of intelligence."

John R. Searle. June 1999 interview (in English) conducted at an authors' colloquium at the University of Bielefeld. An excerpt: "Actually, I think you misunderstand my position a little bit in the way you pose the question. I do not claim that all forms of artificial intelligence and cognitive science are based on philosophical errors. Rather I criticize only what I call strong artificial intelligence, or strong AI, and the corresponding branch of cognitive science, the branch of cognitive science that accepts strong AI. Strong AI is the view that the appropriately programmed digital computer thereby necessarily has a mind in exactly the same sense that you and I have minds."

  • Also see Generation5's interview with John Searle (2001): "I knew when I originally formulated the Chinese Room Argument that it was decisive against what I call 'Strong Artificial Intelligence', the theory that says the right computer program in any implementation whatever, would necessarily have mental contents in exactly the same sense that you and I have mental contents. ... What I did not anticipate is that there would be twenty years of continuing debate. ... I think that what I call weak AI, or cautious AI, is immensely useful. ... It is important to keep emphasizing that of course, in a sense, we are robots. We are all physical systems capable of behaving in certain ways. The point, however, is that unlike the standard robots of science fiction, we are actually conscious."

Noel Sharkey. Thinking robots – not quite yet. Professor Noel Sharkey left school at the age of 15 but is now one of our leading robotics experts. Chris Bond talked to him about the future of artificial intelligence. Yorkshire Post Today (March 9, 2005). "'Robotics, or automatons,' he says, 'goes back to around 3000BC and has always been associated with a kind of trickery and magic. Some Egyptian temples had talking statues, they had people inside but it was the same kind of fascination.' The first time a robot was seen in a film was in Fritz Lang's masterpiece Metropolis, but Prof Sharkey argues in reality we haven't come close to re-creating that. But he believes today's films can have a bearing on the future. 'The good thing about movies like Robots is that youngsters will look at what robots can do in it and that will be their creative aim. I continually meet children who come up with solutions to things that engineers couldn't come up with because they haven't learned constraints.'"

Craig Silverstein. Google's man behind the curtain. By Stefanie Olsen. CNET News.com (May 10, 2004). "If there ever was an employee who carried the water for Google, it's Craig Silverstein, employee No. 1, technology director and loyal chanter of the search company's 'don't be evil' mantra. ... In an interview before Google's IPO filing, Silverstein discussed.... When do you think that kind of artificially intelligent search will happen? ..."

Herbert Simon:

  • Herbert Simon: Interviewed June 1994, by Doug Stewart. Omni Magazine. One of the many probing questions is: "What is the main goal of AI?" Among the diverse subjects covered are the Logic Theorist, the General Problem Solver, BACON, MATER, economics, and cognitive science. [No longer available online.]
  • Herbert Simon: CMU's Simon reflects on how computers will continue to shape the world. By Byron Spice, Science Editor. Pittsburgh Post-Gazette (October 16, 2000). "He began as a political scientist, studying how parks department budgets were made in his native Milwaukee, which led him into economics and business administration. At Carnegie Tech in the mid-1950s, he and Allen Newell incorporated a new tool -- the computer -- into the study of decision making. In the process, they invented the first thinking machine and a field that would become known as artificial intelligence." Read his response to the question: "Q: So a computer could someday deserve a Nobel?"
  • Herbert A. Simon, A Day in the Life of. From ACM Crossroads. "What I do to relieve stress: There isn't much stress when you're doing what you like to do. Besides there are always old Marx Brothers or Charlie Chaplin movies."

Aaron Sloman. Interviewed by Patrice Terrier for EACE Quarterly (August 1999; updated 11 July 2002). "[PT] Our readers who are aware of your work published in artificial intelligence journals are not necessarily aware of your work on philosophy of mind. Typically, while cognitive ergonomists assume some commonalities between brains and computers, they could also doubt the importance of being well educated in philosophy of mind for a researcher interested in design and usability issues. [AS] The short answer is this: Those who are ignorant of philosophy are doomed to reinvent it badly. A longer answer was provided in the papers by John McCarthy and myself written for a 'Philosophical Encounter' at the 14th International Joint Conference on AI at Montreal in 1995. ... One of the benefits of philosophical expertise is having the ability to produce good analyses of concepts that are used in specifying human mental capabilities (motivation, intention, attitudes, emotions, values, etc.)"

Will Smith. I, Robocop - Will Smith raps about busting bot outlaws, his secret geek past, and the future of thinking Machines. By Jennifer Hillner. Wired Magazine (July 2004; Issue 12.07). "Will Smith is science fiction's leading man. ... In July, the high tech bad boy goes back to the future in I, Robot as a police detective investigating a murder allegedly committed by a bot. Driving through Manhattan's West Village in his black SUV, the former Fresh Prince admits he's all about getting geeky with it. ... [Q] Like when you were recruited by MIT, but didn't apply. [A] Yeah. I never had any intention of going. My mother graduated from Carnegie Mellon. She was very serious about college, but I wanted to rap. [Q] Can you imagine what your life would have been like if you had gone? [A] I would have made a billion dollars and been broke by now. ... [Q] I understand Proyas asked the entire cast to read Ray Kurzweil's The Age of Spiritual Machines. What did you think of the book? ... [Q] Where do you think robotics is headed? [A] I think that machines will definitely get to the point that they become intuitive. Or they become what appears to be intuitive. In some 7-Elevens, they have intuitive programming for the surveillance cameras. They recognize the mannerisms of people who steal and become intuitive with who they follow. That's very scary. Some people could say, That's not intuition, that's programming. But at some point, after it catches nine out of ten people who are stealing, something works. [Q] Do you worry about Big Brother watching you? ..."

Luc Steels. Creating a Robot Culture. Interviewed by Tyrus L. Manuel. IEEE Intelligent Systems (May/June 2003). "The well-known researcher shares his views on the Turing test, robot evolution, and the quest to understand intelligence."

Austin Tate. Interviewed. Among the many questions posed, you'll find these two from "Patrick Nanson and Jasmeen Mia, eighth grade students doing a school research project on AI ... What would you say is the most "intelligent" computer yet made? Why is that computer considered to be intelligent?"

Astro Teller. Interviewed. From AnnOnline. You can hear an interview with the author of the AI sci-fi book, Exegesis, (1997) and also follow a link to the Barnes & Noble page where you'll find another link to a second interview with the author.

Manuela Veloso. Look Who's Talking. Professor, Carnegie Mellon University. "No doubt, most people associate your name with RoboCup. Can you give a little background on how exactly RoboCup started and how you got involved? So RoboCup started in about 1996, and it was a result of some people in Japan, Hiroaki Kitano and myself, and some others getting interested in the problem of multi-robot systems -- systems that involve more than one robot accomplishing tasks. I had been doing research in planning -- in very classical AI [Artificial Intelligence] planning algorithms. But then I became interested in planning and execution and I had been working with some of my students, Karen Haigh, my first student, on planning and execution in a real robot. Then in 1994, my student Peter Stone saw a little demo of a one-on-one playing soccer at a major AI conference, and this was a little demo set up by Michael Sahota, who was a student at the University of British Columbia in Canada, and his advisor was Alan Mackworth. Peter was a big soccer fan and he was all interested in this little demo -- so he asked me if he could do his thesis on robot soccer. So the combination of my interest on planning and execution in real robots, my own growing interest in multi-robot problem, and my student Peter Stone's having seen the demo and his love for soccer all kind of made it happen."

Heinz von Foerster. Interviewed by Stefano Franchi, Güven Güzeldere, and Eric Minch. From Constructions of the Mind: Artificial Intelligence and the Humanities, a special issue of the Stanford Humanities Review. Volume 4, issue 2 (1995). "Heinz von Foerster, in an interview with the editors, and in his accompanying essay, examines an alternative approach to the scientific exploration of human cognitive functions. He speaks about cybernetics, a scientific discipline created by Norbert Wiener and augmented by himself that inaugurated a new scientific approach to the study of the mind. Unfortunately, cybernetics fell into disgrace in the wake of AI's meteoric ascendance to intellectual stardom. Von Foerster explains the intellectual, institutional, and political reasons motivating such an historical evolution." (This passage is taken from the Introduction to the special issue.)

Harry Wechsler. GMU's Harry Wechsler (October 31, 2005). "Technology Research News Editor Eric Smalley carried out an email conversation with Harry Wechsler, Professor of Computer Science and Director of the Distributed and Intelligent Computation Center at George Mason University. Wechsler's research centers around making computers more intelligent by giving them the ability to recognize patterns. ... TRN: Tell me about the trends in pattern recognition research. What are the pluses and minuses of these technologies as they exist today? Wechsler: Not much different from 30 - 40 years back. Some of the big news, e.g., statistical learning theory and support vector machines (SVM) owe their existence to research done in the 60s. ... TRN: Research on giving machines the ability to accurately perceive their surroundings has advanced considerably in recent years but remains a major challenge. What will it take to build machines that can operate effectively in unfamiliar, dynamic environments? ... TRN: Machine perception and pattern recognition technologies are increasingly applied to problems of tracking and understanding human behavior. What are the social and economic implications of these technologies? ... TRN: Can you describe for the layperson what 'backpropagation' is? ... TRN: What are the possibilities and limits of data mining, and what are the social and economic implications of using the techniques you and others are developing?"

William L. "Red" Whittaker. The Tool Guy: Red Whittaker Responds. Astrobiology Magazine (May 24, 2004). "[P]rincipal scientist with the Robotics Institute at Carnegie Mellon University. He is also director of the Field Robotics Center, which he founded in 1986. Projects under his direction include unmanned robots to explore planetary surfaces and volcano interiors, and autonomous land vehicle navigation. ... On April 16, Red Whittaker testified before the President's Commission on Moon, Mars and Beyond about the role robotics will play in the future of space exploration."

R. Michael Young. Games of infinite possibilities. By Jonathan B. Cox. The News & Observer (January 15, 2003). "R. Michael Young, an assistant professor of computer science at N.C. State University, is working on research that might one day make video games more enjoyable. Young, 41, is studying ways to build artificial intelligence -- the ability of computers to act like humans -- into games so that users get movielike stories. With such technology, for example, a game could adjust to a player's actions and provide a different experience every time it is played. He sat down with Connect's Jonathan B. Cox to discuss his work."

Radio & Television Interviews

"IT Conversations is a network of high-end tech talk-radio interviews, discussions and presentations from major conferences delivered live and on-demand via the Internet. It's a one-person labor of love. Doug Kaye is ITC's host, producer, developer, writer, interviewer and engineer. He launched IT Conversations in June 2003 and produces three to five programs each week." Check out the exciting programs featured on the home page, in the archives, or start with:

  • The Voices in Your Head series: "Host Dave Slusher interviews writers, musicians and other creative people about the effect of technology on their art and vice versa."
    • James P. Hogan (December 22, 2004). "James P. Hogan and host Dave Slusher discuss how the film 2001 started Hogan on a career as an author, on his relationship with Marvin Minsky and the world of artificial intelligence...."
  • Also available from IT Conversations are presentations from John Markoff, Peter Norvig, and others.

"NerdTV is a new weekly online TV show from PBS.org technology columnist Robert X. Cringely. NerdTV is essentially Charlie Rose for geeks - a one-hour interview show with a single guest from the world of technology."

The Charlie Rose Show:

  • A Conversation About Artificial Intelligence, with Rodney Brooks (Director, MIT Artificial Intelligence Laboratory & Fujitsu Professor of Computer Science & Engineering, MIT), Eric Horvitz (Senior Researcher and Group Manager, Adaptive Systems & Interaction Group, Microsoft Research), and Ron Brachman (Director, Information Processing Technology Office, Defense Advanced Research Projects Agency, and President, American Association for Artificial Intelligence) (December 21, 2004).
  • Ray Kurzweil, talking about his book, The Singularity is Near (November 1, 2005).
  • Gordon Moore, Cofounder of Intel Corporation (November 14, 2005).

We've also assembled a collection of radio and television interviews on our It's Show Time page.

Online Collections of Interviews

A Day In The Life. From ACM Crossroads. A collection of interviews which provide a peek into the lives of computer scientists, interface designers, and others. Be sure to see the one with Herbert Simon.

Interview Archive from ACM's Ubiquity (IT Magazine & Forum). Here's where you'll find interviews such as Emotion and Affect with Don Norman; Diversity in Computing with Valerie Taylor; and Inside PARC with Johan de Kleer.

Gurus of Tech - Conversations from Tech's Cutting Edge. "What's the latest from leaders in the fields of nanotech, genomics, search, and robotics? Here are their progress reports and more." BusinessWeek Online: Technology Special Report (May 6, 2004).

Interview Collection. Generation 5 has a wonderful (and growing) collection of interviews conducted with Tim Crane, Teuvo Kohonen, Steven Levy, Marvin Minsky, Melanie Mitchell, Roger Schank and others.

Interviews. A very nice collection from New Scientist.

Interviews. From The Smithsonian National Museum of American History. "Although the development of modern communications and computers is among the most important aspects of modern American history, historical writing about the development is remarkably sparse. And few of the leaders of the development have written their own memoirs. The Smithsonian Institution is capturing the recollections of some of these people in the form of oral and video histories."

Interviews. From Women in Computer Science, Carnegie Mellon's Women@SCS.

The People of AI. This collection of student-conducted interviews is from the overview of Artificial Intelligence created for ThinkQuest. Among those interviewed are Peter Ross, Barbara Hayes-Roth, and David Waltz.

TRN's View from the High Ground: Email Conversations with Researchers in High Places. Conversations from the Technology Research News collection include: CMU's Brad Myers (August 22, 2005), Georgia Tech's Ronald Arkin (September 12, 2005) and GMU's Harry Wechsler (October 31, 2005).

"You might be wondering why we call these documents 'oral histories' rather than 'interviews.' An interview is a finished product that you might see in the newspaper, on TV, or in some other medium. It is meant to convey particular information. An oral history, on the other hand, is considered by historians to be a "primary source," raw data from which they will, in combination with other raw data, create historical narratives." --- from the IEEE History Center's Oral History page


Oral, Video & Personal Histories

Curious Minds: How a Child Becomes a Scientist. Edited by, and with an Introduction by, John Brockman. Pantheon Books (US), Jonathan Cape (UK): August 2004. Overview from the Edge Foundation, Inc. "Original essays by Nicholas Humphrey * David M. Buss * Robert M. Sapolsky * Mihaly Csikszentmihalyi * Murray Gell-Mann * Alison Gopnik * Paul C. W. Davies * Freeman Dyson * Lee Smolin * Steven Pinker * Mary Catherine Bateson * Lynn Margulis * Jaron Lanier * Richard Dawkins * Howard Gardner * Joseph LeDoux * Sherry Turkle * Marc D. Hauser * Ray Kurzweil * Janna Levin * Rodney Brooks * J. Doyne Farmer * Steven Strogatz * Tim White * V. S. Ramachandran * Daniel C. Dennett * Judith Rich Harris ... fascinating and original collection of essays from twenty-seven of the world's most interesting scientists about the moments and events in their childhoods that set them on the paths that would define their lives."

Carnegie Mellon School of Computer Science's "Look Who's Talking" collection. Faculty and alumni interviews include Laurie E. Damianos, M. Bernardine Dias, Tom Mitchell, Raj Reddy, Manuela Veloso.

The Charles Babbage Institute (CBI), University of Minnesota, Minneapolis, Oral History Collection. Abstracts of the interviews and most of the transcripts are available online.

  • Here are a few examples of what you'll find:
    • Edward Feigenbaum. Oral history interview by William Aspray, 3 March 1989, Palo Alto, California.
      • Abstract: "Feigenbaum begins the interview with a description of his initial recruitment by ARPA in 1964 to work on a time-sharing system at Berkeley and his subsequent move to Stanford in 1965 to continue to do ARPA-sponsored research in artificial intelligence. The bulk of the interview is concerned with his work on AI at Stanford from 1965 to the early 1970s and his impression of the general working relationship between the IPT Office at ARPA and the researchers at Stanford. He discusses how this relationship changed over time under the various IPT directorships and the resulting impact it had on their AI research. The interview also includes a general comparison of ARPA with other funding sources available to AI researchers, particularly in terms of their respective funding amounts, criteria for allocation, and management style. This interview was recorded as part of a research project on the influence of the Defense Advanced Research Projects Agency (DARPA) on the development of computer science in the United States."
    • J. C. R. Licklider. Oral history interview by William Aspray and Arthur L. Norberg, 28 October 1988, Cambridge, Massachusetts.
      • Abstract: "Licklider, the first director of the Advanced Research Projects Agency's (ARPA) Information Processing Techniques Office (IPTO), discusses his work at Lincoln Laboratory and IPTO. Topics include: personnel recruitment; the interrelations between the various Massachusetts Institute of Technology laboratories; Licklider's relationship with Bolt, Beranek, and Newman; the work of ARPA director Jack Ruina; IPTO's influence on computer science research in the areas of interactive computing and timesharing; the ARPA contracting process; the work of Ivan Sutherland."
    • John McCarthy. Oral history interview by William Aspray, 2 March 1989, Palo Alto, California.
      • Abstract: "McCarthy begins this interview with a discussion of the initial establishment and development of time-sharing at the Massachusetts Institute of Technology and the role he played in it. He then describes his subsequent move to Stanford in 1962 and the beginnings of his work in artificial intelligence (AI) funded by the Advanced Research Projects Agency. This work developed in two general directions: logic-based AI (LISP) and robotics. In the main section of the interview McCarthy discusses his view of the Defense Advanced Research Projects Agency's (DARPA) role in the support of AI research in the U.S. in general and at Stanford in particular. He specifically addresses the following issues: the relative importance of DARPA funding in comparison to other public and private sources, requirements and procedures undertaken to obtain DARPA funds, and changes over time in levels of support and requirements from DARPA. McCarthy concludes this interview with a brief description of the AI Laboratory at Stanford and his continued work on AI (funded by DARPA) with the Formal Reasoning Group."
    • Marvin Lee Minsky. Oral history interview by Arthur L. Norberg, 1 November 1989, Cambridge, Massachusetts.
      • Abstract: "Minsky describes artificial intelligence (AI) research at the Massachusetts Institute of Technology (MIT). Topics include: the work of John McCarthy; changes in the MIT research laboratories with the advent of Project MAC; research in the areas of expert systems, graphics, word processing, and time-sharing; variations in the Advanced Research Projects Agency (ARPA) attitude toward AI with changes in directorship; and the role of ARPA in AI research."
    • Allen Newell. Oral history interview by Arthur L. Norberg, 10-12 June 1991, Pittsburgh, Pennsylvania.
      • Abstract: "Newell discusses his entry into computer science, funding for computer science departments and research, the development of the Computer Science Department at Carnegie Mellon University, and the growth of the computer science and artificial intelligence research communities. Newell describes his introduction to computers through his interest in organizational theory and work with Herb Simon and the Rand Corporation. He discusses early funding of university computer research through the National Institutes of Health and the National Institute of Mental Health. He recounts the creation of the Information Processing Techniques Office (IPTO) under J. C. R. Licklider. Newell recalls the formation of the Computer Science Department at Carnegie Mellon and the work of Alan J. Perlis and Raj Reddy. He describes the early funding initiatives of the Advanced Research Projects Agency (ARPA) and the work of Burt Green, Robert Cooper, and Joseph Traub. Newell discusses George Heilmeier's attempts to cut back artificial intelligence, especially speech recognition, research. He compares research at the Massachusetts Institute of Technology and Stanford's Artificial Intelligence Laboratory and Computer Science Department with work done at Carnegie Mellon. Newell concludes the interview with a discussion of the creation of the ARPANET and a description of the involvement of the research community in influencing ARPA personnel and initiatives."
    • Nils J. Nilsson. Oral history interview by William Aspray, 1 March 1989, Palo Alto, California.
      • Abstract: "Nilsson begins the interview with a brief historical overview of DARPA-sponsored AI research at SRI, including his own work in robotics, research on the Computer Based Consultant, and related research on natural language and speech understanding. He notes the impact of the Mansfield amendment on DARPA funding for these projects at SRI. The major portion of the interview is concerned specifically with his work in robotics during the period 1966-1971. He describes the significance and relationship of this work to the larger field of AI, particularly the intellectual problems it addressed and the enabling technologies it helped develop. In the last section of the interview he gives a general impression of changes over time (from the early 1960s to the early 1970s) in funding trends and research emphases at DARPA. He concludes with a short list of contributions to AI research that came out of DARPA-sponsored work during this period."
    • Raj Reddy. Oral history interview by Arthur L. Norberg, 12 June 1991, Pittsburgh, Pennsylvania.
      • Abstract: "Reddy discusses his work in artificial intelligence (AI), especially speech recognition, from his graduate work at Stanford University through his research as a principal investigator on Defense Advanced Research Projects Agency (DARPA) grants at Carnegie-Mellon University. Other topics include: the interaction of researchers at the Stanford Artificial Intelligence Laboratory, DARPA funding of AI research, the expansion of the principal investigator community over time, and the various directions of AI research from the 1960s to the 1980s."

IEEE History Center Oral Histories. See their list of online oral histories, many of which deal with computers.

The Joshua Lederberg Papers: part of the National Library of Medicine's Profiles in Science archival collection. Materials include:

  • How DENDRAL was conceived and born. Typescript of Lederberg's November 5, 1987 talk at the Association for Computing Machinery Symposium on the History of Medical Informatics. "As agreed with your organizers, this will be a somewhat personal history. They have given me permission to recall how I came to work with Ed Feigenbaum on DENDRAL, an exemplar of expert systems and of modelling problem-solving behavior. My recollections are based on a modest effort of historiography, but not a definitive survey of and search for all relevant documents. On the other hand, they will give more of the flow of ideas and events as they happened than is customary in published papers in scientific journals...."
  • Early interest in science: a video clip from Barbara Hyde's March 22, 1996 oral history interview for the American Society for Microbiology, made available through the Profiles in Science collection: The Joshua Lederberg Papers.
  • Overview: Computers, Artificial Intelligence, and Expert Systems in Biomedical Research.

Sloan Project MouseSite: "This project aimed to construct a website that would engage the community of computer scientists and engineers who participated in the early developments of the field of human computer interaction in documenting and writing their own history. We focused on the work of Douglas C. Engelbart and the group of researchers who worked with him at Stanford Research Institute in Menlo Park, California from 1962 until the mid-1970s." It's part of Science and Technology in the Making.

Smithsonian Videohistory Program: Robotics. "Robotics is the applied science of intelligent machines, a field of research that combines electrical, electronic, and mechanical engineering. Steven Lubar, curator in the Division of Engineering and Industry at the Smithsonian's National Museum of American History (NMAH), recorded four sessions with robots designers to document different work styles, environments, and the processes by which engineers make decisions. He captured the style of work at two university settings and a corporate site to understand how their differing objectives influenced technological development. His goal was to interview researchers working with their machines--to document the 'hands-on' aspect of development--and to record the robots in use. Lubar was also interested in documenting the interactions between researchers, the robots, and their environment."

Stanford and the Silicon Valley. Oral history interviews with Douglas Engelbart and Bruce Deal.

If you are interested in being the source for an AI-related oral history (or if you just want to send us a recollection or two), please see our online brochure for The Wellspring Initiative and then contact us.

Talking Heads...A Review of Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists. By Patrick J. Hayes and Kenneth M. Ford. 1997. AI Magazine 18 (2): 123-125. [The book review is available online.]

Talking Nets: An Oral History of Neural Networks. Anderson, James A., and Edward Rosenfeld, editors. 1998. Cambridge, MA: MIT Press/Bradford Books. Interviews with founders of the field of study, including how these scientists from different disciplines became interested in neural networks, and what future developments they see. Excerpts are available online.

WISE. Archives of Women in Science & Engineering (WISE) Oral History Project from the Special Collections Department at Iowa State University. "The Project will involve conducting approximately 50 interviews with women who were being educated or working in science and engineering during World War II and the post-war period. These interviews will document the difficult experiences of these women and the inroads made into what had been seen as male areas of research and work as well as providing information concerning the impact of the women's movement."

Related Resources & Web Sites

The Faces of Science: African Americans in the Sciences. An internet presentation from Mitchell C. Brown, Librarian, Princeton University. Be sure to scroll down to the entries for "Computer Scientists".

... more history, herstory, and ourstory can be found on our HISTORY page !

Interview Archive

Oliver Selfridge - in from the start. Interviewed by Peter Selfridge in IEEE Expert, October 1996 (Vol. 11, No. 5). "Driven by his curiosity about the nature of learning, Oliver Selfridge has spent over a half century enmeshed in the most exciting developments in artificial intelligence, communications, and computer science. A participant at the original conference at Dartmouth in 1956 (and at the Western Joint Computer Conference in Los Angeles the year before, which he considers the true start of AI), Selfridge formed working relationships and cemented friendships with AI's founding members--John McCarthy, Marvin Minsky, and Allen Newell, among others--as he went on to become a true AI pioneer himself."