Machine Learning

Good Places to Start

Readings Online

Related Web Sites

Related Pages

More Readings


"If an expert system--brilliantly designed, engineered and implemented--cannot learn not to repeat its mistakes, it is not as intelligent as a worm or a sea anemone or a kitten."
- Oliver G. Selfridge, from The Gardens of Learning.

"Find a bug in a program, and fix it, and the program will work today. Show the program how to find and fix a bug, and the program will work forever."
- Oliver G. Selfridge, in AI's Greatest Trends and Controversies

Machine learning refers to a system capable of the autonomous acquisition and integration of knowledge. This capacity to learn from experience, analytical observation, and other means results in a system that can continuously self-improve and thereby offer increased efficiency and effectiveness.
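
To make the definition concrete, here is a minimal, purely illustrative sketch (not drawn from any of the resources below) of a learner that acquires and integrates knowledge simply by storing labeled experience; its predictions tend to improve as more examples accumulate. All names in the snippet are invented for illustration.

```python
# Minimal sketch (illustrative only): a learner that "acquires and integrates
# knowledge" by storing labeled experience, and whose predictions tend to
# improve as more examples are observed (1-nearest-neighbor rule).

class ExperienceLearner:
    def __init__(self):
        self.memory = []  # accumulated (features, label) experience

    def integrate(self, features, label):
        """Acquire one new piece of experience."""
        self.memory.append((features, label))

    def predict(self, features):
        """Answer with the label of the most similar stored experience."""
        if not self.memory:
            return None
        def distance(stored):
            return sum((a - b) ** 2 for a, b in zip(stored[0], features))
        return min(self.memory, key=distance)[1]

learner = ExperienceLearner()
learner.integrate((1.0, 1.0), "spam")
learner.integrate((0.0, 0.2), "not spam")
print(learner.predict((0.9, 0.8)))  # -> "spam"
```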


Good Places to Start

Does Machine Learning Really Work? By Tom M. Mitchell. AI Magazine, 18(3): Fall 1997, 11-20. "Yes. Over the past decade, machine learning has evolved from a field of laboratory demonstrations to a field of significant commercial value. ... This article, based on the keynote talk presented at the Thirteenth National Conference on Artificial Intelligence, samples a number of recent accomplishments in machine learning and looks at where the field might be headed."

Introduction to Machine Learning - Draft of Incomplete Notes. By Nils J. Nilsson. "The notes survey many of the important topics in machine learning circa 1996. My intention was to pursue a middle ground between theory and practice. The notes concentrate on the important ideas in machine learning---it is neither a handbook of practice nor a compendium of theoretical proofs. My goal was to give the reader sufficient preparation to make the extensive literature on machine learning accessible."

  • Chapter 1, Introduction: What is Machine Learning? "Machine learning usually refers to the changes in systems that perform tasks associated with artificial intelligence (AI). Such tasks involve recognition, diagnosis, planning, robot control, prediction, etc. ... To be slightly more specific, we show the architecture of a typical AI 'agent' in Fig. 1.1. ... One might ask 'Why should machines have to learn? Why not design machines to perform as desired in the first place?' There are several reasons why machine learning is important. ... "

Machine learns games 'like a human.' By Will Knight. New Scientist News (January 24, 2005). "A computer that learns to play 'scissors, paper, stone' by observing and mimicking human players could lead to machines that automatically learn how to spot an intruder or perform vital maintenance work, say UK researchers. CogVis, developed by scientists at the University of Leeds in Yorkshire, UK, teaches itself how to play the children's game by searching for patterns in video and audio of human players and then building its own 'hypotheses' about the game's rules. In contrast to older artificial intelligence (AI) programs that mimic human behaviour using hard-coded rules, CogVis takes a more human approach, learning through observation and mimicry, the researchers say. ... 'A system that can observe events in an unknown scenario, learn and participate just as a child would is almost the Holy Grail of AI,' says Derek Magee from the University of Leeds." Be sure to see the sidebar with related articles & web sites.
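
The article gives no implementation details, but the general idea of inducing a game's rules from observation can be sketched with a toy example: tally which move is seen to beat which across observed rounds and keep those relations as hypothesized rules. This is only an illustration, not the CogVis system; the data below are invented.

```python
# Toy sketch (not the CogVis system): induce the outcome rules of
# scissors/paper/stone purely from observed rounds, by counting which
# move was declared the winner against which other move.

from collections import Counter

# (player A move, player B move, winning move) -- invented observations
observed_rounds = [
    ("stone", "scissors", "stone"),
    ("paper", "stone", "paper"),
    ("scissors", "paper", "scissors"),
    ("stone", "scissors", "stone"),
]

beats = Counter()
for move_a, move_b, winner in observed_rounds:
    loser = move_b if winner == move_a else move_a
    beats[(winner, loser)] += 1

# Hypothesized rule set: keep any beat-relation seen in the data.
rules = {pair for pair, count in beats.items() if count > 0}
print(sorted(rules))
# [('paper', 'stone'), ('scissors', 'paper'), ('stone', 'scissors')]
```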

Machine Learning Lecture Notes. From Charles R. Dyer, University of Wisconsin - Madison.

Machine Learning. Section 1.2.8 of Chapter One (available online) of George F. Luger's textbook, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th Edition (Addison-Wesley; 2005). "The importance of learning, however, is beyond question, particularly as this ability is one of the most important components of intelligent behavior. ... Although learning is a difficult area, there are several programs that suggest that it is not impossible. One striking program is AM, the Automated Mathematician, designed to discover mathematical laws (Lenat 1977, 1982). Initially given the concepts and axioms of set theory, AM was able to induce such important mathematical concepts as cardinality, integer arithmetic, and many of the results of number theory. AM conjectured new theorems by modifying its current knowledge base and used heuristics to pursue the 'best' of a number of possible alternative theorems. ... Early influential work includes Winston's research on the induction of structural concepts such as 'arch' from a set of examples in the blocks world (Winston 1975 a)."


A Machine With a Mind of Its Own - Ross King wanted a research assistant who would work 24/7 without sleep or food. So he built one. By Oliver Morton. Wired Magazine (August 2004, Issue 12.08). "The 'robot scientist' (King has resisted the temptation of a jazzy acronym) may look like a mere labor-saving gizmo, shuttling back and forth ad nauseam, but it's much more than that. Biology is full of tools with which to make discoveries. Here's a tool that can make discoveries on its own. ... Stephen Muggleton argues that the life sciences are peculiarly well suited to machine learning. 'There's an inherent structure in biological problems that lends itself to computational approaches,' he says. In other words, biology reveals the machinelike substructure of the living world; it's not surprising that machines are showing an aptitude for it."

Two courses from MIT's OpenCourseWare, "a free and open educational resource for faculty, students, and self-learners around the world."

  • Artificial Intelligence; Spring 2003. Professors Tomás Lozano-Pérez & Leslie Kaelbling. See the Machine Learning lecture slides and accompanying transcripts.
  • Machine Learning; Fall 2002. Professor Tommi Jaakkola. "6.867 is offered under the department's 'Artificial Intelligence and Applications' concentration. The site offers a full set of lecture notes, homework assignments, in addition to other materials used by students in the course. 6.867 is an introductory course on machine learning which provides an overview of many techniques and algorithms in machine learning, beginning with topics such as simple perceptrons and ending up with more recent topics such as boosting, support vector machines, hidden Markov models, and Bayesian networks. The course gives the student the basic ideas and intuition behind modern machine learning methods as well as a bit more formal understanding of how and why they work."
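
As a taste of the "simple perceptrons" starting point mentioned in the course description, here is a minimal sketch of the classic perceptron's mistake-driven update rule. It is purely illustrative and not taken from the course materials.

```python
# Minimal perceptron sketch: the classic mistake-driven update rule that
# introductory machine learning courses typically use as a starting point.

def train_perceptron(examples, epochs=10):
    """examples: list of (feature_vector, label) with label in {-1, +1}."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified: nudge the boundary
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Tiny linearly separable example: OR-like data.
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print(w, b)
```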

Glossary of Terms. Special Issue on Applications of Machine Learning and the Knowledge Discovery Process. Ron Kohavi and Foster Provost, eds. Machine Learning, 30: 271-274 (1998). "To help readers understand common terms in machine learning, statistics, and data mining, we provide a glossary of common terms."

Readings Online

Applying Metrics to Machine-Learning Tools: A Knowledge Engineering Approach. Fernando Alonso, Luis Mate, Natalia Juristo, Pedro L. Munoz, and Juan Pazos. AI Magazine 15(3): Fall 1994, 63-75. "The field of knowledge engineering has been one of the most visible successes of AI to date. Knowledge acquisition is the main bottleneck in the knowledge engineer's work. Machine-learning tools have contributed positively to the process of trying to eliminate or open up this bottleneck, but how do we know whether the field is progressing? How can we determine the progress made in any of its branches? How can we be sure of an advance and take advantage of it? This article proposes a benchmark as a classificatory, comparative, and metric criterion for machine-learning tools. The benchmark centers on the knowledge engineering viewpoint, covering some of the characteristics the knowledge engineer wants to find in a machine-learning tool."

Machine Learning: A Historical and Methodological Analysis. By Jaime G. Carbonell, Ryszard S. Michalski, and Tom M. Mitchell. AI Magazine 4(3): Fall 1983, 69-79. Abstract: "Machine learning has always been an integral part of artificial intelligence, and its methodology has evolved in concert with the major concerns of the field. In response to the difficulties of encoding ever-increasing volumes of knowledge in modern AI systems, many researchers have recently turned their attention to machine learning as a means to overcome the knowledge acquisition bottleneck. This article presents a taxonomic analysis of machine learning organized primarily by learning strategies and secondarily by knowledge representation and application areas. A historical survey outlining the development of various approaches to machine learning is presented from early neural networks to present knowledge-intensive techniques."

Brain learns like a robot - Scan shows how we form opinions. By Tanguy Chouard. Nature Science Update (June 10, 2004). "Researchers may have pinpointed the brain regions that help us work out good from bad. And their results suggest that humans and robots are more alike than we may care to admit, as both use similar strategies to make value judgements. ... The team also plotted brain activity on a graph to give a mathematical description of processes that underlie the formation of value judgements. The patterns they saw resembled those made by robots as they learn from experience. 'The results were astounding,' says study co-author Peter Dayan. 'There was an almost perfect match between the brain signals and the numerical functions used in machine learning,' he says. This suggests that our brains are following the laws of artificial intelligence."
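
In this line of research the "numerical functions used in machine learning" are typically value-learning rules from reinforcement learning, such as the temporal-difference (TD) update sketched below. The snippet is an illustration of that rule, not code from the study itself.

```python
# Minimal temporal-difference (TD) sketch: the kind of value-learning rule
# from reinforcement learning whose prediction-error signal is compared
# with reward-related brain activity. Purely illustrative.

def td0_update(value, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error
    return td_error  # the 'prediction error' signal

value = {"cue": 0.0, "outcome": 0.0}
for _ in range(50):                       # repeated pairings of cue -> reward
    td0_update(value, "cue", "outcome", reward=1.0)
print(round(value["cue"], 3))             # the cue's learned value grows with experience
```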

Machine Learning Research: Four Current Directions. By Tom Dietterich. AI Magazine 18(4): Winter 1997, 97-136. Abstract: "Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models."
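
For direction (1), a minimal sketch of the ensemble idea: combine several imperfect classifiers by majority vote, which often yields better accuracy than any single member. This is only an illustration; the article surveys far more sophisticated ensemble methods.

```python
# Minimal ensemble sketch: combine classifiers by majority vote.

from collections import Counter

def majority_vote(classifiers, x):
    """Each classifier is a function x -> label; return the most common label."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three deliberately imperfect threshold rules for labeling a number.
clf1 = lambda x: "pos" if x > -1 else "neg"
clf2 = lambda x: "pos" if x > 1 else "neg"
clf3 = lambda x: "pos" if x > 0 else "neg"

print(majority_vote([clf1, clf2, clf3], 0.5))   # -> "pos"
print(majority_vote([clf1, clf2, clf3], -0.5))  # -> "neg"
```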

Learning. An overview by Patrick Doyle. Very informative, though there are some spots that are quite technical.

Journal of Machine Learning Research. "The Journal of Machine Learning Research (JMLR) provides an international forum for the electronic and paper publication of high-quality scholarly articles in all areas of machine learning."

Bookish Math - Statistical tests are unraveling knotty literary mysteries. By Erica Klarreich. Science News (December 20, 2003; Vol. 164, No. 25). "Stylometry ['the science of measuring literary style'] is now entering a golden era. In the past 15 years, researchers have developed an arsenal of mathematical tools, from statistical tests to artificial intelligence techniques, for use in determining authorship. ... For decades, computers have supported the work of experts in stylometry. Now, computers are becoming experts in their own right, as some researchers apply artificial intelligence techniques to the question of authorship. ... In 1993, Robert Matthews of Aston University in England and Thomas Merriam, an independent Shakespearean scholar in England, created a neural network that could distinguish between the plays of Shakespeare and of his contemporary Christopher Marlowe. A neural network is a computer architecture modeled on the human brain, consisting of nodes connected to each other by links of differing strengths. ... A couple of years later, Holmes and Richard Forsyth of the University of Luton in England used the Federalist Papers to test another artificial intelligence technique. They applied genetic algorithms, which use Darwinian principles of natural selection. The idea is to create a set of rules for determining authorship and then let the most useful, or fit, rules survive. ... Yet another analysis of the Federalist Papers was presented at a computer science conference in October. Glenn Fung of Siemens Medical Solutions in Malvern, Pa., used one of artificial intelligence's newest tools, a pattern-recognition technique called support-vector machines."
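
None of the systems described above are reproduced here, but the core stylometric idea can be sketched in a few lines: represent each text by the relative frequencies of common function words and attribute a disputed text to the author whose profile is nearest. The word list and text fragments below are invented placeholders, not actual Federalist or Shakespearean text.

```python
# Toy stylometry sketch: attribute a disputed text to the author whose
# function-word frequency profile it most resembles. Illustrative only.

FUNCTION_WORDS = ["the", "of", "and", "to", "upon", "while"]

def profile(text):
    words = text.lower().split()
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def attribute(disputed, known_texts):
    """known_texts: dict author -> sample text. Returns the closest author."""
    target = profile(disputed)
    def dist(author):
        p = profile(known_texts[author])
        return sum((a - b) ** 2 for a, b in zip(p, target))
    return min(known_texts, key=dist)

known = {"Author A": "upon the whole of the measures taken",
         "Author B": "while the powers of the government and of the states remain"}
print(attribute("upon the whole the plan is sound", known))  # nearest stylistic match
```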

Machine Learning, Neural and Statistical Classification. Donald Michie, D. J. Spiegelhalter, and C. C. Taylor, editors. "[This] book (originally published in 1994 by Ellis Horwood) is now out of print. The copyright now resides with the editors who have decided to make the material freely available on the web."

AI and the Impending Revolution in Brain Sciences. Powerpoint slides of Tom Mitchell's AAAI Presidential Address, August 2002. [An associated video file is also available from his home page.] "Thesis of This Talk: The synergy between AI and Brain Sciences will yield profound advances in our understanding of intelligence over the coming decade, fundamentally changing the nature of our field."

Statistical Data Mining Tutorials - Tutorial Slides by Andrew Moore, professor of Robotics and Computer Science at the School of Computer Science, Carnegie Mellon University. "The following links point to a set of tutorials on many aspects of statistical data mining, including the foundations of probability, the foundations of statistical data analysis, and most of the classic machine learning and data mining algorithms."

Automated Learning and Discovery State-Of-The-Art and Research Topics in a Rapidly Growing Field. By Sebastian Thrun, Christos Faloutsos, Tom Mitchell, and Larry Wasserman. AI Magazine 20(3): Fall 1999, 78-82. "This article summarizes the Conference on Automated Learning and Discovery (CONALD), which took place in June 1998 at Carnegie Mellon University. CONALD brought together an interdisciplinary group of scientists concerned with decision making based on data. One of the meeting's focal points was the identification of promising research topics, which are discussed toward the end of this article."

Related Web Sites

AI on the Web: Machine Learning. A resource companion to Stuart Russell and Peter Norvig's "Artificial Intelligence: A Modern Approach" with links to reference material, people, research groups, books, companies and much more.

"The Adaptive Systems Group at the Navy Center for Applied Research in Artificial Intelligence is performing state-of-the-art research in machine learning and robotics, with emphasis on techniques that will allow the creation of systems that are more adaptive to changes in their environment and to changes in their own capabilities." Check out their research projects, such as:

  • Continuous and Embedded Learning (Anytime Learning): "Continuous and embedded learning is a general approach to continuous learning in a changing environment. ... The basic idea is to integrate two continuously running modules: an execution module and a learning module. This work is part of an ongoing investigation of machine learning techniques for solving sequential decision problems."
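
A hedged sketch of that two-module idea follows (not the NCARAI implementation): an execution module keeps acting with the best strategy found so far, while a learning module continually tests candidate strategies against an internal simulation and promotes any that score better. All names and the scoring function are hypothetical.

```python
# Hedged sketch of the two continuously running modules: execution keeps
# using the current best strategy while learning searches for a better one.

import random

def simulate(strategy):
    """Stand-in for an internal simulation that scores a strategy (higher is better)."""
    target = 0.7                     # hypothetical optimum for the current environment
    return -abs(strategy - target)

def act(strategy):
    pass                             # placeholder for the execution module's real actions

def anytime_learning(steps=200):
    best = random.random()           # strategy currently used by the execution module
    for _ in range(steps):
        act(best)                    # execution module: act with the current best
        candidate = random.random()  # learning module: propose an alternative
        if simulate(candidate) > simulate(best):
            best = candidate         # promote the better strategy without stopping execution
    return best

print(round(anytime_learning(), 2))  # typically close to the simulated optimum
```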

Applications of Machine Learning collection from the Alberta Ingenuity Centre for Machine Learning.

Center for Automated Learning and Discovery (CALD). "What is CALD? The Center for Automated Learning and Discovery (CALD) is an academic department within Carnegie Mellon University's School of Computer Science. We focus on research and education in all areas of statistical machine learning. What is Machine Learning? Machine Learning is a scientific field addressing the question 'How can we program systems to automatically learn and to improve with experience? ..."

"Grammatical Inference, variously refered to as automata induction, grammar induction, and automatic language acquisition, refers to the process of learning of grammars and languages from data. Machine learning of grammars finds a variety of applications in syntactic pattern recognition, adaptive intelligent agents, diagnosis, computational biology, systems modelling, prediction, natural language acquisition, data mining and knowledge discovery. ... This homepage is designed to be a centralized resource information on Grammatical Inference and its applications. We hope that this information will be useful to both newcomers to the field as well as seasoned campaigners"

Index of Machine Learning Courses. Maintained by Vasant Honavar, Artificial Intelligence Research Group, Department of Computer Science, Iowa State University. When you visit the page for any given course, be sure to check out sections such as 'course readings' and 'additional resources', as you're sure to find plenty of gems there.

MLnet OiS. "Welcome to the MLnet Online Information Service (the successor of the ML-Archive at GMD). This site is dedicated to the field of machine learning, knowledge discovery, case-based reasoning, knowledge acquisition, and data mining. Get information about research groups and persons within the community. Browse through the list of software and data sets, and check out our events page for the latest calls for papers. Alternatively have a look at our list of job offerings if you are looking for a new opportunity within the field." This web site is funded by the European Commission. Here are some links to just a few of their collections:

Machine Learning at IBM. "The Machine Learning Group [Haifa] specializes in developing algorithms for automatic pattern recognition, prediction, analysis, classification, and learning of structures."

Machine Learning and Applied Statistics at Microsoft. "The Machine Learning and Applied Statistics (MLAS) group is focused on learning from data and data mining. By building software that automatically learns from data, we enable applications that (1) do intelligent tasks such as handwriting recognition and natural-language processing, and (2) help human data analysts more easily explore and better understand their data."

Machine Learning and Data Mining Group at the Austrian Research Institute for Artificial Intelligence (ÖFAI). Projects, publications, and more.

Machine Learning Dictionary. Compiled by Bill Wilson, Associate Professor in the Artificial Intelligence Group, School of Computer Science and Engineering, University of NSW. "You should use The Machine Learning Dictionary to clarify or revise concepts that you have already met. The Machine Learning Dictionary is not a suitable way to begin to learn about Machine Learning."

Machine Learning in Games. Maintained by Jay Scott. "How computers can learn to get better at playing games. This site is for artificial intelligence researchers and intrepid game programmers. I describe game programs and their workings; they rely on heuristic search algorithms, neural networks, genetic algorithms, temporal differences, and other methods. I keep a big list of online research papers. And there's more."

Machine Learning Resources. Maintained by David Aha. Links to a wealth of information await you at this site.

The Machine Learning Systems (MLS) Group at the Jet Propulsion Laboratory, California Institute of Technology. Read about projects such as Onboard Science Analysis - Autonomous Serendipitous Science Acquisition for Planets: "The main driver for sending spacecraft into the solar system is scientific investigation. The desire by JPL and NASA to develop a new generation of small, autonomous, capable spacecraft that are operable with low requirements for bandwidth, communications, and operations leaves little choice but to move capabilities for science data analysis onboard the spacecraft. Capabilities such as autonomous recognition and acquisition of science targets are essential if one is to achieve the strong requirements of reduced downlink bandwidth as well as reduced operations and sequencing from the ground."

"Sodarace [a joint venture between: soda and queen mary, university of london] is the online olympics pitting human creativity against machine learning in a competition to construct virtual racing robots. ... Sodarace is not just for fun. It is a shared competition for Artificial Intelligence researchers to test their learning algorithms while also being a play space in which to communicate the benefits of Artificial Intelligence research with a wide audience and promote a creative exploration of physics and engineering."

UCI Machine Learning. University of California, Irvine:

  • ML Programs. You'll find FOCL, Hydra, and others.
  • Repository. "This is a repository of databases, domain theories and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms."
  • Research. "Machine learning investigates the mechanisms by which knowledge is acquired through experience. ... Our research involves the development and analysis of algorithms that identify patterns in observed data in order to make predictions about unseen data. New learning algorithms often result from research into the effect of problem properties on the accuracy and run-time of existing algorithms."
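
The observed-versus-unseen distinction in that description can be illustrated with a minimal holdout sketch: fit a trivial threshold rule on training data and measure its accuracy on examples the rule never saw. The data and the rule below are made up purely for illustration.

```python
# Minimal sketch of learning from observed data and predicting unseen data:
# train a threshold rule on a holdout split and score it on the held-out part.

data = [(0.1, "neg"), (0.4, "neg"), (0.6, "pos"), (0.9, "pos"),
        (0.2, "neg"), (0.7, "pos"), (0.8, "pos"), (0.3, "neg")]

train, test = data[:6], data[6:]          # simple holdout split

# "Learn" a threshold: midpoint between the class means observed in training.
pos_values = [x for x, y in train if y == "pos"]
neg_values = [x for x, y in train if y == "neg"]
threshold = (sum(pos_values) / len(pos_values) + sum(neg_values) / len(neg_values)) / 2

predict = lambda x: "pos" if x > threshold else "neg"
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy on unseen data: {accuracy:.2f}")
```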

Related Pages

More Readings

Abu-Mostafa, Yaser. 1995. Machines That Learn From Hints. Scientific American 272(4) (April 1995): 64-69. Machine learning improves significantly by taking advantage of information available from intelligent hints.

Dietterich, Thomas G. 1990. Machine Learning. In Annual Review of Computer Science, Volume 4, 1989-1990, ed. Traub, Joseph F., Barbara J. Grosz, Butler W. Lampson, et al., Palo Alto, CA: Annual Reviews, Inc.

Kaelbling, L. P., M. L. Littman, and A. W. Moore. 1996. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research 4: 237-285.

Kearns, M., and U. Vazirani. 1994. An Introduction to Computational Learning Theory. Cambridge, MA: MIT Press.

Langley, Pat. 1995. Elements of Machine Learning. San Francisco: Morgan Kaufmann.

Luger, George F. 2004. Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th Edition). Addison-Wesley.

Mechner, David A. 1998. All Systems Go. The Sciences 38 (Jan/Feb 1998): 32-7.

Michalski, Ryszard, and Gheorghe Tecuci, editors. 1993. Machine Learning: A Multi-Strategy Approach, Volume IV. San Francisco: Morgan Kaufmann.

Minton, Steven, editor. 1993. Machine Learning Methods for Planning. San Francisco: Morgan Kaufmann.

Mitchell, Tom. 1997. Machine Learning. McGraw-Hill.

  • Slides for instructors are available.

Nilsson, Nils. 1990. The Mathematical Foundations of Learning Machines. San Francisco: Morgan Kaufmann. A reprinted version of Learning Machines: Foundations of Trainable Pattern-Classifying Systems, N. Nilsson. New York: McGraw Hill, 1965.

Patterson, Dan W. 1990. Early Work in Machine Learning. In Introduction to Artificial Intelligence and Expert Systems by Dan W. Patterson, 367-380. Englewood Cliffs, NJ: Prentice Hall.

Samuel, Arthur L. 1959. Some Studies in Machine Learning Using the Game of Checkers. In Computation and Intelligence: Collected Readings, ed. Luger, George F., Menlo Park, CA/Cambridge, MA/London: AAAI Press/The MIT Press, 1995.

Shavlik, J., and T. Dietterich, editors. 1990. Readings in Machine Learning. San Mateo, CA: Morgan Kaufmann.

Sussman, G. 1975. A Computer Model of Skill Acquisition. Amsterdam: Elsevier/North Holland. A classic work.

Sutton, Richard S., and Andrew G. Barto. 1998. Reinforcement Learning. Cambridge, MA: MIT Press/Bradford Books. Covers the main concepts and algorithms of reinforcement learning, some history, recent developments, and applications.

Wayner, Peter. 1995. Machine Learning Grows Up. Byte 20 (August 1995): 63-64+.

Weiss, Sholom M., and Casimir A. Kulikowski. 1991. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. San Mateo, CA: Morgan Kaufmann.