Neural Networks & Connectionist Systems
(a subtopic of Machine Learning)

Good Places to Start

Readings Online

Related Web Sites

Related Pages

More Readings
(see FAQ)

Recent News about the Topic (annotated)



 

 
Neural models of intelligence emphasize the brain's ability to adapt to the world in which it is situated by modifying the relationships between individual neurons. Rather than representing knowledge in explicit logical sentences, they capture it implicitly, as a property of patterns of relationships.
- George F. Luger

The human brain is an incredibly impressive information processor, even though it "works" quite a bit slower than an ordinary computer. Many researchers in artificial intelligence look to the organization of the brain as a model for building intelligent machines. Think of a sort of "analogy" between the complex webs of interconnected neurons in a brain and the densely interconnected units making up an artificial neural network (ANN), where each unit--just like a biological neuron--is capable of taking in a number of inputs and producing an output.

Consider this description: "To develop a feel for this analogy, let us consider a few facts from neurobiology. The human brain is estimated to contain a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others. Neuron activity is typically excited or inhibited through connections to other neurons. The fastest neuron switching times are known to be on the order of 10^-3 seconds---quite slow compared to computer switching speeds of 10^-10 seconds. Yet humans are able to make surprisingly complex decisions, surprisingly quickly. For example, it requires approximately 10^-1 seconds to visually recognize your mother. Notice the sequence of neuron firings that can take place during this 10^-1-second interval cannot possibly be longer than a few hundred steps, given the switching speed of single neurons. This observation has led many to speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. One motivation for ANN systems is to capture this kind of highly parallel computation based on distributed representations." [From Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).]
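The arithmetic behind Mitchell's "few hundred steps" observation is easy to check for yourself. The sketch below (our own illustration, not from the book) divides the recognition time by a single neuron's switching time to bound the depth of any strictly sequential chain of firings:

```python
# Back-of-the-envelope arithmetic behind Mitchell's observation:
# recognition time divided by one neuron's switching time bounds
# how many strictly sequential neuron firings could possibly occur.
recognition_time = 1e-1  # seconds to visually recognize a face
switching_time = 1e-3    # fastest neuron switching time, in seconds
max_sequential_steps = round(recognition_time / switching_time)
print(max_sequential_steps)  # → 100
```

Only about a hundred serial steps fit in the interval, which is why the brain's processing is thought to be massively parallel rather than a long sequential program.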


Good Places to Start

AI's Next Brain Wave. New research in artificial intelligence could lay the groundwork for computer systems that learn from their users and the world around them. Part four in The Future Of Software series. By Aaron Ricadela. InformationWeek (April 25, 2005). "As the excitement about traditional AI waned in the late '80s, development of artificial neural networks picked up steam. Instead of manipulating and relating symbols about concepts in the world, neural networks operate according to lists of numbers representing problems and potential solutions. These artificial neurons could learn about relationships based on a training set of solutions and eventually became stacked into 'layers,' so the output of one neural network could form the input of another. Researchers at IBM's Watson Laboratory in Yorktown Heights, N.Y., are trying to make the model even more complex, building layered neural networks that behave according to biological characteristics of the nervous systems of vertebrates. The four-year program, called systems neurocomputing, is far reaching; IBM is funding it under a category it calls adventurous research. Charles Peck, program director of neural computing research, has a background in neuroscience, mathematics, and artificial intelligence, and researcher staff member James Kozloski has a Ph.D. in neuroscience from the University of Pennsylvania, where he studied the nervous systems of African river fish. Systems neurocomputing aims to address a conundrum in AI: that it's virtually impossible to write programs that know in advance all the unfamiliar elements of every task they may encounter."

Artificial Neural Networks. Brief introduction with great illustrations. By Nathan Botts. Encyclopedia of Educational Technology (San Diego State University).

Computers and Symbols versus Nets and Neurons, Chapter One of Neural Net notes, by Kevin Gurney, Psychology Department, University of Sheffield.

What is a Neural Net? 'A Brief Introduction' from CorMac Technologies. Very basic and the examples about bank loans and real estate appraisals really help to put everything in context.

Why Neural Networks? From NeuralWare. "In essence, neural networks are mathematical constructs that emulate the processes people use to recognize patterns, learn tasks, and solve problems. Neural networks are usually characterized in terms of the number and types of connections between individual processing elements, called neurons, and the learning rules used when data is presented to the network. Every neuron has a transfer function, typically non-linear, that generates a single output value from all of the input values that are applied to the neuron. Every connection has a weight that is applied to the input value associated with the connection. A particular organization of neurons and connections is often referred to as a neural network architecture. The power of neural networks comes from their ability to learn from experience (that is, from historical data collected in some problem domain)."
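The ingredients NeuralWare describes -- weighted connections, a non-linear transfer function, a single output per neuron -- can be sketched in a few lines of Python. This is our own minimal illustration (the function name and sample values are invented for the example), using the common sigmoid as the transfer function:

```python
import math

def neuron_output(inputs, weights, bias):
    """A single artificial neuron: each input is scaled by its
    connection weight, the results are summed with a bias, and
    the total is passed through a non-linear transfer function
    (here, the sigmoid, which squashes any value into (0, 1))."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Two inputs, each scaled by its connection weight.
out = neuron_output([1.0, 0.5], weights=[0.4, -0.2], bias=0.1)
print(round(out, 3))  # → 0.599
```

A full network is just many such units wired together, with the outputs of one layer serving as the inputs of the next; learning consists of adjusting the weights.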

What is a Neural Network? and related Video Presentation. From NeuroSolutions. "A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform 'intelligent' tasks similar to those performed by the human brain. ... The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data being modeled."

Simple Networks, Simple Rules: Learning and Creating Categories. An interactive experience from Paul Grobstein, Professor of Biology at Bryn Mawr College, and a founder of the Serendip site. Be sure to check out the "Going Beyond" resources at the end of the article.

Neural Network Applet : "Inspired by neurons and their connections in the brain, neural networks are a representation used in machine learning. After running the back-propagation learning algorithm on a given set of examples, the neural network can be used to predict outcomes for any set of input values." From CISpace - Tools for Learning Computational Intelligence: "Here are some applets that are designed as tools for learning and exploring concepts in artificial intelligence. They are part of the online resources for Computational Intelligence. If you are teaching or learning about AI, you may use these applets freely."

Georgia Tech Researchers Use Lab Cultures To Control Robotic Device. Science Daily, based upon a news release from the Georgia Institute Of Technology (April 28, 2003). "The Hybrot, a small robot that moves about using the brain signals of a rat, is the first robotic device whose movements are controlled by a network of cultured neuron cells. Steve Potter and his research team in the Laboratory for Neuroengineering at the Georgia Institute of Technology are studying the basics of learning, memory, and information processing using neural networks in vitro. Their goal is to create computing systems that perform more like the human brain. ... 'Learning is often defined as a lasting change in behavior, resulting from experience,' Potter said. 'In order for a cultured network to learn, it must be able to behave. By using multi-electrode arrays as a two-way interface to cultured mammalian cortical networks, we have given these networks an artificial body with which to behave.'"

FAQ: Neural Networks. Maintained by Warren S. Sarle. An archived list of questions and answers from the neural nets newsgroup. The listings start with very basic information, but go on to include more technical material.

  • Don't miss the fascinating list of Applications which covers such topics as: Agriculture, Chemistry, Finance and Economics, Games and Gambling, Industry, Materials Science, Medicine, Music, Robotics, and Weather Forecasting.

Readings Online

Perceptrons: Basic Neural Networking. An essay from AI Horizons. "Perceptrons are the easiest data structures to learn for the study of Neural Networking. Think of a perceptron as a node of a vast, interconnected network, sort of like a data tree, although the network does not necessarily have to have a top and bottom. The links between the nodes not only show the relationship between the nodes but also transmit data and information, called a signal or impulse."
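The perceptron described in the essay can be implemented in a dozen lines. The sketch below (our own illustration, not from AI Horizons) uses the classic perceptron learning rule -- nudge the weights toward the target whenever the thresholded output is wrong -- to learn the logical AND function:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: when the unit's thresholded
    output disagrees with the target, shift each weight in
    proportion to the error and the corresponding input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this procedure finds a correct set of weights; for non-separable problems such as XOR, a single perceptron cannot succeed, which is what motivated multi-layer networks.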

What is neuro-fuzzy logic? By Surjit Singh Bhatti. The Tribune (Chandigarh, India; October 24, 2002). "While fuzzy logic uses approximate human reasoning in knowledge-based systems, the neural networks aim at pattern recognition, optimisation and decision making. A combination of these two technological innovations delivers the best results. This has led to a new science called neuro-fuzzy logic in which the explicit knowledge representation of fuzzy logic is augmented by the learning power of simulated neural networks."

Mimicking fraudsters - If your card use has been queried, it's probably because more banks are now using artificial intelligence software to try to detect fraud. By Ken Young. The Guardian. "Credit card fraud losses in the UK fell for the first time in nearly a decade last year, by more than 5% to £402.4m, according to research by the Association of Payment Clearing Services (Apacs). The fall has put a spotlight on the increasing use of neural networks that have the ability to detect fraudulent behaviour by analysing transactions and alerting staff to suspicious activity."

Computers try to outthink terrorists. By Bruce V. Bigelow. The San Diego Union-Tribune (January 13, 2002). Also available from UC San Diego. "Known as machine learning or neural networks, such technology uses the power of computer processing in a fundamentally different way than conventional computers do. Instead of a logic-oriented program that follows a set of step-by-step instructions that lead to a definitive answer, machine learning uses statistical modeling techniques to produce an optimum answer. Such technology is ideal for sifting through vast amounts of data and finding peculiar patterns -- what engineers sometimes call 'signal-to-noise ratio' problems. One of the earliest uses was in anti-submarine warfare -- processing signals from underwater listening networks and identifying the telltale patterns of enemy submarines."

The Rebirth of Artificial Intelligence. Lisa DiCarlo. Forbes (5.16.00). "AI is having a resurgence, courtesy of a ten-year-old approach called neural networks. Neural networks are modeled on the logical associations made by the human brain. In computer-speak, they're based on mathematical models that accumulate data, or 'knowledge,' based on parameters set by administrators. Once the network is 'trained' to recognize these parameters, it can make an evaluation, reach a conclusion and take action."

Connectionism. James W. Garson authored this entry in the Stanford Encyclopedia of Philosophy. Topics include: A Description of Neural Networks, Neural Network Learning and Backpropagation, and Connectionist Representation.

It's Only Checkers, but the Computer Taught Itself. By James Glanz. The New York Times (July 25, 2000). "Two computer scientists have leveled the playing field by asking a computer program called a neural network to do something much more difficult than beat a defenseless human at checkers. Knowing only the rules of checkers and a few basics, and otherwise starting from scratch, the program must teach itself how to play a good game without help from the outside world -- including from the programmers. ... The neural networks that Dr. Fogel has bred into checkers players exist as software programs on his personal computer. To understand them, however, it helps to visualize the physical structures that the software is modeling. Those structures consist of a rather crude representation of the interconnected networks of neurons in the brain."

Artificial Neural Networks. By Alexx Kay. Computerworld (February 12, 2001). "Computers organized like your brain: that's what artificial neural networks are, and that's why they can solve problems other computers can't. ... The first artificial neural network was invented in 1958 by psychologist Frank Rosenblatt. Called Perceptron, it was intended to model how the human brain processed visual data and learned to recognize objects. ... Broadly speaking, there are two methods for training an ANN, depending on the problem it must solve. A self-organizing ANN (often called a Kohonen after its inventor) is exposed to large amounts of data and tends to discover patterns and relationships in that data. Researchers often use this type to analyze experimental data. A back-propagation ANN, conversely, is trained by humans to perform specific tasks. ... Artificial neural networks have proved useful in a variety of real-world applications that deal with complex, often incomplete data. ... visual pattern recognition and speech recognition ... text-to-speech ... handwriting analysis programs (such as those used in popular PDAs) ... control machinery, adjust temperature settings, diagnose malfunctions.... Large financial institutions have used ANNs to improve performance in such areas as bond rating, credit scoring, target marketing and evaluating loan applications ... analyze credit card transactions to detect likely instances of fraud ... other kinds of crime, too."
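Kay's first category -- the self-organizing (Kohonen) network that discovers patterns in data without a teacher -- can be illustrated with a small sketch. The code below is our own toy version (names, data, and parameters invented for the example): each unit holds a weight vector, and for every sample the best-matching unit and its grid neighbours are pulled toward that sample, so the units gradually spread out over the data:

```python
import random

def quant_error(units, data):
    """Sum of squared distances from each sample to its closest unit;
    a rough measure of how well the map covers the data."""
    return sum(min(sum((u[d] - x[d]) ** 2 for d in range(2)) for u in units)
               for x in data)

def train_som(data, n_units=4, epochs=50, lr=0.3, seed=1):
    """Tiny 1-D self-organizing (Kohonen) map over 2-D samples."""
    rnd = random.Random(seed)
    units = [[rnd.random(), rnd.random()] for _ in range(n_units)]
    before = quant_error(units, data)
    for epoch in range(epochs):
        radius = 1 if epoch < epochs // 2 else 0  # shrink the neighbourhood
        for x in data:
            # best-matching unit = closest weight vector to the sample
            bmu = min(range(n_units),
                      key=lambda i: sum((units[i][d] - x[d]) ** 2
                                        for d in range(2)))
            for i in range(n_units):
                if abs(i - bmu) <= radius:
                    for d in range(2):
                        units[i][d] += lr * (x[d] - units[i][d])
    return before, quant_error(units, data), units

# two clusters of 2-D points; after training, units cover the data better
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
before, after, units = train_som(data)
print(round(before, 3), round(after, 3))  # quantization error drops
```

No target outputs are ever shown to the network; it organizes itself around whatever structure the data contains, which is why researchers use this type to explore experimental data.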

Neural Networks. Course materials for Graham Kendall's Introduction to Artificial Intelligence course, School of Computer Science & Information Technology, University of Nottingham. "In this section of the course we are going to consider neural networks. More correctly, we should call them Artificial Neural Networks (ANN), as we are not building neural networks from animal tissue. Rather, we are simulating, on a computer, what we understand about neural networks in the brain. ... We start this section of the course by looking at a brief history of the work done in the field of neural networks."

Bookish Math - Statistical tests are unraveling knotty literary mysteries. By Erica Klarreich. Science News (December 20, 2003; Vol. 164, No. 25). "Stylometry ['the science of measuring literary style'] is now entering a golden era. In the past 15 years, researchers have developed an arsenal of mathematical tools, from statistical tests to artificial intelligence techniques, for use in determining authorship. ... For decades, computers have supported the work of experts in stylometry. Now, computers are becoming experts in their own right, as some researchers apply artificial intelligence techniques to the question of authorship. ... In 1993, Robert Matthews of Aston University in England and Thomas Merriam, an independent Shakespearean scholar in England, created a neural network that could distinguish between the plays of Shakespeare and of his contemporary Christopher Marlowe. A neural network is a computer architecture modeled on the human brain, consisting of nodes connected to each other by links of differing strengths."

Neural-Network Technology Moves into the Mainstream. By Gene J. Koprowski. TechNewsWorld (August 7, 2003). "Real-time data mining -- powered by neural-network technology -- has begun to remake the way large corporations manage customer accounts. The technology has been helping companies gain deep insight into customer purchasing patterns."

Artificial Intelligence, Spring 2003. Professors Tomás Lozano-Pérez & Leslie Kaelbling. Available from MIT OpenCourseWare. "The site features a full set of course notes, in addition to other materials used in the course. [The course] introduces representations, techniques, and architectures used to build applied systems and to account for intelligence from a computational point of view." Their coverage of Neural Networks begins with Slide 12.2.1 in Chapter 12, Machine Learning IV, and continues with a discussion of training neural nets in Chapter 13, Machine Learning V.

A Brief History of Connectionism. By David A. Medler. Neural Computing Surveys, Volume 1. "Connectionism -- within cognitive science -- is a theory of information processing. Unlike classical systems which use explicit, often logical, rules arranged in an hierarchy to manipulate symbols in a serial manner, however, connectionist systems rely on parallel processing of sub-symbols, using statistical properties instead of logical rules to transform information. Connectionists base their models upon the known neurophysiology of the brain and attempt to incorporate those functional properties thought to be required for cognition. ... The processing units may refer to neurons, mathematical functions, or even demons à la Selfridge."

Neural Networks for Face Recognition. Companion to Chapter 4 of Tom Mitchell's textbook, Machine Learning. "A neural network learning algorithm called Backpropagation is among the most effective approaches to machine learning when the data includes complex sensory input such as images. This web page provides an implementation of the Backpropagation algorithm described in Chapter 4 of the textbook Machine Learning. It also includes the dataset discussed in Section 4.7 of the book, containing over 600 face images."
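Mitchell's chapter works through the Backpropagation algorithm in detail. As a complement, here is a toy sketch of the same idea on a tiny 2-2-1 sigmoid network (our own illustration, not the face-recognition code from the book; parameters and names are invented): a forward pass computes the output, deltas are propagated back from the output layer to the hidden layer, and every weight is nudged downhill on the squared error.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(data, epochs=3000, lr=0.5, seed=0):
    """Stochastic-gradient backpropagation for a 2-2-1 sigmoid network.
    w_h holds [w1, w2, bias] for each of 2 hidden units; w_o likewise
    for the single output unit (its inputs are the hidden activations)."""
    rnd = random.Random(seed)
    w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
    w_o = [rnd.uniform(-0.5, 0.5) for _ in range(3)]

    def forward(x1, x2):
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        return h, o

    def mse():
        return sum((t - forward(x1, x2)[1]) ** 2
                   for (x1, x2), t in data) / len(data)

    before = mse()
    for _ in range(epochs):
        for (x1, x2), t in data:
            h, o = forward(x1, x2)
            d_o = (t - o) * o * (1 - o)              # output-unit delta
            d_h = [d_o * w_o[i] * h[i] * (1 - h[i])  # hidden-unit deltas
                   for i in range(2)]
            for i in range(2):
                w_o[i] += lr * d_o * h[i]
                w_h[i][0] += lr * d_h[i] * x1
                w_h[i][1] += lr * d_h[i] * x2
                w_h[i][2] += lr * d_h[i]
            w_o[2] += lr * d_o                       # output bias
    return before, mse(), forward

OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
before, after, forward = train_backprop(OR)
print(round(before, 3), round(after, 3))  # squared error drops sharply
```

The same delta rules, applied layer by layer to much larger networks and image inputs, are what train the face-recognition network in Mitchell's Section 4.7.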

Asian Investors Seek Profit in Neural `Karma'. Commentary by Andy Mukherjee. Bloomberg News (March 23, 2004). "Using Paradigm's Forex DayTrader, which predicts movements in major currencies over a 24-hour time frame, the punter made a $46,000 profit in two days. ... DayTrader is one of more than 100 trading systems based on so-called neural networks that are supposed to mimic the way billions of brain cells work together to recognize patterns in complex data. Researchers have tried to replicate the human brain's neural circuitry in activities such as predicting energy prices and measuring creditworthiness. Unlike conventional software, systems based on neural networks aren't limited by their programmers' abilities. They learn better ways to analyze data as more information comes along. U.K.-based Retail Decisions uses neural networks to help online retailers prevent payment fraud. For two decades, researchers at universities in Britain and France have tried to build the perfect 'neural nose' that can discern smells. Such a system could alert the authorities to gas leaks, or warn retailers about foodstuff turning stale. Neural networks started appearing in the financial industry in the 1980s."

Robotrading 101 - Sophisticated computer programs take the human element out of picking winners on Wall Street. By James M. Pethokoukis. U.S. News & World Report (January 28, 2002). "Neural networks function more like the human brain. They can compare existing stock-trading patterns with previous situations and eventually 'learn' what works and what doesn't as the program digests more data."

Fuzzy logic and neural nets: still viable after all these years? Though no longer headliners, fuzzy logic and neural networks are options in tackling challenging applications. By Graham Prophet. EDN Magazine (June 10, 2004). "Neural networks, unlike fuzzy logic, seek to reproduce the versatility of the human brain in recognizing the end-to-end, input-to-output behavior of a system without understanding all the processes taking place within it. Taking as a fundamental model the interconnections of nervous systems within the brain -- neurons and synapses -- neural networks have the attributes of memory and learning. In applying a neural technique to a system, you show the network many examples of known-correct input/output-value pairs. In its learning mode, the network creates network connections with weighted values to match the data you provide and stores the values for the weighted connections that achieve the correct result. By exploring the whole input/output-value space, the network 'learns' to provide a correct response to any given input stimulus, without formally modeling the processes comprising the original system. An essential trick in designing a neural-network architecture is to achieve convergence; that is, as you show it successive input/output examples, it builds the ability to model the complete value space and does not 'forget' the examples it previously learned."

The Neural Network - Teaching Computers To Think. By Keith Schultz. Computer Power User. Volume 2, Issue 1 (January 2002): pages 62-63 in print issue. "The key idea behind neural networks is that they can take in a lot of data, process it in parallel, and provide accurate output, much as the human brain does. For example, when you see a cow, you know right away that it is a cow. You don't have to stop and count legs or look at shape and color. You process all of that data at the same time to know that you see a cow. That's what a neural network does for a computer system."

Brain Power. Editorial by Nigel Shadbolt. IEEE Intelligent Systems (May/June 2003). "Brains have always fascinated AI researchers. Little wonder, since our own brains are the only objects as yet capable of broad-ranging intelligent behavior. Interdisciplinary work between neuroscience and AI has a long history."

Neural nets explained: How neural networks give machines the ability to learn from experience. By Alan Zeichick. From the August 2000 issue of Red Herring Magazine.

Related Web Sites

"ANN in the Real World - Neural networks are appearing in ever increasing numbers of real world applications and are making real money." Check out this collection of applications from Makhfi.com's Fascinating World of Neural Nets.

Artificial Neural Networks in Medicine World Map offers "links to people and organisations working with medical applications of artificial neural networks and related techniques." Maintained by Daniel Ruhe.

"The Center for the Neural Basis of Cognition (CNBC) is a joint project of Carnegie Mellon University and the University of Pittsburgh. ... Created in 1994, the CNBC is dedicated to the study of the neural basis of cognitive processes, including learning and memory, language and thought, perception, attention, and planning. Studies of the neural basis of normal adult cognition, cognitive development, and disorders of cognition all fall within the purview of the CNBC. In addition, the CNBC promotes the application of the results of the study of the neural basis of cognition to artificial intelligence, technology, and medicine."

GasNets. "Research into diffusible 'GasNets' has attempted to abstract some of the concepts underlying gaseous neurotransmitters, in particular Nitric Oxide, and incorporate these concepts into a fundamentally new class of artificial neural network." From "the informal GasNets diffusion group" at the University of Sussex.

IEEE Neural Networks Society. Be sure to see their collection of Neural Computing Research Programs Worldwide, Other Neural Network Professional Societies, and Neural Computing Publications Worldwide.

Web Applets for Interactive Tutorials on Artificial Neural Learning. By Fred Corbett. "This tutorial was developed as part of my undergraduate thesis in Computer Engineering at the University of Manitoba, and was supervised by Dr. H. C. Card. The goal of this project was to demonstrate some elementary aspects of artificial neural networks (ANNs) in an interactive and, hopefully, interesting manner. ... This tutorial is currently divided into three sections [Artificial Neuron, Perceptron Learning, and Multi-Layer Perceptron]. Each section deals with a specific aspect of neural networks and includes a JavaTM applet. The sections include a brief introduction, some theory behind the applet, a set of instructions for using the applet, the source code, and the applet itself."

Related Pages

Readings

Abu-Mostafa, Yaser S. 1995. Machines that Learn From Hints. Scientific American 272 (April 1995): 64-69. Machine learning improves significantly by taking advantage of information available from intelligent hints.

Anderson, James A., and Edward Rosenfeld, editors. 1998. Talking Nets: An Oral History of Neural Networks. Cambridge, MA: MIT Press/Bradford Books. Interviews with founders of the field of study, including how these scientists from different disciplines became interested in neural networks, and what future developments they see. Excerpts are available online.

Anderson, James A. 1995. An Introduction to Neural Networks. Cambridge, MA: MIT Press/Bradford Books. Covers concepts in biology and psychology that underlie neural network models, and helps students understand brain functioning in terms of computational modeling.

Arbib, Michael A., editor. 1995. Handbook of Brain Theory and Neural Networks. Cambridge, MA: MIT Press. Articles from hundreds of experts chart recent progress in the study of how the brain works and how we can build intelligent machines.

Asakawa, Kazuo, and Hideyuki Takagi. 1994. Neural Networks in Japan. Communications of the ACM 37 (3): 106-112.

Bains, Sunny. 1997. A Subtler Silicon Cell for Neural Networks. Science 277 (September 26, 1997): 1935.

Bishop, C. M. 1995. Neural Networks for Pattern Recognition. Oxford, England: Oxford University Press.

Clark, Andy, and Rudi Lutz, editors. 1992. Connectionism in Context. New York: Springer Verlag.

Cowan, Jack D., and David H. Sharp. 1988. Neural Nets and Artificial Intelligence. In The Artificial Intelligence Debate: False Starts, Real Foundations, ed. Graubard, Stephen R., Cambridge, MA: MIT Press.

Diederich, Joachim, editor. 1990. Artificial Neural Networks : Concept Learning. Los Alamitos, CA: IEEE Computer Society Press.

Fausett, Laurene V. 1994. Fundamentals of Neural Networks : Architectures, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice-Hall.

Feldman, J. A. 1985. Connectionists Models and Parallelism in High Level Vision. Computer Vision, Graphics and Image Processing 31: 178-200.

Feldman, J. A., and D. H. Ballard. 1982. Connectionist Models and their Properties. Cognitive Science 6: 205-254.

Fu, L. M. 1994. Neural Networks in Computer Intelligence. New York: McGraw-Hill.

Hassoun, Mohamad H. 1995. Fundamentals of Artificial Neural Networks. Cambridge, MA: MIT Press.

Haykin, S. 1994. Neural Networks: A Comprehensive Foundation. New York: Macmillan College Publishing.

Hinton, G. E. 1992. How Neural Nets Learn From Experience. Scientific American 267 (September): 144-151.

Hinton, G. E., J. L. McClelland, and D. E. Rumelhart. 1986. Distributed Representations. In Parallel Distributed Processing, ed. D. E. Rumelhart, et al., Cambridge, MA: Bradford Books/MIT Press.

Jordan, Michael I., and Christopher M. Bishop. 1997. Neural Networks. In The Computer Science and Engineering Handbook, ed. Allen B. Tucker, Jr., 536-556. Boca Raton, FL: CRC Press, Inc.

Kasabov, Nikola K. 1996. Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering. Cambridge, MA: MIT Press.

Luger, George F. 2002. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 4th Edition. Addison-Wesley. Chapter One, AI: History and Applications, is available online.

Mitchell, Tom M. 1997. Artificial Neural Networks. In Machine Learning, pp. 81-127. New York: McGraw Hill Companies, Inc. Somewhat technical reading.

Morgan, Nelson, editor. 1990. Artificial Neural Networks : Electronic Implementations. Los Alamitos, CA: IEEE Computer Society Press.

Ratsch, Ulrich, Michael M. Richter, and Ion Olimpiou Stamatescu, editors. 1998. Intelligence and Artificial Intelligence : an Interdisciplinary Debate. New York: Springer

Rumelhart, David E., Bernard Widrow, and Michael Lehr. 1994. The Basic Ideas in Neural Networks. Communications of the ACM 37 (3): 87-92.

Rumelhart, D. E., G. E. Hinton, and R. J. Williams. 1986. Learning Internal Representations by Error Propagation. In Parallel Distributed Processing, Vol. 1, ed. Rumelhart, D. E. and J. L. McClelland, 318-362. Cambridge, MA: MIT Press.

Schwartz, Jacob T. 1988. The New Connectionism: Developing Relationships Between Neuroscience and Artificial Intelligence. In The Artificial Intelligence Debate: False Starts, Real Foundations, ed. Graubard, Stephen R., Cambridge, MA: MIT Press.

Selfridge, Oliver G. 1993. The Gardens of Learning: A Vision for AI. AI Magazine 14(2): 36-48. "I have watched AI since its beginnings ... In 1943, I was an undergraduate at the Massachusetts Institute of Technology (MIT) and met a man whom I was soon to be a roommate with. He was but three years older than I, and he was writing what I deem to be the first directed and solid piece of work in AI (McCulloch and Pitts 1943) His name was Walter Pitts, and he had teamed up with a neurophysiologist named Warren McCulloch, who was busy finding out how neurons worked (McCulloch and Pitts 1943). ... Figure 1 shows a couple of examples of neural nets from this paper - the first AI paper ever."

Simpson, Patrick K. 1990. Artificial Neural Systems : Foundations, Paradigms, Applications, and Implementations. New York: Pergamon Press.

Sun, Ron, and Lawrence A. Bookman., editors. 1995. Computational Architectures Integrating Neural and Symbolic Processes: a Perspective on the State of the Art. Boston: Kluwer Academic.

Widrow, Bernard, David E. Rumelhart, and Michael A. Lehr. 1994. Neural Networks: Applications in Industry, Business and Science. Communications of the ACM 37 (3): 93-105.

Zadeh, Lotfi A. 1994. Fuzzy Logic, Neural Networks, and Soft Computing. Communications of the ACM 37 (3): 77-84.