Information Retrieval & Extraction
(a subtopic of Applications)

Good Places to Start

Readings Online

Related Web Sites

Related Pages

More Readings


It is of the highest importance, in the art of detection, to be able to recognise, out of a number of facts, which are incidental and which are vital...
- Sherlock Holmes

The explosion in storage capacity is adding urgency to the research. Currently, a terabyte of disk space costs about $1,600. In two to three years, it will only cost $400 and, consequently, become increasingly common. A terabyte, however, can hold one person's entire conversations from a lifetime, or all the video if someone kept a camera in his or her head for six months. More stored data and a vaster storage space make finding something all the more difficult.
- Michael Kanellos

How in the world can anyone find just the right bit of information that they need, out of the available ocean of information, an ocean that continues to expand at an astonishing rate?

Our accustomed systems of retrieving particular bits of information no longer meet many people's needs. Computerized databases have made searching the traditional indexes of print publications easier, but finding something still usually requires time-consuming serial searching of one database after another, followed by a switch to entirely different methods for tracking down internet sources. And what if the information being sought is a sound bite? A video clip? Yesterday's e-mail exchange between respected scientists? Artificial intelligence may hold the key to information retrieval in an age when widely different formats contain the information being sought, and the universe of knowledge is simply too big, and growing too rapidly, for successful searching to proceed at a human's slow speed.


Good Places to Start

AI Knows It’s Out There. Red Herring (August 22, 2005 print issue). "Intelligent Search ... More upscale, with costs in the hundreds of thousands of dollars, are the intelligent search systems sold by InQuira of San Bruno, California. The systems are based on natural language processing, a branch of AI that enables the system to comprehend what a person is really asking, at least if the question is posed in standard English. 'Pointing customers at documents does not approach the productivity of being able to understand a request and pull the right paragraph up to their screen,' says Bob Macdonald, chief marketing officer at InQuira."

  • Be sure to check out InQuira and the other companies mentioned in the article.

Information Service Agent Research. "The Information Service Agent Lab at Simon Fraser University develops novel techniques for interactive information gathering and integration. The research applies artificial intelligence planning and learning techniques and database technologies to create knowledge bases from large collections of dynamically changing, potentially inconsistent and heterogeneous data sources, permitting users access to information at the right abstraction level."

Projects. Software Agents Group, MIT Media Lab. Wide-ranging approaches to information retrieval that include user profiling, information filtering, privacy, recommender systems, communityware, negotiation mechanisms and coordination.

Inside Google - From the Labs, Google Labs [audio]. Presentation by Peter Norvig at the 2005 O'Reilly Emerging Technology Conference. Available from IT Conversations. "Google has expanded from searching webpages to searching videos, books, places and even files on your own desktop. This expansion is made possible through Google's understanding and classification of information, facilitated by the application of algorithms in the domains of Machine Learning, Natural Language Processing and Artificial Intelligence. ... Peter Norvig is the Director of Search Quality at Google Inc. He is a Fellow and Councilor of the American Association for Artificial Intelligence and co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field."

Information Agents Group at the Information Sciences Institute, University of Southern California.

CIRES - Content Based Image REtrieval System developed by Qasim Iqbal at the Computer and Vision Research Center (CVRC) in the Department of Electrical and Computer Engineering at The University of Texas at Austin. "CIRES is a robust content-based image retrieval system based upon a combination of higher-level and lower-level vision principles. Higher-level analysis uses perceptual organization, inference and grouping principles to extract semantic information describing the structural content of an image. Lower-level analysis employs a channel energy model to describe image texture, and utilizes color histogram techniques. ... The system is able to serve queries ranging from scenes of purely natural objects such as vegetation, trees, sky, etc. to images containing conspicuous structural objects such as buildings, towers, bridges, etc." Be sure to check out the sample queries.
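
CIRES combines several kinds of analysis, but the lower-level color-histogram technique it mentions is easy to sketch. The Python fragment below is an illustrative toy, not CIRES code; it assumes the Pillow imaging library is installed and ranks images by the overlap of their quantized color histograms.

    # Toy color-histogram matcher (illustrative; not the CIRES system).
    from PIL import Image

    def color_histogram(path, bins_per_channel=4):
        """Quantize each RGB channel into a few bins and count pixels."""
        img = Image.open(path).convert("RGB").resize((64, 64))
        step = 256 // bins_per_channel
        hist = {}
        for r, g, b in img.getdata():
            key = (r // step, g // step, b // step)
            hist[key] = hist.get(key, 0) + 1
        total = sum(hist.values())
        return {k: v / total for k, v in hist.items()}

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]: total overlap of two normalized histograms."""
        return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in h1)

    # Rank a collection against a query image (paths are hypothetical):
    # q = color_histogram("query.jpg")
    # ranked = sorted(paths, key=lambda p: histogram_intersection(
    #     q, color_histogram(p)), reverse=True)

Histogram overlap rewards images whose overall color distributions agree, which suits scenes of vegetation or sky; recognizing buildings and bridges is what CIRES's higher-level structural analysis is for.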

CIIR. The Center for Intelligent Information Retrieval at UMass. "The scope of the CIIR's work is broad and goes significantly beyond traditional areas of information retrieval such as search strategies and information filtering. The research includes both low-level systems issues such as the design of protocols and architectures for distributed search, as well as more human-centered topics such as user interface design, visualization and data mining with text, and multimedia retrieval."

  • You can test drive some of their search programs by selecting "Demonstrations" from their menu.
  • Also available is W. Bruce Croft's article in which he "summarize[s] the experience of the National Science Foundation (NSF) Center for Intelligent Information Retrieval (CIIR) in the area of industrial and government research priorities." [You can access the article, What Do People Want from Information Retrieval?, via a link from the CIIR site or by going to the November 1995 issue of D-Lib Magazine.]

Information/Internet Agents: An Overview. An extensive FAQ from British Telecommunications. "Information agents have come about because of the sheer demand for tools to help us manage the explosive growth of information we are experiencing currently, and which we will continue to experience henceforth. Information agents perform the role of managing, manipulating or collating information from many distributed sources."

Seeking Better Web Searches - Deluged with superfluous responses to online queries, users will soon benefit from improved search engines that deliver customized results. By Javed Mostafa. Scientific American (February 2005). "New search engines are improving the quality of results by delving deeper into the storehouse of materials available online, by sorting and presenting those results better, and by tracking your long-term interests so that they can refine their handling of new information requests. In the future, search engines will broaden content horizons as well, doing more than simply processing keyword queries typed into a text box. They will be able to automatically take into account your location--letting your wireless PDA, for instance, pinpoint the nearest restaurant when you are traveling. New systems will also find just the right picture faster by matching your sketches to similar shapes. They will even be able to name that half-remembered tune if you hum a few bars."

Academia's quest for the ultimate search tool. By Stefanie Olsen. CNET News.com (August 15, 2005). "The University of California at Berkeley is creating an interdisciplinary center for advanced search technologies and is in talks with search giants including Google to join the project, CNET News.com has learned. ... The principal areas of focus: privacy, fraud, multimedia search and personalization. ... The success of the $5 billion-a-year search-advertising business is fueling Internet research and development in many ways. ... The search problems of today are different from those of five years ago. ... Jaime Carbonell, director of CMU's Language Technologies Institute, said his research team is perfecting a technology for personalized search that would solve some of the privacy concerns surrounding the wide-scale collection of sensitive data, such as names and query histories. ... CMU is also working under a government grant on a longer-term project called Javelin, focused on question-and-answer search technology. ... The universities of Texas and Pennsylvania are also exploring different approaches to the same problem. Stanford continues in its role as a breeding ground for search projects. ... Stanford associate professor Andrew Ng, among others, is working on artificial-intelligence techniques for extracting knowledge from text in a search index. ... Stanford, the Massachusetts Institute of Technology and many other universities are working to solve problems presented by the library of tomorrow, which will be largely digitized. Sifting through and organizing billions of digital documents will require new search technology."

... and here are some more articles from our AI in the news collection:

  • Rescuing missed information - Cutting-edge commercial wares give agencies a whole new outlook on searching for information. By Aliya Sternstein. FCW.com (October 17, 2005). "The overhaul of the FirstGov Web portal is providing a high-profile example of the potential of new search technologies for government. Therefore, experts believe agencies will follow industry and adopt cutting-edge search technologies such as metasearch, clustering and topic maps. Those techniques promise to dig deeper into the government's online knowledge base, in addition to making search results much easier to use."
  • Entrepreneurs seek new ways to mine Web. By Kim Peterson. The Seattle Times (May 3, 2005). "[Nosa] Omoigui has created a way for researchers to efficiently search through Medline, a massive, government-owned database of documents related to health sciences and medicine. His search engine is so precise, he said, that it can answer such narrow requests as 'What is the impact of cell death on lymphatic cancer?' or 'Find all research papers on SARS written by Nobel Prize winners.' ... [Oren] Etzioni, 41, was trained in artificial intelligence -- the idea that machines could develop some level of smarts. His training applied perfectly to search. If computer programs were more intelligent and learned as they went along, they would be better at searching for answers, he postulated. Now, a decade after MetaCrawler debuted, Etzioni is working on a search engine called KnowItAll that learns as it goes and gives direct answers to users' questions. Instead of listing links to Web sites, KnowItAll aims to read the sites and pull the answers for you. ... "
  • Building a Smarter Search Engine. Startup profile by Heather Green. BusinessWeek Online (January 4, 2005). "Since its launch three months ago, Clusty has generated buzz for its clean design and clever approach. Using artificial intelligence, Clusty groups search results into different categories."
  • At I.B.M., That Google Thing Is So Yesterday. By James Fallows. The New York Times (December 26, 2004; reg. req'd.). "Suddenly, the computer world is interesting again. ... The most attractive offerings are free, and they are concentrated in the newly sexy field of 'search.' ... [T]oday's subject is the virtually unpublicized search strategy of another industry heavyweight: I.B.M. ... I.B.M. says that its tools will make possible a further search approach, that of 'discovery systems' that will extract the underlying meaning from stored material no matter how it is structured (databases, e-mail files, audio recordings, pictures or video files) or even what language it is in. The specific means for doing so involve steps that will raise suspicions among many computer veterans. These include 'natural language processing,' computerized translation of foreign languages and other efforts that have broken the hearts of artificial-intelligence researchers through the years. But the combination of ever-faster computers and ever-evolving programming allowed the systems I saw to succeed at tasks that have beaten their predecessors. ... Jennifer Chu-Carroll of I.B.M. demonstrated a system called Piquant, which analyzed the semantic structure of a passage and therefore exposed 'knowledge' that wasn't explicitly there. After scanning a news article about Canadian politics, the system responded correctly to the question, 'Who is Canada's prime minister?' even though those exact words didn't appear in the article. ... The Semantic Analysis Workbench, demonstrated by Eric Brown and Dave Ferrucci, showed another way of exposing latent meaning."
  • From factoids to facts. At last, a way of getting answers from the web. The Economist (August 26, 2004). "Ask MSR is still a prototype, although Microsoft is trying to improve it and it may be launched commercially under the name AnswerBot. Dr [Eric] Brill, meanwhile, has moved to a more difficult task. One of his most recent papers, written jointly with Radu Soricut of the University of Southern California, is entitled 'Beyond the Factoid'. It describes his efforts to build a system capable of providing 50-word answers to questions such as 'What are the rules for qualifying for the Academy Awards?' This is harder than finding a single-word answer, but Dr Brill thinks it should be possible using something called a 'noisy channel' model. Such models are already employed in spell-checking and speech-recognition systems. They work by modelling the transformation between what a user means (in spell-checking, the word he intended to type) and what he does (the garbled word actually typed). ... Rather than relying on a traditional 'artificial intelligence' approach of parsing sentences and trying to work out what a question actually means, this quick-and-dirty method draws instead on the collective, ever-growing intelligence of the web itself." (A toy illustration of the noisy-channel idea appears just after this list.)
  • When Machines Become Writers and Editors - Will Newsblaster produce tomorrow's leads? By John V. Pavlik. Online Journalism Review (February 5, 2002). "Most journalists would probably read the lead below and recognize it as a reasonable account of the events that transpired in the Afghan prison uprising. Some readers might wonder about the missing byline: who wrote this lead? The answer would certainly surprise most journalists and other readers, except maybe experts in artificial intelligence. ... The lead was authored by a computer. It's the writing produced by a project called the Columbia Newsblaster."
  • Who You Calling Mediasaurus? The New York Times dodges Michael Crichton's death sentence. By Jack Shafer. Slate Magazine (February 5, 2002). "Replacing the established media within a decade, Crichton predicted, would be an Infotopia in which 'artificial intelligence agents' would roam 'the databases, downloading stuff I am interested in, and assembling for me a front page, or a nightly news show, that addresses my interests.'"
    • Also available as "The media dinosaur: Premature extinction," from MSNBC (February 6, 2002).
    • Mediasaurus. By Michael Crichton. Wired; 1.04 (Sep/Oct 1993).
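
The "noisy channel" model mentioned in the Economist item above is easiest to see in its spell-checking form: choose the correction c that maximizes P(c) x P(typed | c), a language model times an error model. Below is a toy Python sketch; the word counts are invented, and the error model is crudely approximated by "any known word one edit away is plausible."

    # Toy noisy-channel spelling corrector (not Microsoft's AskMSR code).
    WORD_FREQ = {"search": 120, "speech": 40, "sketch": 15}   # toy P(c)

    def edits1(word):
        """All strings one edit (delete, swap, replace, insert) away."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
        inserts = [a + c + b for a, b in splits for c in letters]
        return set(deletes + swaps + replaces + inserts)

    def correct(word):
        """Prefer a known word; else the most frequent word one edit away."""
        if word in WORD_FREQ:
            return word
        candidates = edits1(word) & WORD_FREQ.keys()
        return max(candidates, key=WORD_FREQ.get, default=word)

    print(correct("serch"))   # -> "search"

Brill's "Beyond the Factoid" work applies the same machinery at the level of whole questions and candidate answers rather than single garbled words.
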
Readings Online

Learning Probabilistic User Profiles. By Mark Ackerman et al. (1997). AI Magazine 18 (2): 47-56. Applications for finding interesting web sites and notifying users of changes.

The Web as a Database: New Extraction Technologies and Content Management. Katherine C. Adams (2001). Online Magazine; Volume 25, Number 2. "Information extraction research in the United States has a fascinating history. It is a product of the Cold War. In the late 1980s, a number of academic and industrial research sites were working on extracting information from naval messages in projects sponsored by the U.S. Navy. To compare the performance of these software systems, the Message Understanding Conferences (MUC) were started. These conferences were the first large-scale effort to evaluate natural language processing (NLP) systems and they continue to this day."

Moving Up the Information Food Chain. By Oren Etzioni (1997). AI Magazine 18 (2): 11-18. A look at deploying softbots on the World Wide Web.

When the web starts thinking for itself. By David Green. vnunet's Ebusinessadvisor (December 20, 2002). "The so-called semantic web is an extension of the current web in which data is given meaning through the use of a series of technologies. ... Ontologies provide a deeper level of meaning by providing equivalence relations between terms (i.e. term A on my web page is expressing the same concept as term B on your web page). An ontology is a file that formally defines relations among terms, for example, a taxonomy and set of inference rules. By providing such 'dictionaries of meaning' (in philosophy ontology means 'nature of existence') ontologies can improve the accuracy of web searches by allowing a search program to seek out pages that refer to a specific concept rather than just a particular term as they do now. While XML, RDF and ontologies provide the basic infrastructure of the semantic web, it is intelligent agents that will realise its power."
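
To make the ontology idea concrete, here is a toy Python sketch of how equivalence and taxonomy relations let a search program match a concept rather than a literal term. The terms and relations are invented; a real semantic-web agent would read them from published RDF/OWL files rather than a hard-coded table.

    # Toy ontology-driven query expansion (hypothetical data).
    EQUIVALENT = {"car": {"automobile"}, "film": {"movie"}}
    SUBCLASS_OF = {"sedan": "car", "suv": "car", "car": "vehicle"}

    def expand(term):
        """The term, its declared equivalents, and everything beneath it
        in the taxonomy (assumes the taxonomy has no cycles)."""
        terms = {term} | EQUIVALENT.get(term, set())
        for narrow, broad in SUBCLASS_OF.items():
            if broad in terms:
                terms |= expand(narrow)
        return terms

    # A page about sedans now answers a search for "vehicle":
    print(expand("vehicle"))  # {'vehicle', 'car', 'automobile', 'sedan', 'suv'}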

Is There an Intelligent Agent in Your Future? By James A. Hendler (1999). (This wonderful paper received the AAAI-2000 Effective Expository Writing Award.)

SavvySearch: A Metasearch Engine That Learns Which Search Engines to Query. By Adele Howe and Daniel Dreilinger (1997). AI Magazine 18 (2): 19-25. Describes a metasearch engine that learns from experience which search engines to dispatch each query to.

Designing Systems That Adapt to Their Users. An AAAI-02 Tutorial by Anthony Jameson, Joseph Konstan, and John Riedl. "Personalized recommendation of products, documents, and collaborators has become an important way of meeting user needs in commerce, information provision, and community services, whether on the web, through mobile interfaces, or through traditional desktop interfaces. This tutorial first reviews the types of personalized recommendation that are being used commercially and in research systems. It then systematically presents and compares the underlying AI techniques, including recent variants and extensions of collaborative filtering, demographic and case-based approaches, and decision-theoretic methods. The properties of the various techniques will be compared within a general framework, so that participants learn how to match recommendation techniques to applications and how to combine complementary techniques."
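
As a taste of the simplest family of techniques the tutorial covers, here is a minimal user-based collaborative-filtering sketch in Python. The ratings are toy data and the similarity measure is deliberately crude; real recommenders use the refined variants the tutorial compares.

    # Toy user-based collaborative filtering (illustrative only).
    RATINGS = {                       # user -> {item: rating, 1..5}
        "ann": {"dune": 5, "heidi": 1, "tron": 4},
        "bob": {"dune": 4, "heidi": 2, "tron": 5, "brazil": 4},
        "cec": {"heidi": 5, "brazil": 2},
    }

    def similarity(u, v):
        """Agreement on co-rated items: 1 / (1 + mean absolute difference)."""
        shared = RATINGS[u].keys() & RATINGS[v].keys()
        if not shared:
            return 0.0
        diff = sum(abs(RATINGS[u][i] - RATINGS[v][i]) for i in shared)
        return 1.0 / (1.0 + diff / len(shared))

    def recommend(user):
        """Score items the user hasn't rated by similarity-weighted ratings."""
        scores = {}
        for other in RATINGS:
            if other == user:
                continue
            w = similarity(user, other)
            for item, r in RATINGS[other].items():
                if item not in RATINGS[user]:
                    scores[item] = scores.get(item, 0.0) + w * r
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("ann"))   # -> ['brazil']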

Microsoft Research seeks better search. By Michael Kanellos. CNET News (April 17, 2003). "Microsoft Research is plugging away at one of the growing dilemmas in computing: so much data, so little time. Scientists in the Redmond, Wash.-based software giant's labs are experimenting with new types of search and user interface technology that will let individuals and businesses tap into the vast amounts of data on the Internet, or inside their own computers, that increasingly will be impractical or impossible to find."

18th century theory is new force in computing. By Michael Kanellos. ZDNet (February 19, 2003). "Search giant Google and Autonomy, a company that sells information retrieval tools, both employ Bayesian principles to provide likely (but technically never exact) results to data searches. ... Probabilistic thinking changes the way people interact with computers. ... 'The idea is that the computer seems more like an aid rather than a final device,' said Peter Norvig, director of search quality at Google. 'What you are looking for is some guidance, not a model answer.' Search has benefited substantially from this shift. A few years ago, common use of so-called Boolean search engines required queries submitted in the 'if, and, or but' grammar to find matching words. Now search engines employ complex algorithms to comb databases and produce likely matches."

IBM aims to get smart about AI. By Michael Kanellos. CNET News (January 20, 2003). "In the coming months, IBM will unveil technology that it believes will vastly improve the way computers access and use data by unifying the different schools of thought surrounding artificial intelligence. The Unstructured Information Management Architecture (UIMA) is an XML-based data retrieval architecture under development at IBM."

The Hidden Web. By Henry Kautz, Bart Selman, and Mehul Shah (1997). AI Magazine 18 (2): 27-35. A project that helps users locate experts on the Web.

Lifestyle Finder: Intelligent User Profiling Using Large-Scale Demographic Data. By Bruce Krulwich (1997). AI Magazine 18 (2): 37-45.

In Search of a Lost Melody - Computer assisted music: identification and retrieval. By Kjell Lemström. Finnish Music Quarterly Magazine 3-4/2000.

The Search Engine That Could. Reported by Spencer Michels. The NewsHour (PBS; November 29, 2002). Also available in audio and video formats. Hear/see Larry Page and Sergey Brin, co-founders of Google, Skip Battle, the new CEO at Ask Jeeves, and others.

Diagnosing Delivery Problems in the White House Information-Distribution System. By Mark Nahabedian and Howard Shrobe (1996). AI Magazine 17 (4): 21-29. Use of AI in selective information distribution.

"RUSSELL: ... There are other gray areas too. Some people would say that Google is AI. Some people would say it's databases. Some people would say it's algorithms or theoretical computer science. UBIQUITY: And YOU say? RUSSELL: I'd say it contains some elements of all of the above. It's like asking, where is the dividing line between trees and bushes or bushes and shrubs? It's not clear that there has to be a dividing line."
- from Stuart Russell on the Future of Artificial Intelligence. Ubiquity; Volume 4, Issue 43 (December 24 - January 6, 2004).

Search engines try to find their sound. By Stefanie Olsen. CNET News (May 27, 2004). "Most 'spiders' that crawl and index the Web are effectively blind to audio and video content, making NPR's highly regarded radio programming all but invisible to mainstream search engines. ... Consumers armed with broadband connections at home are driving new demand for multimedia content and setting off a new wave of technology development among search engine companies eager to extend their empires from the static world of text to the dynamic realm of video and audio. ... Most ambitiously of all, a handful [of search engines] are bent on searching inside the files to extract meaning and relevance by examining audio and video features directly. StreamSage is starting to make waves with its audio and video search technology, introduced late last year. The Washington, D.C.-based company developed software after roughly three years of research that uses speech recognition technology to transcribe audio and video. It then uses contextual analysis to understand the language and parse the themes of the content. As a result, it can generate a kind of table of contents for the topics discussed in the files."

The Revolution in Legal Information Retrieval or: The Empire Strikes Back. By Erich Schweighofer (1999). The Journal of Information, Law and Technology 1999(1). "The issue is how to deal with the Artificial Intelligence (AI)-hard problem of making sense of the mass of legal information."

Text Mining Technology - Turning Information Into Knowledge. A white paper from IBM (1998), Daniel Tkach, editor.

The Role of Intelligent Systems in the National Information Infrastructure. The American Association for Artificial Intelligence. Edited by Daniel S. Weld.

A cure for info overload. By Michael Yeomans. Tribune-Review / available from PittsburghLIVE.com (May 18, 2004). "Have you Vivisimoed today? ... At the same time Google's founding duo began their journey to fame and fortune as researchers at Stanford University, a group of Carnegie Mellon University computer scientists initiated their own project in the summer of 1998 to tackle the problem of information overload. 'The only way to address the problem is to let users see a lot more of what's out there, but with less effort,' said Raul Valdes-Perez, 47, who led the CMU effort. He and his two cofounders quickly built a business around the artificial intelligence and linguistics-infused algorithms they developed. Their venture makes it possible to sort the results of an ordinary Web search on the fly into contextual folders that allow the user to more quickly identify a set of Web pages akin to what they are looking for. They called their venture Vivisimo, a Latin-derived word meaning vivacious."
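
Vivisimo's algorithms are proprietary, but the flavor of on-the-fly result clustering can be sketched: group result snippets under a shared, distinctive content word and present each group as a labeled folder. The Python toy below uses invented snippets and a deliberately naive labeling rule.

    # Toy clustering of search results into labeled folders.
    from collections import defaultdict

    STOPWORDS = {"the", "a", "of", "for", "and", "in", "to"}

    RESULTS = [
        "jaguar the big cat of the americas",
        "jaguar car dealers and pricing",
        "habitat of the jaguar cat",
        "classic jaguar car restoration",
    ]

    def cluster_by_term(snippets, query="jaguar"):
        """Group each snippet under its most widely shared content word."""
        folders = defaultdict(list)
        for s in snippets:
            words = [w for w in s.split() if w not in STOPWORDS and w != query]
            label = max(words, key=lambda w: sum(w in t.split() for t in snippets))
            folders[label].append(s)
        return dict(folders)

    for label, docs in cluster_by_term(RESULTS).items():
        print(label, "->", len(docs), "results")   # cat -> 2, car -> 2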

Related Web Sites

ACM Special Interest Group on Information Retrieval (SIGIR). "ACM SIGIR addresses issues ranging from theory to user demands in the application of computers to the acquisition, organization, storage, retrieval, and distribution of information." Be sure to check out their collection of Information Retrieval Resources.

Brainboost Answer Engine. "Brainboost uses Machine Learning and Natural Language Processing techniques to go the extra mile, by actually answering questions, in plain English."

The British Computer Society Information Retrieval Specialist Group.

CMU Text Learning Group. "Our goal is to develop new machine learning algorithms for text and hypertext data. Applications of these algorithms include information filtering systems for the Internet, and software agents that make decisions based on text information." Among their many projects you'll find:

  • Personal WebWatcher is a "personal" agent that accompanies you from page to page as you browse the web, highlighting hyperlinks that it believes will be of interest. Its strategy for giving advice is learned from feedback from earlier tours.
  • ifile is a general mail filtering system that works with a mail client to intelligently filter mail according to the way the user tends to organize mail. ifile uses the machine learning algorithm Naive Bayes to classify e-mail documents.
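
The Naive Bayes filing that ifile performs can be sketched in a few lines of Python. This is an illustrative toy, not ifile's implementation: each folder is scored by log P(folder) plus the summed log-probabilities of the message's words given that folder, with Laplace smoothing for unseen words.

    # Toy Naive Bayes mail classifier (not ifile's code).
    import math
    from collections import Counter, defaultdict

    class NaiveBayes:
        def __init__(self):
            self.word_counts = defaultdict(Counter)   # folder -> word tallies
            self.msg_counts = Counter()               # folder -> messages seen
            self.vocab = set()

        def train(self, folder, text):
            words = text.lower().split()
            self.word_counts[folder].update(words)
            self.msg_counts[folder] += 1
            self.vocab.update(words)

        def classify(self, text):
            """Return the folder maximizing the posterior log-probability."""
            total = sum(self.msg_counts.values())
            best, best_score = None, float("-inf")
            for folder in self.msg_counts:
                score = math.log(self.msg_counts[folder] / total)
                denom = sum(self.word_counts[folder].values()) + len(self.vocab)
                for w in text.lower().split():
                    # Laplace smoothing keeps unseen words from zeroing a folder.
                    score += math.log((self.word_counts[folder][w] + 1) / denom)
                if score > best_score:
                    best, best_score = folder, score
            return best

    nb = NaiveBayes()
    nb.train("work", "meeting agenda budget review")
    nb.train("lists", "new kernel patch posted to the list")
    print(nb.classify("budget meeting moved"))   # -> "work"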

HP SpeechBot - audio search using speech recognition. From Hewlett-Packard.

  • How Does SpeechBot Work? "After one of these radio programs goes to air, HP uses its speech recognition software to create a time-aligned 'transcript' of the program and build an index of the words spoken during the program. When you use SpeechBot, it searches through the shows we have indexed, trying to match your words with those in the index. SpeechBot then displays the matches for your search in order of likely relevance."
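
The index this description implies is straightforward to sketch: map each recognized word to the moments it was spoken, so that a text query returns offsets the player can seek to. The Python toy below uses an invented transcript.

    # Toy time-aligned word index of the kind SpeechBot's description implies.
    from collections import defaultdict

    # (word, seconds-from-start) pairs as a speech recognizer might emit them.
    TRANSCRIPT = [("today", 0.4), ("on", 0.9), ("science", 1.2),
                  ("friday", 1.8), ("robots", 3.1), ("in", 3.4),
                  ("science", 3.7), ("fiction", 4.0)]

    index = defaultdict(list)
    for word, t in TRANSCRIPT:
        index[word].append(t)

    def when_spoken(word):
        """Timestamps (in seconds) at which the word was recognized."""
        return index.get(word.lower(), [])

    print(when_spoken("science"))   # [1.2, 3.7] -> offsets to seek to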

Introduction to Information Extraction Technology. IJCAI-99 Tutorial by Douglas E. Appelt and David Israel, Artificial Intelligence Center, SRI International. In addition to the notes from the tutorial, you'll find these collections of links: Research Projects and Systems, Papers, and Resources and Tools for building information extraction systems.

MARVEL: "The Intelligent Information Management Department at IBM Research is developing a multimedia analysis and retrieval system called MARVEL. MARVEL helps organize the large and growing amounts of multimedia data (e.g., video, images, audio) by using machine learning techniques to automatically label its content. The system recently won the Wall Street Journal 2004 Innovation Award in the multimedia category." A demo is available.

The National Centre for Text Mining (NaCTeM). "We provide text mining services in response to the requirements of the UK academic community. Our initial focus is on applications in the biological and medical domains, where the major successes in the mining of scientific texts have so far occurred."

"NewsInEssence is a system for finding and summarizing clusters of related news articles from multiple sources on the Web. It is under development by the CLAIR group at the University of Michigan." You can see it in action here.

Phibot. A research project of the University of Mainz, the German Research Center of Artificial Intelligence (DFKI) and brainbot technologies AG. "Phibot is an intelligent internet information retrieval tool for scientists. As part of the Adaptive Read Project, phibot is a web-based experiment for collaborative information retrieval."

"START, the world's first Web-based question answering system, has been on-line and continuously operating since December, 1993. It has been developed by Boris Katz and his associates of the InfoLab Group at the MIT Computer Science and Artificial Intelligence Laboratory. Unlike information retrieval systems (e.g., search engines), START aims to supply users with 'just the right information,' instead of merely providing a list of hits."

SUMMARIST: Automated Text Summarization project from The Natural Language Processing group at the Information Sciences Institute of the University of Southern California (USC/ISI). "Summarization is a hard problem of Natural Language Processing because, to do it properly, one has to really understand the point of a text. This requires semantic analysis, discourse processing, and inferential interpretation (grouping of the content using world knowledge)."
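
SUMMARIST aims at genuine semantic analysis; by way of contrast, the crude extractive baseline such work tries to go beyond fits in a few lines of Python. This toy scores each sentence by the frequency of its content words and keeps the top scorers in document order; it "understands" nothing, which is exactly the limitation the project addresses.

    # Toy extractive summarizer (a baseline, not SUMMARIST's method).
    from collections import Counter

    STOP = {"the", "a", "of", "and", "to", "in", "is", "it", "also"}

    def summarize(text, n_sentences=2):
        """Keep the n sentences densest in the document's frequent words."""
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        words = [w for s in sentences for w in s.lower().split()
                 if w not in STOP]
        freq = Counter(words)

        def score(s):
            return sum(freq[w] for w in s.lower().split() if w not in STOP)

        top = sorted(sentences, key=score, reverse=True)[:n_sentences]
        return ". ".join(s for s in sentences if s in top) + "."

    DOC = ("The jaguar is a big cat. The jaguar ranges across the americas. "
           "Car makers also use the jaguar name. Habitat loss threatens the cat.")
    print(summarize(DOC))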

Sun Microsystems Conceptual Indexing Project. "How often have you failed to find what you wanted in an online search because the words you used failed to match words in the material that you needed? Concept-based retrieval systems attempt to reach beyond the standard keyword approach of simply counting the words from your request that occur in a document. The Conceptual Indexing Project is developing techniques that use knowledge of concepts and their interrelationships to find correspondences between the concepts in your request and those that occur in text passages. Our goal is to improve the convenience and effectiveness of online information access. The central focus of this project is the 'paraphrase problem,' in which the words used in a query are different from, but conceptually related to, those in material that you need."
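
The paraphrase problem lends itself to a small illustration: map both query and document words to concept identifiers, then match on concepts instead of surface words, so "physician" in a request can find "doctor" in a passage. The Python sketch below uses an invented concept table; real conceptual indexing draws on large knowledge bases of concepts and their interrelationships.

    # Toy concept-based retrieval (illustrative; not Sun's system).
    CONCEPT_OF = {"doctor": "MD", "physician": "MD", "medic": "MD",
                  "pay": "WAGE", "salary": "WAGE", "wages": "WAGE"}

    def concepts(text):
        """Map each word to its concept ID, falling back to the word itself."""
        return {CONCEPT_OF.get(w, w) for w in text.lower().split()}

    DOCS = ["average doctor pay by region", "salary survey for nurses"]
    DOC_CONCEPTS = [concepts(d) for d in DOCS]

    def search(query):
        """Rank documents by the number of concepts shared with the query."""
        q = concepts(query)
        scored = [(len(q & dc), d) for dc, d in zip(DOC_CONCEPTS, DOCS)]
        return [d for n, d in sorted(scored, reverse=True) if n]

    print(search("physician salary"))  # "average doctor pay by region" first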

Related Pages

More Readings

Aluri, Rao, and Donald E. Riggs, editors. 1990. Expert Systems in Libraries. Norwood, NJ: Ablex Pub. Corp.

Association of Research Libraries. 1991. Expert Systems in ARL Libraries. Washington, DC: ARL.

Davies, Peter. 1991. Artificial Intelligence: Its Role in the Information Industry. Medford, NJ: Learned Information, Inc.

Ford, Nigel. 1991. Expert Systems and Artificial Intelligence: An Information Manager's Guide. London: Library Association Pub.

Hovy, Eduard and Dragomir Radev, Cochairs. Intelligent Text Summarization: Papers from the 1998 AAAI Spring Symposium.

Jacobs, Paul S., editor. 1992. Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval. Hillsdale, NJ: L. Erlbaum Associates.

Sparck Jones, Karen. 1999. Information Retrieval and Artificial Intelligence. Artificial Intelligence 114(1-2): 257-281.

Kautz, Henry, chair. 1998. Recommender Systems: Papers from the AAAI Workshop. Technical Report WS-98-08. "Over the past few years a new kind of application, the 'recommender system,' has appeared, based on a synthesis of ideas from artificial intelligence, human-computer interaction, sociology, information retrieval, and the technology of the WWW. Recommender systems assist and augment the natural process of relying on friends, colleagues, publications, and other sources to make the choices that arise in everyday life. Examples of the kinds of questions that could be answered by a recommender system include: What kind of car should I buy? What web-pages would I find most interesting? What people in my company would be best assigned to a particular project team?"

Lyons, Daniel. 1997. The Buzz About Firefly. The New York Times Magazine (June 29, 1997):36-37+.

Maybury, Mark T., editor. 1993. Intelligent Multimedia Interfaces. Menlo Park and Cambridge: AAAI Press/MIT Press. This book covers the ground where artificial intelligence, multimedia computing, information retrieval and human-computer interfaces all overlap.

Michelson, Avra. 1991. Expert Systems Technology and its Implication for Archives. Washington, DC: National Archives and Records Administration.

Special Libraries Association. 1991. Expert Systems and Library Applications: An SLA Information Kit. Washington, DC: Special Libraries Assn.

van Rijsbergen, Keith. 1979. Information Retrieval, 2nd Edition. London: Butterworths.

Verity, John W. 1997. Coaxing Meaning Out of Raw Data. Business Week (February 3, 1997):134+.