Information extraction

Source: Wikipedia, the free encyclopedia.

Information extraction (IE) is the task of automatically extracting structured information from machine-readable documents and other electronically represented sources. Typically, this involves processing human language texts by means of natural language processing (NLP).[1] Recent activities in multimedia document processing, such as automatic annotation and content extraction from images, audio, video and documents, can be seen as information extraction.

Recent advances in NLP have significantly improved the performance of IE systems over earlier approaches.[2] An example is the extraction from newswire reports of corporate mergers, such as denoted by the formal relation:

MergerBetween(company1, company2, date),

from an online news sentence such as:

"Yesterday, New York based Foo Inc. announced their acquisition of Bar Corp."

A broad goal of IE is to allow computation to be done on previously unstructured data. A more specific goal is to allow logical reasoning to draw inferences based on the logical content of the input data. Structured data is semantically well-defined data from a chosen target domain, interpreted with respect to category and context.

Information extraction is part of a greater puzzle which deals with the problem of devising automatic methods for text management, beyond its transmission, storage and display. The discipline of information retrieval (IR)[3] has developed automatic methods, typically of a statistical flavor, for indexing large document collections and classifying documents. Another complementary approach is that of natural language processing (NLP), which has modelled human language processing with considerable success given the magnitude of the task. In terms of both difficulty and emphasis, IE deals with tasks in between those of IR and NLP.

In terms of input, IE assumes the existence of a set of documents in which each document follows a template, i.e. describes one or more entities or events in a manner that is similar to those in other documents but differs in the details. As an example, consider a group of newswire articles on Latin American terrorism, with each article presumed to be based upon one or more terroristic acts. For any given IE task, we also define a template, which is a case frame (or set of case frames) to hold the information contained in a single document. For the terrorism example, a template would have slots corresponding to the perpetrator, victim and weapon of the terroristic act, and the date on which the event happened. An IE system for this problem is required to "understand" an attack article only enough to find data corresponding to the slots in this template.
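One way to picture such a template is as a record with one field per slot. The sketch below is illustrative only; the slot names follow the terrorism example above rather than any actual MUC template definition:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AttackTemplate:
        """Case frame with one slot per piece of target information.

        Slot names mirror the terrorism example in the text; they are
        illustrative, not taken from a real MUC template definition.
        """
        perpetrator: Optional[str] = None
        victim: Optional[str] = None
        weapon: Optional[str] = None
        date: Optional[str] = None

    # The IE system's job is only to fill these slots from an attack
    # article; full understanding of the text is not required.
    template = AttackTemplate(perpetrator="unidentified gunmen",
                              weapon="rifle")
    print(template)  # unfilled slots remain None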

History

Information extraction dates back to the late 1970s in the early days of NLP.[4] An early commercial system from the mid-1980s was JASPER built for Reuters by the Carnegie Group Inc with the aim of providing real-time financial news to financial traders.[5]

Beginning in 1987, IE was spurred by a series of Message Understanding Conferences (MUC). MUC is a competition-based conference[6] that focused on the following domains:

  • MUC-1 (1987), MUC-2 (1989): Naval operations messages.
  • MUC-3 (1991), MUC-4 (1992): Terrorism in Latin American countries.
  • MUC-5 (1993): Joint ventures and microelectronics domain.
  • MUC-6 (1995): News articles on management changes.
  • MUC-7 (1998): Satellite launch reports.

Considerable support came from the U.S. Defense Advanced Research Projects Agency (DARPA), who wished to automate mundane tasks performed by government analysts, such as scanning newspapers for possible links to terrorism.[citation needed]

Present significance

The present significance of IE pertains to the growing amount of information available in unstructured form. Tim Berners-Lee, inventor of the World Wide Web, refers to the existing Internet as the web of documents[7] and advocates that more of the content be made available as a web of data.[8] Until this transpires, the web largely consists of unstructured documents lacking semantic metadata. Knowledge contained within these documents can be made more accessible for machine processing by transforming it into relational form, or by marking it up with XML tags. An intelligent agent monitoring a news data feed requires IE to transform unstructured data into something that can be reasoned with. A typical application of IE is to scan a set of documents written in a natural language and populate a database with the extracted information.[9]
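A minimal sketch of this database-population pattern, using Python's built-in sqlite3 module and a stub extractor (the source names no particular system, so the extractor here just returns a hard-coded illustrative record):

    import sqlite3

    def extract_mergers(text):
        """Stub extractor: a real system would run an IE pipeline here.
        Returns (company1, company2, date) tuples; this hard-coded
        result is illustrative only."""
        return [("Foo Inc.", "Bar Corp.", "yesterday")]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE mergers (company1 TEXT, company2 TEXT, date TEXT)")

    documents = ["Yesterday, New York based Foo Inc. announced "
                 "their acquisition of Bar Corp."]
    for doc in documents:
        conn.executemany("INSERT INTO mergers VALUES (?, ?, ?)",
                         extract_mergers(doc))
    conn.commit()
    print(conn.execute("SELECT * FROM mergers").fetchall())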

Tasks and subtasks

Applying information extraction to text is linked to the problem of text simplification in order to create a structured view of the information present in free text. The overall goal is to create text that is more easily machine-readable for processing the sentences. Typical IE tasks and subtasks include:

  • Template filling: extracting a fixed set of fields (slots) from a document.
  • Event extraction: finding descriptions of events and the entities that participate in them.
  • Named entity recognition: recognition of entity names (for people and organizations), place names, temporal expressions, and certain types of numerical expressions.
  • Coreference resolution: detection of coreference and anaphoric links between text mentions.
  • Relationship extraction: identification of relations between entities, such as a PERSON working for an ORGANIZATION.
  • Terminology extraction: finding the relevant terms for a given corpus.

This list is not exhaustive, the exact meaning of IE activities is not commonly agreed upon, and many approaches combine multiple IE sub-tasks in order to achieve a wider goal. Machine learning, statistical analysis and/or natural language processing are often used in IE.
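To make one of these subtasks concrete, the toy sketch below approximates named entity recognition by grouping consecutive capitalized words (a deliberately naive heuristic; note the false positive on the sentence-initial word, which is one reason real systems use learned models):

    def naive_entities(text):
        """Group runs of capitalized words as candidate named entities.
        A deliberately naive heuristic, for illustration only."""
        entities, current = [], []
        for raw in text.split():
            word = raw.rstrip(",;:")      # keep a trailing "." (e.g. "Inc.")
            if word[:1].isupper():
                current.append(word)
                if raw.endswith((",", ";", ":")):  # punctuation ends the run
                    entities.append(" ".join(current))
                    current = []
            else:
                if current:
                    entities.append(" ".join(current))
                current = []
        if current:
            entities.append(" ".join(current))
        return entities

    print(naive_entities(
        "Yesterday, New York based Foo Inc. announced "
        "their acquisition of Bar Corp."))
    # ['Yesterday', 'New York', 'Foo Inc.', 'Bar Corp.']
    # "Yesterday" is a false positive of the capitalization heuristic.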

IE on non-text documents is becoming an increasingly interesting topic[when?] in research, and information extracted from multimedia documents can now[when?] be expressed in a high level structure as it is done on text. This naturally leads to the fusion of extracted information from multiple kinds of documents and sources.

World Wide Web applications

IE has been the focus of the MUC conferences. The proliferation of the Web, however, intensified the need for systems that extract information from the enormous amount of data available online. Systems that perform IE from online text should meet the requirements of low cost, flexibility in development and easy adaptation to new domains. MUC systems fail to meet those criteria. Moreover, linguistic analysis performed for unstructured text does not exploit the HTML/XML tags and the layout formats that are available in online texts. As a result, less linguistically intensive approaches have been developed for IE on the Web using wrappers, which are sets of highly accurate rules that extract a particular page's content. Manually developing wrappers has proved to be a time-consuming task, requiring a high level of expertise. Machine learning techniques, either supervised or unsupervised, have been used to induce such rules automatically.
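As a sketch of what a hand-built wrapper amounts to, the rule below targets an invented fragment of a product-catalog page; both the HTML and the rule are hypothetical:

    import re

    # Invented sample of a highly structured catalog page.
    page = """
    <ul class="catalog">
      <li><span class="name">Widget A</span> <span class="price">$9.99</span></li>
      <li><span class="name">Widget B</span> <span class="price">$14.50</span></li>
    </ul>
    """

    # The wrapper: a highly accurate rule tied to this page's exact
    # layout, which is precisely why it breaks when the layout changes.
    RULE = re.compile(
        r'<span class="name">(?P<name>[^<]+)</span>\s*'
        r'<span class="price">(?P<price>[^<]+)</span>'
    )

    records = [m.groupdict() for m in RULE.finditer(page)]
    print(records)
    # [{'name': 'Widget A', 'price': '$9.99'},
    #  {'name': 'Widget B', 'price': '$14.50'}]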

Wrappers typically handle highly structured collections of web pages, such as product catalogs and telephone directories. They fail, however, when the text type is less structured, which is also common on the Web. Recent efforts on adaptive information extraction motivate the development of IE systems that can handle different types of text, from well-structured to almost free text (where common wrappers fail), including mixed types. Such systems can exploit shallow natural language knowledge and can thus also be applied to less structured texts.

A recent[when?] development is Visual Information Extraction,[16][17] which relies on rendering a web page in a browser and creating rules based on the proximity of regions in the rendered page. This helps in extracting entities from complex web pages that exhibit a visual pattern but lack a discernible pattern in the HTML source code.
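A minimal sketch of the proximity idea, assuming region bounding boxes (x, y, width, height) have already been obtained from a browser rendering engine; the threshold and coordinates are invented:

    def horizontally_adjacent(box_a, box_b, max_gap=20):
        """True if two rendered regions sit on roughly the same line
        and close together; thresholds are illustrative."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        same_line = abs(ay - by) < ah / 2
        close = 0 <= bx - (ax + aw) <= max_gap
        return same_line and close

    # Invented example: pairing a label region with a value region, as a
    # visual-extraction rule might pair "Price:" with "$9.99" on a page
    # whose HTML source shows no usable pattern.
    label_box = (100, 240, 60, 16)   # "Price:"
    value_box = (170, 242, 48, 16)   # "$9.99"
    print(horizontally_adjacent(label_box, value_box))  # True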

Approaches

The following standard approaches are now widely accepted:

  • Hand-written regular expressions (or nested groups of regular expressions)
  • Classifiers, either generative (e.g. the naive Bayes classifier) or discriminative (e.g. maximum entropy models such as multinomial logistic regression)
  • Sequence models, such as hidden Markov models, maximum-entropy Markov models and conditional random fields

Numerous other approaches exist for IE including hybrid approaches that combine some of the standard approaches previously listed.
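As an illustration of the classifier family, the sketch below trains a toy token-level classifier with scikit-learn (assumed available). The features and the tiny training set are invented, so it shows the shape of the approach rather than a usable model:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def features(tokens, i):
        """Simple per-token features; real systems use far richer sets."""
        tok = tokens[i]
        return {
            "lower": tok.lower(),
            "is_capitalized": tok[:1].isupper(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "suffix3": tok[-3:].lower(),
        }

    # Invented toy training data: label 1 marks tokens inside a company name.
    sentences = [
        (["Foo", "Inc.", "bought", "Bar", "Corp."], [1, 1, 0, 1, 1]),
        (["Shares", "of", "Baz", "Ltd.", "rose"], [0, 0, 1, 1, 0]),
    ]
    X, y = [], []
    for tokens, labels in sentences:
        for i, label in enumerate(labels):
            X.append(features(tokens, i))
            y.append(label)

    vec = DictVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(X), y)

    test = ["Qux", "Corp.", "expanded"]
    pred = clf.predict(vec.transform([features(test, i) for i in range(len(test))]))
    print(list(zip(test, pred)))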

Free or open source software and services

See also

References

  1. PMID 36855134.
  2. ^ Christina Niklaus, Matthias Cetto, André Freitas, and Siegfried Handschuh. 2018. A Survey on Open Information Extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3866–3878, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
  3. ^ Freitag, Dayne. "Machine Learning for Information Extraction in Informal Domains" (PDF). Kluwer Academic Publishers, 2000. Printed in the Netherlands.
  4. S2CID 10237124. Archived from the original (PDF) on 2019-02-20.
  5. .
  6. ^ "Linked Data - The Story So Far" (PDF).
  7. ^ "Tim Berners-Lee on the next Web". Archived from the original on 2011-04-10. Retrieved 2010-03-27.
  8. ^ R. K. Srihari, W. Li, C. Niu and T. Cornell, "InfoXtract: A Customizable Intermediate Level Information Extraction Engine", Journal of Natural Language Engineering,[dead link] Cambridge U. Press, 14(1), 2008, pp. 33-69.
  9. ^ .
  10. ^ .
  11. ^ Milosevic, Nikola (2018). A multi-layered approach to information extraction from tables in biomedical documents (PDF) (PhD). University of Manchester.
  12. S2CID 19538141.
  13. ^ Milosevic, Nikola (2018). A multi-layered approach to information extraction from tables in biomedical documents (PDF) (PhD). University of Manchester.
  14. ^ A. Zils, F. Pachet, O. Delerue and F. Gouyon, Automatic Extraction of Drum Tracks from Polyphonic Music Signals, Archived 2017-08-29 at the Wayback Machine, Proceedings of WedelMusic, Darmstadt, Germany, 2002.
  15. .
  16. .
  17. .
  18. ^ Shimizu, Nobuyuki; Hass, Andrew (2006). "Extracting Frame-based Knowledge Representation from Route Instructions" (PDF). Archived from the original (PDF) on 2006-09-01. Retrieved 2010-03-27.

External links