Word-sense disambiguation
Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context.
Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has faced a long-term challenge in developing the ability of computers to do natural language processing and machine learning.
Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses.
Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
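The most-frequent-sense baseline mentioned above is easy to make concrete. The following sketch (with an invented toy corpus, not a real annotated resource) learns the most frequent sense of each word from training data and always predicts it, ignoring context:

```python
from collections import Counter, defaultdict

# Toy sense-annotated data: (word, sense) pairs, invented for illustration.
train = [("bank", "finance"), ("bank", "finance"), ("bank", "river"),
         ("bass", "fish"), ("bass", "fish"), ("bass", "music")]
test = [("bank", "finance"), ("bank", "river"), ("bass", "music"),
        ("bass", "fish")]

# Learn the most frequent sense (MFS) of each word from the training data.
counts = defaultdict(Counter)
for word, sense in train:
    counts[word][sense] += 1
mfs = {word: c.most_common(1)[0][0] for word, c in counts.items()}

# The baseline always predicts the MFS, ignoring context entirely.
correct = sum(1 for word, sense in test if mfs[word] == sense)
accuracy = correct / len(test)
print(f"MFS baseline accuracy: {accuracy:.2f}")  # 2 of 4 correct -> 0.50
```

On real corpora this trivial strategy is hard to beat precisely because sense distributions are highly skewed, which is why it serves as the standard baseline in evaluation exercises.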
Variants
Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required).
History
WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics. Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation.[1] Later, Bar-Hillel (1960) argued[2] that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge.
In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded they were prone to a knowledge acquisition bottleneck.
By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based.
In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques.
The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.
Difficulties
Differences between dictionaries
One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauruses will provide different divisions of words into senses.
Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus[5] and Wikipedia.[6] More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.[7]
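The synonym-set encoding used by WordNet can be sketched with a miniature inventory. In practice WordNet is accessed through interfaces such as NLTK's `nltk.corpus.wordnet`; the dictionary below uses made-up synset IDs and glosses to stay self-contained:

```python
# A minimal WordNet-style sense inventory: each concept is a synset
# (synonym set) with a gloss. Entries here are illustrative only.
synsets = {
    "car.n.01": {"lemmas": {"car", "auto", "automobile", "machine", "motorcar"},
                 "gloss": "a motor vehicle with four wheels"},
    "car.n.02": {"lemmas": {"car", "railcar", "railway car"},
                 "gloss": "a wheeled vehicle adapted to the rails of a railroad"},
}

def senses_of(word):
    """Return the synset IDs in which the word appears as a lemma."""
    return [sid for sid, s in synsets.items() if word in s["lemmas"]]

print(senses_of("car"))        # two synsets -> "car" is ambiguous
print(senses_of("motorcar"))   # one synset  -> unambiguous
```

A word is ambiguous exactly when it belongs to more than one synset, which is what gives a WSD system its candidate senses.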
Part-of-speech tagging
In any real test, part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. Whether these tasks should be kept together or decoupled is still not unanimously resolved, but recent work inclines toward testing them separately (e.g., in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate).
Both WSD and part-of-speech tagging involve disambiguating or tagging words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state of the art being around 96%[8] accuracy or better, compared to less than 75%[citation needed] accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.
Inter-judge variance
Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has been proven to be far more difficult.[9] While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand – give a list of senses and sentences, and humans will not always agree on which word belongs in which sense.[10]
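Inter-annotator agreement of the kind described above is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A self-contained sketch, with hypothetical annotations of the word "bank":

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' sense labels on the same items."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both pick the same label independently.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators tagging ten occurrences of "bank".
ann1 = ["finance"] * 6 + ["river"] * 4
ann2 = ["finance"] * 5 + ["river"] * 5
print(round(cohens_kappa(ann1, ann2), 3))  # 0.8
```

Here the annotators agree on 9 of 10 items (90% raw agreement), but because two labels dominate, half of that agreement is expected by chance, giving kappa = (0.9 − 0.5)/(1 − 0.5) = 0.8. Fine-grained sense inventories push kappa far lower than this in practice.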
As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, which is again why research on coarse-grained distinctions has been put to test in recent WSD evaluation exercises.[11][12]
Sense inventory and algorithms' task-dependency
A task-independent sense inventory is not a coherent concept:[13] each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French banque – that is, 'financial bank' or rive – that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant.
Discreteness of senses
Finally, the very notion of "word sense" is slippery and controversial. Most people can agree on distinctions at the coarse-grained homograph level (e.g., pen as a writing instrument or an enclosure), but go down one level to fine-grained polysemy, and disagreements arise.
Approaches and methods
There are two main approaches to WSD – deep approaches and shallow approaches.
Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format outside very limited domains.
Shallow approaches do not try to understand the text, but instead consider the surrounding words. Rules for exploiting this context can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, owing to computers' limited world knowledge.
There are four conventional approaches to WSD:
- Dictionary- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence.
- Semi-supervised or minimally supervised methods: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
- Supervised methods: These make use of sense-annotated corpora to train from.
- Unsupervised methods: These eschew (almost) completely external information and work directly from raw unannotated corpora. These methods are also known under the name of word sense discrimination.
Almost all these approaches work by defining a window of n content words around each word to be disambiguated in the corpus, and statistically analyzing those n surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees.
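The context-window extraction on which these approaches rely can be sketched as follows; the stopword list and the sentence are illustrative only:

```python
# Sketch of context-window feature extraction: take up to n content
# words on each side of the target word, skipping function words.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "am", "i", "to", "at"}

def context_window(tokens, target_index, n=2):
    """Return up to n content words on each side of the target."""
    content = [(i, t) for i, t in enumerate(tokens)
               if t.lower() not in STOPWORDS and i != target_index]
    left = [t for i, t in content if i < target_index][-n:]
    right = [t for i, t in content if i > target_index][:n]
    return left + right

tokens = "I deposited the check at the bank near the river".split()
print(context_window(tokens, tokens.index("bank"), n=2))
# ['deposited', 'check', 'near', 'river']
```

The resulting bag of surrounding content words is then what a Naïve Bayes classifier or decision tree would use as features.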
Dictionary- and knowledge-based methods
The Lesk algorithm[19] is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach[20] searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word.
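The gloss-overlap idea behind the Lesk algorithm can be sketched with a toy dictionary; the glosses below are invented for illustration and are not from any real lexicon:

```python
# A toy simplified-Lesk sketch: pick the sense pair whose dictionary
# glosses share the most words. Glosses are invented for illustration.
glosses = {
    ("pine", 1): "a kind of evergreen tree with needle-shaped leaves",
    ("pine", 2): "to waste away through sorrow or illness",
    ("cone", 1): "a solid body which narrows to a point",
    ("cone", 2): "the fruit of an evergreen tree, with overlapping scales",
    ("cone", 3): "a crisp wafer for holding ice cream",
}

def words(text):
    return set(text.lower().split())

def lesk_pair(w1, w2):
    """Return the sense pair of (w1, w2) with the largest gloss overlap."""
    best, best_overlap = None, -1
    for s1 in [k for k in glosses if k[0] == w1]:
        for s2 in [k for k in glosses if k[0] == w2]:
            overlap = len(words(glosses[s1]) & words(glosses[s2]))
            if overlap > best_overlap:
                best, best_overlap = (s1, s2), overlap
    return best

print(lesk_pair("pine", "cone"))
# (('pine', 1), ('cone', 2)) -- glosses share 'evergreen', 'tree', ...
```

The tree sense of pine and the fruit sense of cone win because their glosses share several words, exactly the "pine cone" behavior described above.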
An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet.
The use of selectional preferences (or selectional restrictions) is also helpful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it is not a musical instrument).
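A selectional-preference rule of this kind can be sketched in a few lines; the semantic classes and lexicon entries below are illustrative, not drawn from any real resource:

```python
# Sketch of selectional preferences: a verb constrains the semantic
# class of its object. Classes and lexicon entries are illustrative.
object_class_of = {"cook": "food"}          # "cook" prefers food objects
sense_class = {
    ("bass", "fish"):  "food",
    ("bass", "music"): "instrument",
}

def disambiguate_object(verb, noun):
    """Pick the noun sense whose class matches the verb's preference."""
    wanted = object_class_of[verb]
    for (word, sense), cls in sense_class.items():
        if word == noun and cls == wanted:
            return sense
    return None

print(disambiguate_object("cook", "bass"))  # 'fish'
```

Real systems induce such preferences statistically from corpora rather than hand-coding them, but the constraint they apply is the same.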
Supervised methods
Semi-supervised methods
Because of the lack of training data, many word sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data.
The Yarowsky algorithm was an early example of such an algorithm. It uses the "one sense per collocation" and "one sense per discourse" properties of human language: starting from a small set of seed examples, it iteratively labels new occurrences and retrains, bootstrapping a classifier from minimal supervision.
Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains.
Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer such cross-lingual sense distinctions, a kind of semi-supervised system.
Unsupervised methods
Representing words considering their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental blocks in several NLP systems. Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD, and multi-sense embeddings that assign one vector per sense have also been developed.
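One common way to use embeddings for disambiguation is to average the context word vectors and pick the sense whose vector is closest by cosine similarity. The sketch below uses tiny made-up 3-dimensional vectors in place of real embeddings:

```python
import math

# All vectors below are tiny invented 3-d "embeddings" for illustration.
vec = {
    "money":   [0.9, 0.1, 0.0],
    "deposit": [0.8, 0.2, 0.0],
    "water":   [0.0, 0.1, 0.9],
    "shore":   [0.1, 0.0, 0.8],
}
sense_vec = {
    "bank/finance": [1.0, 0.0, 0.0],
    "bank/river":   [0.0, 0.0, 1.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def disambiguate(context_words):
    # Average the context word vectors, then pick the closest sense vector.
    ctx = [sum(vec[w][i] for w in context_words) / len(context_words)
           for i in range(3)]
    return max(sense_vec, key=lambda s: cosine(ctx, sense_vec[s]))

print(disambiguate(["money", "deposit"]))  # bank/finance
print(disambiguate(["water", "shore"]))    # bank/river
```

With real pretrained embeddings, the sense vectors are typically built by averaging the embeddings of gloss words or of sense-annotated example contexts.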
Other approaches
Other approaches vary in their methods:
- Domain-driven disambiguation;[39][40]
- Identification of dominant word senses;[41][42][43]
- WSD using Cross-Lingual Evidence.[44][45]
- WSD solution in John Ball's language-independent NLU, combining Patom Theory and RRG (Role and Reference Grammar);
- constraint-based grammars[46]
Other languages
- The lack of sense-annotated corpora in many languages has led researchers to exploit parallel corpora.[47][48] The creation of the Hindi WordNet has paved the way for several supervised methods which have been shown to produce higher accuracy in disambiguating nouns.[49]
Local impediments and summary
The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem.
One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically.[50]
External knowledge sources
Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be[51][52] classified as follows:
Structured:
- Machine-readable dictionaries (MRDs)
- Ontologies
- Thesauri
Unstructured:
- Collocation resources
- Other resources (such as stoplists, domain labels,[53] etc.)
- Corpora: raw corpora and sense-annotated corpora
Evaluation
Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns, most systems were assessed on in-house, often small-scale, data sets. Testing one's algorithm requires annotating all word occurrences, which is time-consuming, and comparing methods even on the same corpus is not legitimate if they use different sense inventories.
In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized.
Between 2007 and 2012, the range of WSD evaluation tasks grew, and the criteria for evaluating WSD changed drastically depending on the variant of the evaluation task. The following enumerates the variety of WSD tasks:
Task design choices
As technology evolves, Word Sense Disambiguation (WSD) tasks have grown in different flavors, toward various research directions and more languages:
- Classic monolingual WSD evaluation tasks use WordNet as the sense inventory and are largely based on supervised or semi-supervised classification with manually sense-annotated corpora:[54]
- Classic English WSD uses the Princeton WordNet as its sense inventory, and the primary classification input is normally based on the SemCor corpus.
- Classical WSD for other languages uses the respective WordNets as sense inventories and sense-annotated corpora tagged in the respective languages. Often researchers also tap the SemCor corpus and bitexts aligned with English as the source language.
- Cross-lingual WSD evaluation task is also focused on WSD across 2 or more languages simultaneously. Unlike the Multilingual WSD tasks, instead of providing manually sense-annotated examples for each sense of a polysemous noun, the sense inventory is built up on the basis of parallel corpora, e.g. Europarl corpus.[55]
- Multilingual WSD evaluation tasks focused on WSD across 2 or more languages simultaneously, using their respective WordNets as its sense inventories or BabelNet as multilingual sense inventory.[56] It evolved from the Translation WSD evaluation tasks that took place in Senseval-2. A popular approach is to carry out monolingual WSD and then map the source language senses into the corresponding target word translations.[57]
- Word Sense Induction and Disambiguation tasks combine both steps: a sense inventory is first induced from a fixed training data set, and WSD is then performed on a different testing data set.[58]
Software
- Babelfy,[59] a unified state-of-the-art system for multilingual Word Sense Disambiguation and Entity Linking
- BabelNet API,[60] a Java API for knowledge-based multilingual Word Sense Disambiguation in 6 different languages using the BabelNet semantic network
- WordNet::SenseRelate,[61] a project that includes free, open source systems for word sense disambiguation and lexical sample sense disambiguation
- UKB: Graph Base WSD,[62] a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing Lexical Knowledge Base[63]
- pyWSD,[64] Python implementations of Word Sense Disambiguation (WSD) technologies
See also
- Controlled natural language
- Entity linking
- Judicial interpretation
- Semantic unification
- Sentence boundary disambiguation
- Syntactic ambiguity
References
- ^ Weaver 1949.
- ^ Bar-Hillel 1964, pp. 174–179.
- ^ a b Pradhan et al. 2007, pp. 87–92.
- ^ Yarowsky 1992, pp. 454–460.
- ^ Mihalcea 2007.
- ^ A. Moro; A. Raganato; R. Navigli. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Archived 2014-08-08 at the Wayback Machine. Transactions of the Association for Computational Linguistics (TACL). 2. pp. 231–244. 2014.
- ^ Fellbaum 1997.
- ^ Snyder & Palmer 2004, pp. 41–43.
- ^ Snow et al. 2007, pp. 1005–1014.
- ^ Palmer, Babko-Malaya & Dang 2004, pp. 49–56.
- ^ Edmonds 2000.
- ^ Kilgarriff 1997, pp. 91–113.
- ^ Lenat & Guha 1989.
- ^ Wilks, Slator & Guthrie 1996.
- ^ Lesk 1986, pp. 24–26.
- ^ Agirre, Lopez de Lacalle & Soroa 2009, pp. 1501–1506.
- ^ Yarowsky 1995, pp. 189–196.
- ^ ISBN 978-0-19-927634-9.
- ^ Schütze 1998, pp. 97–123.
- ^ S2CID 1957433.
- ^ ISSN 2307-387X.
- ^ hdl:11573/936571.
- ^ arXiv:1707.08084.
- ^ S2CID 15687295.
- ^ ISSN 0891-2017.
- ^ S2CID 52225306.
- ^ Gliozzo, Magnini & Strapparava 2004, pp. 380–387.
- ^ Buitelaar et al. 2006, pp. 275–298.
- ^ McCarthy et al. 2007, pp. 553–590.
- ^ Mohammad & Hirst 2006, pp. 121–128.
- ^ Lapata & Keller 2007, pp. 348–355.
- ^ Ide, Erjavec & Tufis 2002, pp. 54–60.
- ^ Chan & Ng 2005, pp. 1037–1042.
- ^ ISBN 978-0-262-19324-5.
- ^ Bhattacharya, Indrajit, Lise Getoor, and Yoshua Bengio. Unsupervised sense disambiguation using bilingual probabilistic models Archived 2016-01-09 at the Wayback Machine. Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2004.
- ^ Diab, Mona, and Philip Resnik. An unsupervised method for word sense tagging using parallel corpora Archived 2016-03-04 at the Wayback Machine. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002.
- ^ Manish Sinha, Mahesh Kumar, Prabhakar Pande, Laxmi Kashyap, and Pushpak Bhattacharyya. Hindi word sense disambiguation Archived 2016-03-04 at the Wayback Machine. In International Symposium on Machine Translation, Natural Language Processing and Translation Support Systems, Delhi, India, 2004.
- ^ Kilgarriff & Grefenstette 2003, pp. 333–347.
- ^ Litkowski 2005, pp. 753–761.
- ^ Agirre & Stevenson 2007, pp. 217–251.
- ^ Magnini & Cavaglià 2000, pp. 1413–1418.
- ^ Lucia Specia, Maria das Gracas Volpe Nunes, Gabriela Castelo Branco Ribeiro, and Mark Stevenson. Multilingual versus monolingual WSD Archived 2012-04-10 at the Wayback Machine. In EACL-2006 Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together, pages 33–40, Trento, Italy, April 2006.
- ^ Els Lefever and Veronique Hoste. SemEval-2010 task 3: cross-lingual word sense disambiguation Archived 2010-06-16 at the Wayback Machine. Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions. June 04-04, 2009, Boulder, Colorado.
- ^ R. Navigli, D. A. Jurgens, D. Vannella. SemEval-2013 Task 12: Multilingual Word Sense Disambiguation Archived 2014-08-08 at the Wayback Machine. Proc. of seventh International Workshop on Semantic Evaluation (SemEval), in the Second Joint Conference on Lexical and Computational Semantics (*SEM 2013), Atlanta, USA, June 14–15th, 2013, pp. 222–231.
- ^ Lucia Specia, Maria das Gracas Volpe Nunes, Gabriela Castelo Branco Ribeiro, and Mark Stevenson. Multilingual versus monolingual WSD Archived 2012-04-10 at the Wayback Machine. In EACL-2006 Workshop on Making Sense of Sense: Bringing Psycholinguistics and Computational Linguistics Together, pages 33–40, Trento, Italy, April 2006.
- ^ Eneko Agirre and Aitor Soroa. Semeval-2007 task 02: evaluating word sense induction and discrimination systems Archived 2013-02-28 at the Wayback Machine. Proceedings of the 4th International Workshop on Semantic Evaluations, pp. 7–12, June 23–24, 2007, Prague, Czech Republic.
- ^ "Babelfy". Babelfy. Archived from the original on 2014-08-08. Retrieved 2018-03-22.
- ^ "BabelNet API". Babelnet.org. Archived from the original on 2018-03-22. Retrieved 2018-03-22.
- ^ "WordNet::SenseRelate". Senserelate.sourceforge.net. Archived from the original on 2018-03-21. Retrieved 2018-03-22.
- ^ "UKB: Graph Base WSD". Ixa2.si.ehu.es. Archived from the original on 2018-03-12. Retrieved 2018-03-22.
- ^ "Lexical Knowledge Base (LKB)". Moin.delph-in.net. 2018-02-05. Archived from the original on 2018-03-09. Retrieved 2018-03-22.
- ^ alvations. "pyWSD". Github.com. Archived from the original on 2018-06-11. Retrieved 2018-03-22.
Works cited
- Agirre, E.; Lopez de Lacalle, A.; Soroa, A. (2009). "Knowledge-based WSD on Specific Domains: Performing better than Generic Supervised WSD" (PDF). Proc. of IJCAI.
- Agirre, E.; Stevenson, M. (2007). "Knowledge sources for WSD". In Agirre, E.; Edmonds, P. (eds.). Word Sense Disambiguation: Algorithms and Applications. New York: Springer. ISBN 978-1402068706.
- Bar-Hillel, Y. (1964). Language and information. Reading, MA: Addison-Wesley.
- Buitelaar, P.; Magnini, B.; Strapparava, C.; Vossen, P. (2006). "Domain-specific WSD". In Agirre, E.; Edmonds, P. (eds.). Word Sense Disambiguation: Algorithms and Applications. New York: Springer.
- Chan, Y. S.; Ng, H. T. (2005). Scaling up word sense disambiguation via parallel texts. Proceedings of the 20th National Conference on Artificial Intelligence. Pittsburgh: AAAI.
- Di Marco, A.; Navigli, R. (2013). "Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction". Computational Linguistics. 39 (3). MIT Press: 709–754. S2CID 1775181.
- Edmonds, P. (2000). "Designing a task for SENSEVAL-2" (Tech. note). Brighton, UK: University of Brighton.
- Fellbaum, Christiane (1997). "Analysis of a handwriting task". Proc. of ANLP-97 Workshop on Tagging Text with Lexical Semantics: Why, What, and How?. Washington D.C.
- Gliozzo, A.; Magnini, B.; Strapparava, C. (2004). Unsupervised domain relevance estimation for word sense disambiguation (PDF). Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Barcelona, Spain: EMNLP.
- Ide, N.; Erjavec, T.; Tufis, D. (2002). Sense discrimination with parallel corpora (PDF). Proceedings of ACL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions. Philadelphia.
- Lapata, M.; Keller, F. (2007). An information retrieval approach to sense ranking (PDF). Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Rochester, New York: HLT-NAACL.
- Lenat, D.; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley.
- Lesk, M. (1986). Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone (PDF). Proc. of SIGDOC-86: 5th International Conference on Systems Documentation. Toronto, Canada.
- Litkowski, K. C. (2005). "Computational lexicons and dictionaries". In Brown, K. R. (ed.). Encyclopaedia of Language and Linguistics (2nd ed.). Oxford: Elsevier Publishers.
- Magnini, B.; Cavaglià, G. (2000). Integrating subject field codes into WordNet. Proceedings of the 2nd Conference on Language Resources and Evaluation. Athens, Greece: LREC.
- McCarthy, D.; Koeling, R.; Weeds, J.; Carroll, J. (2007). "Unsupervised acquisition of predominant word senses" (PDF). Computational Linguistics. 33 (4): 553–590.
- McCarthy, D.; Navigli, R. (2009). "The English Lexical Substitution Task" (PDF). Language Resources and Evaluation. 43 (2). Springer: 139–159. S2CID 16888516.
- Mihalcea, R. (April 2007). Using Wikipedia for Automatic Word Sense Disambiguation (PDF). Proc. of the North American Chapter of the Association for Computational Linguistics. Rochester, New York: NAACL. Archived from the original (PDF) on 2008-07-24.
- Mohammad, S.; Hirst, G. (2006). Determining word sense dominance using a thesaurus (PDF). Proceedings of the 11th Conference on European chapter of the Association for Computational Linguistics. Trento, Italy: EACL.
- Navigli, R. (2006). Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance (PDF). Proc. of the 44th Annual Meeting of the Association for Computational Linguistics joint with the 21st International Conference on Computational Linguistics. Sydney, Australia: COLING-ACL. Archived from the original (PDF) on 2011-06-29.
- Navigli, R.; Crisafulli, G. (2010). Inducing Word Senses to Improve Web Search Result Clustering (PDF). Proc. of the 2010 Conference on Empirical Methods in Natural Language Processing. MIT Stata Center, Massachusetts, US: EMNLP.
- Navigli, R.; Lapata, M. (2010). "An Experimental Study of Graph Connectivity for Unsupervised Word Sense Disambiguation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 32 (4). IEEE Press: 678–692. S2CID 1454904.
- Navigli, R.; Litkowski, K.; Hargraves, O. (2007). SemEval-2007 Task 07: Coarse-Grained English All-Words Task (PDF). Proc. of Semeval-2007 Workshop (SemEval), in the 45th Annual Meeting of the Association for Computational Linguistics. Prague, Czech Republic: ACL.
- Navigli, R.; Velardi, P. (2005). "Structural Semantic Interconnections: a Knowledge-Based Approach to Word Sense Disambiguation" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 27 (7): 1075–1086. S2CID 12898695.
- Palmer, M.; Babko-Malaya, O.; Dang, H. T. (2004). Different sense granularities for different applications (PDF). Proceedings of the 2nd Workshop on Scalable Natural Language Understanding Systems in HLT/NAACL. Boston.
- Ponzetto, S. P.; Navigli, R. (2010). Knowledge-rich Word Sense Disambiguation rivaling supervised systems (PDF). Proc. of the 48th Annual Meeting of the Association for Computational Linguistics. ACL. Archived from the original (PDF) on 2011-09-30.
- Pradhan, S.; Loper, E.; Dligach, D.; Palmer, M. (2007). SemEval-2007 Task 17: English lexical sample, SRL and all words (PDF). Proc. of Semeval-2007 Workshop (SEMEVAL), in the 45th Annual Meeting of the Association for Computational Linguistics. Prague, Czech Republic: ACL.
- Schütze, H. (1998). "Automatic word sense discrimination" (PDF). Computational Linguistics. 24 (1): 97–123.
- Snow, R.; Prakash, S.; Jurafsky, D.; Ng, A. Y. (2007). Learning to Merge Word Senses (PDF). Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. EMNLP-CoNLL.
- Snyder, B.; Palmer, M. (2004). The English all-words task. Proc. of the 3rd International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (Senseval-3). Barcelona, Spain. Archived from the original on 2011-06-29.
- Weaver, Warren (1949). "Translation" (PDF). In Locke, W.N.; Booth, A.D. (eds.). Machine Translation of Languages: Fourteen Essays. Cambridge, MA: MIT Press.
- Wilks, Y.; Slator, B.; Guthrie, L. (1996). Electric Words: dictionaries, computers and meanings. Cambridge, Massachusetts: MIT Press.
- Yarowsky, D. (1992). Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. Proc. of the 14th conference on Computational linguistics. COLING.
- Yarowsky, D. (1995). Unsupervised word sense disambiguation rivaling supervised methods. Proc. of the 33rd Annual Meeting of the Association for Computational Linguistics.
Further reading
- Agirre, Eneko; Edmonds, Philip, eds. (2007). Word Sense Disambiguation: Algorithms and Applications. Springer. ISBN 978-1402068706.
- Edmonds, Philip; Kilgarriff, Adam (2002). "Introduction to the special issue on evaluating word sense disambiguation systems". Journal of Natural Language Engineering. 8 (4): 279–291. S2CID 17866880.
- Ide, Nancy; Véronis, Jean (1998). "Word sense disambiguation: The state of the art" (PDF). Computational Linguistics. 24 (1): 1–40.
- Jurafsky, Daniel; Martin, James H. (2000). Speech and Language Processing. New Jersey, US: Prentice Hall.
- Kilgarriff, A. (1997). "I don't believe in word senses" (PDF). Comput. Human. 31 (2): 91–113. S2CID 3265361.
- Kilgarriff, A.; Grefenstette, G. (2003). "Introduction to the special issue on the Web as corpus" (PDF). Computational Linguistics. 29 (3): 333–347. S2CID 2649448.
- Manning, Christopher D.; Schütze, Hinrich (1999). Foundations of Statistical Natural Language Processing. Cambridge, Massachusetts: MIT Press.
- Navigli, Roberto (2009). "Word Sense Disambiguation: A Survey" (PDF). ACM Computing Surveys. 41 (2): 1–69. S2CID 461624.
- Resnik, Philip; Yarowsky, David (2000). "Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation". Natural Language Engineering. 5 (2): 113–133. S2CID 19915022.
- Yarowsky, David (2001). "Word sense disambiguation". In Dale; et al. (eds.). Handbook of Natural Language Processing. New York: Marcel Dekker. pp. 629–654.
External links
- Computational Linguistics Special Issue on Word Sense Disambiguation (1998)
- Word Sense Disambiguation Tutorial by Rada Mihalcea and Ted Pedersen (2005).