Category utility
Category utility is a measure of "category goodness" defined in Gluck & Corter (1985) and Corter & Gluck (1992). It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as cue validity (Reed 1972; Rosch & Mervis 1975) and the collocation index (Jones 1983). It provides a normative information-theoretic measure of the predictive advantage gained by an observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over an observer who does not.
Probability-theoretic definition of category utility
The probability-theoretic definition of category utility given in Fisher (1987) and Witten & Frank (2005) is as follows:

$$CU(C,F) = \frac{1}{p} \sum_{j=1}^{p} p(c_j) \left[ \sum_{i=1}^{n} \sum_{k=1}^{m} p(f_{ik}|c_j)^2 - \sum_{i=1}^{n} \sum_{k=1}^{m} p(f_{ik})^2 \right]$$

where $F = \{f_i\},\ i = 1,\ldots,n$ is a size-$n$ set of $m$-ary features, and $C = \{c_j\},\ j = 1,\ldots,p$ is a set of $p$ categories. The term $p(f_{ik}|c_j)$ designates the conditional probability that feature $f_i$ adopts value $k$ for members of category $c_j$, and $p(f_{ik})$ is the corresponding unconditional probability.
The motivation and development of this expression for category utility, and the role of the multiplicand $\frac{1}{p}$ as a crude overfitting control, is given in the above sources. Loosely (Fisher 1987), the term $\sum_{i}\sum_{k} p(f_{ik}|c_j)^2$ is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while $\sum_{i}\sum_{k} p(f_{ik})^2$ is the corresponding expectation in the absence of any category knowledge.
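As an illustration, this definition can be computed directly from data by estimating the probabilities as relative frequencies. The following sketch is not from the cited sources; the function name and toy data are illustrative.

```python
from collections import Counter

def category_utility(data, labels):
    """Probability-theoretic category utility estimated from data.

    data: list of feature-value tuples; labels: parallel list of category
    labels. Implements (1/p) * sum_j p(c_j) *
        [ sum_{i,k} p(f_ik|c_j)^2 - sum_{i,k} p(f_ik)^2 ].
    """
    n_items = len(data)
    n_feats = len(data[0])
    cats = sorted(set(labels))

    # Unconditional term: sum over features and values of p(f_ik)^2.
    uncond = 0.0
    for i in range(n_feats):
        counts = Counter(row[i] for row in data)
        uncond += sum((cnt / n_items) ** 2 for cnt in counts.values())

    cu = 0.0
    for cat in cats:
        members = [row for row, lab in zip(data, labels) if lab == cat]
        cond = 0.0
        for i in range(n_feats):
            counts = Counter(row[i] for row in members)
            cond += sum((cnt / len(members)) ** 2 for cnt in counts.values())
        cu += (len(members) / n_items) * (cond - uncond)
    return cu / len(cats)

# Two perfectly category-predictive binary features, two categories:
data = [(0, 0), (0, 0), (1, 1), (1, 1)]
labels = ["a", "a", "b", "b"]
print(category_utility(data, labels))  # 0.5
```

With perfectly predictive features, an observer who knows the labels gains one extra correct guess per category over the baseline, giving a utility of 0.5 after division by the number of categories.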
Information-theoretic definition of category utility
The information-theoretic definition of category utility for a set of entities with size-$n$ binary feature set $F = \{f_i\},\ i = 1,\ldots,n$, and a binary category $C = \{c, \bar{c}\}$ is given by Gluck & Corter (1985) as follows:

$$CU(C,F) = \left[ p(c) \sum_{i=1}^{n} p(f_i|c) \log p(f_i|c) + p(\bar{c}) \sum_{i=1}^{n} p(f_i|\bar{c}) \log p(f_i|\bar{c}) \right] - \sum_{i=1}^{n} p(f_i) \log p(f_i)$$

where $p(c)$ is the prior probability of an entity belonging to the positive category $c$ (in the absence of any feature information), $p(f_i|c)$ is the conditional probability of an entity having feature $f_i$ given that the entity belongs to category $c$, $p(f_i|\bar{c})$ is likewise the conditional probability of an entity having feature $f_i$ given that the entity belongs to category $\bar{c}$, and $p(f_i)$ is the prior probability of an entity possessing feature $f_i$ (in the absence of any category information).
The intuition behind the above expression is as follows: The term $-p(c)\sum_{i=1}^{n} p(f_i|c) \log p(f_i|c)$ represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category $c$. Similarly, the term $-p(\bar{c})\sum_{i=1}^{n} p(f_i|\bar{c}) \log p(f_i|\bar{c})$ represents the corresponding cost when the objects are known to belong to category $\bar{c}$. The sum of these two terms in the brackets is therefore the negative of the expected encoding cost when category membership is known, while the subtracted term $\sum_{i=1}^{n} p(f_i) \log p(f_i)$ is the negative of the cost when no category information is available. Their difference, the category utility, is thus the expected number of bits saved by knowledge of the category structure.
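A minimal sketch of this definition, using base-2 logarithms so the result is in bits; the function and parameter names are illustrative, not from the sources.

```python
import math

def cu_info(p_c, p_f_given_c, p_f_given_notc):
    """Information-theoretic category utility (bits) for binary features
    and two categories. p_c is the prior p(c); the two lists hold
    p(f_i|c) and p(f_i|not-c) for each feature f_i."""
    p_notc = 1.0 - p_c

    def plogp(x):
        # Convention: 0 * log 0 = 0.
        return x * math.log2(x) if x > 0 else 0.0

    within = baseline = 0.0
    for pfc, pfnc in zip(p_f_given_c, p_f_given_notc):
        p_f = p_c * pfc + p_notc * pfnc   # p(f_i) by total probability
        within += p_c * plogp(pfc) + p_notc * plogp(pfnc)
        baseline += plogp(p_f)
    return within - baseline

# One perfectly predictive feature, equal category priors:
print(cu_info(0.5, [1.0], [0.0]))  # 0.5
```

Note that the sums run only over each feature's "present" value, not its complement, which is why a perfectly predictive binary feature yields a saving of 0.5 bits here rather than the full 1 bit of feature entropy.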
Category utility and mutual information
Gluck & Corter (1985) and Corter & Gluck (1992) mention that the category utility is equivalent to the mutual information. Here is a simple demonstration of the nature of this equivalence. Assume a set of entities each having the same $n$ features, i.e., feature set $F = \{f_i\},\ i = 1,\ldots,n$, with each feature variable having cardinality $m$. That is, each feature has the capacity to adopt any of $m$ distinct values (which need not be ordered; all variables can be nominal); for the special case $m = 2$ these features would be considered binary, but more generally, for any $m$, the features are simply m-ary. For the purposes of this demonstration, without loss of generality, feature set $F$ can be replaced with a single aggregate variable $F_a$ that has cardinality $m^n$, and adopts a unique value $v_a,\ a = 1,\ldots,m^n$, corresponding to each feature combination in the Cartesian product $f_1 \times f_2 \times \cdots \times f_n$. (Ordinality does not matter, because the mutual information is not sensitive to ordinality.) In what follows, a term such as $p(F_a = v_a)$ or simply $p(v_a)$ refers to the probability with which $F_a$ adopts the particular value $v_a$. (Using the aggregate feature variable replaces multiple summations, and simplifies the presentation to follow.)
For this demonstration, also assume a single category variable $C$, which has cardinality $p$. This is equivalent to a classification system in which there are $p$ non-intersecting categories. In the special case of $p = 2$ we recover the two-category case discussed above. From the definition of mutual information for discrete variables, the mutual information between the aggregate feature variable $F_a$ and the category variable $C$ is given by:

$$MI(F_a;C) = \sum_{v_a} \sum_{c_j} p(v_a, c_j) \log \frac{p(v_a, c_j)}{p(v_a)\,p(c_j)}$$

where $p(v_a)$ is the prior probability of feature variable $F_a$ adopting value $v_a$, $p(c_j)$ is the marginal probability of category variable $C$ adopting value $c_j$, and $p(v_a, c_j)$ is the joint probability of $F_a$ and $C$ simultaneously adopting those respective values.
If the original definition of the category utility from above is rewritten with $C = \{c_j\},\ j = 1,\ldots,p$, it becomes:

$$CU(C,F) = \sum_{i=1}^{n} \sum_{j=1}^{p} p(f_i, c_j) \log \frac{p(f_i, c_j)}{p(f_i)\,p(c_j)}$$
This equation clearly has the same form as the equation expressing the mutual information between the feature set and the category variable; the difference is that the sum $\sum_{i=1}^{n}$ in the category utility equation runs over independent binary variables $F = \{f_i\},\ i = 1,\ldots,n$, whereas the sum $\sum_{v_a}$ in the mutual information runs over values of the single $m^n$-ary variable $F_a$. The two measures are actually equivalent then only when the features $\{f_i\}$ are independent (and assuming that terms in the sum corresponding to $p(\bar{f}_i)$ are also added).
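This relationship can be checked numerically. The sketch below (all names and distributions are illustrative) compares the mutual information of the aggregate feature variable with the per-feature sum of mutual informations, i.e., the binary-feature category utility with complement terms included, for one case with independent features and one (XOR) case where the features are individually uninformative about the category.

```python
import math
from itertools import product

def mi(joint):
    """Mutual information (bits) from a joint distribution {(x, c): p}."""
    px, pc = {}, {}
    for (x, c), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        pc[c] = pc.get(c, 0.0) + p
    return sum(p * math.log2(p / (px[x] * pc[c]))
               for (x, c), p in joint.items() if p > 0)

def feature_mi_sum(joint, n_feats):
    """Sum over features of MI(f_i; C): the per-feature quantity the
    category utility sums, with complement terms included."""
    total = 0.0
    for i in range(n_feats):
        marg = {}
        for (x, c), p in joint.items():
            marg[(x[i], c)] = marg.get((x[i], c), 0.0) + p
        total += mi(marg)
    return total

# Case 1: c = f1, with f2 an independent coin flip -> measures agree.
ind = {((f1, f2), f1): 0.25 for f1, f2 in product([0, 1], repeat=2)}
# Case 2: c = f1 XOR f2 -> features jointly determine c but are
# individually uninformative, so the two measures diverge.
xor = {((f1, f2), f1 ^ f2): 0.25 for f1, f2 in product([0, 1], repeat=2)}

print(mi(ind), feature_mi_sum(ind, 2))  # 1.0 1.0
print(mi(xor), feature_mi_sum(xor, 2))  # 1.0 0.0
```

The XOR case makes the dependence caveat concrete: the aggregate variable carries a full bit of information about the category even though each feature, taken alone, carries none.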
Insensitivity of category utility to ordinality
Like the mutual information, the category utility is not sensitive to any ordering in the feature or category variable values. That is, as far as the category utility is concerned, the category set {small, medium, large, jumbo} is not qualitatively different from the category set {desk, fish, tree, mop}, since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting values {1, 2, 3, 4, 5} is not qualitatively different from a feature variable adopting values {fred, joe, bob, sue, elaine}. As far as the category utility or mutual information are concerned, all category and feature variables are nominal variables. For this reason, category utility does not reflect any gestalt aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information.
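This insensitivity is easy to verify: relabeling feature values changes none of the relative frequencies that the measure is built from, so the category utility is unchanged. A compact sketch (illustrative names and data, not from the sources):

```python
from collections import Counter

def cu(data, labels):
    """Compact probability-theoretic category utility over nominal variables."""
    n, cats = len(data), sorted(set(labels))

    def score(rows):
        # Sum over features and values of squared relative frequencies.
        return sum(sum((v / len(rows)) ** 2
                       for v in Counter(r[i] for r in rows).values())
                   for i in range(len(data[0])))

    base = score(data)
    return sum(labels.count(cat) / n *
               (score([r for r, l in zip(data, labels) if l == cat]) - base)
               for cat in cats) / len(cats)

data = [(1,), (2,), (2,), (5,)]
labels = ["small", "small", "large", "large"]
# Map the ordered numeric values onto arbitrary unordered names:
renamed = [({1: "fred", 2: "joe", 5: "sue"}[r[0]],) for r in data]
assert abs(cu(data, labels) - cu(renamed, labels)) < 1e-12
```

Only the pattern of value counts within and across categories matters; the identities (and any ordering) of the values themselves carry no weight.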
Category "goodness": models and philosophy
This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that led to the development of this particular metric.
What makes a good category?
At least since the time of Aristotle there has been intense interest in philosophy in the nature of concepts and universals, and in the question of what makes some groupings of objects into categories better, or more "natural," than others.
In the late 20th century these questions were taken up empirically by psychologists and cognitive scientists (e.g., Rosch 1978; Smith & Medin 1981; Harnad 2005), who sought quantitative accounts of why human learners prefer certain category structures over others.
What purpose do concepts serve?
One approach to answering such questions is to investigate the "role" or "purpose" of concepts in cognition. Thus the answer given to "What are concepts good for in the first place?" by Mill (1843) is as follows:
The general problem of classification... [is] to provide that things shall be thought of in such groups, and those groups in such an order, as will best conduce to the remembrance and to the ascertainment of their laws... [and] one of the uses of such a classification is that, by drawing attention to the properties on which it is founded, and which, if the classification be good, are marks of many others, it facilitates the discovery of those others.
From this base, Mill (1843) reaches the following conclusion:
The ends of scientific classification are best answered when the objects are formed into groups respecting which a greater number of general propositions can be made, and those propositions more important, than could be made respecting any other groups into which the same things could be distributed. The properties, therefore, according to which objects are classified should, if possible, be those which are causes of many other properties; or, at any rate, which are sure marks of them.
One may compare this to the "category utility hypothesis" proposed by Corter & Gluck (1992): "A category is useful to the extent that it can be expected to improve the ability of a person to accurately predict the features of instances of that category." Mill here seems to be suggesting that the best category structure is one in which object features (properties) are maximally informative about the object's class, and, simultaneously, the object class is maximally informative about the object's features. In other words, a useful classification scheme is one in which category knowledge can be used to accurately infer object properties, and property knowledge can be used to accurately infer object classes. One may also compare this idea to Aristotle's criterion of counter-predication for definitional predicates, as well as to the notion of concepts described in formal concept analysis.
Attempts at formalization
A variety of different measures have been suggested with an aim of formally capturing this notion of "category goodness," the best known of which is probably the "cue validity". Cue validity of a feature $f_i$ with respect to category $c_j$ is defined as the conditional probability of the category given the feature (Reed 1972; Rosch & Mervis 1975; Rosch 1978), $p(c_j|f_i)$, or as the deviation of the conditional probability from the category base rate (Edgell 1993; Kruschke & Johansen 1999), $p(c_j|f_i) - p(c_j)$. Clearly, these measures quantify only inference from feature to category (i.e., cue validity), but not from category to feature, i.e., the category validity $p(f_i|c_j)$. Also, while the cue validity was originally intended to account for the demonstrable appearance of "basic" categories in human cognition, a number of serious flaws in this measure soon became apparent (Murphy 1982; Corter & Gluck 1992).
One attempt to address both problems by simultaneously maximizing both feature validity and category validity was made by Jones (1983) in defining the "collocation index" as the product $p(c_j|f_i)\,p(f_i|c_j)$, but this construction was fairly ad hoc (see Corter & Gluck 1992). The category utility was introduced as a more sophisticated refinement of the cue validity, which attempts to more rigorously quantify the full inferential power of a class structure. As shown above, on a certain view the category utility is equivalent to the mutual information between the feature variable and the category variable. It has been suggested that categories having the greatest overall category utility are not only the "best" in a normative sense, but also those that human learners prefer to use, e.g., "basic" categories (Corter & Gluck 1992). Other related measures of category goodness are "cohesion" (Hanson & Bauer 1989; Gennari, Langley & Fisher 1989) and "salience" (Gennari 1989).
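For concreteness, the three quantities discussed above can be estimated from simple co-occurrence counts. The function name and the counts below are illustrative only.

```python
def validities(n_cf, n_f, n_c):
    """Cue validity p(c|f), category validity p(f|c), and Jones's
    collocation index p(c|f) * p(f|c), from co-occurrence counts.

    n_cf: items in category c that have feature f;
    n_f:  items with feature f;
    n_c:  items in category c."""
    cue = n_cf / n_f            # feature -> category inference
    cat = n_cf / n_c            # category -> feature inference
    return cue, cat, cue * cat  # collocation index combines both

cue, cat, coll = validities(n_cf=30, n_f=40, n_c=50)
print(cue, cat, coll)  # approx. 0.75, 0.6, 0.45
```

The product form rewards features that are diagnostic in both directions at once, which is exactly the property the cue validity alone lacks.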
Applications
- Category utility is used as the category evaluation measure in the popular conceptual clustering algorithm called COBWEB (Fisher 1987).
See also
- Abstraction
- Concept learning
- Universals
- Unsupervised learning
References
- Corter, James E.; Gluck, Mark A. (1992), "Explaining basic categories: Feature predictability and information" (PDF), Psychological Bulletin, 111 (2): 291–303, doi:10.1037/0033-2909.111.2.291, archived from the original (PDF) on 2011-08-10
- Edgell, Stephen E. (1993), "Using configural and dimensional information", in N. John Castellan (ed.), Individual and Group Decision Making: Current Issues, Hillsdale, New Jersey: Lawrence Erlbaum, pp. 43–64
- Fisher, Douglas H. (1987), "Knowledge acquisition via incremental conceptual clustering", Machine Learning, 2 (2): 139–172
- Gennari, John H. (1989), "Focused concept formation", in Alberto Maria Segre (ed.), Proceedings of the Sixth International Workshop on Machine Learning, Ithaca, NY: Morgan Kaufmann, pp. 379–382
- Gennari, John H.; Langley, Pat; Fisher, Doug (1989), "Models of incremental concept formation", Artificial Intelligence, 40 (1–3): 11–61
- Gluck, Mark A.; Corter, James E. (1985), "Information, uncertainty, and the utility of categories", Program of the Seventh Annual Conference of the Cognitive Science Society, pp. 283–287
- Hanson, Stephen José; Bauer, Malcolm (1989), "Conceptual clustering, categorization, and polymorphy", Machine Learning, 3 (4): 343–372
- Harnad, Stevan (2005), "To cognize is to categorize: Cognition is categorization", in Henri Cohen & Claire Lefebvre (ed.), Handbook of Categorization in Cognitive Science, Amsterdam: Elsevier, pp. 19–43
- Jones, Gregory V. (1983), "Identifying basic categories", Psychological Bulletin, 94 (3): 423–428
- Kruschke, John K.; Johansen, Mark K. (1999), "A model of probabilistic category learning", Journal of Experimental Psychology: Learning, Memory, and Cognition, PMID 10505339
- Mill, John Stuart (1843), A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation, London: Longmans, Green and Co.
- Murphy, Gregory L. (1982), "Cue validity and levels of categorization", Psychological Bulletin, 91 (1): 174–177
- Reed, Stephen K. (1972), "Pattern recognition and categorization", Cognitive Psychology, 3 (3): 382–407
- Rosch, Eleanor (1978), "Principles of categorization", in Eleanor Rosch & Barbara B. Lloyd (ed.), Cognition and Categorization, Hillsdale, New Jersey: Lawrence Erlbaum, pp. 27–48
- Rosch, Eleanor; Mervis, Carolyn B. (1975), "Family Resemblances: Studies in the Internal Structure of Categories", Cognitive Psychology, 7 (4): 573–605, S2CID 17258322
- Smith, Edward E.; Medin, Douglas L. (1981), Categories and Concepts, Cambridge, MA: Harvard University Press
- Witten, Ian H.; Frank, Eibe (2005), Data Mining: Practical Machine Learning Tools and Techniques, Amsterdam: Morgan Kaufmann