Pattern recognition
Pattern recognition is the task of assigning a class to an observation based on regularities and patterns extracted from data.
Pattern recognition systems are commonly trained from labeled "training" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. KDD and data mining have a larger focus on unsupervised methods and stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named Conference on Computer Vision and Pattern Recognition.
In machine learning, pattern recognition is the assignment of a label to a given input value.
Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors.
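The contrast can be made concrete with regular expressions, the pattern-matching example named above: a regex either matches the input exactly or it does not, with no notion of statistical variation or "most likely" match. A minimal illustration (the date pattern is just an example):

```python
import re

# Exact pattern matching with a regular expression: either the input
# contains the pattern or it does not; there is no notion of "almost".
date_pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

text = "The dataset was released on 2019-05-14 and updated later."
match = date_pattern.search(text)
print(match.group())  # 2019-05-14

# A slightly malformed date is simply not found:
print(date_pattern.search("released on 2019-5-14") is None)  # True
```

A pattern recognition system, by contrast, would be expected to return a best guess (with some confidence) even for the malformed input.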
Overview
A modern definition of pattern recognition is:
The field of pattern recognition is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.[4]
Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value.
Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. The unsupervised equivalent of classification is normally known as clustering, which groups the input data into clusters based on some inherent similarity measure rather than assigning each instance to one of a set of pre-defined classes.
The piece of input data for which an output value is generated is formally termed an instance. The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance.
Probabilistic classifiers
Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. Unlike algorithms that simply output a single "best" label, probabilistic algorithms also output a probability of the instance being described by that label, and many can output the N best labels with associated probabilities. Probabilistic algorithms have several advantages over non-probabilistic ones:
- They output a confidence value associated with their choice. (Note that some other algorithms may also output confidence values, but in general, only for probabilistic algorithms is this value mathematically grounded in probability theory. Non-probabilistic confidence values can in general not be given any specific meaning, and only used to compare against other confidence values output by the same algorithm.)
- Correspondingly, they can abstain when the confidence of choosing any particular output is too low.
- Because they output probabilities, probabilistic pattern-recognition algorithms can be incorporated more effectively into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
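These properties can be sketched with a toy probabilistic classifier that outputs a posterior confidence and abstains when that confidence is too low. The class-conditional Gaussians, class names, and abstention threshold below are illustrative assumptions, not from the article:

```python
import math

# Toy one-dimensional model: each class is described by a Gaussian
# class-conditional density with (mean, std) assumed known here.
CLASSES = {
    "spam":     (8.0, 1.0),
    "non-spam": (2.0, 1.0),
}

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior(x):
    """Posterior P(class | x) under equal priors, via Bayes' rule."""
    likelihoods = {c: gaussian_pdf(x, m, s) for c, (m, s) in CLASSES.items()}
    total = sum(likelihoods.values())
    return {c: v / total for c, v in likelihoods.items()}

def classify(x, abstain_below=0.9):
    """Return (label, confidence), abstaining when the best posterior
    probability is below the threshold -- the second property above."""
    probs = posterior(x)
    label = max(probs, key=probs.get)
    if probs[label] < abstain_below:
        return "abstain", probs[label]
    return label, probs[label]

print(classify(7.5))  # close to the "spam" mean: a confident label
print(classify(5.0))  # exactly between the means: abstains at 0.5
```

The confidence here is mathematically grounded in probability theory, as the list above notes; a non-probabilistic score would only be comparable against other scores from the same algorithm.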
Number of important feature variables
Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection, summarizing approaches and challenges, has been given.[6] Because of its non-monotonic character, feature selection is a combinatorial optimization problem: given a total of n features, the power set consisting of all 2^n − 1 non-empty subsets of features must in principle be explored.
Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm.
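The exhaustive search space described above can be enumerated directly; for n features there are 2^n − 1 non-empty subsets to evaluate, which is why exact feature selection quickly becomes intractable. The feature names below are illustrative:

```python
from itertools import combinations

def all_feature_subsets(features):
    """Enumerate every non-empty subset of the feature set: the search
    space of exhaustive feature selection, of size 2**n - 1."""
    for k in range(1, len(features) + 1):
        for subset in combinations(features, k):
            yield subset

features = ["length", "width", "weight", "color"]
subsets = list(all_feature_subsets(features))
print(len(subsets))  # 2**4 - 1 = 15
```

Already at 20 features this space holds over a million subsets, which motivates the greedy and heuristic selection methods surveyed in the literature.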
Problem statement
The problem of pattern recognition can be stated as follows: Given an unknown function g : X → Y (the ground truth) that maps input instances x ∈ X to output labels y ∈ Y, along with training data D = {(x_1, y_1), ..., (x_n, y_n)} assumed to represent accurate examples of the mapping, produce a function h : X → Y that approximates the correct mapping g as closely as possible. (For example, if the problem is filtering spam, then x is some representation of an email and y is either "spam" or "non-spam".) In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. In decision theory, this is done by specifying a loss function or cost function that assigns a specific value to the "loss" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of X. In practice, neither the distribution of X nor the ground-truth function g are known exactly; they can only be characterized empirically by collecting a large number of samples of X and hand-labeling them with the correct value of Y (a time-consuming process, which is typically the limiting factor on the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient, assigning a loss of 1 to any incorrect labeling and a loss of 0 to a correct one.
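For instance, the expected zero-one loss used in classification can be estimated empirically from a labeled sample; the spam/non-spam labels below echo the example above:

```python
def zero_one_loss(y_true, y_pred):
    """Average zero-one loss: 1 for each incorrect label, 0 otherwise.
    This is the empirical estimate of the expected loss over the sample,
    i.e. simply the error rate of the classifier."""
    assert len(y_true) == len(y_pred)
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["spam", "non-spam", "spam", "non-spam"]
y_pred = ["spam", "spam",     "spam", "non-spam"]
print(zero_one_loss(y_true, y_pred))  # 0.25 (one of four labels wrong)
```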
For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form

p(label | x, θ),

where the feature-vector input is x and the function is parameterized by some parameters θ. In a generative approach, the inverse probability p(x | label) is estimated and combined with the prior probability p(label | θ) using Bayes' rule:

p(label | x, θ) = p(x | label, θ) p(label | θ) / Σ_L p(x | L, θ) p(L | θ),

where the sum runs over all possible labels L. When the labels are continuously distributed (e.g., in regression analysis), the denominator involves integration rather than summation.

The value of θ is typically learned using maximum a posteriori (MAP) estimation, which balances fit to the training data against the prior plausibility of the parameters:

θ* = argmax_θ p(θ | D),

where θ* is the value used for θ in the subsequent evaluation procedure, and p(θ | D), the posterior probability of θ given the training data D, is given by Bayes' rule as

p(θ | D) ∝ p(D | θ) p(θ).

In the Bayesian approach to this problem, instead of choosing a single parameter vector θ*, the probability of a given label for a new instance x is computed by integrating over all possible values of θ, weighted according to the posterior probability:

p(label | x) = ∫ p(label | x, θ) p(θ | D) dθ.
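As a concrete sketch of the difference between MAP estimation and full Bayesian integration, consider a Bernoulli label with a conjugate Beta prior, where both quantities have simple closed forms. The prior parameters below are illustrative:

```python
def map_estimate(heads, n, a=2.0, b=2.0):
    """MAP estimate of a Bernoulli parameter under a Beta(a, b) prior:
    the single value theta* maximizing the posterior p(theta | D),
    i.e. the mode of the Beta(a + heads, b + n - heads) posterior."""
    return (heads + a - 1) / (n + a + b - 2)

def bayesian_predictive(heads, n, a=2.0, b=2.0):
    """Fully Bayesian prediction: integrate over all theta weighted by
    the Beta posterior; for this conjugate model the integral reduces
    to the posterior mean (heads + a) / (n + a + b)."""
    return (heads + a) / (n + a + b)

# 7 successes in 10 trials:
print(map_estimate(7, 10))         # 8/12  ~ 0.667
print(bayesian_predictive(7, 10))  # 9/14  ~ 0.643
```

The two answers differ: the MAP estimate commits to a single θ*, while the Bayesian prediction averages over the remaining uncertainty in θ, as the integral above describes.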
Frequentist or Bayesian approach to pattern recognition
The first pattern classifier – the linear discriminant presented by Fisher – was developed in the frequentist tradition, in which the model parameters are considered unknown but objective, and are estimated from the collected data.
Bayesian statistics has its origin in Greek philosophy, where a distinction was already made between 'a priori' and 'a posteriori' knowledge. Later, Kant defined his distinction between what is known a priori – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities can be chosen by the user, in which case they are a priori. Moreover, experience quantified as a priori parameter values can be weighted with empirical observations, using e.g. the Beta (conjugate prior) and Dirichlet distributions.
Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach.
Uses
Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD) systems: procedures that support the doctor's interpretations and findings.
Optical character recognition is an example of the application of a pattern classifier. Signature capture with stylus and overlay began around 1990.[citation needed] The strokes, speed, relative minima and maxima, acceleration and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers.[citation needed]
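The dynamic signature features mentioned above (speed, acceleration) can be approximated from sampled pen positions by finite differences; a minimal sketch, assuming a uniform sampling interval (the points and interval below are made up):

```python
def stroke_speeds(points, dt=0.01):
    """Approximate instantaneous speed along a sampled stylus stroke.
    `points` is a list of (x, y) positions sampled every `dt` seconds."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist / dt)
    return speeds

def accelerations(speeds, dt=0.01):
    """Finite-difference acceleration from consecutive speed samples."""
    return [(s1 - s0) / dt for s0, s1 in zip(speeds, speeds[1:])]

pts = [(0, 0), (3, 4), (6, 8)]            # two segments of length 5 each
print(stroke_speeds(pts))                 # ~[500.0, 500.0] units/s
print(accelerations(stroke_speeds(pts)))  # ~[0.0]: constant speed
```

Feature vectors built from such quantities (plus pressure and the relative extrema of the trajectory) are what the signature classifier actually compares.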
Pattern recognition has many real-world applications in image processing. Some examples include:
- identification and authentication: e.g., voice-based authentication.[15]
- medical diagnosis: e.g., screening for cervical cancer (Papnet),[16] breast tumors or heart sounds;
- defense: various navigation and guidance systems, target recognition systems, shape recognition technology etc.
- mobility: advanced driver-assistance systems, autonomous vehicle technology, etc.
In psychology, pattern recognition is used to make sense of and identify objects, and is closely related to perception. This explains how the sensory inputs humans receive are made meaningful. Pattern recognition can be thought of in two ways: template matching and feature detection. A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in long-term memory; if there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters (Selfridge, 1959), suggest instead that stimuli are broken down into their component parts for identification; for example, a capital E can be decomposed into three horizontal lines and one vertical line.[22]
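The template-matching hypothesis can be sketched computationally: compare the stimulus against stored templates and pick the closest one. The letter bitmaps below are illustrative toys, not a model from the psychology literature:

```python
# 3x5 bitmaps for two letter templates (1 = ink). Illustrative shapes.
TEMPLATES = {
    "E": ["111",
          "100",
          "111",
          "100",
          "111"],
    "F": ["111",
          "100",
          "111",
          "100",
          "100"],
}

def match_template(stimulus):
    """Return the template with the fewest mismatching pixels: a crude
    model of comparing a stimulus against templates in memory."""
    def mismatches(a, b):
        return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))
    return min(TEMPLATES, key=lambda name: mismatches(TEMPLATES[name], stimulus))

noisy_e = ["111",
           "100",
           "111",
           "100",
           "011"]   # bottom row corrupted, still closest to "E"
print(match_template(noisy_e))  # E
```

Feature-detection models differ in that they would first extract parts (the horizontal and vertical strokes) and classify from those parts rather than from whole-pattern overlap.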
Algorithms
Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized as generative or discriminative.
Classification methods (methods predicting categorical labels)
Parametric:[23]
- Linear discriminant analysis
- Quadratic discriminant analysis
- Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class.)
Nonparametric:[24]
- Decision trees, decision lists
- K-nearest-neighbor algorithms
- Naive Bayes classifier
- Neural networks (multi-layer perceptrons)
- Perceptrons
- Support vector machines
- Gene expression programming
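As noted in the list above, the maximum entropy classifier (logistic regression) predicts class probabilities by passing a linear model through the logistic (sigmoid) function; a minimal sketch of the prediction rule, with hypothetical already-learned weights:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x, weights, bias):
    """Logistic regression prediction: a linear model of the features
    passed through the logistic function, giving P(class = 1 | x)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return sigmoid(z)

# Hypothetical learned weights for two features:
w, b = [1.5, -2.0], 0.5
p = predict_proba([2.0, 1.0], w, b)   # z = 1.5, so p = sigmoid(1.5)
label = 1 if p >= 0.5 else 0
print(p, label)
```

This is why the name is misleading: the model is an extension of linear regression, but the thresholded probability makes it a classification algorithm.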
Clustering methods (unsupervised methods for discovering categorical labels)
- Categorical mixture models
- Hierarchical clustering (agglomerative or divisive)
- K-means clustering
- Correlation clustering
- Kernel principal component analysis (Kernel PCA)
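Of the clustering methods listed, k-means is the simplest to sketch: alternate between assigning each point to its nearest centroid and recomputing each centroid as its cluster's mean. One-dimensional data is used below for brevity:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Plain k-means on 1-D points: alternate assignment and update
    steps until the centroids stabilize (a fixed iteration count is
    used here for simplicity)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid becomes its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]
print(kmeans_1d(data, 2))  # centroids converge near 1.0 and 10.0
```

No labels are used anywhere: the two groups emerge from the data's own similarity structure, which is what makes this the unsupervised counterpart of classification.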
Ensemble learning algorithms (supervised meta-algorithms for combining multiple learning algorithms together)
- Boosting (meta-algorithm)
- Bootstrap aggregating ("bagging")
- Ensemble averaging
- Hierarchical mixture of experts
General methods for predicting arbitrarily-structured (sets of) labels
Multilinear subspace learning algorithms (predicting labels of multidimensional data using tensor representations)
Real-valued sequence labeling methods (predicting sequences of real-valued labels)
Regression methods (predicting real-valued labels)
- Gaussian process regression (kriging)
- Linear regression and extensions
- Independent component analysis (ICA)
- Principal components analysis (PCA)
Sequence labeling methods (predicting sequences of categorical labels)
- Conditional random fields (CRFs)
- Hidden Markov models (HMMs)
- Maximum entropy Markov models (MEMMs)
- Recurrent neural networks (RNNs)
- Dynamic time warping (DTW)
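Dynamic time warping, listed above, can be sketched as a classic dynamic program over all monotone alignments of two sequences, allowing one sequence to be stretched or compressed in time relative to the other:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two real-valued sequences,
    computed by dynamic programming over the alignment grid."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # step both
    return d[n][m]

# The second sequence is a time-stretched copy of the first, so the
# warped distance is zero even though the lengths differ:
print(dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 2, 1, 0]))  # 0.0
```

This tolerance to non-linear time distortion is why DTW is useful for sequence labels such as speech or handwriting trajectories.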
See also
- Adaptive resonance theory
- Black box
- Cache language model
- Compound-term processing
- Computer-aided diagnosis
- Data mining
- Deep Learning
- Information theory
- List of numerical-analysis software
- List of numerical libraries
- Neocognitron
- Perception
- Perceptual learning
- Predictive analytics
- Prior knowledge for pattern recognition
- Sequence mining
- Template matching
- Contextual image classification
- List of datasets for machine learning research
References
- ^ "Sequence Labeling" (PDF). utah.edu. Archived (PDF) from the original on 2018-11-06. Retrieved 2018-11-06.
- ^ Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer.
- ^ Isabelle Guyon, André Elisseeff (2003). "An Introduction to Variable and Feature Selection". The Journal of Machine Learning Research, Vol. 3, 1157–1182. Link Archived 2016-03-04 at the Wayback Machine
- ^ Iman Foroutan; Jack Sklansky (1987). "Feature Selection for Automatic Classification of Non-Gaussian Data". IEEE Transactions on Systems, Man, and Cybernetics. 17 (2): 187–198. S2CID 9871395.
- ^ For linear discriminant analysis the parameter vector θ consists of the two mean vectors μ1 and μ2 and the common covariance matrix Σ.
- ^ The Automatic Number Plate Recognition Tutorial. http://anpr-tutorial.com/ Archived 2006-08-20 at the Wayback Machine
- ^ Neural Networks for Face Recognition Archived 2016-03-04 at the Wayback Machine Companion to Chapter 4 of the textbook Machine Learning.
- ^ PAPNET For Cervical Screening Archived 2012-07-08 at archive.today
- ^ "Development of an Autonomous Vehicle Control Strategy Using a Single Camera and Deep Neural Networks (2018-01-0035 Technical Paper)- SAE Mobilus". saemobilus.sae.org. Archived from the original on 2019-09-06. Retrieved 2019-09-06.
- ^ Pickering, Chris (2017-08-15). "How AI is paving the way for fully autonomous cars". The Engineer. Archived from the original on 2019-09-06. Retrieved 2019-09-06.
- ^ "A-level Psychology Attention Revision - Pattern recognition | S-cool, the revision website". S-cool.co.uk. Archived from the original on 2013-06-22. Retrieved 2012-09-17.
- ^ Assuming a known distributional shape of the feature distributions per class, such as the Gaussian shape.
- ^ No distributional assumption regarding shape of feature distributions per class.
Further reading
- Fukunaga, Keinosuke (1990). Introduction to Statistical Pattern Recognition (2nd ed.). Boston: Academic Press. ISBN 978-0-12-269851-4.
- Hornegger, Joachim; Paulus, Dietrich W. R. (1999). Applied Pattern Recognition: A Practical Introduction to Image and Speech Processing in C++ (2nd ed.). San Francisco: Morgan Kaufmann Publishers. ISBN 978-3-528-15558-2.
- Schuermann, Juergen (1996). Pattern Classification: A Unified View of Statistical and Neural Approaches. New York: Wiley. ISBN 978-0-471-13534-0.
- Godfried T. Toussaint, ed. (1988). Computational Morphology. Amsterdam: North-Holland Publishing Company. ISBN 9781483296722.
- Kulikowski, Casimir A.; Weiss, Sholom M. (1991). Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. San Francisco: Morgan Kaufmann Publishers. ISBN 978-1-55860-065-2.
- Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). Pattern Classification (2nd ed.). Wiley-Interscience. ISBN 978-0471056690.
- Jain, Anil K.; Duin, Robert P. W.; Mao, Jianchang (2000). "Statistical pattern recognition: a review". IEEE Transactions on Pattern Analysis and Machine Intelligence. 22 (1): 4–37. S2CID 192934.
- An introductory tutorial to classifiers (introducing the basic terms, with numeric example)
- Kovalevsky, V. A. (1980). Image Pattern Recognition. New York, NY: Springer New York. OCLC 852790446.
External links
- The International Association for Pattern Recognition
- List of Pattern Recognition web sites
- Journal of Pattern Recognition Research Archived 2008-09-08 at the Wayback Machine
- Pattern Recognition Info
- Pattern Recognition (Journal of the Pattern Recognition Society)
- International Journal of Pattern Recognition and Artificial Intelligence Archived 2004-12-11 at the Wayback Machine
- International Journal of Applied Pattern Recognition
- Open Pattern Recognition Project, intended to be an open source platform for sharing algorithms of pattern recognition
- Improved Fast Pattern Matching