Multidimensional scaling

Source: Wikipedia, the free encyclopedia.
An example of classical multidimensional scaling applied to voting patterns in the United States House of Representatives. Each blue dot represents one Democrat member of the House, and each red dot one Republican.

Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a data set. MDS is used to translate distances between each pair of objects in a set into a configuration of points mapped into an abstract Cartesian space.[1]

More technically, MDS refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. It is a form of non-linear dimensionality reduction.

Given a distance matrix with the distances between each pair of objects in a set, and a chosen number of dimensions, N, an MDS algorithm places each object into N-dimensional space such that the between-object distances are preserved as well as possible. For N = 1, 2, and 3, the resulting points can be visualized on a scatter plot.[2]

Core theoretical contributions to MDS were made by James O. Ramsay of McGill University, who is also regarded as the founder of functional data analysis.[3]


MDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:

Classical multidimensional scaling

It is also known as Principal Coordinates Analysis (PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called strain,[2] which is given by

Strain_D(x_1, x_2, ..., x_N) = \sqrt{ \frac{ \sum_{i,j} ( b_{ij} - \langle x_i, x_j \rangle )^2 }{ \sum_{i,j} b_{ij}^2 } },

where x_i denote vectors in N-dimensional space, \langle x_i, x_j \rangle denotes the scalar product between x_i and x_j, and b_{ij} are the elements of the matrix B defined on step 2 of the following algorithm, which are computed from the distances.

Steps of a Classical MDS algorithm:
Classical MDS uses the fact that the coordinate matrix X can be derived by eigenvalue decomposition from B = X X^T. The matrix B can be computed from the proximity matrix D by using double centering.[4]
  1. Set up the squared proximity matrix D^{(2)} = [d_{ij}^2].
  2. Apply double centering: B = -\frac{1}{2} J D^{(2)} J, using the centering matrix J = I - \frac{1}{n} \mathbf{1}\mathbf{1}^T, where n is the number of objects, I is the n \times n identity matrix, and \mathbf{1}\mathbf{1}^T is an n \times n matrix of all ones.
  3. Determine the m largest eigenvalues \lambda_1, \lambda_2, ..., \lambda_m and corresponding eigenvectors e_1, e_2, ..., e_m of B (where m is the number of dimensions desired for the output).
  4. Now X = E_m \Lambda_m^{1/2}, where E_m is the matrix of m eigenvectors and \Lambda_m is the diagonal matrix of m eigenvalues of B.
Classical MDS assumes metric distances, so it is not applicable for direct dissimilarity ratings.
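The four steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the helper name `classical_mds` is chosen here for the example, and the input is assumed to be a matrix of true Euclidean distances:

```python
import numpy as np

def classical_mds(D, m=2):
    """Classical MDS (Torgerson scaling) of a symmetric distance matrix D
    into m dimensions, following the four steps above."""
    n = D.shape[0]
    D2 = D ** 2                            # step 1: squared proximity matrix
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix J = I - (1/n) 11'
    B = -0.5 * J @ D2 @ J                  # step 2: double centering
    eigvals, eigvecs = np.linalg.eigh(B)   # step 3: eigendecomposition (ascending)
    idx = np.argsort(eigvals)[::-1][:m]    # keep the m largest eigenvalues
    L = np.diag(np.sqrt(np.maximum(eigvals[idx], 0.0)))
    E = eigvecs[:, idx]
    return E @ L                           # step 4: X = E_m Lambda_m^{1/2}

# Recover 2-D coordinates of the unit square from its pairwise distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, m=2)

# For truly Euclidean input, the embedding reproduces the distances
# exactly (up to rotation, reflection, and translation).
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_hat))
```

The recovered configuration differs from the original points only by a rigid motion, which is exactly the non-uniqueness discussed later in this article.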

Metric multidimensional scaling (mMDS)

It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called stress, which is often minimized using a procedure called stress majorization. Metric MDS minimizes the cost function called "stress", which is a residual sum of squares:

Stress_D(x_1, x_2, ..., x_N) = \sqrt{ \sum_{i \ne j} ( d_{ij} - \| x_i - x_j \| )^2 }.

Metric scaling uses a power transformation with a user-controlled exponent p: d_{ij}^p and -d_{ij}^{2p} for distance. In classical scaling p = 1. Non-metric scaling is defined by the use of isotonic regression to nonparametrically estimate a transformation of the dissimilarities.
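As an illustration of metric MDS with stress majorization, the sketch below uses scikit-learn's MDS estimator, which minimizes stress with the SMACOF algorithm. It assumes scikit-learn is installed and feeds it a precomputed distance matrix:

```python
import numpy as np
from sklearn.manifold import MDS

# Pairwise distances of four points on a line (positions 0, 1, 3, 6),
# so a one-dimensional embedding can reproduce them exactly.
pos = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pos[:, None] - pos[None, :])

# Metric MDS: minimize stress via SMACOF (stress majorization).
mds = MDS(n_components=1, dissimilarity="precomputed", random_state=0)
X = mds.fit_transform(D)

# Residual stress is near zero because the data are exactly embeddable.
print(f"stress: {mds.stress_:.4f}")
```

Because the input distances here are exactly realizable in one dimension, the final stress is essentially zero; for noisy real-world dissimilarities, a nonzero residual stress remains.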

Non-metric multidimensional scaling (NMDS)

In contrast to metric MDS, non-metric MDS finds both a non-parametric monotonic relationship between the dissimilarities in the item–item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space.

Let d_{ij} be the dissimilarity between points i and j. Let \hat{d}_{ij} = \| x_i - x_j \| be the Euclidean distance between embedded points x_i and x_j.

Now, for each choice of the embedded points x_i and each monotonically increasing function f, define the "stress" function:

S(x_1, ..., x_n; f) = \sqrt{ \frac{ \sum_{i<j} ( f(d_{ij}) - \hat{d}_{ij} )^2 }{ \sum_{i<j} \hat{d}_{ij}^2 } }.

The factor of \sum_{i<j} \hat{d}_{ij}^2 in the denominator is necessary to prevent a "collapse": if we defined instead S = \sqrt{ \sum_{i<j} ( f(d_{ij}) - \hat{d}_{ij} )^2 }, it could be trivially minimized by setting f = 0 and collapsing every point to the same point.

A few variants of this cost function exist. MDS programs automatically minimize stress in order to obtain the MDS solution.

The core of a non-metric MDS algorithm is a twofold optimization process. First the optimal monotonic transformation of the proximities has to be found. Secondly, the points of a configuration have to be optimally arranged, so that their distances match the scaled proximities as closely as possible.

NMDS needs to optimize two objectives simultaneously. This is usually done iteratively:

  1. Initialize x_1, ..., x_n randomly, e.g. by sampling from a normal distribution.
  2. Do until a stopping criterion is met (for example, S < \epsilon):
    1. Solve for f by isotonic regression.
    2. Solve for x_1, ..., x_n by gradient descent or other methods.
  3. Return x_1, ..., x_n and f.
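The iterative scheme above can be sketched as follows. This is a toy illustration, not a production NMDS solver: it uses scikit-learn's IsotonicRegression for the monotone-transform step, and the learning rate and iteration count are arbitrary choices made for the example:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Toy input: ordinal dissimilarities derived from 10 hidden points in the plane.
hidden = rng.normal(size=(10, 2))
diss = np.linalg.norm(hidden[:, None] - hidden[None, :], axis=-1)
i, j = np.triu_indices(10, k=1)      # the i < j pairs
delta = diss[i, j]

X = rng.normal(size=(10, 2))         # step 1: random initial configuration
lr = 0.05
for _ in range(500):                 # step 2: alternate the two sub-problems
    d_hat = np.linalg.norm(X[i] - X[j], axis=-1)
    # 2a: optimal monotone transform f(d_ij), via isotonic regression
    disp = IsotonicRegression().fit_transform(delta, d_hat)
    # 2b: one gradient step on sum_k (d_hat_k - f(d_k))^2 w.r.t. the points
    resid = (d_hat - disp) / np.maximum(d_hat, 1e-9)
    grad = np.zeros_like(X)
    np.add.at(grad, i, resid[:, None] * (X[i] - X[j]))
    np.add.at(grad, j, -resid[:, None] * (X[i] - X[j]))
    X -= lr * grad

d_hat = np.linalg.norm(X[i] - X[j], axis=-1)
stress = np.sqrt(np.sum((disp - d_hat) ** 2) / np.sum(d_hat ** 2))
print(f"final stress: {stress:.3f}")
```

Since the toy dissimilarities are in fact Euclidean distances in the plane, the alternating procedure drives the stress close to zero; real ordinal data would leave a larger residual.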

Louis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure.

Generalized multidimensional scaling (GMDS)

An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.[5]


The data to be analyzed is a collection of M objects (colors, faces, stocks, ...) on which a distance function is defined: d_{i,j} := the distance between the i-th and j-th objects. These distances are the entries of the dissimilarity matrix

D := (d_{i,j})_{M \times M}.

The goal of MDS is, given D, to find M vectors x_1, ..., x_M \in \mathbb{R}^N such that

\| x_i - x_j \| \approx d_{i,j} for all i, j \in \{1, ..., M\},

where \| \cdot \| is a vector norm. In classical MDS, this norm is the Euclidean distance, but in a broader sense, it may be a metric or arbitrary distance function.[6] For example, when dealing with mixed-type data that contain numerical as well as categorical descriptors, Gower's distance is a common alternative.[citation needed]

In other words, MDS attempts to find a mapping from the M objects into \mathbb{R}^N such that distances are preserved. If the dimension N is chosen to be 2 or 3, we may plot the vectors x_i to obtain a visualization of the similarities between the M objects. Note that the vectors x_i are not unique: with the Euclidean distance, they may be arbitrarily translated, rotated, and reflected, since these transformations do not change the pairwise distances \| x_i - x_j \|.

(Note: The symbol \mathbb{R} indicates the set of real numbers, and the notation \mathbb{R}^N refers to the Cartesian product of N copies of \mathbb{R}, which is an N-dimensional vector space over the field of the real numbers.)

There are various approaches to determining the vectors x_i. Usually, MDS is formulated as an optimization problem, where (x_1, ..., x_M) is found as a minimizer of some cost function, for example,

\min_{x_1, ..., x_M} \sum_{i<j} ( \| x_i - x_j \| - d_{i,j} )^2.

A solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions.[2]


There are several steps in conducting MDS research:

  1. Formulating the problem – What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?
  2. Obtaining input data – For example: respondents are asked a series of questions. For each product pair, they are asked to rate similarity (usually on a 7-point Likert scale from very similar to very dissimilar). The first question could be for Coke/Pepsi, the next for Coke/Hires root beer, the next for Pepsi/Dr Pepper, the next for Dr Pepper/Hires root beer, etc. The number of questions is a function of the number of brands and can be calculated as Q = N(N − 1)/2, where Q is the number of questions and N is the number of brands. This approach is referred to as the "Perception data: direct approach". There are two other approaches. There is the "Perception data: derived approach", in which products are decomposed into attributes that are rated on a semantic differential scale. The other is the "Preference data approach", in which respondents are asked their preference rather than similarity.
  3. Running the MDS statistical program – Software for running the procedure is available in many statistical software packages. Often there is a choice between Metric MDS (which deals with interval or ratio level data), and Nonmetric MDS[7] (which deals with ordinal data).
  4. Decide number of dimensions – The researcher must decide on the number of dimensions they want the computer to create. Interpretability of the MDS solution is often important, and lower-dimensional solutions will typically be easier to interpret and visualize. However, dimension selection is also an issue of balancing underfitting and overfitting. Lower-dimensional solutions may underfit by leaving out important dimensions of the dissimilarity data. Higher-dimensional solutions may overfit to noise in the dissimilarity measurements. Model selection tools like Bayes factors or cross-validation can thus be useful to select the dimensionality that balances underfitting and overfitting.
  5. Mapping the results and defining the dimensions – The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The proximity of products to each other indicate either how similar they are or how preferred they are, depending on which approach was used. How the dimensions of the embedding actually correspond to dimensions of system behavior, however, are not necessarily obvious. Here, a subjective judgment about the correspondence can be made (see perceptual mapping).
  6. Test the results for reliability and validity – Compute R-squared to determine what proportion of variance of the scaled data can be accounted for by the MDS procedure. An R-squared of 0.6 is considered the minimum acceptable level.[citation needed] An R-squared of 0.8 is considered good for metric scaling, and 0.9 is considered good for non-metric scaling. Other possible tests are Kruskal's stress, split-data tests, data stability tests (i.e., eliminating one brand), and test–retest reliability.
  7. Report the results comprehensively – Along with the mapping, at least the distance measure (e.g., Sorenson index, Jaccard index) and the reliability (e.g., stress value) should be given. It is also advisable to give the algorithm (e.g., Kruskal, Mather), which is often defined by the program used (sometimes replacing the algorithm report), whether a start configuration was given or chosen randomly, the number of runs, the assessment of dimensionality, the Monte Carlo method results, the number of iterations, the assessment of stability, and the proportional variance of each axis (r-squared).
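The question-count formula in step 2 is easy to verify in code. This is a trivial sketch; `questions` is an illustrative helper written for this example, not part of any MDS package:

```python
from math import comb

# Step 2's question count for the direct approach: Q = N(N - 1)/2 brand pairs.
def questions(n_brands: int) -> int:
    return n_brands * (n_brands - 1) // 2

# Four brands (e.g. Coke, Pepsi, Dr Pepper, Hires root beer) give 6 pairs.
print(questions(4))  # 6
assert questions(4) == comb(4, 2)  # same as "4 choose 2"
```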


See also