Music information retrieval

Source: Wikipedia, the free encyclopedia.

Music information retrieval (MIR) is the interdisciplinary science of retrieving information from music. Those involved in MIR may have a background in academic musicology, psychoacoustics, psychology, signal processing, informatics, machine learning, optical music recognition, computational intelligence or some combination of these.

Applications

MIR is being used by businesses and academics to categorize, manipulate and even create music.

Music classification

One of the classical MIR research topics is genre classification: categorizing music items into one of a set of pre-defined genres such as classical, jazz, or rock. Mood classification, artist classification, instrument identification, and music tagging are also popular topics.
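
As a minimal illustration (not drawn from any particular system), genre classification is commonly framed as supervised learning over extracted audio features. The sketch below assumes a hypothetical, pre-computed feature matrix and genre labels (here replaced by random placeholders) and uses scikit-learn, one of several toolkits that could be used.

    # Genre-classification sketch: supervised learning on audio features.
    # X (n_tracks x n_features) and y (genre labels) are placeholders standing
    # in for real per-track features such as averaged MFCC vectors.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                            # placeholder features
    y = rng.choice(["classical", "jazz", "rock"], size=200)   # placeholder labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))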

Recommender systems

Several recommender systems for music already exist, but surprisingly few are based upon MIR techniques, instead making use of similarity between users or laborious data compilation. Pandora, for example, uses experts to tag the music with particular qualities such as "female singer" or "strong bassline". Many other systems find users whose listening histories are similar and suggest to each user unheard music from the others' collections. MIR techniques for measuring similarity between pieces of music are now beginning to form part of such systems.
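
A content-based similarity component of such a system can be sketched as nearest-neighbour search over feature vectors. The example below is a minimal sketch assuming hypothetical per-track feature vectors and cosine similarity; real systems combine many more signals.

    # Content-based similarity sketch: recommend the tracks whose feature
    # vectors are closest (by cosine similarity) to a seed track.
    # `track_features` is a hypothetical dict of track id -> feature vector.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recommend(seed_id, track_features, k=5):
        seed = track_features[seed_id]
        scores = {tid: cosine(seed, vec)
                  for tid, vec in track_features.items() if tid != seed_id}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    rng = np.random.default_rng(1)
    track_features = {f"track_{i}": rng.normal(size=20) for i in range(100)}
    print(recommend("track_0", track_features))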

Music source separation and instrument recognition

Music source separation is the task of recovering the original component signals from a mixed audio signal. Instrument recognition is the task of identifying the instruments involved in a piece of music. Various MIR systems have been developed that can separate music into its component tracks without access to the master copy. In this way, for example, karaoke tracks can be created from normal music tracks, though the process is not yet perfect, since vocals occupy some of the same frequency space as the other instruments.
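
One simple, widely available separation technique is harmonic/percussive source separation (HPSS). The sketch below uses librosa's HPSS only as an illustration; the file name is a placeholder, and vocal or per-instrument separation generally requires more specialised models.

    # Harmonic/percussive separation sketch using librosa's HPSS.
    # Splits a mixture into a harmonic layer and a percussive layer.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("mixture.wav", sr=None)        # placeholder path
    y_harmonic, y_percussive = librosa.effects.hpss(y)

    sf.write("harmonic.wav", y_harmonic, sr)
    sf.write("percussive.wav", y_percussive, sr)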

Automatic music transcription

Automatic music transcription is the process of converting an audio recording into symbolic notation, such as a score or a MIDI file. This process involves several audio analysis tasks, which may include multi-pitch detection, onset detection, duration estimation, instrument identification, and the extraction of harmonic, rhythmic or melodic information. The task becomes more difficult with greater numbers of instruments and a greater polyphony level.[1]
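
Two of the sub-tasks mentioned above, onset detection and (monophonic) pitch estimation, can be sketched with librosa as below; the file name is a placeholder, and polyphonic transcription requires considerably more machinery.

    # Sketch of two transcription sub-tasks on a monophonic recording:
    # onset detection and fundamental-frequency (pitch) estimation.
    import librosa

    y, sr = librosa.load("melody.wav", sr=None)          # placeholder path

    # Onset times in seconds.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    # Frame-wise f0 estimate with pYIN; unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

    print("onsets (s):", onsets[:10])
    print("first voiced pitches (Hz):", f0[voiced_flag][:5])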

Music generation

The automatic generation of music is a goal held by many MIR researchers. Attempts have been made with limited success in terms of human appreciation of the results.
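
One of the simplest generation approaches, shown here purely as an illustration rather than a state-of-the-art method, is a first-order Markov chain over pitches learned from an example melody; the training melody below is invented.

    # Toy generation sketch: first-order Markov chain over MIDI pitch numbers.
    # The "training" melody is made up; real systems learn from large corpora.
    import random
    from collections import defaultdict

    melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]

    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

    random.seed(0)
    note = melody[0]
    generated = [note]
    for _ in range(15):
        # Fall back to the first note if the chain reaches a dead end.
        note = random.choice(transitions[note]) if transitions[note] else melody[0]
        generated.append(note)
    print(generated)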

Methods used

Data source

Scores give a clear and logical description of music from which to work, but access to sheet music, whether digital or otherwise, is often impractical. MIDI data has been used for similar reasons, but some information is lost in the conversion to MIDI from any other format, unless the music was written with the MIDI standard in mind, which is rare. Digital audio formats such as WAV, mp3, and ogg are used when the audio itself is part of the analysis. Lossy formats such as mp3 and ogg work well for the human ear but may discard data that is important for study, and some encodings create artifacts that can mislead automatic analysis; despite this, the ubiquity of mp3 means that much research in the field uses it as source material. Increasingly, metadata mined from the web is incorporated into MIR for a more rounded understanding of the music within its cultural context, including the analysis of social tags for music.
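
The practical difference between symbolic and audio sources can be sketched as follows; the file names are placeholders, and pretty_midi and librosa are only two of several libraries that could be used.

    # Loading the two main kinds of MIR source data: symbolic (MIDI) and audio.
    import librosa
    import pretty_midi

    # Symbolic: note events with explicit pitch, onset and offset times.
    midi = pretty_midi.PrettyMIDI("score.mid")           # placeholder path
    notes = [(n.pitch, n.start, n.end)
             for inst in midi.instruments for n in inst.notes]

    # Audio: a sampled waveform, from which such information must be estimated.
    y, sr = librosa.load("recording.mp3", sr=None)       # placeholder path

    print(len(notes), "symbolic notes;", len(y) / sr, "seconds of audio")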

Feature representation

Analysis can often require some summarising, and for music (as with many other forms of data) this is achieved by feature extraction, especially when the audio content itself is analysed and machine learning is to be applied. The purpose is to reduce the sheer quantity of data to a manageable set of values so that learning can be performed within a reasonable time frame. One commonly extracted feature is the Mel-Frequency Cepstral Coefficient (MFCC), a measure of the timbre of a piece of music. Other features may be employed to represent the key, chords, harmonies, melody, main pitch, beats per minute or rhythm in the piece. There are a number of available audio feature extraction tools.[3]
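
For example, MFCCs and an estimated tempo can be extracted with the librosa library, as in the minimal sketch below; the file name is a placeholder.

    # Feature extraction sketch: timbre (MFCC) and tempo (beats per minute).
    import numpy as np
    import librosa

    y, sr = librosa.load("song.wav", sr=None)             # placeholder path

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, n_frames) timbre features
    mfcc_summary = mfcc.mean(axis=1)                       # one 13-dim vector per track

    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    print("MFCC summary:", np.round(mfcc_summary, 2))
    print("estimated tempo (BPM):", tempo)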

Statistics and machine learning

  • Computational methods for classification, clustering, and modelling — musical feature extraction for mono- and polyphonic music, similarity and pattern matching, retrieval
  • Formal methods and databases — applications of automated music identification and recognition, such as score following, automatic accompaniment, routing and filtering for music and music queries, query languages, standards and other metadata or protocols for music information handling and retrieval, multi-agent systems, distributed search
  • Software for music information retrieval — Semantic Web and musical digital objects, intelligent agents, collaborative software, web-based search and semantic retrieval, query by humming, acoustic fingerprinting
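
As a small illustration of the clustering side of this list, the sketch below groups hypothetical per-track feature vectors with k-means; the features are random placeholders, and any of the listed methods could be substituted.

    # Clustering sketch: group tracks by similarity of their feature vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    features = rng.normal(size=(150, 13))                 # placeholder per-track features

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
    print("cluster sizes:", np.bincount(kmeans.labels_))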

Other issues

  • Human-computer interaction and interfaces — multi-modal interfaces, user interfaces and usability, mobile applications, user behavior
  • Music perception, cognition, affect, and emotions — music similarity metrics, syntactical parameters, semantic parameters, musical forms, structures, styles and music annotation methodologies
  • Music archives, libraries, and digital collections — music digital libraries, public access to musical archives, benchmarks and research databases
  • Intellectual property rights and music — national and international copyright issues, digital rights management, identification and traceability
  • Sociology and Economy of music — music industry and use of MIR in the production, distribution, consumption chain, user profiling, validation, user needs and expectations, evaluation of music IR systems, building test collections, experimental design and metrics

Academic activity

Research in the field is organised principally around the International Society for Music Information Retrieval (ISMIR), which holds an annual ISMIR conference.

References

  1. ^ A. Klapuri and M. Davy, editors. Signal Processing Methods for Music Transcription. Springer-Verlag, New York, 2006.
  3. ^ David Moffat, David Ronan, and Joshua D Reiss. "An Evaluation of Audio Feature Extraction Toolboxes". In Proceedings of the International Conference on Digital Audio Effects (DAFx), 2016.
