Computer music
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs.[1][2]
History
Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship that has been noted since the Ancient Greeks described the "harmony of the spheres".
Musical melodies were first generated by a computer, the CSIR Mark 1 (later renamed CSIRAC), in Australia in 1950. Newspaper reports from America and England, both early and more recent, suggested that computers may have played music earlier, but thorough research has debunked these stories, as there is no evidence to support them (some of which were speculative). Research has shown that people speculated about computers playing music, possibly because computers made noises,[3] but there is no evidence that any actually did so.[4][5]
The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built by Trevor Pearcey and Maston Beard in the late 1940s. From the very early 1950s, mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded, but it has been accurately reconstructed.[6][7] In 1951 it publicly played the "Colonel Bogey March",[8] of which only the reconstruction exists. However, the CSIR Mark 1 played standard repertoire; it was not used to extend musical thinking or composition practice, as Max Mathews later did, which is current computer-music practice.
The first music to be performed in England was a rendition of the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, short extracts of three pieces were recorded there by a BBC outside broadcasting unit: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood". This is recognized as the earliest recording of a computer playing music, as the CSIRAC music was never recorded. The recording can be heard at the Manchester University site.[9] Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud.[10][11][6]
Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Amongst other pioneers, the chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet.[12] Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in Science.[13] The first professional composer to work with digital synthesis was James Tenney, who created a series of digitally synthesized and/or algorithmically composed pieces at Bell Labs using Mathews' MUSIC III system, beginning with Analog #1 (Noise Study) (1961).[14][15] After Tenney left Bell Labs in 1964, he was replaced by composer Jean-Claude Risset, who conducted research on the synthesis of instrumental timbres and composed Computer Suite from Little Boy (1968).
Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did operate in real time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days on multimillion-dollar computers to generate a few seconds or minutes of music.
Until now partial use has been exploited for musical research into the substance and form of sound (convincing examples are those of Hiller and Isaacson in Urbana, Illinois, US; Iannis Xenakis in Paris and Pietro Grossi in Florence, Italy).[18]
In May 1967 the first experiments in computer music in Italy were carried out by the S 2F M studio in Florence in collaboration with General Electric Information Systems Italy.
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.[25]
In Japan
In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer, resulting in a piece entitled TOSBAC Suite.
In the late 1970s these systems became commercialized, notably by systems like the Roland MC-8 Microcomposer, released in 1977, in which a microprocessor-based sequencer controls an analog synthesizer.
Advances
Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.[28]
Research
There is considerable activity in the field of computer music as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer and electronic music study and research, including the International Computer Music Association (ICMA), the Centre for Digital Music (C4DM), IRCAM, GRAME, SEAMUS (Society for Electro-Acoustic Music in the United States), the Canadian Electroacoustic Community (CEC), and a great number of institutions of higher learning around the world.
Music composed and performed by computers
Later, composers such as Gottfried Michael Koenig and Iannis Xenakis had computers generate the sounds of the composition as well as the score.
Computer-generated scores for performance by human players
Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present-day exponent of this technique is David Cope, whose computer programs analyze the works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell.[33][34][35]
Melomics, a research project from the University of Málaga (Spain), developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed a full album, also named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra".[36] The group has also developed an API for developers to use the technology, and makes its music available on its website.
Computer-aided algorithmic composition
Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design.[37]
Machine improvisation
Machine improvisation uses computer algorithms to create improvisations based on existing musical material. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. To achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern-matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection. This differs from other methods of improvising with computers that use algorithmic composition to generate new music without analyzing existing musical examples.[38]
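As a minimal, hypothetical sketch of this kind of recombination (not any particular system's method), the following Python builds a first-order Markov model of pitch transitions from one example phrase and random-walks it to produce a new phrase "in the style" of the input; the toy corpus and MIDI pitch numbers are invented for illustration:

```python
import random
from collections import defaultdict

def build_model(notes):
    """Map each pitch to the list of pitches that follow it in the corpus."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def improvise(model, start, length, rng=random.Random(0)):
    """Random-walk the transition model to recombine learned patterns."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:              # dead end: restart from any learned pitch
            choices = list(model)
        out.append(rng.choice(choices))
    return out

# A toy "existing musical example" as MIDI pitch numbers (C major motif).
corpus = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
model = build_model(corpus)
variation = improvise(model, start=60, length=12)
print(variation)  # a new phrase using only transitions heard in the corpus
```

Real systems replace the first-order pitch model with richer structures (variable-order models, factor oracles) and work on multi-dimensional features, but the re-injection idea is the same: every move the improviser makes was observed somewhere in the analyzed material.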
Statistical style modeling
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analyzing a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis' uses of stochastic processes.
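One simple way to build the "pattern dictionaries" mentioned above is incremental parsing in the spirit of LZ78, where each step extends the longest phrase already in the dictionary by one new symbol. This hedged sketch (toy melody invented for illustration) shows how repeated motifs yield progressively longer stored phrases, i.e. how the dictionary captures the sequence's redundancy:

```python
def lz78_parse(seq):
    """Incrementally parse a sequence into a phrase dictionary:
    each step extends the longest already-seen phrase by one symbol."""
    phrases = {(): 0}          # phrase -> index; () is the empty phrase
    current = ()
    out = []
    for sym in seq:
        candidate = current + (sym,)
        if candidate in phrases:
            current = candidate       # keep extending a known phrase
        else:
            phrases[candidate] = len(phrases)
            out.append(candidate)     # emit the newly learned phrase
            current = ()
    if current:
        out.append(current)
    return out

# Toy pitch sequence: the motif 60-62(-64) recurs and grows in the dictionary.
melody = [60, 62, 60, 62, 64, 60, 62, 64, 65]
print(lz78_parse(melody))
# [(60,), (62,), (60, 62), (64,), (60, 62, 64), (65,)]
```

Generation then recombines these stored phrases (for instance by treating dictionary continuations as Markov transitions), which is the link between compression-style parsing and statistical style modeling.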
Implementations
The first implementation of statistical style modeling was the LZify method in Open Music,[39] followed by the Continuator system, which implemented interactive machine improvisation by interpreting LZ incremental parsing in terms of Markov models and using it for real-time style modeling.
OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research into stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, and on research into improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group.[49] One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values to a discrete alphabet. This problem was solved in the Variable Markov Oracle (VMO), available as a Python implementation,[50] which uses an information-rate criterion to find the optimal or most informative representation.[51]
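The factor oracle used by these systems is an automaton built online over a symbol sequence, with forward transitions that replay the original and suffix links that allow jumps between repeated contexts. A compact sketch of the standard online construction (following Allauzen, Crochemore and Raffinot's algorithm; the input string stands in for an already-discretized feature sequence) might look like:

```python
def build_factor_oracle(seq):
    """Online factor-oracle construction: returns forward transitions
    and suffix links for a symbol sequence."""
    n = len(seq)
    trans = [dict() for _ in range(n + 1)]   # trans[i][symbol] -> state
    sfx = [-1] * (n + 1)                     # suffix links; state 0 has none
    for i, sym in enumerate(seq):
        trans[i][sym] = i + 1                # transition to the new state
        k = sfx[i]
        while k > -1 and sym not in trans[k]:
            trans[k][sym] = i + 1            # add missing forward transitions
            k = sfx[k]
        sfx[i + 1] = 0 if k == -1 else trans[k][sym]
    return trans, sfx

# Improvisation alternates forward moves (replaying the original) with
# jumps along suffix links (recombining states that share context).
trans, sfx = build_factor_oracle("abbab")
print(sfx)  # [-1, 0, 0, 2, 1, 2]
```

Navigating this structure at performance time, choosing between continuation and suffix-link jumps, is what produces the recombined, style-consistent output.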
Use of Artificial Intelligence
The use of artificial intelligence to generate new melodies[52] and to cover pre-existing music[53] is a recent phenomenon that has been reported to disrupt the music industry.[54]
Live coding
Live coding[55] (sometimes known as 'interactive programming', 'on-the-fly programming',[56] or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.[57]
See also
- Acousmatic music
- Adaptive music
- Csound
- Digital audio workstation
- Digital synthesizer
- Fast Fourier transform
- Human–computer interaction
- Laptronica
- List of music software
- Module file
- Music information retrieval
- Music notation software
- Music sequencer
- New Interfaces for Musical Expression
- Physical modeling synthesis
- Programming (music)
- Sampling (music)
- Sound and music computing
- Tracker
- Vaporwave
- Vocaloid
References
- ^ Curtis Roads, The Computer Music Tutorial, Boston: MIT Press, Introduction
- ^ Andrew J. Nelson, The Sound of Innovation: Stanford and the Computer Music Revolution, Boston: MIT Press, Introduction
- ^ "Algorhythmic Listening 1949–1962 Auditory Practices of Early Mainframe Computing". AISB/IACAP World Congress 2012. Archived from the original on 7 November 2017. Retrieved 18 October 2017.
- ^ Doornbusch, Paul (9 July 2017). "MuSA 2017 – Early Computer Music Experiments in Australia, England and the USA". MuSA Conference. Retrieved 18 October 2017.
- ^ a b Fildes, Jonathan (17 June 2008). "Oldest computer music unveiled". BBC News Online. Retrieved 18 June 2008.
- S2CID 10593824.
- ^ Doornbusch, Paul. "The Music of CSIRAC". Melbourne School of Engineering, Department of Computer Science and Software Engineering. Archived from the original on 18 January 2012.
- ^ "Media (Digital 60)". curation.cs.manchester.ac.uk. Retrieved 15 December 2023.
- ^ "First recording of computer-generated music – created by Alan Turing – restored". The Guardian. 26 September 2016. Retrieved 28 August 2017.
- ^ "Restoring the first recording of computer music – Sound and vision blog". British Library. 13 September 2016. Retrieved 28 August 2017.
- ISBN 0-313-22158-8. [page needed]
- ISBN 978-0-87930-628-1. Retrieved 4 December 2013.
- ^ Tenney, James. (1964) 2015. “Computer Music Experiences, 1961–1964.” In From Scratch: Writings in Music Theory. Edited by Larry Polansky, Lauren Pratt, Robert Wannamaker, and Michael Winter. Urbana: University of Illinois Press. 97–127.
- ^ Wannamaker, Robert, The Music of James Tenney, Volume 1: Contexts and Paradigms (University of Illinois Press, 2021), 48–82.
- ^ Cattermole, Tannith (9 May 2011). "Farseeing inventor pioneered computer music". Gizmag. Retrieved 28 October 2011.
In 1957 the MUSIC program allowed an IBM 704 mainframe computer to play a 17-second composition by Mathews. Back then computers were ponderous, so synthesis would take an hour.
- PMID 17738556.
The generation of sound signals requires very high sampling rates.... A high speed machine such as the I.B.M. 7090 ... can compute only about 5000 numbers per second ... when generating a reasonably complex sound.
- PMID 21138768.
- doi:10.5518/160/27.
- JSTOR 4617921.
- ^ "Music without Musicians but with Scientists Technicians and Computer Companies". 2019.
- S2CID 191383265.
- ISBN 978-0-19-533161-5.
- ^ a b Dean 2009, p. 1
- ISBN 978-0-262-68078-3.
- ^ Dean 2009, pp. 4–5: "... by the 90s ... digital sound manipulation (using MSP or many other platforms) became widespread, fluent and stable."
- JSTOR 3680818.
- ^ Tangian, Andranik (2003). "Constructing rhythmic canons" (PDF). Perspectives of New Music. 41 (2): 64–92. Retrieved 16 January 2021.
- ^ Tangian, Andranik (2010). "Constructing rhythmic fugues (unpublished addendum to Constructing rhythmic canons)". IRCAM, Seminaire MaMuX, 9 February 2002, Mosaïques et pavages dans la musique (PDF). Retrieved 16 January 2021.
- ^ Tangian, Andranik (2002–2003). "Eine kleine Mathmusik I and II". IRCAM, Seminaire MaMuX, 9 February 2002, Mosaïques et pavages dans la musique. Retrieved 16 January 2021.
- ^ Leach, Ben (22 October 2009). "Emily Howell: the computer program that composes classical music". The Daily Telegraph. Retrieved 6 October 2017.
- ^ Cheng, Jacqui (30 September 2009). "Virtual Composer Makes Beautiful Music and Stirs Controversy". Ars Technica.
- ^ Ball, Philip (1 July 2012). "Iamus, classical music's computer composer, live from Malaga". The Guardian. Archived from the original on 25 October 2013. Retrieved 15 November 2021.
- ^ "Computer composer honours Turing's centenary". New Scientist. 5 July 2012.
- ^ Christopher Ariza: An Open Design for Computer-Aided Algorithmic Music Composition, Universal-Publishers Boca Raton, Florida, 2005, p. 5
- ^ Mauricio Toro, Carlos Agon, Camilo Rueda, Gerard Assayag. "GELISP: A Framework to Represent Musical Constraint Satisfaction Problems and Search Strategies", Journal of Theoretical and Applied Information Technology 86, no. 2 (2016): 327–331.
- ISBN 978-3-540-66694-3. Retrieved 4 December 2013.
Lecture Notes in Computer Science 1725
- ^ "Using factor oracles for machine improvisation", G. Assayag, S. Dubnov, (September 2004) Soft Computing 8 (9), 604–610
- ^ "Memex and composer duets: computer-aided composition using style mixing", S. Dubnov, G. Assayag, Open Music Composers Book 2, 53–66
- ^ G. Assayag, S. Dubnov, O. Delerue, "Guessing the Composer's Mind : Applying Universal Prediction to Musical Style", In Proceedings of International Computer Music Conference, Beijing, 1999.
- ^ ":: Continuator". Archived from the original on 1 November 2014. Retrieved 19 May 2014.
- ^ Pachet, F., The Continuator: Musical Interaction with Style Archived 14 April 2012 at the Wayback Machine. In ICMA, editor, Proceedings of ICMC, pages 211–218, Göteborg, Sweden, September 2002. ICMA.
- ^ Pachet, F. Playing with Virtual Musicians: the Continuator in practice Archived 14 April 2012 at the Wayback Machine. IEEE MultiMedia,9(3):77–82 2002.
- ^ M. Toro, C. Rueda, C. Agón, G. Assayag. "NTCCRT: A concurrent constraint framework for soft-real time music interaction." Journal of Theoretical & Applied Information Technology, vol. 82, issue 1, pp. 184–193. 2015
- ^ "The OMax Project Page". omax.ircam.fr. Retrieved 2 February 2018.
- ^ Wang, C.; Dubnov, S. "Guided music synthesis with variable markov oracle", Tenth Artificial Intelligence and Interactive Digital Entertainment Conference, 2014.
- ^ "Turn ideas into music with MusicLM". Google. 10 May 2023. Retrieved 22 September 2023.
- ^ "Pick a voice, any voice: Voicemod unleashes "AI Humans" collection of real-time AI voice changers". Tech.eu. 21 June 2023. Retrieved 22 September 2023.
- ^ "'Regulate it before we're all finished': Musicians react to AI songs flooding the internet". Sky News. Retrieved 22 September 2023.
- S2CID 56413136.
- ^ Wang G. & Cook P. (2004) "On-the-fly Programming: Using Code as an Expressive Musical Instrument", In Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME) (New York: NIME, 2004).
- S2CID 62735944.
Further reading
- Ariza, C. 2005. "Navigating the Landscape of Computer-Aided Algorithmic Composition Systems: A Definition, Seven Descriptors, and a Lexicon of Systems and Research." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association. 765–772.
- Ariza, C. 2005. An Open Design for Computer-Aided Algorithmic Music Composition: athenaCL. PhD Dissertation, New York University.
- ISBN 978-0-262-52261-8. Archived from the original on 2 January 2010. Retrieved 3 October 2009.
- Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, New Jersey: Prentice Hall.
- Journal of the Audio Engineering Society 21, no. 7: 526–534.
- ISBN 978-0-470-71455-3.
- Dodge, Charles; Jerse, Thomas A. (1997). Computer Music: Synthesis, Composition and Performance (2nd ed.). New York: Schirmer Books. p. 453. ISBN 978-0-02-864682-4.
- Doornbusch, P. 2015. "A Chronology / History of Electronic and Computer Music and Related Events 1906–2015 Archived 18 August 2020 at the Wayback Machine"
- Heifetz, Robin (1989). On the Wires of Our Nerves. Lewisburg, Pennsylvania: Bucknell University Press. ISBN 978-0-8387-5155-8.
- S2CID 3483927.
- Manning, Peter (2004). Electronic and Computer Music (revised and expanded ed.). Oxford Oxfordshire: Oxford University Press. ISBN 978-0-19-517085-6.
- Perry, Mark, and Thomas Margoni. 2010. "From Music Tracks to Google Maps: Who Owns Computer-Generated Works?". Computer Law & Security Review 26: 621–629.
- ISBN 978-0-262-68082-0.
- Supper, Martin (2001). "A Few Remarks on Algorithmic Composition". S2CID 21260852.
- ISBN 978-1-57647-079-4.