Music technology (electronic and digital)

This 2009 photo shows music production using a digital audio workstation (DAW) with multi-monitor set-up.

Digital music technology encompasses the use of digital instruments, computers, electronic effects units, software, and digital audio equipment by a performer, composer, sound engineer, DJ, or record producer to produce, perform[1] or record music. The term refers to electronic devices, instruments, computer hardware, and software used in the performance, playback, recording, composition, mixing, analysis, and editing of music.

Education

Professional training

Courses in music technology are offered at many universities as part of degree programs focusing on performance, composition, and music research at the undergraduate and graduate levels. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production."[2] Those wishing to develop new music technologies often train to become audio engineers working in R&D.[3] Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in computer programming, computer hardware design, acoustics, record producing or other fields.

Use of music technology in education

Digital music technologies are widely used to assist in music education for training students in the home, elementary school, middle school, high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities. Courses in music notation software and basic manipulation of audio and MIDI can be part of a student's core requirements for a music degree. Mobile and desktop applications are available to aid the study of music theory and ear training. Digital pianos, such as those offered by Roland, provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals.[4]

History

The development of digital music technologies can be traced back to the analog music technologies of the early 20th century, such as the electromechanical Hammond organ, which was invented in 1935. In the 2010s, the range of music technology greatly expanded, and it may now be electronic, digital, software-based or even purely conceptual.

Early pioneers included Luigi Russolo, Halim El-Dabh,[5] Pierre Schaeffer, Pierre Henry, Edgard Varèse, Karlheinz Stockhausen, Ikutaro Kakehashi,[6] King Tubby,[7] and others who manipulated sounds using tape machines—splicing tape and changing its playback speed to alter pre-recorded samples. Pierre Schaeffer is credited with inventing this method of composition, known as musique concrète, in 1948 in Paris. In this style of composition, existing material is manipulated to create new timbres.[8] Musique concrète contrasts with a later style that emerged in the mid-1950s in Cologne, Germany, known as elektronische Musik. This style, associated with Karlheinz Stockhausen, involves creating new sounds without the use of pre-existing material. Unlike musique concrète, which primarily focuses on timbre, elektronische Musik focuses on structure.[9] The influence of these two styles persists in today's music and music technology. The concept of the software digital audio workstation is an emulation of the traditional recording studio: colored strips, known as regions, can be spliced, stretched, and re-ordered, analogous to tape. Similarly, software representations of classic synthesizers emulate their analog counterparts.

Digital synthesizer history

Through the 1970s and 1980s, Japanese synthesizer manufacturers such as Yamaha Corporation, Roland Corporation, Korg and Kawai produced synthesizers more affordable than those made in America.[10] Yamaha's DX7 was one of the first mass-market, relatively inexpensive synthesizer keyboards. The DX7, an FM synthesis-based digital synthesizer manufactured from 1983 to 1989, was the first commercially successful digital synthesizer.[11][12][13] Its distinctive sound can be heard on many recordings, especially pop music from the 1980s. The monotimbral, 16-note polyphonic DX7 was the moderately priced model of the DX series of keyboard synthesizers. Over 200,000 units of the original DX7 were made,[13][14][15] and it remains one of the best-selling synthesizers of all time.[12][16] The most iconic bass synthesizer is the Roland TB-303, widely used in acid house music. Other classic synthesizers include the Moog Minimoog, ARP Odyssey, Yamaha CS-80, Korg MS-20, Sequential Circuits Prophet-5, Fairlight CMI, PPG Wave, Roland Alpha Juno, Nord Modular and Korg M1.[17]

MIDI history

MIDI was unveiled at the 1983 NAMM show in Los Angeles. A demonstration at the convention showed two previously incompatible analog synthesizers, the Sequential Circuits Prophet 600 and the Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both. This was a major breakthrough in the 1980s, as it allowed synthesizers to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers. The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.
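
At the byte level, a MIDI message is compact: a status byte identifying the message type and channel, followed by one or two data bytes. The sketch below builds a standard three-byte Note On message in Python; the helper function is purely illustrative and not tied to any particular device or library.

```python
# Minimal sketch: constructing a raw MIDI Note On message as three bytes.
# Status byte 0x90 means "Note On on channel 1"; channels 1-16 map to 0x90-0x9F.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Return the 3-byte MIDI Note On message for the given channel (1-16)."""
    status = 0x90 | (channel - 1)          # upper nibble: message type, lower nibble: channel
    return bytes([status, note & 0x7F, velocity & 0x7F])  # data bytes are 7-bit (0-127)

# Middle C (note number 60) at moderate velocity on channel 1:
msg = note_on(1, 60, 100)
print(" ".join(f"{b:02x}" for b in msg))   # -> 90 3c 64
```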

In 1985, several of the top keyboard manufacturers created the MIDI Manufacturers Association (MMA). This newly founded association standardized the MIDI protocol by generating and disseminating all of the documents related to it. With the development of the Standard MIDI File specification by Opcode, MIDI sequencer software from different music software companies could read and write each other's files.

Since the 1980s, personal computers have developed into an ideal platform for exploiting the vast potential of MIDI, creating a large consumer market for MIDI-equipped electronic keyboards and for software such as MIDI sequencers and digital audio workstations. With the universal MIDI protocol, electronic keyboards, sequencers, and drum machines can all be connected together.

Computer music history

The convergence of computer and synthesizer technology changed the way music is made, and it remains one of the fastest-changing aspects of music technology today. Max Mathews, a telecommunications engineer at Bell Telephone Laboratories' Acoustic and Behavioral Research Department, was responsible for some of the first digital music technology in the 1950s. Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion.

At Bell Laboratories, Mathews conducted research to improve telecommunications quality for long-distance phone calls. Owing to long distances and low bandwidth, audio quality over phone calls across the United States was poor. Mathews therefore devised a method in which sound was synthesized by computer on the distant end rather than transmitted. Mathews was an amateur violinist, and during a conversation with his superior at Bell Labs, John Pierce, Pierce posed the idea of synthesizing music with a computer, since Mathews had already synthesized speech. Mathews agreed and, beginning in the 1950s, wrote a series of programs known as MUSIC. MUSIC worked from two files: an orchestra file containing data telling the computer how to synthesize sound, and a score file instructing the program what notes to play using the instruments defined in the orchestra file. Mathews wrote five iterations of MUSIC, calling them MUSIC I through MUSIC V. As the program was adapted and expanded to run on various platforms, its name changed to reflect each revision, and the series became known as the MUSIC-N paradigm. The concept of MUSIC lives on today in the form of Csound.[18]
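
The orchestra/score split described above survives today in Csound. The sketch below restates the idea in plain Python rather than in MUSIC or Csound itself: the "orchestra" is a function that knows how to synthesize a tone, the "score" is a list of note events, and a render loop mixes the two into an audio file. The instrument, note values, and the output file name demo.wav are invented for illustration.

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

# "Orchestra": one instrument, a sine oscillator with a linear fade-out.
def sine_instrument(freq, dur, amp=0.3):
    n = int(SR * dur)
    return [amp * (1 - i / n) * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

# "Score": (start time in s, duration in s, frequency in Hz) for each note (C4, E4, G4).
score = [(0.0, 0.5, 261.63), (0.5, 0.5, 329.63), (1.0, 1.0, 392.00)]

# Render: mix every score event into one output buffer.
total = int(SR * max(start + dur for start, dur, _ in score))
out = [0.0] * total
for start, dur, freq in score:
    offset = int(SR * start)
    for i, s in enumerate(sine_instrument(freq, dur)):
        out[offset + i] += s

# Write the mix as a 16-bit mono WAV file.
with wave.open("demo.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(max(-1, min(1, s)) * 32767)) for s in out))
```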

In the late 1980s, Mathews worked as an advisor to IRCAM (Institut de Recherche et Coordination Acoustique/Musique, or Institute for Research and Coordination in Acoustics/Music). There he taught the researcher Miller Puckette, who developed a program in which music could be programmed graphically. The program could transmit and receive MIDI messages to generate interactive music in real time. Inspired by Mathews, Puckette named the program Max. Later, a researcher named David Zicarelli visited IRCAM, saw the capabilities of Max and felt it could be developed further. He took a copy of Max with him when he left and eventually added capabilities for processing audio signals. Zicarelli named this new part of the program MSP after Miller Puckette. Zicarelli developed the commercial version of Max/MSP and sold it through his company, Cycling '74, beginning in 1997. The company has since been acquired by Ableton.[18]

The first generation of professional, commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that cost a great deal of money when they first appeared, ranging from $25,000 to $200,000.[19] The two most popular were the Fairlight and the Synclavier.

It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analogue synthesizers to be controlled by a MIDI sequencer.[20]

Reduced prices of personal computers caused the masses to turn away from the more expensive workstations. Advancements in technology have increased the speed of hardware processing and the capacity of memory units, and software developers continue to write new, more powerful programs for sequencing, recording, notating, and mastering music.

Vocal synthesis history

Coinciding with the history of computer music is the history of vocal synthesis. Before Max Mathews synthesized speech with a computer, analog devices were used to recreate speech. In the 1930s, an engineer named Homer Dudley invented the VODER (Voice Operation Demonstrator), an electro-mechanical device that generated a sawtooth wave and white noise. Various parts of the frequency spectrum of the waveforms could be filtered to generate the sounds of speech. Pitch was modulated via a bar on a wrist strap worn by the operator.[21] In the 1940s, Dudley invented the VOCODER (Voice Operated Coder). Rather than synthesizing speech from scratch, this machine operated by accepting incoming speech and breaking it into its spectral components. In the late 1960s and early 1970s, bands and solo artists began using the vocoder to blend speech with notes played on a synthesizer.[22]

Meanwhile, at Bell Laboratories, Mathews worked with researchers John Kelly and Carol Lochbaum to develop a model of the vocal tract and study how its properties contributed to speech generation. Using the vocal tract model, Mathews used linear predictive coding (LPC)—a method in which a computer estimates the formants and spectral content of each word based on information about the vocal model, including various applied filters representing the vocal tract—to make a computer (an IBM 7094) sing for the first time in 1961, with a rendition of "Bicycle Built for Two".[23]
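
The core idea of linear prediction can be stated compactly: each speech sample is approximated as a weighted sum of the previous p samples, and the weights, which together describe the vocal-tract filter, are chosen to minimize the prediction error. The following is a standard textbook formulation, given here for orientation rather than taken from the cited source:

```latex
% Linear predictive coding: predict sample s[n] from the previous p samples.
\hat{s}[n] = \sum_{k=1}^{p} a_k \, s[n-k], \qquad e[n] = s[n] - \hat{s}[n]
```

The coefficients a_k are fitted so that the total squared error over a short frame of speech is as small as possible; resynthesis runs the process in reverse, driving the fitted filter with a pitched or noisy excitation.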

In the 1970s at IRCAM in France, researchers developed a piece of software called CHANT (French for "singing"). CHANT was based on FOF (Fonction d'Onde Formantique, or formant wave function) synthesis, in which the peak frequencies of a sound are created and shaped using granular synthesis, as opposed to filtering frequencies to create speech.[24]

Through the 1980s and 1990s as MIDI devices became commercially available, speech was generated by mapping MIDI data to samples of the components of speech stored in sample libraries.[25]

Synthesizers and drum machines

An early Minimoog synthesizer by R.A. Moog Inc. from 1970.

A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocals, natural sounds, etc.) or generate new electronic timbres that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules and are controlled using a controller device.

Synthesizers use various methods to generate a signal. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis. Less common synthesis types include subharmonic synthesis, a form of additive synthesis via subharmonics (used by the Mixtur-Trautonium), and granular synthesis, a sample-based technique built from grains of sound that generally results in soundscapes or sound clouds. In the 2010s, synthesizers are used in many genres of pop, rock and dance music, and contemporary classical composers of the 20th and 21st centuries have written compositions for synthesizer.
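
Frequency modulation (FM) synthesis, listed above and central to instruments such as the DX7, can be illustrated in a few lines of Python: a modulating oscillator varies the phase of a carrier oscillator, and the modulation index controls how bright the result sounds. This is a minimal sketch of the general technique, not a model of any particular instrument; the function name and parameter values are invented for the example.

```python
import math

SR = 44100  # sample rate in Hz

def fm_tone(carrier_hz, ratio, index, dur):
    """Simple two-operator FM: modulator frequency = carrier * ratio,
    modulation index = peak phase deviation in radians."""
    n = int(SR * dur)
    mod_hz = carrier_hz * ratio
    return [
        math.sin(2 * math.pi * carrier_hz * i / SR
                 + index * math.sin(2 * math.pi * mod_hz * i / SR))
        for i in range(n)
    ]

# A bell-like tone: non-integer frequency ratio and a fairly high modulation index.
samples = fm_tone(440.0, 1.4, 5.0, 1.0)
```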

Drum machines

A Yamaha RY30 Drum Machine

A drum machine is an electronic musical instrument designed to imitate the sound of drums, cymbals, other percussion instruments, and often basslines. Drum machines either play back prerecorded samples of drums and cymbals or synthesize re-creations of drum and cymbal sounds, in a rhythm and tempo programmed by a musician. Drum machines are most commonly associated with electronic dance music genres such as house music, but are also used in many other genres. They are also used when session drummers are not available or when a production cannot afford the cost of a professional drummer. In the 2010s, most modern drum machines are sequencers with a sample playback (rompler) or synthesizer component that specializes in the reproduction of drum timbres. Though features vary from model to model, many modern drum machines can also produce unique sounds and allow the user to compose unique drum beats and patterns.
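
In software terms, the pattern programming described above amounts to a step sequencer: a grid of on/off steps per sound, scanned at a rate derived from the tempo. The sketch below is a toy illustration in Python; the instrument names are placeholders and the print statement stands in for actually triggering samples.

```python
import time

BPM = 120
STEPS_PER_BEAT = 4                      # 16th-note grid
step_seconds = 60 / BPM / STEPS_PER_BEAT

# One bar of 16 steps per instrument; 1 = trigger the sound, 0 = silence.
pattern = {
    "kick":  [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
    "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
    "hat":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0],
}

def play_bar(pattern):
    for step in range(16):
        hits = [name for name, row in pattern.items() if row[step]]
        print(f"step {step:2d}: {' + '.join(hits) or '-'}")  # stand-in for triggering samples
        time.sleep(step_seconds)

play_bar(pattern)
```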

Electro-mechanical drum machines were first developed in 1949, with the invention of the Chamberlin Rhythmate. Transistorized electronic drum machines later appeared in the 1960s. The Ace Tone Rhythm Ace, created by Ikutaro Kakehashi, began appearing in popular music from the late 1960s, followed in the early 1970s by drum machines from Korg and from Kakehashi's later company, Roland Corporation.[26] Sly and the Family Stone's 1971 album There's a Riot Goin' On helped to popularize the sound of early drum machines, along with Timmy Thomas' 1972 R&B hit "Why Can't We Live Together" and George McCrae's 1974 disco hit "Rock Your Baby", which used early Roland rhythm machines.[27]

Early drum machines sounded drastically different from the drum machines that reached peak popularity in the 1980s and defined an entire decade of pop music. The most iconic drum machine was the Roland TR-808, widely used in hip hop and dance music. Other classic drum machines include the Alesis HR-16, Korg Mini Pops 120, E-mu SP-12, Elektron SPS1 Machinedrum, Roland CR-78, PAiA Programmable Drum Set, LinnDrum, Roland TR-909 and Oberheim DMX.[28]

Sampling technology

Digital sampling technology, introduced in the 1980s, has become a staple of music production since the 2000s. Devices that use sampling record a sound digitally (often a musical instrument, such as a piano or flute, being played) and replay it when a key or pad on a controller device (e.g., an electronic keyboard or electronic drum pad) is pressed or triggered. Samplers can alter the sound using various audio effects and audio processing. Sampling has its roots in France, in the sound experiments carried out by musique concrète practitioners.
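
One basic operation behind sample playback is transposition by changing the playback rate: reading the same recording faster raises its pitch (and shortens it), while reading it slower does the opposite. The Python sketch below shows the idea with naive linear interpolation; it is an illustration of the principle only and ignores the anti-aliasing filtering that real samplers apply.

```python
def repitch(samples, semitones):
    """Resample a recorded sound so it plays back `semitones` higher (or lower).
    Uses naive linear interpolation between neighbouring samples."""
    rate = 2 ** (semitones / 12)           # playback-rate ratio for the transposition
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out

# Example: a short ramp "recording" played back a fifth (7 semitones) higher.
recording = [i / 100 for i in range(100)]
higher = repitch(recording, 7)             # shorter and higher-pitched than the original
```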

In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and were used only by top recording studios and musicians, putting them out of the price range of most artists. Early samplers include the 12-bit Toshiba LMD-649[29] and the 8-bit Emulator I, both from 1981. The latter's successor, the Emulator II (released in 1984), listed for $8,000.[19] Later samplers, such as the Kurzweil K2000 and K2500, also carried high price tags.

The first affordable sampler, the Akai S612, became available in the mid-1980s and retailed for US$895. Other companies soon released affordable samplers, including the Ensoniq Mirage and Oberheim DPX-1, with more from Korg, Casio, Yamaha, and Roland. Important hardware samplers include the Akai Z4/Z8, Ensoniq ASR-10, Roland V-Synth, Casio FZ-1, Kurzweil K250, Akai MPC60, Ensoniq Mirage, Akai S1000, E-mu Emulator, and Fairlight CMI.[30]

One of the biggest uses of sampling technology was by hip-hop DJs and performers in the 1980s. Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash to manually repeat certain parts of a song by juggling between two separate turntables. This can be considered an early precursor of sampling. This turntablism technique, in turn, originated in Jamaican dub music in the 1960s and was introduced to American hip hop in the 1970s.

Since the 2000s, most professional recording studios have used digital technologies. In recent years, many samplers have been entirely digital. This new generation of digital samplers is capable of reproducing and manipulating sounds. Digital sampling plays an integral part in some genres of music, such as hip-hop and trap. Advanced sample libraries have made it possible to produce complete performances of orchestral compositions that sound similar to a live performance.[10] Modern sound libraries give musicians the ability to use the sounds of almost any instrument in their productions.

MIDI

Several rack-mounted synthesizers that share a single controller
MIDI allows multiple instruments to be played from a single controller (often a keyboard, as pictured here), which makes stage setups much more portable. This system fits into a single rack case, but prior to the advent of MIDI, it would have required four separate, heavy, full-size keyboard instruments, plus outboard mixing and effects units.

MIDI has been the musical instrument industry standard interface since the 1980s through to the present day.[6] It dates back to June 1981, when Roland Corporation founder Ikutaro Kakehashi proposed the concept of standardization between different manufacturers' instruments as well as computers, to Oberheim Electronics founder Tom Oberheim and Sequential Circuits president Dave Smith. In October 1981, Kakehashi, Oberheim and Smith discussed the concept with representatives from Yamaha, Korg and Kawai.[31] In 1983, the MIDI standard was unveiled by Kakehashi and Smith.[32][33]

Some universally accepted varieties of MIDI software applications include music instruction software, MIDI sequencing software, music notation software, hard disk recording/editing software, patch editor/sound library software, computer-assisted composition software, and virtual instruments. Current developments in computer hardware and specialized software continue to expand MIDI applications.

Computers in music technology

Digital audio workstation software, such as Pro Tools, Logic, and many others, has gained popularity among the vast array of contemporary music technology in recent years. Such programs allow the user to record acoustic sounds with a microphone or record a software instrument, which may then be layered and organized along a timeline and edited on a computer's flat-panel display. Recorded segments can be copied and duplicated ad infinitum without any loss of fidelity or added noise (a major contrast with analog recording, in which every copy leads to a loss of fidelity and added noise). Digital music can be edited and processed using a multitude of audio effects. Contemporary classical music sometimes uses computer-generated sounds—either pre-recorded or generated and manipulated live—in conjunction with, or juxtaposed against, acoustic instruments like the cello or violin. Music is scored with commercially available notation software.[34]

In addition to digital audio workstations and music notation software, which facilitate the creation of fixed media (material that does not change each time it is performed), software facilitating interactive or generative music continues to emerge. Composition based on conditions or rules (algorithmic composition) has given rise to software that can automatically generate music from input conditions or rules, so that the resulting music evolves each time the conditions change. Examples of this technology include software designed for writing music for video games—where music evolves as a player advances through a level or when certain characters appear—or music generated by artificial intelligence trained to convert biometrics such as EEG or ECG readings into music.[35] Because this music is based on user interaction, it will be different each time it is heard. Other examples of generative music technology include the use of sensors connected to a computer and artificial intelligence to generate music based on captured data, such as environmental factors, the movements of dancers, or physical inputs from a digital device such as a mouse or game controller. Software applications offering capabilities for generative and interactive music include SuperCollider, Max/MSP/Jitter, and Processing. Interactive music is made possible through physical computing, where data from the physical world affects a computer's output and vice versa.[18]
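
As a toy illustration of the rule-based approach described above, the Python sketch below derives a melody from a single external condition: an "intensity" value, standing in for a game state or sensor reading, controls how large the melodic leaps may be, so the same rules yield different music as the input changes. The scale, function name, and parameters are invented for the example and are not drawn from any of the tools named above.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]    # MIDI note numbers, C4 to C5

def generate_melody(length, intensity):
    """Rule-based generation: higher intensity (0.0-1.0) allows larger leaps."""
    max_leap = 1 + int(intensity * 4)          # leap limit in scale degrees
    idx = 0
    melody = []
    for _ in range(length):
        melody.append(C_MAJOR[idx])
        leap = random.randint(-max_leap, max_leap)
        idx = max(0, min(len(C_MAJOR) - 1, idx + leap))
    return melody

# The same rules give a calmer or busier line depending on the input condition.
print(generate_melody(8, intensity=0.1))
print(generate_melody(8, intensity=0.9))
```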

Vocal synthesis

In the 2010s, vocal synthesis technology has taken advantage of recent advances in artificial intelligence, such as deep listening and machine learning, to better represent the nuances of the human voice. New high-fidelity sample libraries combined with digital audio workstations facilitate editing in fine detail, such as shifting of formants, adjustment of vibrato, and adjustments to vowels and consonants. Sample libraries for various languages and accents are available. With today's advancements in vocal synthesis, artists sometimes use sample libraries in lieu of backing singers.[36]

Timeline

  • 1917 : Leon Theremin invented the prototype of the Theremin
  • 1944 : Halim El-Dabh produces earliest electroacoustic tape music[37][5]
  • 1952 : Harry F. Olson and Herbert Belar invent the RCA Synthesizer
  • 1952 : Osmand Kendal develops the Composer-Tron for the Marconi Wireless Company
  • 1956 : Raymond Scott develops the Clavivox
  • 1958 : Evgeny Murzin along with several colleagues create the ANS synthesizer
  • 1959 : Wurlitzer manufactures The Sideman, the first commercial electro-mechanical drum machine
  • 1963 : Keio Electronics (later Korg) produces the DA-20
  • 1963 : The Mellotron starts to be manufactured in London
  • 1964 : Ikutaro Kakehashi debuts Ace Tone R-1 Rhythm Ace, the first electronic drum[26][38][39]
  • 1964 : The Moog Synthesizer is released
  • 1965 : Nippon Columbia patents an early electronic drum machine[40]
  • 1966 : Korg releases Donca-Matic DE-20, an early electronic drum machine[41]
  • 1967 : Ace Tone releases FR-1 Rhythm Ace, the first drum machine to enter popular music[42]
  • 1967 : First PCM recorder developed by NHK[43]
  • 1968 : King Tubby pioneers dub music, an early form of popular electronic music[7]
  • 1969 : Matsushita engineer Shuichi Obata invents first direct-drive turntable, Technics SP-10[44]
  • 1970 : ARP 2600 is manufactured
  • 1973 : Yamaha releases the GX-1,[45] the first polyphonic synthesizer[46]
  • 1974 : Yamaha builds its first digital synthesizer[47]
  • 1977 : Roland releases the MC-8, an early microprocessor-driven CV/Gate digital sequencer[26][20]
  • 1978 : Roland releases CR-78, the first microprocessor-driven drum machine[26]
  • 1979 : Casio releases VL-1,[48] the first commercial digital synthesizer[49]
  • 1980 : Roland releases TR-808, the most widely used drum machine in popular music[50]
  • 1980 : Roland introduces DCB protocol and DIN interface with TR-808
  • 1980 : Yamaha releases GS-1, the first FM digital synthesizer
  • 1980 : Kazuo Morioka creates Firstman SQ-01, the first bass synth with a sequencer[51][52][53]
  • 1981 : Roland releases TB-303, a bass synthesizer that lays foundations for acid house music[54]
  • 1981 : Toshiba's LMD-649, the first PCM digital sampler, introduced with Yellow Magic Orchestra's Technodelic[29]
  • 1982 : Sony and Philips introduce compact disc
  • 1982 : First MIDI synthesizers released, Roland Jupiter-6 and Prophet 600[55]
  • 1983 : Introduction of MIDI
  • 1983 : Roland releases MSQ-700, the first MIDI sequencer[56]
  • 1983 : Roland releases TR-909, the first MIDI drum machine[55]
  • 1983 : Roland releases MC-202, the first groovebox[57]
  • 1983 : Yamaha releases DX7, the first commercially successful digital synthesizer[13]
  • 1985 : Akai releases the Akai S612, a digital sampler
  • 1986 : The first digital consoles appear
  • 1987 : Digidesign markets Sound Tools
  • 1988 : Akai introduces the MPC (MIDI Production Center) series of digital samplers
  • 1994 : Yamaha unveils the ProMix 01

References

  1. ^ m:tech educational services. "What is Music Technology?". Archived from the original on 24 January 2011. Retrieved 20 June 2013.
  2. ^ "Music Technology - NYU Steinhardt". steinhardt.nyu.edu. Retrieved 17 April 2018.
  3. ^ wiseGeek. "What Is Audio Engineering?". Retrieved 17 May 2013.
  4. ^ Wise, Stuart; Greenwood, Janinka; Davis, Niki (July 2011). "Teachers' use of digital technology in secondary music education: illustrations of changing classrooms". British Journal of Music Education. 28 (2): 117–134. doi:10.1017/S0265051711000039. ISSN 0265-0517. S2CID 145627220.
  5. ^ a b Holmes, Thom (2008). "Early Synthesizers and Experimenters". Electronic and experimental music: technology, music, and culture (3rd ed.). Taylor & Francis. p. 156. ISBN 978-0-415-95781-6. Retrieved 2011-06-04.
  6. ^ a b The life and times of Ikutaro Kakehashi, the Roland pioneer modern music owes everything to, Fact
  7. ^ a b Michael Veal (2013), Dub: Soundscapes and Shattered Songs in Jamaican Reggae, pages 26-44, "Electronic Music in Jamaica", Wesleyan University Press
  8. ^ "Schaeffer, Pierre | Grove Music". www.oxfordmusiconline.com. doi:10.1093/gmo/9781561592630.article.24734. ISBN 978-1-56159-263-0. Retrieved 2019-10-01.
  9. ^ Toop, Richard (2001). "Stockhausen, Karlheinz | Grove Music". www.oxfordmusiconline.com. doi:10.1093/gmo/9781561592630.article.26808. ISBN 978-1-56159-263-0. Retrieved 2019-10-01.
  10. ^ a b Campbell, Murray; Greated, Clive; Myers, Arnold. Musical Instruments. New York: Oxford University Press.
  11. ^ Edmondson, Jacqueline, ed. (2013). Music in American Life: An Encyclopedia of the Songs, Styles, Stars, and Stories that Shaped our Culture [4 volumes]. ABC-CLIO. p. 398. ISBN 9780313393488. In 1967, John Chowning, at Stanford University, accidentally discovered frequency modulation (FM) synthesis when experimenting with extreme vibrato effects in MUSIC-V. ... By 1971 he was able to use FM synthesis to synthesizer musical instrument sounds, and this technique was later used to create the Yamaha DX synthesizer, the first commercially successful digital synthesizer, in the early 1980s.
  12. ^ a b Shepard, Brian K. (2013). Refining Sound: A Practical Guide to Synthesis and Synthesizers. Oxford University Press. ISBN 9780199376681. The first digital synthesizer to make it into the studios of everyone else, the Yamaha DX7, became one of the most commercially successful synthesizers of all time.
  13. ^ a b c Pinch, T. J.; Bijsterveld, Karin (July 2003). ""Should One Applaud?" Breaches and Boundaries in the Reception of New Technology in Music". Technology and Culture. 44 (3): 536–559. doi:10.1353/tech.2003.0126. S2CID 132403480. By the time the first commercially successful digital instrument, the Yamaha DX7 (lifetime sales of two hundred thousand), appeared in 1983 ... (Note: this sales figure appears to refer to the whole DX series.)
  14. ^ Johnstone, Robert. "The sound of one chip clapping: Yamaha and FM synthesis". MIT Japan Program: Science, Technology, Management. Center for International Studies, Massachusetts Institute of Technology. MIT JP 94-09.
  15. ^ "NAMM 2015: Yamaha Vintage Synth Museum Tour". sonicstate.com. Retrieved 17 April 2018.
  16. ^ Holmes, Thom (2008). "Early Computer Music". Electronic and experimental music: technology, music, and culture (3rd ed.). Taylor & Francis. p. 257. ISBN 978-0415957816. Retrieved 2011-06-04.
  17. ^ Twells, John. "The 14 Synthesizers that Shaped Modern Music". Fact Music News. Archived from the original on 2014-03-06. Retrieved December 8, 2015.
  18. ^ a b c Strawn, John; Shockley, Alan (2014). "Computers and music | Grove Music". www.oxfordmusiconline.com. doi:10.1093/gmo/9781561592630.article.A2256184. ISBN 978-1-56159-263-0. Retrieved 2019-10-01.
  19. ^ a b Kettlewell, Ben (2002). Electronic Music Pioneers. USA: Pro Music Press.
  20. ^ a b c Russ, Martin (2012). Sound Synthesis and Sampling. CRC Press. p. 192. ISBN 978-1136122149. Retrieved 26 April 2017.
  21. ^ Grundhauser, Eric (2017-01-16). "The Voder, the First Machine to Create Human Speech". Atlas Obscura. Retrieved 2019-10-01.
  22. ^ Gale, Dave (2018-07-18). "The History of the Vocoder - Putting It Into Words". MusicTech. Retrieved 2019-10-01.
  23. ^ "Singing Kelly-Lochbaum Vocal Tract". ccrma.stanford.edu. Retrieved 2019-10-01.
  24. ^ Rodet, Xavier; Potard, Yves; Barriere, Jean-Baptiste (1984). "The CHANT Project: From the Synthesis of the Singing Voice to Synthesis in General". Computer Music Journal. 8 (3): 15. doi:10.2307/3679810. JSTOR 3679810. S2CID 15320133.
  25. ^ Macon, Michael; Jensen-Link, Leslie; George, E. Bryan; Oliverio, James; Clements, Mark (1997-09-01). "Concatenation-Based MIDI-to-Singing Voice Synthesis". Audio Engineering Society. Cite journal requires |journal= (help)
  26. ^ a b c d Reid, Gordon (2004), "The History Of Roland Part 1: 1930–1978", Sound on Sound (November), retrieved 19 June 2011
  27. ^ Mike Collins (2014), In the Box Music Production: Advanced Tools and Techniques for Pro Tools, page 320, CRC Press
  28. ^ Felton, David (August 2012). "Top Ten Classic Drum Machines". Attack Magazine. Retrieved December 8, 2015.
  29. ^ a b Rockin'f, March 1982, pages 140-141
  30. ^ Solida, Scot (24 January 2011). "The 10 most important hardware samplers in history". Music Radar. Retrieved December 8, 2015.
  31. ^ Chadabe, Joel (1 May 2000). "Part IV: The Seeds of the Future". Electronic Musician. Penton Media. XVI (5). Archived from the original on 28 September 2012.
  32. ^ "Technical GRAMMY Award: Ikutaro Kakehashi And Dave Smith". 29 January 2013.
  33. ^ "Ikutaro Kakehashi, Dave Smith: Technical GRAMMY Award Acceptance". 9 February 2013.
  34. ^ ""Digital Audio Workstation" by Colby Leider". Computer Music Journal. 30 (3): 106–107. 2006. ISSN 0148-9267. JSTOR 4617975.
  35. ^ Miranda, Eduardo Reck (2011). "Brain-computer music interface for composition and performance". International Journal on Disability and Human Development. 5 (2): 119–126. doi:10.1515/IJDHD.2006.5.2.119. ISSN 2191-0367. S2CID 6079751.
  36. ^ Bruno, Chelsea A (2014-03-25). Vocal Synthesis and Deep Listening (Master of Music Music thesis). Florida International University. doi:10.25148/etd.fi14040802.
  37. ^ "The Wire, Volumes 275-280", The Wire, p. 24, 2007, retrieved 2011-06-05
  38. ^ Matt Dean (2011), The Drum: A History, page 390, Scarecrow Press
  39. ^ "The 14 drum machines that shaped modern music". factmag.com. 22 September 2016. Retrieved 17 April 2018.
  40. ^ "Automatic rhythm instrument".
  41. ^ "Donca-Matic (1963)". Korg Museum. Korg.
  42. ^ Russell Hartenberger (2016), The Cambridge Companion to Percussion, page 84, Cambridge University Press
  43. ^ Fine, Thomas (2008). "The dawn of commercial digital recording" (PDF). ARSC Journal. 39 (1): 1–17.
  44. ^ Billboard, May 21, 1977, page 140
  45. ^ Peter Manning, Electronic and Computer Music, page 264, Oxford University Press
  46. ^ Yamaha GX-1, Vintage Synth Explorer
  47. ^ "[Chapter 2] FM Tone Generators and the Dawn of Home Music Production". Yamaha Synth 40th Anniversary - History. Yamaha Corporation. 2014.
  48. ^ Mark Vail, The Synthesizer: A Comprehensive Guide to Understanding, Programming, Playing, and Recording the Ultimate Electronic Music Instrument, page 277, Oxford University Press
  49. ^ Igoudin, Alex; Acoustics, Stanford University Center for Computer Research in Music and; Music, Stanford University Dept of (17 April 1997). "Impact of MIDI on electroacoustic art music". CCRMA, Dept. of Music, Stanford University. Retrieved 17 April 2018 – via Google Books.
  50. ^ Wells, Peter (2004), A Beginner's Guide to Digital Video, AVA Books, p. 18, ISBN 2-88479-037-3, retrieved 2011-05-20
  51. ^ "Firstman International". SYNRISE (in German). Archived from the original on 2003-04-20. FIRSTMAN existiert seit 1972 und hat seinen Ursprung in Japan. Dort ist dieFirma unter dem Markennamen HILLWOOD bekannt. HILLWOOD baute dann auch 1973 den quasi ersten Synthesizer von FIRSTMAN. Die Firma MULTIVOX liess ihre Instrumente von 1976 bis 1980 bei HILLWOOD bauen.","SQ-10 / mon syn kmi ? (1980) / Monophoner Synthesizer mit wahrscheinlich eingebautem Sequenzer. Die Tastatur umfasst 37 Tasten. Die Klangerzeugung beruht auf zwei VCOs.
  52. ^ Mark Jenkins (2009), Analog Synthesizers, pages 107-108, CRC Press
  53. ^ A TALE OF TWO STRING SYNTHS, Sound on Sound, July 2002
  54. ^ Vine, Richard (15 June 2011). "Tadao Kikumoto invents the Roland TB-303". The Guardian. Retrieved 9 July 2011.
  55. ^ a b Russ, Martin (2004). Sound synthesis and sampling. p. 66. ISBN 9780240516929.
  56. ^ "Roland - Company - History - Our History".
  57. ^ Roland MC-202 MicroComposer, Electronic Musician, November 2001
  • Cunningham, Mark (1998). Good Vibrations: a History of Record Production. London: Sanctuary Publishing Limited.
  • Edmondson, Jacquelin. Music In American Life.
  • Holmes, Thom (2008). Electronic and Experimental Music. New York: Routledge.
  • Kettlewell, Ben (2002). Electronic Music Pioneers. USA: Pro Music Press.
  • Taylor, Timothy (2001). Strange Sounds. New York: Routledge.
  • Campbell, Murray; Greated, Clive; Myers, Arnold (2004). Musical Instruments. New York: Oxford University Press.
  • Weir, William (21 November 2011). "How the Drum Machine Changed Pop Music". Slate. Retrieved December 9, 2015.
  • "An Audio Timeline". Audio Engineering Society. Retrieved December 8, 2015.
