Music technology (electronic and digital)


Digital music technology encompasses the use of digital instruments to produce, perform or record music. These instruments vary, including computers, electronic effects units, software, and digital audio equipment. Digital music technology is used in the performance, playback, recording, composition, mixing, analysis and editing of music by professionals in all parts of the music industry.

History

In the late 19th century, Thaddeus Cahill introduced the Telharmonium, commonly considered the first electromechanical musical instrument. In the early 20th century, Leon Theremin created the theremin, an early electronic instrument played without physical contact, introducing a new mode of sound production.
In the mid-20th century, sampling emerged, with artists like Pierre Schaeffer and Karlheinz Stockhausen manipulating recorded sounds on tape to create entirely new compositions. This laid the foundation for future electronic music production techniques.
In the 1960s, the Moog synthesizer, invented by Robert Moog, popularized analog synthesis. Musician Wendy Carlos demonstrated Moog's invention with the album Switched-On Bach, which consisted of works composed by Johann Sebastian Bach interpreted on the Moog synthesizer. Meanwhile, tape-based studios, like the BBC Radiophonic Workshop, were at the forefront of electronic sound design.
The 1980s saw a major shift towards digital technology with the development of the Musical Instrument Digital Interface (MIDI) standard. This allowed electronic instruments to communicate with computers and each other, transforming music production. Digital synthesizers, such as the Yamaha DX7, became widely popular.
The 1990s and 2000s witnessed the explosive growth of electronic dance music and its various subgenres, driven by the accessibility of digital music production tools and the rise of computer-based software synthesizers.

Education

Professional training

Courses in music technology are offered at many different universities as part of degree programs focusing on performance, composition, and music research at the undergraduate and graduate levels. The study of music technology is usually concerned with the creative use of technology for creating new sounds, performing, recording, programming sequencers or other music-related electronic devices, and manipulating, mixing and reproducing music. Music technology programs train students for careers in "...sound engineering, computer music, audio-visual production and post-production, mastering, scoring for film and multimedia, audio for games, software development, and multimedia production." Those wishing to develop new music technologies often train to become audio engineers working in research and development. Due to the increasing role of interdisciplinary work in music technology, individuals developing new music technologies may also have backgrounds or training in electrical engineering, computer programming, computer hardware design, acoustics, record producing or other fields.

Use of music technology in education

Digital music technologies are widely used to assist in music education for training students in the home, elementary school, middle school, high school, college and university music programs. Electronic keyboard labs are used for cost-effective beginner group piano instruction in high schools, colleges, and universities. Courses in music notation software and basic manipulation of audio and MIDI can be part of a student's core requirements for a music degree. Mobile and desktop applications are available to aid the study of music theory and ear training. Some digital pianos provide interactive lessons and games using the built-in features of the instrument to teach music fundamentals.

Analog synthesizers

Classic analog synthesizers include the Moog Minimoog, ARP Odyssey, Yamaha CS-80, Korg MS-20, Sequential Circuits Prophet-5, Roland TB-303, and Roland Alpha Juno. One of the most iconic is the Roland TB-303, which was widely used in acid house music.

Digital synthesizer history

Classic digital synthesizers include the Fairlight CMI, PPG Wave, Nord Modular and Korg M1.

Computer music history

Max Mathews

The convergence of computer and synthesizer technology changed the way music is made and remains one of the fastest-changing areas of music technology today. Max Mathews, an acoustic researcher in Bell Telephone Laboratories' Acoustic and Behavioral Research Department, was responsible for some of the first digital music technology in the 1950s. Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion.
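Analog-to-digital conversion rests on two steps, sampling and quantization, which can be illustrated with a minimal pure-Python sketch (the function name and parameters here are illustrative, not from any real converter's API):

```python
import math

def adc(signal, sample_rate, duration, bits):
    """Sample a continuous signal and quantize it to a fixed bit depth.

    signal: a function of time returning values in [-1.0, 1.0].
    Returns integer codes in [-(2**(bits-1)), 2**(bits-1) - 1].
    """
    levels = 2 ** (bits - 1)            # quantization steps per polarity
    n_samples = int(sample_rate * duration)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate             # sampling: measure at discrete instants
        x = signal(t)
        q = round(x * levels)           # quantization: snap to nearest level
        codes.append(max(-levels, min(levels - 1, q)))
    return codes

# A 440 Hz sine sampled at 8 kHz for 1 ms, quantized to 8 bits.
samples = adc(lambda t: math.sin(2 * math.pi * 440 * t), 8000, 0.001, 8)
```

Higher sample rates capture more of the signal's bandwidth; more bits reduce quantization error.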
At Bell Laboratories, Mathews conducted research to improve telecommunications quality for long-distance phone calls. Owing to long distances and low bandwidth, audio quality over phone calls across the United States was poor. Mathews therefore devised a method in which sound was synthesized by computer at the far end rather than transmitted. Mathews was an amateur violinist, and during a conversation with his superior at Bell Labs, John Pierce, Pierce posed the idea of synthesizing music through a computer. Since Mathews had already synthesized speech, he agreed and wrote a series of programs known as MUSIC. MUSIC used two files: an orchestra file containing data telling the computer how to synthesize sound, and a score file instructing the program what notes to play using the instruments defined in the orchestra file. Mathews wrote five iterations of MUSIC, calling them MUSIC I through MUSIC V. As the program was adapted and expanded to run on various platforms, its name changed to reflect those changes, and this series of programs became known as the MUSIC-N paradigm. The MUSIC-N concept lives on today in the form of Csound.
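The orchestra/score division can be sketched in Python (these structures are illustrative only, not actual MUSIC or Csound syntax): the "orchestra" maps instrument names to synthesis routines, and the "score" lists note events to be played on those instruments.

```python
import math

# "Orchestra": instrument definitions telling the computer how to synthesize sound.
def sine_instrument(freq, dur, sample_rate=8000):
    return [math.sin(2 * math.pi * freq * n / sample_rate)
            for n in range(int(dur * sample_rate))]

orchestra = {"sine": sine_instrument}

# "Score": note events as (instrument, start time, duration, frequency).
score = [
    ("sine", 0.0, 0.25, 440.0),    # A4 for a quarter second
    ("sine", 0.25, 0.25, 523.25),  # then C5
]

def render(orchestra, score, sample_rate=8000):
    """Mix every score event into one output buffer."""
    end = max(start + dur for _, start, dur, _ in score)
    out = [0.0] * int(end * sample_rate)
    for name, start, dur, freq in score:
        note = orchestra[name](freq, dur, sample_rate)
        offset = int(start * sample_rate)
        for i, s in enumerate(note):
            out[offset + i] += s
    return out

audio = render(orchestra, score)
```

The separation lets the same score be re-rendered with different instrument definitions, which is the core idea MUSIC-N introduced.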
Max Mathews later worked as an advisor to IRCAM in the late 1980s. There, he taught Miller Puckette, a researcher who developed a program in which music could be programmed graphically. The program could transmit and receive MIDI messages to generate interactive music in real time. Inspired by Mathews, Puckette named the program Max. Later, a researcher named David Zicarelli visited IRCAM, saw the capabilities of Max, and felt it could be developed further. He took a copy of Max with him when he left and eventually added the ability to process audio signals, naming this new part of the program MSP after Miller Puckette. Zicarelli developed the commercial version of Max/MSP and sold it through his company, Cycling '74, beginning in 1997. The company has since been acquired by Ableton.

Later history

The first generation of professional, commercially available computer music instruments, or workstations as some companies later called them, were sophisticated, elaborate systems that cost a great deal when they first appeared, ranging from $25,000 to $200,000. The two most popular were the Fairlight and the Synclavier.
It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Following the widespread adoption of MIDI, computer-based MIDI editors and sequencers were developed. MIDI-to-CV/Gate converters were then used to enable analog synthesizers to be controlled by a MIDI sequencer.
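A MIDI-to-CV/Gate converter translates MIDI note numbers into the control voltage and gate signals analog synthesizers expect. A minimal sketch, assuming the common 1 V/octave convention with middle C at 0 V (some synthesizers instead use Hz/V, and the reference point varies by manufacturer):

```python
def midi_to_cv(note, reference_note=60, reference_voltage=0.0):
    """Convert a MIDI note number to a control voltage.

    Assumes the 1 V/octave scale: each semitone adds 1/12 V,
    with middle C (MIDI note 60) pinned at 0 V.
    """
    return reference_voltage + (note - reference_note) / 12.0

def gate(note_held):
    """Gate signal: high while a key is down, low otherwise.
    A 5 V gate is a typical (but not universal) level."""
    return 5.0 if note_held else 0.0

cv = midi_to_cv(69)  # A4, nine semitones above middle C -> 0.75 V
```

The sequencer sends note numbers; the converter holds the corresponding voltage on its CV output and raises the gate for the duration of the note.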

MIDI

At the NAMM Show of 1983 in Los Angeles, MIDI was released. A demonstration at the convention showed two previously incompatible analog synthesizers, the Prophet 600 and Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both of them. This development immediately allowed synths to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers. The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.
In 1985, several of the top keyboard manufacturers created the MIDI Manufacturers Association. This newly founded association standardized the MIDI protocol by generating and disseminating all documentation about it. With the development of the MIDI file format specification by Opcode, every music software company's MIDI sequencer software could read and write each other's files.
Since the 1980s, personal computers have become the ideal system for exploiting the vast potential of MIDI, creating a large consumer market for products such as MIDI-equipped electronic keyboards, MIDI sequencers and digital audio workstations. With universal MIDI protocols, electronic keyboards, sequencers, and drum machines can all be connected together.
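The messages these devices exchange are compact byte sequences. Per the MIDI 1.0 specification, a Note On message is three bytes: a status byte (0x90 plus the channel number) followed by two data bytes, the note number and velocity, each limited to 0-127. A small sketch of constructing such messages:

```python
def note_on(channel, note, velocity):
    """Build a three-byte MIDI Note On message.

    Status byte: 0x90 | channel (channels 0-15); the note number
    and velocity data bytes must stay within 0-127.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Build a MIDI Note Off message (status 0x80), release velocity 0."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

msg = note_on(0, 60, 100)  # middle C on channel 1, velocity 100
```

Because messages describe events (which key, how hard, which channel) rather than audio, a single keyboard can drive many sound modules at once, which is exactly what the 1983 demonstration showed.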

Vocal synthesis history until 1980s

VODER

Coinciding with the history of computer music is the history of vocal synthesis. Before Max Mathews synthesized speech with a computer, analog devices were used to recreate speech. In the 1930s, an engineer named Homer Dudley invented the Voice Operating Demonstrator, an electro-mechanical device that generated a sawtooth wave and white noise. Various parts of the frequency spectrum of these waveforms could be filtered to generate the sounds of speech, and pitch was modulated via a bar on a wrist strap worn by the operator. In the 1940s, Dudley invented the Voice Operated Coder. Rather than synthesizing speech from scratch, this machine operated by accepting incoming speech and breaking it into its spectral components. In the late 1960s and early 1970s, bands and solo artists began using the VOCODER to blend speech with notes played on a synthesizer.

Singing computer

At Bell Laboratories, Max Mathews worked with researchers Kelly and Lochbaum to develop a model of the vocal tract and study how its properties contributed to speech generation. Using this vocal-tract model, they made a computer sing for the first time in 1962, performing a rendition of "Daisy Bell". The method, which would come to be known as physical modeling synthesis, has the computer estimate the formants and spectral content of each word from information about the vocal model, including various applied filters representing the vocal tract.
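The Kelly-Lochbaum vocal-tract model is too involved to reproduce here, but the family it belongs to, physical modeling synthesis, can be illustrated with a much simpler classic from the same tradition: Karplus-Strong plucked-string synthesis. This is a sketch of a different instrument model (a string, not a vocal tract), shown only to convey the idea of simulating a sound source's physics.

```python
import random

def karplus_strong(freq, duration, sample_rate=8000):
    """Karplus-Strong plucked-string synthesis.

    A burst of noise (the 'pluck') circulates through a delay line
    whose length sets the pitch; a two-point average models the
    string's energy loss, so the tone decays naturally.
    """
    period = int(sample_rate / freq)  # delay-line length ~ one pitch period
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for n in range(int(duration * sample_rate)):
        sample = buf[n % period]
        nxt = buf[(n + 1) % period]
        buf[n % period] = 0.5 * (sample + nxt)  # averaging filter = damping
        out.append(sample)
    return out

tone = karplus_strong(220.0, 0.5)  # half a second of a plucked A3
```

As in the vocal-tract work, no recording is played back: the waveform emerges entirely from a simulation of the instrument's physical behavior.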