CREATIVE COMPUTING VOL. 9, NO. 7 / JULY 1983 / PAGE 111

Computers make music. Patricia Smith.

In a living room in Oakland, CA, Tim Perkis, John Bischoff and Jim Horton of the League of Automatic Music Composers connect their microcomputers. Each composer has programmed his computer with different musical elements. The computers, which constitute a band, perform. Interacting with each other, they create music.

The sounds from each computer affect the sounds produced by the other two computers. One computer selects melodic patterns. Another calculates which harmonies to play, and the harmony influences rhythmic patterns, Horton explained.
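
The interaction Horton describes can be pictured as three cooperating routines: one picks melody notes, a second derives a harmony from the note it hears, and the harmony in turn sets the rhythm. The Python sketch below is purely illustrative; the scale, the intervals, and the rhythm rule are invented for this example and are not the League's actual programs.

import random

# One voice selects melody notes, a second harmonizes what it "hears,"
# and the harmony influences the rhythm. All of the rules are invented.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]      # MIDI note numbers, C major

def melody_note():
    """First computer: select a note from a melodic pattern."""
    return random.choice(SCALE)

def harmony_note(melody):
    """Second computer: calculate a harmony for the note it hears."""
    return melody + random.choice([3, 4, 7])  # minor third, major third, fifth

def duration_from_harmony(harmony):
    """Harmony influences rhythm: higher harmonies get shorter notes."""
    return 0.25 if harmony > 70 else 0.5

for _ in range(8):
    m = melody_note()
    h = harmony_note(m)
    print(f"melody {m}  harmony {h}  duration {duration_from_harmony(h)}s")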

Because the sounds of one computer affect the sounds of another, the musical results are unpredictable. As the Sunday afternoon work session begins, the music is loud, turbulent, and dissonant. But the music changes, sometimes gradually, sometimes suddenly. For a while, the music is eerie. At times, it is rhythmic and lively, sounding like jazz. Sometimes it is lighter and gay, sounding almost as if it came from a calliope.

"We're composers, not performers. The sound of what is pleasing is worked out when we are composing. The computers are doing the performing. In a concert, we're listening," Horton said.

At a recent concert at the Mills College Center for Contemporary Music (CCM) in Oakland, David Rosenboom sits down behind the Touche, a computerized instrument he and Don Buchla designed. At times, he stands up or raises and swings his arms as he moves the levers that control the output of continuous, rich melodic sounds. Once, he jumps up and moves over to the piano. Accompanying the Touche, he plays jazzlike rhythms wildly. He returns to the Touche; but before the piece, "Nova Wind," is over, he gets up again and plays the violin.

"Nova Wind," composed in 1981, is one piece from his recently released album "Future Travel" (from Street Records). This album contains "the most elaborate use of live performance, computerized instruments that I have done to date in terms of the complexity of the sound generating process," he said.

At a college lecture, computer designer and composer Andy Moorer talks about synthesizing sounds for musical purposes. Moorer works with a large computer at Stanford's Center for Computer Research in Music and Acoustics (CCRMA).

His demonstration tape illustrates his techniques. In "Perfect Days," there are three distinct sounds: Charles Shere's voice as he reads the Richard Brautigan poem, the sound of a flute played by Tim Weisberg, and a modified sound, part speech and part flute, that has the vocal qualities of speech but the pitch of the flute.

Current Trends

There is no typical style or sound in computer music. Composers who use computers draw from varying musical traditions and produce very different kinds of compositions. The League of Automatic Music Composers' music, in which microcomputers interact with each other in an improvisatory way, differs from complex ensemble compositions and synthesized musical poems.

Using computers to create sounds is one trend in contemporary music. "It isn't reasonable to think electronic music will replace traditional music. It's simply another medium," CCM program assistant Larry Polansky said.

Although most of the music at Stanford has been produced on the University's large, specially designed computer, other Bay Area composers use small computers, which are attached to synthesizers, keyboards, or other sound-producing devices. These composers, who often construct their own computer instruments, write musical instructions which tell the computer to act in certain ways.

Composer and music system designer Paul DeMarinis recently built a computer music exhibit for San Francisco's Exploratorium. In collaboration with New York composer and designer David Behrman, DeMarinis designed six touch-sensitive guitar models. He attached the guitar models to a single-board computer, an Apple computer, and three Casio keyboards. Exploratorium visitors who touch the guitars activate the single-board computer, which activates the Apple. Depending on which keys on which guitars are touched, the Apple figures out what harmonies and rhythms to play on the keyboards.
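
One way to picture the decision the Apple makes is sketched below: given which keys are touched on which guitar models, it picks a chord and a pulse rate for the keyboards. The chord table and mapping rules here are hypothetical stand-ins, not DeMarinis's actual program; only the idea that touched keys determine harmony and rhythm comes from the exhibit.

# Hypothetical mapping from touches to harmony and rhythm.
CHORDS = [
    [60, 64, 67],   # C major
    [62, 65, 69],   # D minor
    [65, 69, 72],   # F major
    [67, 71, 74],   # G major
]

def choose_harmony_and_rhythm(touched):
    """touched: set of (guitar_index, key_index) pairs currently pressed."""
    if not touched:
        return [], 1.0                        # nothing touched: silence, slow pulse
    lowest_key = min(key for _, key in touched)
    chord = CHORDS[lowest_key % len(CHORDS)]  # any touch lands on a consonant chord
    pulse = max(0.125, 1.0 / len(touched))    # more touches, busier rhythm
    return chord, pulse

chord, pulse = choose_harmony_and_rhythm({(0, 3), (2, 5), (4, 1)})
print("play", chord, "every", pulse, "seconds")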

DeMarinis is interested in harmonies and said his pieces are tonal. He designed the Exploratorium piece so that "whatever is played by six kids who intuitively know harmony is immediately successful."

In all of his pieces, DeMarinis said, he "aims to make systems that I could play or others could play beautifully, where walks could be taken spontaneously."

DeMarinis, who obtained an MFA from Mills College and has taught at San Francisco State, believes the aesthetics of instrumental and electronic music are the same. "Basic musical things are beautiful. An identifiable phrase repeated over itself is capable of becoming beautiful," he said.

Other computer music composers agree that the aesthetic qualities of what makes good music are the same for all musical genres. "We're sometimes dealing with very different compositional procedures or live performance set-ups, but I don't find the aesthetics different," Rosenboom said.

Antecedents

"Electronic music has been around for 30 or 40 years, and there have been composers of tremendous importance," Polansky explained.

One is John Cage. Horton, who was doing graduate work in philosophy in the mid-sixties, was influenced by Cage. Cage's music "was startling, exciting, and beautiful," Horton said.

"Cage was in the first generation," Mills graduate student Phil Stone explained, adding he was inspired by Alvin Lucier, who was in the second generation. "I started out at Wesleyan wanting to be a lawyer, but got interested and caught up in music through Lucier."

"Lucier was called a physicist. He explored things about the nature of sound. The pieces he composed are beautiful. They're elegantly simple from complex processes. That was inspiring."

Rosenboom became interested in electronic music at the University of Illinois in the mid-sixties and worked with Lejaren Hiller, whom he describes as a pioneer in computer composition.

"Most people were making tape pieces at that time, but I was particularly interested in live performances because of the potential to somehow manifest processes which we normally think of as compositional, precompositional, or perceptual in real time, that is, in bringing some of the composing activity into live performance," Rosenboom said. Brain Wave Music

During the 1970's, Rosenboom went to York University in Toronto where he was one of the founders of the New Music Department, built the Laboratory of Experimental Aesthetics, and composed and recorded his "brain wave" music.

In these pieces, Rosenboom used a computer to analyze neurological signals in musical perception and then generated music from the results. In his piece "On Being Invisible," electrodes are attached to the head of a person who is the solo performer. The signals are recorded on standard electroencephalographic equipment. A computer analyzes the signals according to a model of how people divide groups of musical phrases into temporal segments, and this analysis determines what sounds are produced on a synthesizer.

"Everything the computer outputs is tested against the model to see if it can determine what are potentially, significant landmarks," Rosenboom said. If the computer determines some structural events are significant to the listener/performer, "it will make it more likely that the kinds of changes that are causing the response will occur again. If the prediction is false as determined by the lack of evoked response, it will tend to cause the sound patterns to change in some way."

Although Rosenboom has plans for more brain wave pieces, he hasn't composed any since coming to Mills College in 1979. "I decided I needed a little break. Also, I wanted to work on some other kinds of music and I got involved with Don Buchla in the development of a digital keyboard instrument," he said.

During his recent concert at Mills College, Rosenboom performed some of his other music--selections from "Future Travel" and a piece in which he used a harmonic and rhythmic computer language to create a composition for four cellos, percussion, and a trombone.

"David has been in the forefront of designing performance, computer controlled instruments and is one of the leading composers in the country," Polansky said of his colleague.

Composer John Chowning first started working with computer music at Stanford in the 1960's. He read an article about Max Matthews and J.R. Pierce, who were doing acoustical research at Bell Laboratories, and then went to Bell Laboratories to see Matthews.

"John came back and started implementing new ideas," CCRMA administrative assistant Patty Wood explained. Chowning began initially working with a large computer in the off-campus facilities of the artificial intelligence division of the computer science department. When the artificial intelligence group moved back to campus, CCRMA acquired more space, as well as their own new, large computer and synthesizer.

Composers who use Stanford's computer music system define parameters of sound (e.g., pitch, tone) with numbers and commands which the computer can read and interpret. At a recent CCRMA demonstration, composer-in-residence Janis Mattox explained that the computer interprets the commands and sends them to the synthesizer, which produces sound.
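
As a rough illustration of what "numbers and commands" can mean, the sketch below interprets a short note list of start time, duration, frequency, and loudness and renders it as audio, with a sine oscillator standing in for the synthesizer. The note list and the rendering scheme are invented for this example and are not Stanford's score language.

import math, struct, wave

# Interpret a tiny "score" of numeric sound parameters and render it.
# Each note: (start seconds, duration seconds, frequency Hz, amplitude 0-1).
SR = 22050
notes = [(0.0, 0.5, 440.0, 0.4), (0.5, 0.5, 554.4, 0.4), (1.0, 1.0, 659.3, 0.5)]

samples = [0.0] * int(max(s + d for s, d, _, _ in notes) * SR)
for start, dur, freq, amp in notes:
    for i in range(int(dur * SR)):
        t = i / SR
        env = min(1.0, t * 20, (dur - t) * 20)        # simple attack and release
        samples[int(start * SR) + i] += amp * env * math.sin(2 * math.pi * freq * t)

with wave.open("score.wav", "wb") as f:               # write 16-bit mono audio
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                           for s in samples))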

Using Real Sounds

Many people at Stanford, as well as places like Bell Laboratories and the Institut de Recherche et Coordination Acoustique/Musique in Paris, have been doing research into the dynamics of real sound and what makes sound interesting. "It turns out that real sounds are extremely complex, and this is what makes them easy to listen to as opposed to what most people think of as electronic sound," Mattox explained.

With a large computer, composers can analyze real sounds. "We figure if we can get close enough to duplicating real sounds that we can control--not that we want to duplicate the actual sounds--our sounds can be as rich and interesting," Mattox said.

"We can do just about everything an original violin can do; but to have a computer play something exactly like a violin would play it would be pointless and probably impossible," Stanford DMA candidate David Jaffe explained.

For the second movement of "Shaman," Mattox wanted a sound that was intense and primal to go along with a belly dancer's gestures. "I wanted a real ambiguity between an instrument and a voice. I wanted something that sounded like either and shifted back and forth, and I think I got pretty close to that. I'm accompanying it now with some very low drum sounds. It's going to be quite rhythmic," she said.

In another piece based on a Richard Brautigan poem, Moorer took the voice reading the poem and changed some of the dynamics of the speech, for example, by adding some reverberation to portions of it.

In his piece "Silicon Valley Breakdown," Jaffe has synthesized new sounds from guitar sounds. He also played with tempos, speeding them up and slowing them down, and brought voices together and apart at different times.

"With a computer, we can do interesting things like slowing sound down without changing the pitch or changing the pitch without changing the speed," Mattox explained.

For many composers, the expense of a large computer like Stanford's is prohibitive and microcomputers are the only alternative.

In 1976, Horton sent in a coupon responding to an advertisement for a $250 computer. "They sent a wonderful computer," he said.

Horton read the instruction books and started building electronics for the computer, learning on his own and from people he knew.

A significant difference between composing for the computer and composing for traditional instruments is that computer music composers don't have to read music. But somewhere along the line, like Horton, they usually learn something about electronics and programming.

"When the microcomputer came along, people interested in it got together--for a long time, at regular Sunday open houses at the East Bay Center for the Performing Arts. We (The League of Automatic Music Composers) have been working together since 1978," Horton said.

In the Oakland living room, Perkis, Bischoff, and Horton's computers sit on separate, but adjoining tables. Wires connect the computers to each other, to mixers, and to control boxes.

"My computer is playing equal tempered melodies and Tim's is playing harmonies. Now hiw computer is tracking to mine," Horton says.

While Horton explains the interactions of the computers, Bischoff and Perkis sit intently in front of their computers, moving knobs and levers. Perkis is calm. At times, Bischoff smiles. At one point, Horton's eyes shut and he hums along with his computer.

"Some of the sounds are wonderful," he says.