Extraterrestrial Songlines

Ether Ship



Medium Type
Digital audio
Year
2023
Genre
Electronic music
Tools and Software
iOS hardware, electric cello processing, Steinberg Cubase 12 Pro, RML Labs SAWStudio, Plogue Bidule, Audio Ease Altiverb 7, iZotope RX10, Pioneer Hill SpectraPLUS-SC
Audio Output
Stereo
Electric cello
Tom McVeety

Movement 1, 00:15:42
Movement 2, 00:30:33
Movement 3, 00:19:06
Movement 4, 00:44:42

Extraterrestrial Songlines is a composition that reflects the different energy states that flow throughout the universe. These can include gravitational waves, high-energy emissions from quasars, supernova explosions and other energy fields emanating from celestial objects. This does not exclude energy signatures that may originate from higher intelligences in the universe. I refer to all these energy states as songlines, much as the Aboriginal people relate their creation stories to the starry heavens, giving substance to the origins of their people.

These fields of energy permeating the universe are the life-giving energy states responsible for generating our genetic code, which has evolved into our understanding of the fundamental forces of nature developed on our planet. Garry Nolan, a professor at Stanford University, has suggested that a higher form of intelligence interfered with our genetic heritage, eventually leading humanity to imbue sacred sites around the globe with this higher intelligence as a way of paying respect to it.

One example of utilizing these higher forms of intelligence was factoring that higher knowledge into sacred structures such as Chartres Cathedral in France. The reverberations of sound within the cathedral are incorporated into the Extraterrestrial Songlines composition. In this way the sound that reverberates within the cathedral represents a higher order of harmonic resonance found in the universe. The composition is the result of mixing three separate tracks:

  1. Original track – compositional framework – Willard Van De Bogart – electronic synthesizer – modified electronic harmonica accompaniment – Lemon DeGeorge;
  2. Reverb within Chartres Cathedral – sacred knowledge inclusion;
  3. Electric cello track – interpretation of sacred sound – Tom McVeety;

Three distinct tracks of sound comprise the Extraterrestrial Songlines composition, which is separated into four movements:

Movement 1 is approximately 10 minutes in length.

Movement 2 is approximately 6 minutes in length.

Movement 3 is approximately 8 minutes in length.

Movement 4 is approximately 6 minutes in length.


Track 1.

This track is the application of a subjective framework in which sound is constructed to mirror the protocols of the composition. There are three protocols.

Protocol I

The first thing to do when you make a sound for the first time is to keep it. If you are composing and you want to hear sound, you give the sound time to be heard but, more importantly, a time to be born. The sound is born as a single cell organism and it moves because you allow it to move. You have now created a companion to be with you in this grand universe. So in this protocol you want your sound to live, to continue and to become a full-fledged entity as a reflection of your ideas in the universe.

Protocol II

Movement becomes the next force to give life to the sound. Movement in this sense means more sound, or the realization that you recognize the sound and begin to join the sound. Joining means adding something to the sound. It must be a recognition of the sound so the sound knows you are responsible for its life. That personality of life comes from our force fields. The sound depends on you for its existence. To fortify the sound with more energy you allow the sound to become independent so it can move as it wants to. In technical terms you modulate the sound, or filter the sound, or gate the sound; all these are ways to let the sound enjoy itself.

Protocol III

The next protocol is to identify your sound. Do not let the sound stray and get lost or buried by another sound. In a sense you have made your sound come alive, and you are attending to the sound as others can hear how you are relating to the sound, changing the sound, moving the sound around the room by panning the sound. But we are complex in our own sound construction. We enjoy our vibratory construction. But we also have another force we are in communication with, and that is our thoughts. Just as we have desires to explore, so too does our sound.

 

So this is the time we create another aspect of our sonic creation and give it directions to explore the space surrounding the sound. This new sound joins the other sounds and becomes a partner, collaborating on many levels with the initial sound. This is the beginning of a complex sonic dimension where higher intelligences reside. As a composer you have to let the horses out of the stable; they roam in space, and it is you, the composer, who makes them feel alive, for you have created a higher intelligence comprised of sound.
This is sonic xenolinguistics.

Track 2.

The inclusion of sacred knowledge passed down through the millennia has to do with understanding the transcendental aspect of existence and the origins of life. For tens of thousands of years, homage to a higher order of the universe has been incorporated into sacred sites. Following this historical trajectory of recognizing the sacred, the cathedral of Chartres was selected as an example of using the same building blocks to represent the harmonic structure of the universe. Audio engineers have been able to measure the duration of sound as it is reflected off the stone edifice of Chartres Cathedral. The original track was placed in this reverberant sound space so that it would be washed over with reverberation as if it were being played in the cathedral. In this way an aspect of divine knowledge could be harmonically added to the original composition.

Track 3.

This track was an interpretation of the sacred sound space using an electric cello. This third mix was the most difficult because, as the two earlier tracks were merged, the entire composition was analyzed spectrographically, as shown, with a spectrogram for each of the four movements and one for the entire composition. Playing along with the changes in frequencies was a means of giving an additional found sonic life form to the original track and of recognizing the sonic life form expressed in track 2. In this track the cellist-composer was entering the space of a newly created sonic life form, adding a rich tapestry and fullness to the entire composition. In this way the listener can participate with each new sound and follow the newly created composition by travelling the songlines within the universe.



Interpreting the spectrogram images

The frequency axis is the vertical, or Y, axis and is mapped on a logarithmic scale. The highest frequency of the recording, 20 kHz, is at the top of the spectrogram, and the lowest frequency, 20 Hz, is at the bottom. The midpoint of the Y axis is 1 kHz.

The timeline is the horizontal or X axis and is linear, with the music beginning on the left side and ending on the right side.

The colors represent the amplitude of the musical sounds, again on a logarithmic, decibel scale, following the rainbow color scale. Red and orange represent the loudest, highest amplitudes; yellow and green represent the middle amplitudes; and blue, indigo and darker values represent the softest sounds.

Sustained sounds with relatively static frequency movement are seen as longer straight lines, while shorter duration sounds are seen as dots or short lines. Sustained sounds that rise or fall in pitch are seen as a sloped line, with a positive slope for rising pitch and a negative slope for falling pitch.
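The published spectrogram images were generated with SpectraPLUS; purely as an illustration of this kind of display, and not of the actual analysis settings, a minimal Python sketch could look like the following, assuming a hypothetical file named songlines.wav:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical input file; any stereo WAV recording would do.
rate, data = wavfile.read("songlines.wav")
mono = data.mean(axis=1) if data.ndim == 2 else data  # fold stereo to mono

# Time on the X axis, frequency on the Y axis.
f, t, Sxx = spectrogram(mono, fs=rate, nperseg=4096, noverlap=2048)
keep = f > 0                               # drop the DC bin for the log axis
Sxx_db = 10 * np.log10(Sxx[keep] + 1e-12)  # amplitude on a decibel scale

plt.pcolormesh(t, f[keep], Sxx_db, cmap="rainbow", shading="auto")
plt.yscale("log")            # logarithmic frequency axis
plt.ylim(20, 20000)          # 20 Hz at the bottom, 20 kHz at the top
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Amplitude (dB)")
plt.show()
```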

On the electric cello, electronics and the general recording process

The cello used for these recordings is a custom six string solid body electric cello, an electroacoustic instrument. The acoustic energy, although quite low in level, is generated by exciting the strings and other areas of the instrument. The instrument's practical range is over five octaves, and natural and artificial harmonics can extend that by several more octaves. Because the unfretted instrument allows continuous control of pitch, unlike a fretted instrument or a conventional keyboard instrument, microtonal harmonies are always available.
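As a simple illustration of what that continuous pitch control makes available, the frequency of a note offset from a reference by an arbitrary number of cents can be computed directly; the reference pitch and offset below are examples only:

```python
def cents_to_hz(base_hz: float, cents: float) -> float:
    """Frequency of a pitch offset from base_hz by the given number of cents."""
    return base_hz * 2 ** (cents / 1200.0)

# A quarter tone (50 cents) above A4 = 440 Hz is about 452.9 Hz,
# a pitch unavailable on a fretted or standard keyboard instrument.
quarter_tone_up = cents_to_hz(440.0, 50.0)
print(round(quarter_tone_up, 1))  # 452.9
```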

The acoustic energy from the vibrating strings is transferred to the bridge, where the string energies are sensed with multiple piezo-ceramic transducers and converted into electrical signals. These signals are routed into various types of electronic hardware for sonic amplification, modification and augmentation using solid state and vacuum tube pre-amplifiers, filters, and pairs of compressors, phase shifters, pitch shifters, ring modulators, delays and reverbs. Four stereo line outputs are generated by this group of hardware signal processors.

Following the initial hardware processing, the electric cello signals are digitized at 48 kHz, 24-bit resolution for a second stage of signal processing, this time in a computer. The four stereo signal lines are sent into a software mixer that allows loopback techniques for internal routing of the signals from one application to another. The primary signal processing application is instantiated up to four times and connected in both series and parallel to spread the heavy processing load across multiple CPU cores and threads. Each of these four identical instances produces a portion of the overall output, and each hosts a different set of smaller applications for additional time- and frequency-domain processing. Following this first stage of software processing, a second application is used for recording and initial editing of the electric cello audio. The original synthesizer and harmonica recordings were transcribed, using standard notation, into sketches for reference in the orchestration and additional composition of the many individual recordings of the electric cello that would be assembled into the third track.
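The actual routing was done with commercial audio software, but the load-spreading idea can be sketched in Python; the filter here is only a hypothetical stand-in for the real processing, with one worker process per stereo line:

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, sosfilt

FS = 48000  # sample rate used in the production chain

def process_line(stereo_block: np.ndarray) -> np.ndarray:
    """Apply a simple low-pass filter to one stereo line.

    Stands in for one instance of the real-time processing
    application; the cutoff frequency is a hypothetical value."""
    sos = butter(4, 8000, btype="low", fs=FS, output="sos")
    return sosfilt(sos, stereo_block, axis=0)

if __name__ == "__main__":
    # Four hypothetical stereo lines, ten seconds of noise each.
    lines = [np.random.randn(FS * 10, 2) for _ in range(4)]

    # One worker per line, mirroring the four parallel instances
    # used to spread the processing load across CPU cores.
    with Pool(processes=4) as pool:
        processed = pool.map(process_line, lines)

    # Sum the four processed stereo lines into a single mix.
    mix = np.sum(processed, axis=0)
```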

These new recordings were then moved to a second computer for additional processing in a much more powerful digital audio workstation. This portion of the production required many weeks to accomplish and was an iterative process: repeatedly returning to record new material with the signal processing computer and then moving the recordings to the second computer for listening and comparison in the larger context as the third track was built up. The second computer was better able to handle the large number of electric cello tracks, which were then added, adjusted, positioned, arranged and mixed together with the original recordings of the synthesizers and harmonica into a cohesive musical statement and experience.

The Chartres Cathedral impulse response reverb became an integral part of the recording process during this time. The nine-second response of the Chartres reverb time-space used for Extraterrestrial Songlines refers to the longest responses, which are the lower frequencies, below 1 kHz. Much of the high frequency response is no longer heard after about five seconds; this is a very natural characteristic of the time-space and is not considered a fault: it is simply that the lower frequencies remain active for a much longer period of time. Shelving equalization at both the low and high frequency ends of the spectrum produces a gently changing slope in the overall response of the reverb, adding subtle changes to the high frequency response and removing a small amount of the low frequency response.

The particular impulse used in Extraterrestrial Songlines allowed the creation of a sacred space with a nine-second reverb decay time. The synthesizer, harmonica and electric cello tracks of Songlines were brought into and positioned within the reverb space to create a sense of a timeless and otherworldly reality, an acoustic space that is more than a physical place, a dimension without gravity where careful listening allows discovery.
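Within the production this convolution was handled by Altiverb; conceptually, placing a track in the Chartres space is a single convolution of the dry audio with the cathedral's impulse response, sketched here with hypothetical file names and an arbitrary wet/dry balance:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical files: a dry stereo source track and a stereo
# impulse response of the cathedral (roughly nine seconds long).
rate, dry = wavfile.read("track1_dry.wav")
ir_rate, ir = wavfile.read("chartres_ir.wav")
assert rate == ir_rate, "source and impulse response must share a sample rate"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

# Convolve each channel with the impulse response; the result keeps
# ringing after the dry signal stops, like the cathedral itself.
wet = np.stack(
    [fftconvolve(dry[:, ch], ir[:, ch], mode="full") for ch in range(2)],
    axis=1,
)

# Pad the dry signal to the wet length, normalize both, and blend.
dry_padded = np.pad(dry, ((0, len(wet) - len(dry)), (0, 0)))
mix = 0.5 * dry_padded / np.max(np.abs(dry_padded)) + 0.5 * wet / np.max(np.abs(wet))

wavfile.write("track1_wet.wav", rate, (mix * 32767).astype(np.int16))
```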

Listening with a quiet mind, without a visual stimulus, will bring a deeper awareness of the environment within and without, a liminal dimension that allows one to move between the edges of conscious reality, between the normally visible and invisible realms.

Using the Chartres sacred acoustic signature, the Extraterrestrial Songlines path is discovered and navigated within the tone fields created by the musical waveforms.

Composing the 3rd track

A sketch was made from track one which contained the synthesizer and harmonica recordings.

This notational sketch was used for reference in creating additional sounds for the third track.

Ultimately, the third track was built up using a process of only a few steps – the creation of imagined sounds, recording and processing, and making choices. Imagining new sounds required listening to the synthesizer and harmonica track many, many times. The original tracks featured some raucous and chaotic moments, then shorter periods of less activity, and here and there a little oasis of calm. Many of the sounds of the harmonica are heard with various effects that modify the timbre or add echoes, as it functions here as an electroacoustic instrument. The reeds of the instrument are set into vibration by the breath of DeGeorge, according to the geometry and placement of the reeds and the air pressure flowing through them.

A microphone converts the acoustic sounds into an electrical signal that is sent to a pre-amplifier and a signal chain of sound processors that modify the acoustic timbres. Generally the sounds heard from the harmonica are much less abstract than the sounds heard from the synthesizers, which are more frequently in less identifiable timbres and often contain microtonal pitches. This often allowed more abstract ideas to be used when creating sounds to be heard with the synthesizers, and more tonal ideas when working together with the harmonica. One example of this is at the beginning of the first movement, where a duet was developed between the harmonica line and a harmonized electric cello line.

The process of applying new ideas in and around the existing sounds was challenging, analogous to placing new ideas among the existing features of a painting and ensuring that the new colors and objects fit both the position of that moment in time and the overall larger time continuum. The third-track additions were often imagined sounds representing objects and energies found in the vacuum of space, so in a sense this was program music, music imagined from an image or a feeling. For example, at the very beginning of the second movement a repeated sound sweep is heard for a moment, possibly evoking an intangible light-speed energy.

The additions also included periods with multiple tracks of tone clusters used to provide color and context in constructing quiet areas for transitions and background sounds. To create the chromatic tone clusters, multiple voices of pitch shifters were spread across the sound field, along with other versions of pitched sounds and noise created by ring modulation, as well as contemporary performance techniques on the electric cello. These performance techniques include novel use of the bow, such as varying bow pressure and bow placement on the string relative to the bridge, fast glissandi, artificial harmonics, partial fingerpad or fingernail pressure when stopping the string, tremolo pizzicato, and other non-standard methods of exciting a sound from the string to generate unique timbres, amplitude envelopes and unusual harmonic series, bringing new colors to the track.
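As a rough sketch of the tone-cluster and ring-modulation idea, and not of the actual plug-in chain used in the production, copies of a source can be pitch-shifted by semitone steps, panned across the stereo field, and combined with a ring-modulated layer; the source signal, carrier frequency and pan positions below are all hypothetical:

```python
import numpy as np
from scipy.signal import resample

FS = 48000

def pitch_shift(x: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift by resampling (the duration changes as well)."""
    ratio = 2 ** (semitones / 12.0)
    return resample(x, int(len(x) / ratio))

def ring_modulate(x: np.ndarray, carrier_hz: float) -> np.ndarray:
    """Multiply the signal by a sine carrier, producing the sum and
    difference frequencies that give ring modulation its color."""
    t = np.arange(len(x)) / FS
    return x * np.sin(2 * np.pi * carrier_hz * t)

# Hypothetical source: two seconds of a sawtooth wave at 220 Hz.
t = np.arange(FS * 2) / FS
source = 2.0 * (220.0 * t % 1.0) - 1.0

# Build a chromatic cluster: copies shifted by 0 to 4 semitones,
# each panned to a different position in the stereo field.
cluster = np.zeros((len(source), 2))
for semis in range(5):
    voice = pitch_shift(source, semis)
    pan = semis / 4.0                     # 0 = hard left, 1 = hard right
    n = min(len(voice), len(cluster))
    cluster[:n, 0] += voice[:n] * (1.0 - pan)
    cluster[:n, 1] += voice[:n] * pan

# Add a quiet ring-modulated layer for extra, inharmonic color.
rm = ring_modulate(source, 150.0) * 0.2
cluster[:, 0] += rm
cluster[:, 1] += rm

cluster /= np.max(np.abs(cluster))        # normalize to avoid clipping
```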

With any musical creation, there will always be the very necessary requirements of ensuring proper amplitude levels and dynamic changes for interest and flow. Dynamic changes can be made through orchestration – the addition or subtraction of sounds and timbres at certain points in the musical flow – or more simply through level changes that create or augment an increasing amplitude level (a crescendo), a decreasing amplitude level (a decrescendo), or perhaps a transient impulse that provides a surprise level change.
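As a minimal illustration of the level-change approach (the clip and gain values are arbitrary, not taken from the session), a crescendo over a region is simply a rising gain ramp multiplied into the samples:

```python
import numpy as np

FS = 48000
clip = np.random.randn(FS * 4, 2) * 0.1   # hypothetical 4-second stereo clip

# Crescendo over the first two seconds: ramp the gain from 0.2 to 1.0.
ramp = np.linspace(0.2, 1.0, FS * 2)[:, None]
clip[: FS * 2] *= ramp

# A decrescendo is the same ramp reversed; a transient impulse is a
# brief, isolated jump in gain at a single point in the timeline.
```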

Along with the process of discovering and bringing to life new timbres, there are always happy accidents, where something unintended comes to life and becomes a very useful addition to the sonic palette. An example might be carefully positioning previously recorded sounds to craft a bridge between two disparate events and discovering that the bridging material is able to connect, mesh and provide an effective transition from the earlier material to the following material.

Biographies

Willard Van De Bogart

My philosophy of sound is what guides my work to make music incorporating electronic media, including electronic synthesizers. I believe that sound is responsible for creating everything through a complex web of harmonic resonance that forms all life as we know it. Sound, for me, is the foundation for the creation of life in the universe.

To this end I use my sound as a way to communicate with all living forms through innovative frequency distributions. I do this in my compositions to initiate a xenolinguistic interspecies approach to establishing a sonic resonant conscious connection with other entities which I believe permeate the universe.

Willard Van De Bogart received his master's degree in fine arts from the California Institute of the Arts, where he studied under Morton Subotnick, co-developer of the Buchla synthesizer. Van De Bogart developed a performance ensemble, Ether Ship, using the Electronic Music Studios (EMS) AKS synthesizer, which led to an exhibition at the American Cultural Center at 3 Rue du Dragon in Paris, France, with a piece commissioned by director Don Forester for the 200th anniversary of the United States. Van De Bogart worked alongside Nicolas Schöffer, the father of cybernetic art, exploring intricate displays of sound and light.

As a media consultant with NASA, he participated in the SETI program. Collaboration with Scot Forshaw, a quantum algorithm designer from the UK, led to exhibiting in Beijing and Shanghai, China, for Roy Ascott's Consciousness Reframed conferences. Further ideas on nano-sound led Van De Bogart to explore the sounds produced by the newly designed protein synthesizer, Eigenprot, developed by Zhao Qin and Markus Buehler at MIT, where his ideas on xenolinguistics and interspecies communication were further expanded.

Van De Bogart is a member of the Generative Systems Art and Technology Group from Chicago, founded by Sonia Landy Sheridan in 1969, and is included in their recent book, Weaving Global Minds: Generative Systems, edited by Sheridan. Van De Bogart’s videos can be found on YouTube under Ether Ship as well as sound tracks on SoundCloud and Bandcamp. His published articles on his philosophy can be found in Technoetic Arts, ResearchGate and Academia.edu.

Lemon DeGeorge

Born and raised in Pittsburgh, Pennsylvania, and its surrounding forests. Studied art, film and philosophy at Carnegie Mellon and Penn State Universities. In 1972 co-founded the Electric Symphony, an electronic improvisational music group, with Willard Van De Bogart and Walter Steding. Performed all over the Eastern United States, including at Charlotte Moorman's Avant Garde Arts festivals from 1972 to 1975. Moved to San Francisco in 1976, continuing to perform with Willard Van De Bogart, Will Jackson and Tom McVeety, now under the name of Ether Ship.

Worked as a recording engineer at Different Fur Studios, including on the soundtrack for “Apocalypse Now” and on “My Life in the Bush of Ghosts”. Formed his own studio, Crib Nebula, recording and producing many Bay Area artists, musicians and poets. Became a shaman’s apprentice and a certified arborist. In 1995 traveled to Tuva with Roko and Adrian Belic and Paul Pena to make the Academy Award-nominated film “Genghis Blues”. Played in the music groups Phoenix Thunderstone and Neither/Neither World. Continues to perform with Ether Ship, record, make documentary films and travel to Tuva and Central Asia.

Tom McVeety

After several decades as an RF Engineer, Tom McVeety has refocused on his music career. A cellist and electronic musician, he began development in 1975 of a unique electro-acoustic instrument, his six string solid body electric cello. The evolution of a complex signal processing system paralleled the musical journey to the present.

He holds a graduate degree in Theory and Composition and undergraduate degrees in Electrical Engineering, Cello Performance, and Music from the University of New Mexico. In the 1980s he performed solo in New York City at the Cathedral of Saint John the Divine and Merkin Hall, and in downtown venues like 8BC, Darinka and Area.

He was featured in a number of live performances on WNYC’s “New Sounds” with host John Schaefer, and later contributed to film composer Mark Isham’s soundtrack for director Kevin Reynolds’ “The Beast”. Working with modern dance choreographers on both East and West Coasts led to performances in New York City, Boston, and San Francisco.

As a New Mexico Artist in Residence, he performed in educational settings throughout New Mexico. A former member of several symphony orchestras, rock bands and other diverse ensembles including contemporary classical, early music, Middle Eastern and flamenco, McVeety employs improvisation and electronic augmentation of acoustical energies in juxtaposition with natural phenomena for his work.