(m)ORPH is an abstract, musical VR experience for Oculus Rift(S) and Quest 1 & 2 created by audio technologist Kasson Crooker. (m)ORPH is not a game, but an interactive experience that lets players interact with audio in real-time to create a dynamically evolving music environment. Sound is literally all around you, yielding an immersive 360-degree spatial audio mix that you get to control and customize, featuring 15 songs from ambient/experimental artists. Perfect for meditation or for escaping the stresses of the world, the placid music and chill artistry will transport you to a new state of mind. Experience a whole new way of listening!
(m)ORPH for Oculus Quest 1 & 2, download here:
(m)ORPH for Oculus Rift and Rift-S, download here:
Through this innovative project, songs from experimental, ambient, and electronic artists are "performance mixed" in real-time to create immersive listening experiences. (m)ORPH features 15 ambient, experimental songs from innovative artists like Christopher Willits, Taylor Deupree (12k Records), Symbion Project, Kodomo, Micah Frank (Puremagnetik), George Hurd, Yumi Iwaki, dvdv, Matthew Mercer, Bloodbeak, Kodacrome, Khems, Dispak, and Black House Triangle. The (m)ORPH Artist Series will be an ongoing release of long-form music created in collaboration with artists who let me mix & remix their songs through my VR-based mixing environment. The unique binaural nature of the songs sounds good on stereo speakers, but really shines via headphones, which are necessary for a proper binaural listening experience.
Free soundtrack available to download or stream here:
Livestreams by Kasson, with artist interviews & VR performances here:
15 songs are in the official release of (m)ORPH:
"The Untitled Sea" by Christopher Willits (ambient, guitar, field recordings)
"Minism" by Taylor Deupree (ambient)
"Gishiki" by Symbion Project (ambient, Japanese koto, modular synth, field recordings)
"Backscatter" by Symbion Project (electronic, beats)
"Stargazer" by Chris Child & Micah Frank (ambient, tape loop, found sound)
"Anchorless" by George Hurd (electro-acoustic, field recording, tape loop)
"happiness 00033" by dvdv (ambient, vocals, synths)
"Storm King" by Kodomo (ambient, synth, vocals)
"Play Dead" by Kodacrome (voice, piano, electronics)
"Forest Bathing" by Khems (ambient, voice, field recordings)
"Entropy" by Matthew Mercer (ambient, drone, field recording)
"Koshkina" by Dispak (ambient techno)
"Overexposed" by Bloodbeak ft Mark Stewart (electronics, beats, voice)
"Hakana" by Yumi Iwaki & Kasson Crooker (modular, Japanese koto, spoken word)
"Id Ego Superego" by Black House Triangle (piano, folktek resonant garden)
For updates on (m)ORPH, please follow the project via FB: www.facebook.com/MORPHSOUND
Some generative artwork used in (m)ORPH VR created by Masaru Fujii (twitter.com/ozachou_g)
Several years ago I was at the Seattle Art Museum and learned of a short-lived art movement called Orphism, which existed mostly in France from 1912-15 and branched off from the Cubism movement. Named after Orpheus, the legendary poet and musician of Greek myth who symbolized all things creative, the Orphist movement was interested in creating art that was "pure lyrical abstraction" and was one of the beginnings of the abstract art movement. It was focused on producing art that was about pure form and sensation and the "existence of an infinitude of interrelated states of being" ~ abstraction in its purest, most refined form.
In my musical career arc, I've been getting more and more into abstract, minimal, ambient, electro-acoustic music, with Black House Triangle being the most recent example of this. Thinking about how the Orphist artists threw away conventional ideas of creating art and embraced more experimental ones drove me to pursue my own creativity in more experimental ways. One way I've found to do this is to shake up the way I create music and the tools I use to compose and produce. One of the foundations of the (m)ORPH project is to disrupt the way I produce music, specifically the mixing process. I wanted to disrupt this process to such a large extent that it would force me to rethink the initial composition process ~ a cyclical feedback loop where the later-stage mix process changes and informs the earlier composing and recording processes.

The traditional mixing process always uses a mixing board (either analog hardware or digital software), and it relies on 2 basic ideas to create a final mix: volume and pan. You use volume to make certain things louder than others and pan to place or move instrumentation between the 2 available speakers. The final result is always stereo, and that bums me out, which is why I've pursued mixing music in 4-channel quad for discrete surround audio systems. One of the aspects that bothers me is that "stereo" is an artificial construct for hearing music. Someone decided that since we have 2 ears, reproducing recorded music with 2 speakers was the right way to go, disregarding the fact that we hear audio (music or otherwise) from all around us, in 360 degrees. Live concert performances are not in stereo; they're essentially binaural, using our 2 ears and other physiological cues to perceive the music coming from all the musicians and the acoustics of the performance space ~ all around us, in 360 degrees.
Another aspect of the mixing process that is frustrating is that it uses loudness/volume to make certain instrumentation more powerful than other instrumentation. But again, this is not the whole story of how humans hear sound. We perceive sound, and how loud that sound is, largely by distance ~ how far or close a sound source is to us. If the bird chirping is far away, then we perceive it as quieter than the cat meowing on our lap. We perceive sound coming from all around, creating a dense and rich acoustic experience. It’s why hearing a concert in a beautiful acoustic space is such a rewarding listening experience!
There is one additional aspect of traditional mixing, specifically in software DAWs, that is frustrating to me, and that is predictability. DAWs (like Pro Tools and Ableton) give you immense control over the mixing process. You can draw in volume, pan, EQ, and DSP automation and have microscopic control over every aspect of the mix. Everything is stored, recallable, and predictable, so that when you’re ready to do the final mix, all there is to do is hit record, sit back, and the mix is automatically done for you. No room for mistakes, no room for happy accidents, no room for rough edges. It’s a very sterile process and contributes to mixes coming out sounding sterile and lacking soul IMHO. Electronic music already has such a clinical, robotic quality to it, and the standard stereo mix process just amplifies those aspects rather than helping to make something more pleasing and enlightened.
(m)ORPH aims to remove the artificial process of mixing in stereo (using volume and pan) and create a new mixing process/environment that is more organic and unpredictable, that can better create happy accidents and unexpected rough edges. To take some of the clinical control out of the equation and replace it with a little chaos; to create experimental mixes where I’m more of a conductor in real-time and less of a surgeon.
Over the past 3 years I’ve slowly been working on creating such a tool, which has now come far enough along that I can begin creating mixes in it! I’ve created a virtual 3D space in Unity (a game development engine) that I can enter through VR. In this space, I’m in the center of a 10-meter icosahedron, hovering in the middle of a no-gravity environment. In this space are small spheres, or Orphs as I call them, also hovering in space. Each Orph has an audio emitter attached to it that plays back a mono (linear or looping) track of audio. In the center of this space is a virtual binaural microphone that can hear everything in the space, but just like the real world, it uses distance to gauge how loud something is. When the Orph is close to the center (and the mic) it gets very loud, and when it’s out by the outer edge of the icosahedron you can’t hear it at all.

I can bring in dozens of Orphs, each with their own audio track, and place them anywhere I want inside this space: above, below, left, right, behind, and at any distance. At any given moment, if I want to hear something louder, I bring it closer to me, and if I want it quieter, I push it away from me. Instead of perceiving the sound from these Orphs in stereo, the binaural mic emulates how humans hear using HRTF ~ a mathematical approximation of how humans perceive sound from all around us. So as I move the Orphs around in space, instead of getting a stereo mix, I get a spatial audio mix with the various musical elements coming from all around the listener. The resulting mix still comes out as 2-channel binaural stereo, but the various elements have been HRTF-processed, so if you wear headphones, the mix is perceived as a richer, more organic, more immersive listening experience! AFAIK, no one has ever created such a unique tool to be part of the musical process, and the results are equally unique.
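The core distance-to-loudness idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual (m)ORPH code: it assumes a simple linear rolloff where an Orph is at full volume at the central mic and silent at the icosahedron wall (the function name, the linear curve, and the 10 m radius used as a boundary are my assumptions for the sketch).

```python
import math

# Boundary of the listening space: the ~10 m icosahedron mentioned above,
# treated here simply as a maximum audible distance.
WALL_RADIUS = 10.0

def orph_gain(position, listener=(0.0, 0.0, 0.0), max_dist=WALL_RADIUS):
    """Linear rolloff: gain 1.0 at the listener, 0.0 at the outer wall.

    `position` and `listener` are (x, y, z) points in meters.
    """
    d = math.dist(position, listener)
    return max(0.0, 1.0 - d / max_dist)

# An Orph halfway to the wall plays at half gain; one at the wall is silent.
print(orph_gain((5.0, 0.0, 0.0)))   # 0.5
print(orph_gain((10.0, 0.0, 0.0)))  # 0.0
```

A real engine would typically use a logarithmic or inverse-square curve and feed the per-source signal through an HRTF spatializer rather than a plain gain, but the principle ~ loudness as a function of distance rather than a fader ~ is the same.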
So this 3D VR environment helps change up the mix process in a pretty dramatic way, but I was not content to just stop there, since the capabilities of a VR environment meant I could implement additional features to shake up the sterile mixing process. I wanted to find ways to bring unexpected interactions and happy accidents into the process as well. Since the Orphs can be placed anywhere in the space, it also means that I can imbue them with momentum so they float around the space of their own accord. When they start moving, the mix is constantly evolving, with Orphs getting closer to or farther from the central mic, in all directions and at differing speeds. Like billiard balls, when Orphs reach the outer walls of the icosahedron they bounce off (never losing momentum) and head in a new direction. The same happens when Orphs hit each other: they bounce off and head in new directions. The result is a mix that is constantly evolving, even with no input from me. I could just sit back and let the Orphs float around, creating unexpected musical interactions with each other. I do have control over the Orphs, though, and can help guide the arc of the mix. I can point at an Orph, grab it, and move it to any location, and I can also take away its momentum (so it just hovers in one place) or give it additional momentum so it floats around more quickly. With this, I have enough control over the Orphs, and therefore the overall mix, to guide things along in real-time, much like a conductor would with an orchestra.
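The billiard-ball motion described above boils down to a simple physics step: move each Orph along its velocity, and on hitting the boundary, reflect the velocity about the wall normal so speed (and therefore momentum) is preserved. The toy sketch below is my own illustration, not the project's Unity code, and it approximates the icosahedron by its bounding sphere for simplicity.

```python
import math

RADIUS = 10.0  # approximate the icosahedron wall as a 10 m sphere

def step(pos, vel, dt=0.02):
    """Advance one Orph by dt seconds; bounce elastically off the wall.

    `pos` and `vel` are [x, y, z] lists. On impact, the velocity is
    reflected about the outward wall normal (v' = v - 2(v.n)n), so the
    Orph "never loses momentum", just changes direction.
    """
    pos = [p + v * dt for p, v in zip(pos, vel)]
    r = math.sqrt(sum(p * p for p in pos))
    if r >= RADIUS:
        n = [p / r for p in pos]                         # outward wall normal
        dot = sum(v * c for v, c in zip(vel, n))
        vel = [v - 2.0 * dot * c for v, c in zip(vel, n)]  # elastic reflection
        pos = [c * RADIUS for c in n]                    # clamp onto the wall
    return pos, vel
```

Orph-on-Orph collisions would reflect each body about the line between their centers in the same elastic manner; a game engine's rigidbody physics (as in Unity) handles both cases for free once restitution is set to 1.
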
There are other features I’ll be adding to the process as well, but for now, this functionality is enough for me to start creating mixes inside the (m)ORPH environment. This mixing process is so different from normal mixing that it affects the way I compose music from the outset. I’ve tried bringing in my more traditional music (synthpop with verses, choruses, builds and tear-downs), but it does not translate well. I don’t have enough control over the various instrumentation to create a mix that flows properly over a short period of time. What I have found is that bringing in more experimental musical elements, especially ones with non-conforming loop points and not slaved to a BPM, means I can create an organic mix that slowly evolves and takes the listener on a journey that traditional mixing cannot. This is where the mixing process shakes up the earlier composing process in fundamentally impactful ways.
Thanks for reading and for listening!