To return to the discussion I started earlier, with a bit more detail:
modern recording equipment and mixing studios have changed music in far
more profound ways. Not only is the performance space split from the
listening space, and not only is time split, but in many recordings there
is no actual space at all. Each microphone, corresponding to an
instrument or group of instruments, can have its own reverberation.
Hence, one performer can be in a big space while another is in a small
one, and that can change during the performance. Some spatial attributes
may correspond to a large space while others correspond to a small one.
Spatiality becomes an artistic element controlled by the mixing engineer
long after the performers have gone home.
Even if the recording engineer were a purist who wanted to capture the
perfect concert hall acoustics, it would not be possible to record them,
for theoretical reasons. The role of the concert hall is twofold:
temporal spreading (reverberation that extends the duration of notes)
and spatial spreading (enveloping reverberation that acts like aural
caffeine). While the former is important and can be recorded, the latter
is determined only by the listener's configuration.
Musical spatiality, the experience of spatial acoustics, has always been
an artistic element, but until the late 20th century, concert halls,
churches, and cathedrals could not be changed without great effort.
Hence, we "think" of them as static. The late 20th century removed the
constraint that spatiality obey the laws of sound physics in real
environments. Welcome to the new world.
The full language of musical spatiality, spanning from antiquity to
ultra-modern popular music, is explored in Chapters 4 and 5 of my book,
"Spaces Speak, Are You Listening? Experiencing Aural Architecture,"
which has just been released and is available from the major online
booksellers. My discussion establishes a foundation language, but no
doubt it will be extended as others begin to contribute.