Carlos Tabachnik asks some very fundamental questions:
I have wondered about these things, as, I am sure, have most music lovers. I
have no authoritative answer, but I did recently learn that the frequency
spectra of speech sounds determine the emotions perceived by listeners,
even when the listeners are young children. Is it possible that some of
the responses to speech are either innate (that is, hard-wired) or very
quickly learned and incorporated into neural networks? And is it further
possible that these responses carry over in a generalized way to music?
Pure speculation, of course.
Professor Bernard Chasan
Physics Department, Boston University