



Much as we expect to see fresh produce on grocery-store shelves year-round, we, as consumers of music, have come to expect consistently high-quality performances from musicians. Adrian DiMatteo looks at the tools, like Auto-Tune, used to meet our demands and uphold these sonic beauty standards.
Music by Adrian DiMatteo - I Believe in Miracles
Musical taste is subjective. Each generation selects its iconic hits while styles pass in and out of fashion. From one culture to the next, popular trends vary widely based on local musical traditions, instrumentation, lyrical themes and rhythmic characteristics. For most of human history, music could only be experienced live. Players and listeners were engaged in direct interaction. Music lived in the streets, homes, religious buildings, theaters, and the public and private institutions of society. Modern recording technology has rapidly and dramatically altered this sonic landscape, and we’re just beginning to recognize the unforeseen impacts of digital, multimedia experiences on artistic aesthetics and audience attention spans.
With a virtual library of millions of artists from all corners of the globe, humanity has access to an unprecedented breadth of music, resulting in a cross-pollination of genres that is stimulating a musical free-for-all. Consider the experimental East-meets-West approach of The Beatles on “Within You Without You,” Miles Davis’ jazz-rock fusion “Bitches Brew,” or Rihanna’s Afrobeat-inspired “Don’t Stop the Music.”
Before the advent of modern recording technology, music had to travel through direct cultural contact, gradually spreading from one region to the next through a process of dissemination. Of course, music did evolve extensively this way. Jazz was born from African, Caribbean, Native American and European influences converging on the American continent. Romani culture migrated throughout Europe, giving rise to various musical traditions. Classical composers mutually influenced one another, as in the case of French composer Maurice Ravel’s “Boléro,” written for a Russian client and inspired by Spanish music. Even instruments themselves evolved as they came in contact with each other. Consider the modern guitar, which can be traced from the lute of ancient Mesopotamia, to the Middle Eastern oud and on through the centuries.
Today, electricity has given birth to what might be the most complex instrument ever invented: the computer. Microphones convert mechanical air-pressure waves (sound waves) into electrical currents. This signal can then be manipulated in a DAW (digital audio workstation) such as GarageBand, Ableton Live, Logic or Pro Tools. Advanced, inexpensive recording technology has democratized music production, while the internet has made viral popularity possible for anyone with access.
However, digitizing music has two immediate drawbacks. First, microphones capture only a limited bandwidth (approximately 20 Hz to 20,000 Hz), with biases toward certain frequencies depending on the equipment. Second, music is recorded as a stream of ‘sonic snapshots’ taken at a fixed sample rate, much as video creates the illusion of smooth motion by sequencing still images at a rapid frame rate. In other words, the technology captures enough information to convincingly reproduce the illusion of unbroken sound, but it lacks the sensitivity and depth of direct human perception. Perhaps for this reason, many people still seek the intimate experience of a live performance.
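The ‘sonic snapshots’ idea can be made concrete with a toy sketch (the names and numbers here are illustrative, not from the article): a digital recorder measures the amplitude of a continuous sound wave tens of thousands of times per second, and everything between those measurements is simply discarded.

```python
import math

# Sample a 440 Hz sine tone (the pitch A4) the way a digital
# recorder would: as discrete amplitude "snapshots" per second.
SAMPLE_RATE = 44_100  # CD-quality audio: 44,100 snapshots per second

def sample_tone(freq_hz, sample_rate, duration_s):
    """Return a list of amplitude snapshots of a sine tone."""
    n_samples = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

samples = sample_tone(440.0, SAMPLE_RATE, duration_s=0.01)
# One hundredth of a second of sound becomes 441 numbers;
# the continuous wave between snapshots is lost.
print(len(samples))  # 441
```

Frame rate works the same way for video: enough snapshots per second and the gaps become imperceptible, but the gaps are still there.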

Sound engineers do a lot to filter and process the sound captured by microphones. They use equalizers to cut and boost frequencies, apply effects such as reverb to create the illusion of spacious halls, and modify the human voice through a process known as pitch correction (popularly called Auto-Tune). Rhythmic ‘imperfections’ can also be smoothed out through quantizing, which aligns rhythms to a strict mathematical grid. By the time a final product comes out of the computer and back through a listener’s headphones or speakers, the music is far from the raw signal that went in.
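At its core, quantizing is a simple rounding operation. A minimal sketch (function names are my own, not an actual DAW API): snap each note’s onset time to the nearest line of a rhythmic grid derived from the tempo.

```python
def quantize(onsets_s, bpm=120, subdivision=4):
    """Snap note-onset times (in seconds) to the nearest grid line.

    subdivision=4 means the grid is sixteenth notes at the given tempo.
    """
    beat_s = 60.0 / bpm            # length of one quarter note in seconds
    grid_s = beat_s / subdivision  # grid spacing (here: a sixteenth note)
    return [round(t / grid_s) * grid_s for t in onsets_s]

# A slightly "human" performance: at 120 BPM the sixteenth-note grid
# falls every 0.125 s, but the player drifts a little off it.
played = [0.0, 0.13, 0.26, 0.36]
print(quantize(played))  # [0.0, 0.125, 0.25, 0.375]
```

Real DAWs offer partial quantizing (moving notes only some percentage of the way toward the grid) precisely because this full snap-to-grid version can sound mechanical.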

Think of it as ‘sonic Photoshop.’ Photographs are doctored, colorized and modified in many ways, which can result in a distorted perception of beauty. The New York Daily News quotes Barbara McAneny, M.D., of the American Medical Association’s board:
"In one image, a model's waist was slimmed so severely, her head appeared to be wider than her waist...We must stop exposing impressionable children and teenagers to advertisements portraying models with body types only attainable with the help of photo editing software."
Sound engineers do to sound what graphic designers do to images. The result is that less technique, or even ‘raw talent,’ is necessary on the part of the performer to produce music. It also creates unrealistic expectations in the listener of what a human should sound like. Traditionally, singers actually had to sing the melodies they performed. Today, pitch-correction technology can be used live, meaning performers increasingly lean on computers to augment their vocal reality. Some celebrities have resorted to lip-syncing: pretending to sing while a pre-recorded track plays through the speakers. Audiences may react negatively on social media when an artist misrepresents a lip-synced performance as live singing. Jay-Z even produced a song, “D.O.A. (Death of Auto-Tune),” criticizing auto-tune culture.
Celebrity performers are under immense pressure today. Vigorous choreography, elaborate costume changes, enormous venues, hyper-critical fans and grueling tour schedules all contribute to a cut-throat workplace that demands perfection with no rest for the weary. From that perspective, it makes sense that lip-syncing and pitch correction might be used to deliver a more consistent product. After all, the agricultural industry uses high-tech solutions to homogenize fruits and vegetables for consumer taste. Shouldn’t a drummer be able to play along with a click track through in-ear monitors to regulate the beat?
But it doesn’t stop there. FOX announced a new singing contest, “Alter Ego,” in which the singers themselves do not appear on stage; a holographic avatar takes their place. Hatsune Miku is an entirely digital, holographic pop sensation in Japan, and projections of the deceased icons Tupac and Michael Jackson have ‘performed’ live at Coachella and the Billboard Music Awards, respectively.
Ultimately, audiences decide what to support. At the same time, the music industry produces and promotes what the mainstream consumes. This inevitably influences younger generations of artists. Those seeking mainstream appeal generally conform to the unwritten standards established by industry leaders. Many young musicians dismiss themselves because they lack the sex-appeal, dance skills, fashion sense or extroverted personalities of the superstars they see on screen.
But what does any of that have to do with being a musician? In many ways, being a pop star has become conflated with being a musician, although the two are not the same. It is hard to define music, let alone assert any objective standard of what constitutes ‘good music.’ This moment in global pop culture invites us to ask ourselves, ‘What is a musician?’ If we’re comparing pop stars to musicians, we might as well compare a Swiss Army knife to a drill. They’re designed for different purposes. Some musicians are specialists, while others explore a diverse range of styles and collaborations, producing music for film and television, sound effects and so on.
Technology has often redefined how humanity works with sound. Whether we’re talking about rudimentary instruments such as a drum or rattle, or significantly more complex instruments like a violin or saxophone, technological advances are reflected in the art-making process. Most recently, electricity and digital music-making have pushed the limits of human expression to new frontiers. While this expands the musician’s palette, it also presents the potential to entirely digitize the role of the artist. Could trained instrumentalists, singers and composers be replaced by digital facsimiles?