Written by Michael Masukawa • Photography by Christian Vierig, Getty Images • December 7, 2016
Google, Spotify, and Pandora… are learning!
Music recommendations from our streaming apps have become so spot-on that they seem more like gifts from someone who knows us well—or perhaps from our future selves—than machine-generated selections based on an array of data points.
What do we have to thank for this? There are several factors, but among the most important is the development of “deep learning” algorithms, which are a far cry from the rudimentary, iceberg-cold recommendation engines that bored listeners stiff for much of the 2000s. The new algorithms are impressionable, warm, and responsive, and might even know your musical preferences better than you do.
But how does “deep learning” change listeners’ indifference into spine-tingling appreciation? What’s so different from before?
The answer lies in the ability of these algorithms to understand not only “related artists” but also chord progressions, tempo, and even volume. The algorithms then combine this knowledge with data collected from thousands of user playlists to determine just what you’ll want to hear next.
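To make the playlist half of that concrete, here’s a toy sketch—plain Python, with made-up playlists, not any streaming service’s actual system—of how co-occurrence in user playlists can surface “related artists.” Two artists who keep showing up in the same playlists end up with similar vectors, and cosine similarity ranks them as related:

```python
# Toy "related artists" from playlist co-occurrence (illustrative only).
# Each artist becomes a vector with one dimension per playlist.
playlists = [
    {"Radiohead", "Portishead", "Massive Attack"},
    {"Radiohead", "Portishead", "Björk"},
    {"Taylor Swift", "Carly Rae Jepsen"},
]

artists = sorted({a for p in playlists for a in p})

def vector(artist):
    # 1.0 in each playlist the artist appears in, 0.0 otherwise
    return [1.0 if artist in p else 0.0 for p in playlists]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) * sum(b * b for b in v)) ** 0.5
    return dot / norm if norm else 0.0

def most_similar(artist):
    # The other artist whose playlist "footprint" lines up best
    return max((a for a in artists if a != artist),
               key=lambda a: cosine(vector(artist), vector(a)))

print(most_similar("Radiohead"))  # Portishead appears in both Radiohead playlists
```

Real systems operate on millions of playlists and use far richer models, but the core signal—who gets played alongside whom—is the same.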
Naturally, the online music giants want you to stick around, and pumping money and expertise into their music recommendations helps each of them bolster a key marketable feature. And if you aren’t convinced of the scale of this endeavor, look no further. (Warning: phrases such as “mel-spectrograms,” “time-frequency representation,” and “convolutional layers” may cause dizziness.)
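If you’re curious what a “mel-spectrogram” actually is, here’s a rough, self-contained sketch in Python with NumPy—an illustration of the general technique, not any company’s pipeline. It slices audio into short windows, takes each window’s frequency content (the “time-frequency representation”), and then pools frequencies into bands spaced the way human hearing perceives pitch (the mel scale). The resulting image-like grid is what convolutional layers are typically fed:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel (perceptual pitch) scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising slope
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):         # falling slope
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

# Synthetic one-second 440 Hz tone standing in for real audio
sr, n_fft, hop, n_mels = 22050, 1024, 512, 40
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t)

# Short-time Fourier transform: windowed frames -> power spectra
frames = [y[i:i + n_fft] * np.hanning(n_fft)
          for i in range(0, len(y) - n_fft, hop)]
power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

# Project onto mel bands and convert to decibels
mel_spec = power @ mel_filterbank(sr, n_fft, n_mels).T
mel_db = 10.0 * np.log10(np.maximum(mel_spec, 1e-10))
print(mel_db.shape)  # one row per time frame, one column per mel band
```

Production systems use optimized audio libraries for this step, but the shape of the idea—audio in, a perceptually scaled time-frequency grid out—is exactly what those dizzying phrases describe.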
This technology can do all kinds of things. More than just recommending good music, it’s on the verge of making good music. Well, kind of. But one day it might actually be true! In the meantime, it’s nice to know that our apps are trying really hard to please us. If you’d like to know more about the incredible things this kind of programming can do, take a look here or here.