Spotify's API has something called Valence, which describes the musical positiveness conveyed by a track. Tracks with high valence sound more positive (happy, cheerful, euphoric), while tracks with low valence sound more negative (sad, depressed, angry).
I was wondering if someone could elaborate on this topic. I visited both links, and neither of them says specifically how valence is measured, other than by "advanced computer learning." How is Spotify defining valence? Minor/major key? Lyrical content? Any information you can provide would be greatly appreciated.
My guess is that they run a pretrained model on the audio samples. There are datasets that include raw audio along with human-labeled valence/arousal values, so it is possible to train a model, most likely a (deep) neural network. A rough sketch of that approach is below.
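For illustration only, here is a minimal sketch of that idea, not Spotify's actual pipeline. It assumes a hypothetical `labels.csv` (filename plus a valence annotation in [0, 1], e.g. from a dataset like DEAM) and an `audio/` directory of clips, and uses MFCC summary statistics with a small feed-forward regressor as a lightweight stand-in for whatever deep model they really use.

```python
# Sketch: learning to predict valence from audio, assuming a labeled dataset
# (clips + human-annotated valence). Paths and the CSV layout are hypothetical.
import numpy as np
import pandas as pd
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

def extract_features(path, sr=22050, duration=30.0):
    """Summarize a clip as mean/std of MFCCs -- a common lightweight baseline."""
    y, sr = librosa.load(path, sr=sr, duration=duration, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# labels.csv is assumed to have columns: filename, valence (0..1)
labels = pd.read_csv("labels.csv")
X = np.stack([extract_features(f"audio/{name}") for name in labels["filename"]])
y = labels["valence"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A small feed-forward net; a production system would more likely train a deep
# network on spectrograms or raw audio rather than on summary statistics.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

pred = np.clip(model.predict(X_test), 0.0, 1.0)  # valence is reported in [0, 1]
print("MAE on held-out clips:", mean_absolute_error(y_test, pred))
```

Once trained, such a model can be run over any new track to assign a valence score automatically, which is consistent with the vague "advanced computer learning" description.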