Amazon is rolling out an option for listeners to get more in-depth news briefings via their smart speakers. Rather than just playing headlines from your chosen sources, Alexa users in the US can now get a rolling broadcast of stories from Bloomberg, CNBC, CNN, Fox News, Newsy and NPR (plus video from CNBC and Newsy if you have a smart speaker with a screen).
This seems like a logical extension of the offering, even an overdue one. It does pose an interesting question for the (so far limited number of) providers, though — how is this different from making audio for the radio, and therefore how should the content be chosen and tailored? Users can ask Alexa to skip stories they aren’t interested in, but otherwise the news will keep playing until they tell it to stop.
NPR seems to agree that it’s a big step forward, and has been gearing up accordingly. (They are, arguably, best positioned to serve this kind of rolling long-form audio news, given the rest of their output.) In a message to member stations, NPR explained:
“Now, new users who have not had an Alexa account previously will simply have to say, ‘Alexa, play the news.’ Alexa will then ask, ‘Where should I get your news from?’ If the user replies, ‘NPR,’ Alexa will ask for a zip code and confirm a local Member station, which will then be linked to an NPR One-like flow of news.”
As Nieman Lab’s Joshua Benton has pointed out, this feels a lot like smart speakers stepping into the gap left by the fall in radio usage by younger people. Without actually owning a device called a “radio,” you can now get a radio-like service from your smart speaker, and discover the serendipitous possibilities of audio that just keeps playing in the background without the need to subscribe, choose, download or press play.
Nick’s Note: At the end of the day, what matters more: the vessel or the content? Are our worries about “radio decline” more a statement about the distribution structure or the distributed material? This is, of course, a false binary… until it’s not.
Anyway, you might have already heard about this Bloomberg report: “Amazon Workers Are Listening to What You Tell Alexa”… in which the crux of the story is how Amazon depends on actual human beings listening in on some portions of recorded material to improve the platform’s machine learning processes and speech recognition systems. Sure, it sounds creepy, and it probes our uneasy, uncertain, under-explored relationship to the limits of what we allow in terms of privacy.
But I’m also partial towards Stratechery’s Ben Thompson going all “oh wait, let’s hold off on the freaking out for a second” in a (paywalled) Daily Update earlier this week, pointing to the fact that all of that listening happens with materials that are (said to be) stripped of identifiable information, and that this model of improving speech recognition is fairly commonplace. Not that any of this makes the creepiness and potential privacy issues any less potent or worth discussing; the discussion should just center more on the fact that we’ve given up a lot by introducing these devices into our homes in the first place.