Issue 236, published November 26, 2019

The Algorithm of You

Not long after the newsletter went out last Tuesday, Google announced the roll-out of something called “Your News Update,” a new feature attached to the Google Assistant that’s meant to give users a more personalized experience when they ask for news through the voice assistant technology.

The feature is premised on serving users playlists of short-form audio news missives from a variety of sources, algorithmically curated around a cluster of topics of possible-to-probable interest to the specific user. As you'd expect, that personalization would be the product of machine learning oriented around the user, which means it should get better the more it's used. (In theory. More on that later.)
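For a concrete (if cartoonishly simplified) picture of what "gets better the more it's used" could mean in practice, here's a toy sketch of a per-user interest profile that nudges topic weights up when a story is finished and down when it's skipped. The class, the topic labels, and the update rule are all my own invention, not anything Google has described:

```python
from collections import defaultdict

class InterestProfile:
    """Toy per-user topic profile that sharpens with use (purely illustrative)."""

    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.weights: dict[str, float] = defaultdict(float)

    def record_listen(self, topics: list[str], completed: bool) -> None:
        # Nudge topic weights toward +1 when a story is finished, toward -0.5 when skipped.
        signal = 1.0 if completed else -0.5
        for topic in topics:
            w = self.weights[topic]
            self.weights[topic] = w + self.learning_rate * (signal - w)

    def score(self, topics: list[str]) -> float:
        # Average affinity across a story's topics; used to order the playlist.
        return sum(self.weights[t] for t in topics) / max(len(topics), 1)


profile = InterestProfile()
profile.record_listen(["steelers", "nfl"], completed=True)
profile.record_listen(["steelers", "afc-north"], completed=True)
profile.record_listen(["celebrity-gossip"], completed=False)
print(profile.score(["steelers", "nfl"]))  # roughly 0.145 -- rises with each completed listen
```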

Here’s how the announcement post describes the intended experience:

When you say, “Hey Google, play me the news” on any Assistant-enabled phone or smart speaker, Your News Update will begin with a mix of short news stories chosen in that moment based on your interests, location, user history and preferences, as well as the top news stories out there.

If you’re a Steelers fan who follows the stock market and lives in Chicago, for example, you might hear a story about the latest “L” construction, an analysis of last Thursday’s Steelers game and a market update, in addition to the latest national headlines. Keep listening and the experience will extend into longer-form content that dives deeper on your interests. In between stories, the Google Assistant serves as your smart news host that introduces which publishers and updates are next.

All this is meant to improve upon the standard audio news experience you would currently get on smart speakers — whether it's an Amazon Echo or Apple HomePod or Sonos One or whatever — in which the call-to-action for a news update triggers a steady trickle of news briefings crafted by a range of news organizations you, as the user, typically have to curate beforehand. (That is, if you, the user, didn't ask for a straightforward digital radio stream.)

When I checked in with the team behind the feature last week, part of the narrative given to me focused on how the existing smart speaker news briefing experience tended to be undermined by a few recurring frictions. Chief among them: when moving between news providers in the flow of the current briefing experience, users were often served the same stories by different organizations. By letting Google assume more control over the curation frontier, the belief goes, that redundancy would become less common.
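To make the redundancy problem concrete, here's a crude sketch of the sort of deduplication a central curator could apply before assembling a briefing, judging two items to be the same story when their headlines share enough words. The data shape, similarity measure, and threshold are assumptions for illustration, not Google's actual approach:

```python
def headline_overlap(a: str, b: str) -> float:
    """Share of unique words two headlines have in common (Jaccard similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def deduplicate(stories: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only the first story seen for each apparent underlying event."""
    kept: list[dict] = []
    for story in stories:
        if not any(headline_overlap(story["headline"], k["headline"]) >= threshold
                   for k in kept):
            kept.append(story)
    return kept


briefing = [
    {"publisher": "Outlet A", "headline": "Fed holds interest rates steady"},
    {"publisher": "Outlet B", "headline": "Fed holds rates steady for now"},
    {"publisher": "Outlet C", "headline": "Steelers clinch playoff spot"},
]
print([s["publisher"] for s in deduplicate(briefing)])  # ['Outlet A', 'Outlet C']
```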

The initial rollout of Your News Update expressly focuses on a collection of predetermined media organizations, with which Google has already struck up working relationships. Liz Gannes, the company’s product manager for audio news, described this initial group as “known news organizations,” most of which are already broadcasters in some form or another, the assumption being that experienced broadcasters would likely be capable of creating quality audio content for the feature that would set a decent starting standard.

You can find a partial list of participating publishers in the announcement post, which includes a mix of national US news brands (WaPo, CNN, Fox News), non-US publishers (Evening Standard), subject-specific pubs (The Hollywood Reporter, Billboard), local media orgs (WNYC, KUOW), and broadly expected miscellanea (PRX, NBA). The full roster, I’m told, is about fifty publishers.

As mentioned, Google is working directly with these publishers to shape the supply of audio experiences that will populate the feature. There are some content parameters, among them: each audio story has to be under ten minutes, and each story should be able to stand alone. The reason for this, I'm told, is so that each audio story can be treated as a unit to be ranked within the framework of the underlying technology, which, true to Google's form, is built to organize the information universe around specific topics and terms.
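Read that way, each submission is essentially a self-contained record carrying just enough metadata to be slotted into a topic-indexed ranking. Here's a minimal sketch of what such a unit might look like; the field names, the eligibility check, and the example values are my own guesses, not Google's spec:

```python
from dataclasses import dataclass, field

MAX_STORY_SECONDS = 10 * 60  # "each audio story has to be under ten minutes"

@dataclass
class AudioStory:
    """One standalone, rankable unit of audio news (hypothetical field names)."""
    publisher: str
    headline: str
    audio_url: str
    duration_seconds: int
    topics: list[str] = field(default_factory=list)

    def is_eligible(self) -> bool:
        # A story only enters the ranking pool if it fits the stated constraints.
        return 0 < self.duration_seconds < MAX_STORY_SECONDS and bool(self.topics)


story = AudioStory(
    publisher="Example Radio",                   # hypothetical publisher
    headline="Transit update for the North Side",
    audio_url="https://example.com/story.mp3",   # placeholder URL
    duration_seconds=185,
    topics=["chicago", "transit"],
)
print(story.is_eligible())  # True
```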

The content is licensed non-exclusively, which means that the publisher can ultimately use the content elsewhere. This stance is further supported by the delivery mechanism, which involves distribution through open RSS feeds. In case you’re wondering, Google is indeed paying publishers for their participation in the program. There are plans to expand the program internationally at some point next year.
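Open RSS with audio enclosures is the same well-worn plumbing podcasts run on, which is part of what keeps the arrangement portable for publishers. As an illustrative sketch, here's how any client could pull story titles and audio URLs out of such a feed; the feed contents below are invented:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS feed of the kind a publisher might expose: each <item>
# carries its audio file as an <enclosure>, so any client can pick the stories up.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Audio Briefing</title>
    <item>
      <title>Morning market update</title>
      <enclosure url="https://example.com/market.mp3" type="audio/mpeg" length="2100000"/>
    </item>
    <item>
      <title>Local transit news</title>
      <enclosure url="https://example.com/transit.mp3" type="audio/mpeg" length="1800000"/>
    </item>
  </channel>
</rss>"""

def audio_items(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, audio_url) pairs for every item that has an enclosure."""
    root = ET.fromstring(feed_xml)
    results = []
    for item in root.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is not None:
            results.append((item.findtext("title", default=""), enclosure.get("url")))
    return results

print(audio_items(SAMPLE_FEED))
```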

At one point during our conversation, I asked about the big picture. What's the broader concept driving this feature? Much like the announcement post, Gannes started by framing the narrative as the continuation of a certain interpretation of the early web:

When newspapers were first experimenting with the web about twenty years ago, the sites were fairly basic in nature. There were no ways to search them, there was no linking between them, and articles were typically posted a day after printed. There wasn’t a kind of rich understanding of what was happening within the stories. It’s almost like just posting PDFs online.

Our contention was that, if we could break open the mp3 and understand the audio story, we could create a more intelligent, timely, and personalized audio experience for the user.

The playlist is really just the first expression [of this technology]. It's not like the outcome of the text web was only news aggregators. That was just one of the things that was enabled by a better text understanding. The playlist is just a starting place for better audio understanding.

Which is to say, Your News Update, as a feature, seems like something of a stepping stone. Gannes noted that there's some consideration for the underlying technology to eventually power features and products outside the Google Assistant environment — perhaps even on the Google Podcasts app.

On a hunch, I followed up by asking if there was any roadmap to opening up the supply pipeline to include non-professional media organizations. Is it possible that we’d eventually see the Your News Update feature expand the pipeline to include, say, “audio bloggers” of some sort?

Gannes replied:

We do have some hope to set the feature up to expand benefits of an open platform, where it’s able to democratize the opportunity for voices that didn’t have a space before. But we want to do that in a careful way. We’re trying to first establish the platform with these known voices from the beginning, because we recognize that you have to build from a place of quality if you want someone to tune in.

But we do see a lot of opportunities, and hope to grow it to be more open.

Stepping stone, indeed.

Two more things on my end to wrap up this thread:

(1) So, I get that the feature is being framed as an incremental evolution of the standard audio news experience that’s been available on smart speakers for a while now; an attempt to sharpen the ability of Google’s voice assistant to function as a dispensary of information indexed on the internet. But the more interesting reading, for me, is to see Google’s shaping of how publishers contribute content to the feature as part of a broader training initiative to establish a more polished supply chain for a Google-facilitated audio web. (This was hinted at in Dieter Bohn’s write-up at The Verge.)

(2) At the risk of sounding like the grumpy Luddite I'm often described as being, I'll just say I harbor some reservations about a default audio news context in which the stream of news and information is purely facilitated by algorithmic curation. I dunno, I like making choices on my own terms, even if it means a fair bit of friction. That said, I'm not exactly the most typical of human beings, and then again, perhaps I'd feel differently if I weren't so interested in the news… or if I had the kind of life where I'd rather spend my energy elsewhere. Hm.