If the last few years of digital controversies have taught us anything, it’s that we probably shouldn’t be clicking our way through privacy-policy screens when signing up to new services without reading them. But reading privacy policies can be… a chore. Cue a startup promising that AI can help.

The startup is called Guard, and it has developed “an AI that reads policies for you”. It’s enlisting humans to help train that AI: “We show you two privacy dilemmas, and you judge which one you think is more privacy friendly. This data can help us understand privacy better and build an AI that understands privacy just as we do,” explains its website, as it invites people to help its AI learn.

To show what it’s capable of, Guard has already put its AI to work on various big digital services’ privacy policies, including a couple in our field: Spotify and YouTube. Spotify gets a 32% rating and a grade ‘C’, with zero scandals but 24 ‘threats’, while YouTube gets 37% and a ‘C’, with zero scandals but 39 ‘threats’.

Guard reckons that the main privacy threats on Spotify are that users’ data might be sold; that it might be handed to advertisers; that it can be kept indefinitely by Spotify and never deleted; and that “Spotify might use your data in ways you don’t intend them to do”. YouTube gets exactly the same four. It’s an interesting idea, although we daresay the services covered may have views on the fairness of the AI’s verdicts.
