We’ve written about this before: the music industry may be excited about the growth in sales of smart speakers, but privacy campaigners have their concerns about this device category.
Those concerns are relevant to our industry too: so here’s news of a new experiment conducted by Security Research Labs that’s making headlines. It developed skills for Amazon’s Alexa and Google Assistant to test two ‘possible hacking scenarios… that allow a hacker to phish for sensitive information and eavesdrop on users”.
The details are quite technical, but are outlined in the company’s blog post. “The privacy implications of an internet-connected microphone listening in to what you say are further reaching than previously understood,” claimed SRL. “Users need to be more aware of the potential of malicious voice apps that abuse their smart speakers. Using a new voice app should be approached with a similar level of caution as installing a new app on your smartphone.”
The vulnerabilities have been shared with Amazon and Google as part of SRL’s process. Ars Technica reports that both companies have removed these skills, but are also ‘changing their approval processes to prevent skills and actions from having similar capabilities in the future’.