An AI-entrusted future must have a stringent privacy & data protection law

On May 8, Google CEO Sundar Pichai presented Duplex, an artificial intelligence (AI) assistant, at the annual Google I/O conference in California. On stage he played a recording of Duplex booking a hair salon appointment over the phone, complete with the inflections and pauses of human conversation; it was nearly impossible to tell whether the voice belonged to a human or a machine. It was impressive, to say the least, and a reminder that this future is already very close.

The dizzying speed with which AI is permeating various aspects of human life and interaction has been accompanied by a crucial conversation about its ethical boundaries. The recent incident involving an Oregon family, in which Amazon's virtual assistant Alexa accidentally recorded an audio clip of the family's conversation and sent it to one of their contacts without being asked, is a clear testimony to the amplified threats to privacy and security that are by-products of progress in AI and technology.

Companies, for their part, are obliged to report data breaches within 72 hours, and the penalties for violations are steep: up to 10 million euros or 2% of annual worldwide turnover for less severe violations, and up to 20 million euros or 4% of annual worldwide turnover for the most serious ones, whichever amount is higher in each case.
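To illustrate how those ceilings scale with company size, here is a minimal sketch in Python (not part of the original article; the function name and the turnover figure are hypothetical) of the "higher of a flat cap or a percentage of turnover" rule:

def gdpr_fine_ceiling(annual_turnover_eur, severe):
    # Lower tier: up to 10 million euros or 2% of worldwide annual turnover;
    # upper tier: up to 20 million euros or 4%. The higher amount applies.
    flat_cap = 20_000_000 if severe else 10_000_000
    turnover_cap = (0.04 if severe else 0.02) * annual_turnover_eur
    return max(flat_cap, turnover_cap)

# For a company with 2 billion euros in annual turnover, the ceiling for a
# severe violation is 4% of turnover (80 million euros), far above the flat cap.
print(gdpr_fine_ceiling(2_000_000_000, severe=True))   # 80000000.0
print(gdpr_fine_ceiling(2_000_000_000, severe=False))  # 40000000.0

In other words, for large companies the percentage-based cap, not the flat cap, is what determines the maximum exposure.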
June 07, 2018