The Blueprint for an AI Bill of Rights is good news – a set of guidelines aimed at protecting the American public from burgeoning technologies that use artificial intelligence (AI). Here’s my take on how its five principles could affect one of the biggest AI-driven services of all, Google Ads.
“Safe and Effective Systems” – Google has hubris, and will resist any external input into how safe its systems are.
“Algorithmic Discrimination Protections” – this only covers what is already enshrined in law as discrimination, such as race and gender. Google has always been keen not to fail on this one.
“Data Privacy” – again, this is unlikely to be an issue in the US. Google does, however, have problems with its global network of servers storing data from countries that object to that practice.
“Notice and Explanation” – this principle argues that users should know when a company is using an automated system in the first place, and that companies should provide “generally accessible plain language documentation” with “clear descriptions” of how the system functions.
This is a big problem for Google. While it can cite commercial sensitivity for many things, it will struggle to explain simply how its machine learning does what it does. Already, support staff cannot explain decisions that affect an account. However, the guidelines also say explanations should be “calibrated to the level of risk based on the context,” and Google could argue that there is no risk, and therefore nothing needs explaining. And it would have a point: results can be judged daily, and the service discontinued if they are not satisfactory.
“Human Alternatives, Consideration, and Fallback” – this is the biggie.
“You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you,” the blueprint says. “Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.”
Google Ads suspends advertisers based on risk profiles created by machine learning. Such suspensions, from the dominant search advertising platform, can destroy livelihoods.
Google suspends accounts that look and feel like those of bad advertisers – without proof, and without much in the way of genuine recourse.
Google Ads is trending towards doing without humans altogether, and this is exactly the type of law we need to stop such things from happening. We cannot have a world where “the computer decided” is the final word.