iCalvin.org

Google I/O ‘18 and Duplex

Google Duplex will call salons and restaurants and pretend to be human for you

What really fascinates/infuriates me about the tech industry's relationship with Google is that for some reason so many people are still giving them the benefit of the doubt despite years of untrustworthy behavior. Yesterday's fascinating and terrifying demo of the Google Assistant impersonating a human to make a reservation at a salon is a great example. It demonstrates incredible technology, with capabilities far beyond the obvious implications, that seems to have been developed from the ground up without regard for ethical considerations or for the harm it might cause.

I took a while yesterday to mull over my feelings about this particular demo. Once I saw where they were going with it I was very excited. Phone calls suck, and it would be amazing for my digital assistant to take over that painful chore for me. Right now I'm avoiding calling my dentist to reschedule an appointment made for WWDC week; I would love it if I could just have Siri make the call for me. However, while I was expressing concerns that Google is collecting data and voice recordings from the employees of these businesses without being clear about who is making these requests, and even going out of their way to deceive the employee, a co-worker asked me, "But what's the harm?" Their argument: it makes things easier for the user of the Google Assistant, it makes things better for the business by driving in customers who might have avoided the appointment if forced to make a phone call, and it's good for Google because it creates value for the digital assistant.

The idea that there is no clear harm here is incredibly naive, and that naiveté is exactly the benefit of the doubt that people seem to have collectively decided to withhold from companies like Facebook, yet still extend to Google, despite the two sharing the same business model. In many ways the exact same argument could have been made about Facebook's consolidation of feeds and content creators onto the Facebook platform. By encouraging content creators to share on Facebook, they painted a win-win-win scenario. Creators win by distributing their content to a wide range of users and encouraging follows and repeat consumers through a much more intuitive platform than RSS; Facebook wins by gaining more hosted content to distribute to users; and users win by being able to catch up on a larger portion of their regular internet content from a single entry point. This is how Facebook and 'The Internet' have become synonymous in so many places in the world.

But now we've seen that there were parts of this 'win-win-win' scenario that Facebook was keeping in the shadows. Users lost when the data they contributed to Facebook, through sharing content, following certain creators, leaving feedback, etc., was sold to the highest bidder and used against them to drive America into a period of national disaster. Creators lost when Facebook started introducing algorithms that made it harder and harder to reach their audiences, until they were graciously given the option to buy back the screen space that Facebook had taken away from them. So it turns out that what was initially seen as a benefit to all parties has over time benefited only Facebook, who still refuses to accept their social responsibility in any meaningful way and who is seeing very few consequences as a result.

So now we turn again to Google. Like Facebook, Google is an ad company first and foremost. They offset the cost of their products and services with data collection, and they use the data collected to make sure they extract as much value from you as they can from points all over the web. They do this by downplaying their role, pretending that users know what they share with Google, and insisting that the services they provide are worth far more than the cost to user privacy that they demand. And now they've created a digital assistant that seems (in the demoed cases) to be able to pass the Turing Test. So what harm could it do? Plenty. Setting aside the unknowns (where the technology goes from here, what else it might do on a phone call, how long they will retain these recordings, whether the Assistant will be able to follow a person from work to home and harass them for information), let's just focus on yesterday.

Like Facebook's news feed, this creates a barrier between the place of business and the customer. The customer isn't any more engaged with the business as a result of this phone call; they're engaged with the Assistant. At any time Google could decide to change the rankings: "Earlier you wanted me to call 'Stacy's Nails' to book an appointment for Tuesday, but I was able to get the appointment at 'Manicures by Susan' instead." There is no visibility into why the change was made. Did Stacy not have the date available? Was she fully booked? Would it have cost more than you were willing to pay? On the other side, the business has limited insight into anything about the customer, which is ironic considering who is facilitating the appointment. When asked who the appointment was for, the demo gave only the first name, "Karen." Why not her last name? Why not her age, sex, ethnicity, education level, or the other salons she's been to in the last year? Google knows this stuff, and if they truly believe that information wants to be free, and that businesses work better when they learn more about you and can adapt to your needs and desires, then they should be sharing this information when you ask for an appointment, rather than being coy and limiting the information to your first name unless the business pays up for an ad.

The fact of the matter is that digital assistants are getting more useful, and that's great, and phone calls are one of many things they could and should get good, and eventually great, at. Personally, I can't wait to be able to take advantage. But the ethical considerations must be taken into account, and I would argue that Google has thought very long and hard about this, and has made the wrong decision about how to implement this feature. They are actively misleading people about who they are talking to and what the circumstances are. The person answering the phone is not the 'client' of the Google Assistant. Why lie to them? An honest approach would be to come right out and be clear about the situation: "Hi, I'm the Google Assistant calling for Karen, trying to schedule a hair appointment for Tuesday. Do you have any availability?" What reason could Google have not to take that route, other than the obvious one, that many people get creeped out interacting with Google? The deception isn't necessary here, unless you're trying to hide your influence over the relationship between business and customer. This is not at all good for either party, only Google.

Google has an incredible opportunity and responsibility here. Their technological prowess allows them to define this category. What begins as a novelty demo will eventually turn into table stakes for any digital assistant that wants to compete in a world where Google makes calls to get information for you. They could have used their opportunity to define clear ethical lines that should not be crossed: making it clear when you're talking to a computer and when you're talking to a person, giving the person who answers the phone clear insight into how the conversation will be used in the future (every time I call my bank, I listen to a message explaining that the recording will be used for training purposes), and giving them an opportunity to opt out. The only reason I can think of not to do this is that people might decide they don't want to talk to a robot, even if they couldn't otherwise have figured out on their own that it was a robot they were talking to, and hang up. In which case, let them hang up. Or they could come in with the mindset that they can lie to and manipulate the person on the other end of the call to gather as much info as they want, setting the line of appropriate behavior far beyond the bounds of ethical engineering. Surprising no one, but disappointing those of us who care, they opted for the latter.

It's no secret that Google dominates the web, and Sundar Pichai spoke at the beginning of the keynote acknowledging that Google has a great responsibility, that they have fallen short of that responsibility in the past, and that they would work to correct it. But actions speak far louder than words, and with all their words and demos yesterday, from bragging that they already know your taste in news and food, to scanning your photos and encouraging you to send batches of them to people without review, to tricking an unsuspecting individual into contributing to Google's deep learning efforts without consent, in a way that could damage their business, Google has clarified their position in the tech industry. They are not to be trusted.

#google #privacy #ethics