Guess it’s time to stock up for WWDC!
#iphone #usbc
Do Not Disturb While Driving Tweets
How exactly does something like this even happen? The message is an auto response when someone tries to text you. Maybe if these people have Tweet via SMS enabled, and got some sort of text notification? Interesting
#twitter #ios
Secret of my Death by This American Life
The anger felt while others rest in peace
Google Duplex will call salons, restaurants and pretend to be human for you
What really fascinates/infuriates me about the tech industry's relationship with Google is that, for some reason, so many people are still giving them the benefit of the doubt despite years of untrustworthy behavior. Yesterday's fascinating and terrifying demo of the Google Assistant impersonating a human to make a reservation at a salon is a great example. It demonstrates incredible technology, with capabilities and implications far beyond the obvious, that seems to have been developed from the ground up without regard for ethical considerations or the potential for harm.
I took a while yesterday to mull over my feelings about this particular demo. Once I saw where they were going with it, I was very excited. Phone calls suck, and it would be amazing for my digital assistant to take over that painful chore for me. Right now I'm avoiding calling my dentist to reschedule an appointment made for WWDC week; I would love it if I could just have Siri make the call for me. However, while I was expressing concerns about the fact that Google is collecting data and voice recordings from employees of these businesses without being clear about who is making these requests, and even going out of their way to deceive the employee, a co-worker asked me, "But what's the harm?" Their argument: it makes things easier for the user of the Google Assistant, it makes things better for the business by driving customers to them who might have skipped the appointment if forced to make a phone call, and it's good for Google because it creates value for the digital assistant.
The idea that there is no clear harm here is incredibly naive, and it reflects a benefit of the doubt that people seem to have collectively decided not to extend to certain companies like Facebook, yet still give to Google, despite the two sharing the same business model. In many ways the exact same argument could have been made about Facebook's consolidation of feeds and content creators onto the Facebook platform. By encouraging content creators to share on Facebook, they painted a win-win-win scenario. Creators win by distributing their content to a wide range of users and encouraging follows and repeat consumers through a much more intuitive platform than RSS; Facebook wins by gaining more hosted content to distribute to users; and users win by being able to catch up on a larger portion of their regular internet content from a single entry point. This is how Facebook and 'The Internet' have become synonymous in so many places in the world.
But now we've seen that there were parts of this 'win-win-win' scenario that Facebook was keeping in the shadows. Users lost when the data they contributed to Facebook, through sharing content, following certain creators, leaving feedback, etc., was sold to the highest bidder and used against them to drive America into a period of national disaster. Creators lost when Facebook started introducing algorithms that made it harder and harder to reach customers until they were graciously given the option to buy back the screen space that Facebook had taken away from them. So turns out what was initially seen as a benefit to all parties has over time benefited only Facebook, who still continues to reject their social responsibility in any meaningful way and who is seeing very few consequences as a result.
So now we turn again to Google. Like Facebook, Google is an ad company first and foremost. They offset the cost of their products and services with data collection, and they use the data collected to make sure that they get as much value from you as they can from points all over the web. They do this by downplaying their role, pretending that users know what they share with Google, and insisting that the services they provide are worth far more than the cost to user privacy that they demand. And now they've created a digital assistant that seems (in the demo'd cases) to be able to pass the Turing Test. So what harm could it do? Plenty. To say nothing of the unknowns, like where the tech goes from here, what else they may do in a phone call, how long they will maintain these recordings, and whether the assistant will be able to follow a person from work to home and harass them for information, let's just focus on yesterday. Like Facebook's news feed, this creates a barrier between the place of business and the customer. The customer isn't any more engaged with the business as a result of this phone call; they're engaged with the Assistant. At any time Google could decide to change the rankings: "Earlier you wanted me to call 'Stacy's Nails' to book an appointment for Tuesday, but I was able to get the appointment at 'Manicures by Susan' instead." There is no visibility into why the change was made. Did Stacy not have the date available? Was she fully booked? Would it have cost more than you were willing to pay? On the other side, the business has limited insight into anything about the customer, which is ironic considering who is facilitating the appointment. When asked who the appointment is for, the demo only gave the first name "Karen." Why not her last name? Why not her age, sex, ethnicity, education level, or what other salons she's been to in the last year?
Google knows this stuff, and if they truly believe that information wants to be free, and that businesses work better when they learn more about you and can adapt to your needs and desires, then they should be sharing this information when you ask for an appointment, rather than being coy and limiting the information to your first name unless the business pays up for an ad.
The fact of the matter is that digital assistants are getting more useful, and that's great, and phone calls are one of many things that they could and should get good, and eventually great, at. Personally, I can't wait to be able to take advantage. But the ethical considerations must be taken into account, and I would argue that Google has thought very long and hard about this and made the wrong decision about how to implement this feature. They are actively misleading people about who they are talking to and what the circumstances are. The person on the other end of the call is not the 'client' of the Google Assistant. Why lie? An honest approach would be to come right out and be clear about the situation: "Hi, I'm the Google Assistant for Karen, trying to schedule a hair appointment for Tuesday. Do you have any availability?" What reason could Google have not to take that route, other than the obvious one, that many people get creeped out interacting with Google? The deception isn't necessary here, unless you're trying to hide your influence over the relationship between businesses and customers. This is not at all good for either party, only Google.
Google has an incredible opportunity and responsibility here. Their technological prowess allows them to define this category. What begins as a novelty demo will eventually turn into table stakes for any digital assistant that wants to compete in a world where Google makes calls to get information for you. They could have used this opportunity to define clear ethical lines that should not be crossed: making it clear when you're talking to a computer and when you're talking to a person, giving the person who answers the phone clear insight into how the conversation will be used in the future (every time I call my bank I hear a message explaining that the recording will be used for training purposes), and giving them an opportunity to opt out. The only reason I can think of not to do this is that people may decide they don't want to talk to a robot, even if they couldn't otherwise figure out on their own that it was a robot they were talking to, and hang up. In which case, let them hang up. Or they could come in with the mindset that they can lie to and manipulate the person on the other end of the call to gather as much info as they want, setting the line of appropriate behavior far beyond the bounds of ethical engineering. Surprising no one, but disappointing those of us who care, they opted for the latter.
It's no secret that Google dominates the web, and Sundar Pichai spoke at the beginning of the keynote, recognizing that Google has a great responsibility, that they have fallen short of that responsibility in the past, and that they would work to correct it. But actions speak far louder than words, and with all their words and demos yesterday, from bragging that they already know your taste in news and food, to scanning your photos and encouraging you to send batches of photos to people without review, to tricking an unsuspecting individual into contributing to Google's deep learning efforts without consent, in a way that could damage their business, Google has clarified their position in the tech industry. They are not to be trusted.
#google #privacy #ethics
Beautiful day to ride into work, finally! insert skateboard emoji here
#nailedit
Apps of a Feather
I kinda can’t believe they’re doing this again, but Twitter is making more API changes that break functionality in third-party apps without providing any replacements. This is not the first time, and it likely won’t be the last.
That being said, I’m lucky enough that I won’t be terribly impacted by this. My Tweets don’t start as Tweets. My content is hosted here, on my website, that I control. I write two types of posts, titled posts that go onto the main page and the archive, and untitled posts under 280 characters that are destined to be shared on other services.
First choice? Micro.blog. Once a post has gone up there, they handle cross-posting to Twitter. Twitter may be where I consume a bunch of content, but I firmly believe that people who really care about their online presence should take steps to control it, and Micro.blog offers great services to do just that. Ever since I heard about the project I have been very excited about the philosophy behind it, and the past couple of years of putting this into practice have completely changed my comfort level with my online presence. It’s seriously better over here, folks.
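The titled-versus-untitled split described above can be sketched as a small payload builder. The field names follow the W3C Micropub spec that Micro.blog supports (`h=entry` for a post, `name` for an optional title, `content` for the body); the function name and the exact 280-character gate are illustrative assumptions, not anyone's actual implementation:

```python
MICRO_POST_LIMIT = 280  # character budget for an untitled, cross-postable note


def build_micropub_payload(content, title=None):
    """Build the form fields for a Micropub `h=entry` post.

    Untitled posts must fit the character limit so they can be
    cross-posted to services like Twitter unchanged; titled posts go
    to the main page and archive, with no length restriction.
    """
    if title is None and len(content) > MICRO_POST_LIMIT:
        raise ValueError("untitled posts must be 280 characters or fewer")
    payload = {"h": "entry", "content": content}
    if title is not None:
        payload["name"] = title  # Micropub uses `name` for the post title
    return payload
```

The actual publish step would just POST this payload to the site's Micropub endpoint with an auth token; the point is that the decision of where a post ends up lives on the site you control, not on Twitter.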
Looks like Twitter is delaying the API cutoff until developers have had access to a beta for 90 days. Great news. But this kind of back-and-forth from a company, one which, at the end of the day, you can’t fight effectively except by hoping they’ll listen to massive backlash, is exactly why you should be taking control of your content.
#Micro.blog #Twitter #BreakingMyTwitter
Apple hires Google's AI chief
A few important things came to mind while I thought about this news. First, AI at Apple is so much larger than what people experience as ‘Siri’: from the many other features, like App Suggestions and Handoff, which are marketed as Siri features, to CoreML, image recognition, and auto-focus in the Camera app. Saying they hired Giannandrea to “boost Siri” is like saying you charged your phone battery so you could send a tweet. There is much more that can (and should) be done here than just making the virtual assistant stop saying silly things when it doesn’t understand you.
Giannandrea will report directly to Cook, meaning he will have influence over the entire product and services line. This could be very good for improving the many uses of AI across Apple platforms, but it also means that his influence could be hard to identify in the first couple of years. If this hire works, it will be a long game, not a short one. Which also means that Giannandrea won’t be responsible if we actually do (or don’t) get the long-desired Siri improvements at WWDC this year. It is far too late in the iOS 12 timeline for a new player to enter the game and meaningfully shake things up.
Finally, Google and Apple have vastly different cultures and ideologies around AI. I have to believe that Giannandrea wouldn’t have taken this job if he weren’t fully on board with committing to privacy and leaning on on-device processing whenever possible. These are core tenets of Apple’s approach in this area, and it would be a damn shame if Apple changed course by bringing a Google guy on board. But even so, there is always a chance that big changes like this don’t end up working out (remember when Chris Lattner was a VP at Tesla?). Google and Apple are such different beasts here that it’s easy to imagine the transition not going completely smoothly. I’m hoping that’s not the case, mostly because it would show that Apple approached what is unquestionably a big ‘get’ in a rushed manner, but it’s always a possibility.
#AI #Siri #Google #Giannandrea
The more I think about iOS 11.4 and what features are included in this first beta, the more curious it seems. I was using the early betas of 11.3, and they were... rough. Beta 1 in particular was one of the most frustrating betas I've used since iOS 5. But I was heavily using two features that were destined to be pulled: Multi-Destination AirPlay 2 and iMessages in the Cloud. My experiences with those weren't particularly bad. I never noticed a lost iMessage, and my only gripe with AirPlay 2 was some difficulty presenting the AirPlay destination screen from Overcast and Music, a problem which still hasn't gone away.
Obviously my experiences weren't universal here, and iMessage in the Cloud in particular needs to be bulletproof before it rolls out, but I haven't seen any other significant criticism of the state of these features in the early betas that would have justified pulling them so early in the beta cycle. Also, references to ClassKit were found in the first 11.3 beta, so it seems that at some point it was slated for that release too, and considering 11.3 launched right after Apple's Education event, that would have worked out really well. So what happened here?
Here's my completely uninformed theory. ClassKit was delayed so that Apple's whole school story could kick off in June, with the end of the school year and the start of prep for the next one. At some point, though, the decision was made for it to ship not as a patch release on 11.3 but as a new point release, 11.4. But 11.4 couldn't launch with only an SDK for a niche set of apps that was only going to be in beta anyway. So iMessage in the Cloud and AirPlay 2 were pulled out, not because they weren't ready, but for marketing reasons. This also fits with the recent reports that Apple is giving engineers more flexibility with ship dates. I'm certain that, however ready or not these features were, the engineers on the team appreciated the ability to spend a few more months working rather than shipping the first version and following up with patches.
Most of this is going to be impossible to verify, but it's fun to speculate that Apple is actually committing to this whole "Give engineers more time to iron things out, rather than rushing initial versions of features" attitude. Although it certainly seems in this case that making the 11.4 release feature-full is giving HomePod buyers even more time without key features of the product.
So much got pulled out of this release in the early beta period. This has gotten pretty common in major .0 releases, but I don’t think I’ve seen it at this scale in a .X release before.
- iMessages in the Cloud (for the second time)
- AirPlay 2
- iBooks -> Books change
I was really excited about AirPlay 2, and was using it pretty heavily for the short period it was in the betas. It certainly ‘felt’ like a beta feature, but no more so than the other problems I had with 11.3b1.
Also, I would have loved to see what they have in store for Books and how that would have factored into the Education event this week.