How the new Apple ecosystem becomes a self-learning pod system

Iskander Smit
Published in LABSinfonl
5 min read · Jun 7, 2017


Last Monday Apple held its yearly WWDC address, and it clearly showed some of the routes the company is planning. I’m not going to report on the introductions and new specs; enough tech blogs are doing that. I want to focus on a couple of concepts that were framed explicitly and others that can be interpreted from the announcements. During the talk I tweeted some of the interesting concepts, which I will explore here a bit more.

The keynote showed a clear choice to claim machine learning (ML) as the key differentiator for Apple. Earlier this year (and starting in 2016), Google shifted all its focus to artificial intelligence (AI). I think this is not just a marketing thing; it is connected to the core of the companies. AI and ML are of course not two separate concepts; they are related. You create AI by using lots of data and smart algorithms to achieve an intelligent assistant, the kind Google likes to create. And this assistant becomes more intelligent as it learns from its interactions with the user and from external information sources. Doing this in a somewhat self-initiated way is what we associate with the machine learning meme, and even more with deep learning.

Machine learning, however, is an even better fit for what Apple is doing. The focus is more on learning driven by the machine’s interactions with the user. And the aim is not to make the best assistant as a starting point, but to make more intelligent products. The new HomePod speaker is one of the best examples. For a start, it is an intelligent speaker that tries to understand its context to create the best soundscape. It can be expected that this will extend beyond soundscapes.

I really think it is a big deal that Apple is putting so much intelligence and ad-hoc connectivity into all kinds of devices. They are easily turning into an ecosystem of learning devices: products that will interact with each other without needing us as users to act as hub or intermediary. The Apple TV is such a device, but so are moving devices like our watches and the AirPods. I wrote a piece on that before.

The other big thing is AR (augmented reality). This concept is far more important to, and more deeply embedded in, Apple’s next generation of personal devices than VR (virtual reality). Of course Apple wants to be the go-to brand for the tools to make VR content, but AR is an embedded part of the OS. And the machine learning and pod ecosystem is key to creating a fluent experience.
The differentiating experience of AR is the way it mixes reality into the digital layer. ‘Old AR’ as we know it places objects on top of the world without connecting them to it, like Layar did in the early days, and like we all did with Pokémon. The new AR tries to recognize the world and takes it as the stage for the digital content. Microsoft’s HoloLens is doing a great job there; only the screen is too small for an immersive feeling, which is stressed by the form factor of the glasses.
Another important example of this new AR is Snapchat’s lenses: how they transform your face, and how Snapchat started placing objects in the scenes you shoot as well.

To do this smoothly and fast in every environment, the AR needs to understand the space you are in. Google has been experimenting with Tango for some time now, where the environment is scanned with special sensors, just like the HoloLens, but with your normal device. Google is capturing as much of the real world as possible to be prepared for this digital overlay. It has been rumoured before that Google’s Street View cars are also capturing this kind of data.
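Apple’s counterpart, announced at this same WWDC, is the ARKit framework, which does this spatial understanding on a standard iPhone by combining the camera with the motion sensors. A minimal sketch of what that looks like for a developer, assuming the iOS 11 API with SceneKit rendering (the view controller setup is illustrative):

```swift
import ARKit
import SceneKit
import UIKit

// A minimal ARKit sketch: world tracking plus horizontal plane
// detection — the part where AR "understands the space you are in".
class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking fuses camera frames with motion-sensor data,
        // so virtual content stays anchored to the real world.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called whenever ARKit recognises a flat surface in the real
    // world; this is where digital content gets a real-world stage.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected a plane with extent \(plane.extent)")
    }
}
```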

Apple does not have those huge amounts of visual data, you would think, but you should not forget the amount of data captured through the years by all the iPhone users. They don’t use it for advertising, and therefore we feel much safer with Apple, but they have still captured lots of data, including all the pictures people take, which are analysed for image recognition and event clustering. And connecting HomeKit opens up all the data from external devices too. The HomePod shows how sensors can scan a space. Combine all these data and Apple is able to make a very smooth AR experience in the real world.
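To give an idea of what that opening up looks like in practice, here is a minimal sketch using the standard HomeKit API to list the accessories, and thus the potential data sources, a home exposes; the delegate wiring is illustrative:

```swift
import HomeKit

// A minimal HomeKit sketch: enumerate the homes and accessories the
// user has granted access to — each accessory is a data source.
class HomeLister: NSObject, HMHomeManagerDelegate {
    let manager = HMHomeManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    // Called once HomeKit has loaded the home configuration.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        for home in manager.homes {
            for accessory in home.accessories {
                let services = accessory.services.map { $0.serviceType }
                print("\(home.name): \(accessory.name), services: \(services)")
            }
        }
    }
}
```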

What about Siri? As Apple is focusing less on building an AI assistant the way Google is, what is the role of Siri? In the end, Siri is just as important as the interface to the new digital layer on the real world. Apple is starting from a different angle, as they proved with the HomePod’s focus on music first. This seems like a smart strategy: music is a much more framed domain than searching all the information in the world, and therefore less vulnerable to errors. I think Apple is buying time here to improve Siri, using our interactions with the pod machine to teach Siri better understanding.

In this article on ‘Agents of Assistents’, a nice model sketches how these more intelligent agent systems will take on more agency. That is of course our next challenge too: what agency do we want to leave to our intelligent pods, and what do we control ourselves? What tasks do we delegate, and when do we feel we lose control? This is one of the main questions to address in the coming times, and it will only grow in importance as the digital layer becomes more and more a real-life distortion.

We have a fair chance to influence this new reality, as Apple is now offering an ML SDK with which we can use the learning pods as companions in creating new services. It challenges designers not only to design the behaviour of services (apps) from a predicted-usage perspective, but to design dynamic systems that change their behaviour in dialogue with the user. It is a development we have seen coming for a long time, but it may now become more accessible. Of course we still need time to really understand the machines, and we will probably not be able to use all layers of machine learning right away; as Apple is known for, more features will be introduced step by step. As we all hype AI and ML as the next big thing, what we saw in the keynote is definitely a big step towards creating new man-machine services. A truly interesting new decade lies ahead.
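That ML SDK is the new Core ML framework, with the Vision framework layered on top for image tasks. A minimal sketch of how a service could tap into it, assuming the iOS 11 APIs; the MobileNet name is a placeholder, since adding any .mlmodel file to an Xcode project generates a Swift class of the same name with a .model property:

```swift
import CoreGraphics
import CoreML
import Vision

// A minimal Core ML sketch: classify an image with a bundled model.
// "MobileNet" is a placeholder for whatever .mlmodel ships in the app.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: MobileNet().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("Saw \(best.identifier) with confidence \(best.confidence)")
    }
    // Vision scales and converts the image to the model's input format.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```

Notably, inference runs entirely on-device, which fits the privacy stance sketched above.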


Research & innovation director at Info.nl. Leading LABS. Co-organiser Behavior Design AMS & ThingsCon Amsterdam. Cities of Things Foundation.