Archive for the ‘iOS’ Category

iOS 12 will make the iPhone feel personal again

I quit my job in 2008 to create an app based on the touch UI of the iPhone. I couldn’t stop talking about how Steve Jobs would call the iPhone a “lifestyle device”. It’s the entire reason the app I created was focused on fashion. Fashion, in itself, is very much a personal and a lifestyle choice. Now that smartphone sales have peaked, the arms race for market share is over. Apple seems to be switching gears to making the iPhone as enjoyable as it was over ten years ago.

Memoji seems to be a moat against Snapchat

If Snapchat were to build a phone that allows users to completely express themselves, I’d imagine it’d have something like Memoji at the core. One of Snapchat’s best value propositions is the ability for users to craft messages to a close group of friends. It seems Apple realizes this is of great importance, since they seem to be doubling down on Animoji. At first, the idea seems silly, but if you look around, this is a natural progression of the personal avatar. Nintendo has had them for years with the Mii, but they didn’t have the power of Apple’s user graph behind them, or a camera that would track facial expressions. Bitstrips, in its early days as a Facebook plugin, was probably the most personal, self-expressing avatar creator out there. I still laugh to this day thinking about the Bitstrips that had inside jokes with friends or family members. Snapchat is making the best of Bitmoji, but Memoji has the potential to exceed those self-expressing capabilities, because users actually speak and make the facial expressions themselves rather than crafting drawings of themselves. This experience then has an extra purpose when sharing – the user can actually share their feelings or burdens with someone.

Machine learning at the center of personalization

The updates to the Photos app show how committed Apple is to creating a deep level of personalization that will connect with its users. Applying ML to personalize results is the cornerstone of Spotify’s recommendation engine, and Apple seems to be taking a similar path with Photos.

At WWDC 2017, there was a talk showing how Apple integrates NLP into Safari and other apps to create more accurate autocomplete suggestions for the keyboard. The demo felt a little creepy, but also seemed very magical. I’m curious whether machine learning is also at the center of Siri recommendations and of muting notifications for apps you no longer use.

When we talk about machine learning advances in the tech industry, we often overlook Apple’s achievements in this space. Much of that is due to their culture of secrecy and Siri’s poor track record.

Siri Integrations

The customization of Siri is my favorite announcement of iOS 12. I still have the Workflow app installed on my iPhone and my Apple Watch. Since the acquisition, I have been very eager to see what Apple was planning. The UI (no surprise) looks very much like the Workflow app, but now with the power of Siri. I’ve always wondered how far Apple would let Siri fall behind before going back to its roots to become a “doing engine”. I’ve been lucky enough to see one of Siri’s co-founders, Dag Kittlaus, speak about the origins of Siri. He showed a few slides from his original pitch deck to investors. One of the slides, which he dubbed “the million dollar slide”, read: “Siri, order flowers for my wife”. He said this was the ah-ha moment for investors. Ever since, I’ve had hopes that some day Siri would finally become the doing engine it was at its origin. Like some of us, I’ve written Siri off the last few years due to frustration, and better choices… like Alexa. Unfortunately, though, the Echo isn’t a full lifestyle device I take with me everywhere.

I’ve been wanting to develop a Siri app extension for a while, but I couldn’t because of domain limitations. The odds of becoming a Siri suggested app are also very low, since it has an even narrower set of requirements.

Now that the API has more domains, the new Siri integrations might reduce that friction, and we could see a whole new category of apps emerge. It’s obvious that Siri is still behind, but having developers create more integrations is a step in the right direction.

Do Not Disturb and Notifications

I’ve been reading the book Why We Sleep. It’s had a profound effect on my life, as I’ve taken sleep for granted. Until I started reading this book, I was unaware of the data that supports the importance of sleep. Sleep, along with eating and exercising, is one of the aspects of life that humans plan their days around. Again, Apple seems to focus on personalization, with a fuller Do Not Disturb mode so you can reduce distractions and get back to sleep. The ability to shut off the fire hose of notifications at any time, or even until I leave a location, is empowering and personal.

Self Quantification through Screen Time

I’m a big believer in self quantification. Many of watchOS’s fitness value propositions are centered on self quantification. Screen Time, which allows users to quantify their own phone usage, applies the same idea. The obvious use case is to limit our children’s device usage, but self awareness is the first step in any lifestyle change. I’ve been off Facebook and Messenger since January as a New Year’s resolution, and I’m contemplating using the apps again now that I can set limits. Having the option means I don’t have to keep an app uninstalled entirely, which is a very good thing for Apple’s ecosystem.

Android and iOS have flipped value props

Since the beginning, Android’s value proposition was always about giving users the power to customize their experience, at the expense of battery life and UX. It seems Android is now focusing on the UX side with the updated Material Design, while iOS seems to be focusing on customization with the latest release, iOS 12. Time will tell which strategy will be more effective in the long run, especially since Google has a large advantage in ML capabilities.

Xcode – Building multiple targets with Google Cloud Messaging

If you haven’t seen my slides on How to quickly eat your own dogfood in iOS, you should. Dogfooding allows you to get the product in your hands faster and can help eliminate any assumptions you have about the UX by actually using it. The biggest caveat you run into is needing different configurations on a per-target basis. For example, Google Cloud Messaging requires a GoogleService-Info.plist file in your bundle that contains a GCM_SENDER_ID and a BUNDLE_ID. The values will be different for your beta builds and production builds. You can’t simply include two of these files, because you’d have to name them differently, and the Google library loads the plist directly by name.

The solution is to modify the GoogleService-Info.plist file after it’s been copied to the bundle. First, add a Run Script step to your target’s Build Phases, specifically after “Copy Bundle Resources”. OS X has a built-in tool called PlistBuddy which we’ll use to modify the GoogleService-Info.plist once it’s been copied to the bundle. It’s OK to modify it in the bundle at this point, because it hasn’t been digitally signed yet.

/usr/libexec/PlistBuddy -c "Set :GCM_SENDER_ID $GCM_SENDER_ID" "$BUILT_PRODUCTS_DIR/$FULL_PRODUCT_NAME/GoogleService-Info.plist"
/usr/libexec/PlistBuddy -c "Set :BUNDLE_ID $PRODUCT_BUNDLE_IDENTIFIER" "$BUILT_PRODUCTS_DIR/$FULL_PRODUCT_NAME/GoogleService-Info.plist"

Here, $BUILT_PRODUCTS_DIR, $FULL_PRODUCT_NAME, and $PRODUCT_BUNDLE_IDENTIFIER are standard Xcode build settings; $GCM_SENDER_ID stands in for a user-defined build setting (or a hardcoded value) that you set per target.

Perform a clean, then build, and you’re good to go. Alternatively, if you don’t want to hide configuration like this inside an IDE, put it in a separate .sh file and execute that script from the build phase. This way the file is visible in source control instead of being hidden from other developers in an IDE setting.
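As a sketch, such a standalone script could look like the following. Everything here is an assumption on my part: the “Beta” configuration name, the placeholder sender IDs, and the script name Scripts/update_google_plist.sh are all made up for illustration, while $CONFIGURATION, $BUILT_PRODUCTS_DIR, $FULL_PRODUCT_NAME, and $PRODUCT_BUNDLE_IDENTIFIER are standard Xcode build settings.

```shell
#!/bin/sh
# Scripts/update_google_plist.sh (hypothetical name)
# Run from a Run Script build phase placed after "Copy Bundle Resources".

# Pick the GCM sender ID per build configuration.
# "Beta" and the IDs below are placeholders -- use your own values.
case "${CONFIGURATION:-Debug}" in
  Beta)
    GCM_SENDER_ID="000000000001"  # placeholder beta sender ID
    ;;
  *)
    GCM_SENDER_ID="000000000002"  # placeholder production sender ID
    ;;
esac

GOOGLE_PLIST="${BUILT_PRODUCTS_DIR:-}/${FULL_PRODUCT_NAME:-}/GoogleService-Info.plist"

# Only patch the plist when it actually exists, i.e. when we're
# running inside an Xcode build and the resources have been copied.
if [ -f "$GOOGLE_PLIST" ]; then
  /usr/libexec/PlistBuddy -c "Set :GCM_SENDER_ID $GCM_SENDER_ID" "$GOOGLE_PLIST"
  /usr/libexec/PlistBuddy -c "Set :BUNDLE_ID ${PRODUCT_BUNDLE_IDENTIFIER:-}" "$GOOGLE_PLIST"
fi
```

Then the Run Script phase itself is just a one-liner that invokes the script, so the logic lives in a reviewable file.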

You could also use this technique to modify your Info.plist values between your prod and beta builds, so you don’t have to have separate .plist files for each of them.
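For instance, a run-script step in the same spirit could give beta builds a distinct name on the home screen so both apps can sit side by side. The “Beta” configuration name and the display names are my assumptions; $TARGET_BUILD_DIR and $INFOPLIST_PATH are standard Xcode build settings that together point at the processed Info.plist.

```shell
#!/bin/sh
# Hypothetical sketch: rename beta builds on the home screen.

# "Beta" and the display names are placeholders for your own values.
case "${CONFIGURATION:-Debug}" in
  Beta) DISPLAY_NAME="MyApp Beta" ;;
  *)    DISPLAY_NAME="MyApp" ;;
esac

INFO_PLIST="${TARGET_BUILD_DIR:-}/${INFOPLIST_PATH:-}"

# Patch the copied Info.plist only when it exists (i.e. during a build).
if [ -f "$INFO_PLIST" ]; then
  /usr/libexec/PlistBuddy -c "Set :CFBundleDisplayName $DISPLAY_NAME" "$INFO_PLIST"
fi
```

The same pattern works for any Info.plist key whose value should vary between configurations.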