Checked out the WWDC 2017 Roundup yet? OK then, let’s start digging into this new stuff a little deeper. And we’ll start with the one with the most buzz around the web, Core ML, along with its computer vision sidekick Vision:
Machine learning opens up opportunities for creating new and engaging experiences. Core ML is a new framework which you can use to easily integrate machine learning models into your app. See how Xcode and Core ML can help you make your app more intelligent with just a few lines of code.
Vision is a new, powerful, and easy-to-use framework that provides solutions to computer vision challenges through a consistent interface. Understand how to use the Vision API to detect faces, compute facial landmarks, track objects, and more. Learn how to take things even further by providing custom machine learning models for Vision tasks using CoreML.
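Before the commentary, here’s roughly what that Vision pipeline looks like in practice. A minimal sketch of the plain Vision path, no custom model required; the `detectFaces` function and the UIImage starting point are our own assumptions:

```swift
import UIKit
import Vision

// A minimal sketch: find face bounding boxes in a still image.
// Assumes `image` is a UIImage with its underlying CGImage available.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        // Bounding boxes come back normalized to [0, 1], with the
        // origin at the lower left of the image.
        for face in faces {
            print("Face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```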
By “more intelligent” what do we mean exactly here? Why, check out this first take on the API:
The API is pretty simple. The only things you can do are:
- loading a trained model
- making predictions
This may sound limited, but in practice loading a model and making predictions is usually all you’d want to do in your app anyway…
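Concretely, that two-step dance goes about like this. A minimal sketch, assuming you’ve added Apple’s Resnet50.mlmodel (from the pretrained list below) to your project so that Xcode generates the Resnet50 class, and that you already have your image as a 224×224 CVPixelBuffer:

```swift
import CoreML
import CoreVideo

// A minimal sketch: (1) load a trained model, (2) make a prediction.
// Assumes Resnet50.mlmodel is in the project, so Xcode has generated
// the strongly typed Resnet50 / Resnet50Output classes for it.
func classify(_ pixelBuffer: CVPixelBuffer) {
    // 1. Load the trained model.
    let model = Resnet50()

    // 2. Make a prediction; the generated method is typed to the
    //    model's declared inputs and outputs.
    guard let output = try? model.prediction(image: pixelBuffer) else {
        print("Prediction failed")
        return
    }
    print("Best guess: \(output.classLabel)")
}
```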
Yep, probably. Some people are very excited about that approach:
When was the last time you opened up a PDF file and edited the design of the document directly?
PDF is not about making a document. PDF is about being able to easily view a document.
With Core ML, Apple has managed to achieve an equivalent of PDF for machine learning. With their .mlmodel format, the company is not venturing into the business of training models (at least not yet). Instead, they have rolled out a meticulously crafted red carpet for models that are already trained. It’s a carpet that deploys across their entire lineup of hardware.
As a business strategy, it’s shrewd. As a technical achievement, it’s stunning. It moves complex machine learning technology within reach of the average developer…
Well, speaking as that Average Developer here, yep, this sure sounds like a great way to dip a toe into $CURRENT_BUZZWORD without, y’know, having to actually work at it. Great stuff!
Here are some more reactions worth reading:
- Welcoming Core ML
- Machine Learning in iOS Using Core ML
- Apple’s CoreML a Big Step for Machine Learning
- Apple just leveled up the iPhone’s machine learning chops
- Apple’s Core ML: The pros and cons
- Swift World: What’s new in iOS 11 — CoreML and Vision
- Apple just offered a ‘dead giveaway’ that it’s building an AI chip for iPhones, expert says
Apple also has a handful of trained models ready to download and drop straight in (see the sketch after this list for feeding one to Vision):
- Places205-GoogLeNet CoreML (Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.)
- ResNet50 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
- Inception v3 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
- VGG16 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
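If you’d rather not massage pixel buffers by hand, Vision will do the scaling and cropping and drive one of those models for you, which is the “custom machine learning models for Vision tasks” combination from the session blurb above. A sketch, assuming Inception v3 from the list has been added to the project so Xcode generates the Inceptionv3 class:

```swift
import CoreML
import Vision

// A minimal sketch: wrap a Core ML classifier in Vision, which takes
// care of resizing the input image to whatever the model expects.
func classifyScene(_ cgImage: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: Inceptionv3().model)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation] else { return }
        // Print the top three candidate labels with their confidences.
        for candidate in observations.prefix(3) {
            print("\(candidate.identifier): \(candidate.confidence)")
        }
    }

    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```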
And some samples and tutorials:
- Core ML and Vision: Machine Learning in iOS 11 Tutorial
- Introduction to Core ML: Building a Simple Image Recognition App
- Tutorial: Using iOS 11’s Vision Framework For Object Detection On A Live Video Feed
- MobileNet-CoreML: “The MobileNet neural network using Apple’s new CoreML framework”
- CoreMLExample: “An example of CoreML using a pre-trained VGG16 model”
- UnsplashExplorer-CoreML: “Core ML demo app with Unsplash API”
- Bender: “Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.”
Also note NSLinguisticTagger, which is part of the new ML family here too.
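That one has actually been around since iOS 5, but iOS 11 beefs it up with unit-based tagging and language identification. A quick sketch of pulling names out of a string; the sample text is ours:

```swift
import Foundation

// A quick sketch of the iOS 11 NSLinguisticTagger additions:
// dominant language identification plus unit-based tag enumeration.
let text = "Core ML was announced at WWDC in San Jose."
let tagger = NSLinguisticTagger(tagSchemes: [.language, .nameType], options: 0)
tagger.string = text

// New in iOS 11: one-shot language identification.
print("Language: \(tagger.dominantLanguage ?? "unknown")")

// New in iOS 11: enumerate tags by unit (word, sentence, etc.).
let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let nameTags: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, nameTags.contains(tag) {
        let token = (text as NSString).substring(with: tokenRange)
        print("\(token): \(tag.rawValue)")
    }
}
```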