Core MLagueña

Checked out the WW17 Roundup yet? OK then, let’s start digging into this new stuff a little deeper. And we’ll start with the one with the most buzz around the web:

Introducing Core ML

Machine learning opens up opportunities for creating new and engaging experiences.

Core ML is a new framework which you can use to easily integrate machine learning models into your app. See how Xcode and Core ML can help you make your app more intelligent with just a few lines of code.

Vision Framework: Building on Core ML

Vision is a new, powerful, and easy-to-use framework that provides solutions to computer vision challenges through a consistent interface. Understand how to use the Vision API to detect faces, compute facial landmarks, track objects, and more. Learn how to take things even further by providing custom machine learning models for Vision tasks using Core ML.
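To give you a sense of how little ceremony Vision asks for, here’s a minimal face-detection sketch; the image path is a placeholder and error handling is pared down to the bone, so treat it as a shape-of-the-API illustration rather than production code:

```swift
import Vision
import Foundation

// Minimal Vision face-detection sketch (iOS 11+ / macOS 10.13+).
// "photo.jpg" is a placeholder path — substitute your own image.
let url = URL(fileURLWithPath: "photo.jpg")

let request = VNDetectFaceRectanglesRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // boundingBox is in normalized coordinates (0...1, origin at bottom-left)
        print("Face at \(face.boundingBox)")
    }
}

let handler = VNImageRequestHandler(url: url, options: [:])
try? handler.perform([request])
```

Swapping in a `VNCoreMLRequest` wrapping your own model is the “take things even further” part: same handler, different request.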

What exactly do we mean by “more intelligent” here? Why, check out

iOS 11: Machine Learning for everyone

The API is pretty simple. The only things you can do are:

  1. loading a trained model
  2. making predictions
  3. profit!!!

This may sound limited but in practice loading a model and making predictions is usually all you’d want to do in your app anyway…
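That load-then-predict flow really is about this much code. A hedged sketch, assuming you’ve dropped Apple’s Resnet50.mlmodel into your Xcode project (which auto-generates the `Resnet50` class used below) and have a correctly sized `CVPixelBuffer` in hand:

```swift
import CoreML
import CoreVideo

// Assumes Resnet50.mlmodel has been added to the Xcode project, which
// makes Xcode auto-generate the `Resnet50` wrapper class used here.
func classify(_ pixelBuffer: CVPixelBuffer) {
    // 1. Load the trained model (the generated initializer wraps MLModel).
    let model = Resnet50()

    // 2. Make a prediction; the generated output type exposes the top
    //    label plus a probability for each of the 1000 categories.
    if let output = try? model.prediction(image: pixelBuffer) {
        print(output.classLabel)
    }
}
```

Step 3, profit, is left as an exercise for the reader.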

Yep, probably. Some people are very excited about that approach:

Apple Introduces Core ML

When was the last time you opened up a PDF file and edited the design of the document directly?

You don’t.

PDF is not about making a document. PDF is about being able to easily view a document.

With Core ML, Apple has managed to achieve an equivalent of PDF for machine learning. With their .mlmodel format, the company is not venturing into the business of training models (at least not yet). Instead, they have rolled out a meticulously crafted red carpet for models that are already trained. It’s a carpet that deploys across their entire lineup of hardware.

As a business strategy, it’s shrewd. As a technical achievement, it’s stunning. It moves complex machine learning technology within reach of the average developer… Well, speaking as that Average Developer here, yep this sure sounds like a great way to dip a toe into $CURRENT_BUZZWORD without, y’know, having to actually work at it. Great stuff!

Here are some more reactions worth reading:

Here are some models to try it out with, or you can convert your own models built with XGBoost, Caffe, LibSVM, scikit-learn, or Keras:

  • Places205-GoogLeNet CoreML (Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.)
  • ResNet50 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
  • Inception v3 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
  • VGG16 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)

And some samples and tutorials:

Also note NSLinguisticTagger, which is part of the new ML family here too.
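Unlike the vision goodies, you can poke at NSLinguisticTagger with nothing but Foundation; here’s a quick part-of-speech sketch using the unit-based enumeration API new in iOS 11 — the sample sentence is obviously just an illustration:

```swift
import Foundation

// Part-of-speech tagging with NSLinguisticTagger (iOS 11 / macOS 10.13 API).
let text = "Core ML makes machine learning approachable."

let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]

// Enumerate word-level tags; the closure receives the tag and its range.
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass,
                     options: options) { tag, tokenRange, _ in
    if let tag = tag, let swiftRange = Range(tokenRange, in: text) {
        print("\(text[swiftRange]): \(tag.rawValue)")
    }
}
```

Other tag schemes — `.nameType`, `.language`, `.lemma` — plug into the same call.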

For further updates we miss, check out awesome-core-ml and Machine Learning for iOS!

UPDATES:

YOLO: Core ML versus MPSNNGraph

Why Core ML will not work for your app (most likely)

Blog: Getting Started with Vision

MLCamera – Vision & Core ML with AVCaptureSession Inceptionv3 model

Can Core ML in iOS Really Do Hot Dog Detection Without Server-side Processing?

Bringing Machine Learning to your iOS Apps 🤖📲

Creating A Simple Game With CoreML In Swift 4

Custom classifiers in iOS11 using CoreML and Vision

Using Vision Framework for Text Detection in iOS 11

Smart Gesture Recognition in iOS 11 with Core ML and TensorFlow

Awesome-CoreML-Models: “Largest list of models for Core ML (for iOS 11+)”

fantastic-machine-learning: “A curated list of machine learning resources, preferably CoreML”

Announcing Core ML support in TensorFlow Lite

Creating a Custom Core ML Model Using Python and Turi Create

DIY Prisma, Fast Style Transfer app — with CoreML and TensorFlow

Core ML Simplified with Lumina

Style Art: “is a library that process images using COREML with a set of pre trained machine learning models and convert them to Art style.”

Beginning Machine Learning with Keras & Core ML

How I Shipped a Neural Network on iOS with CoreML, PyTorch, and React Native

TensorFlow on Mobile: Tutorial

FastPhotoStyle: “Style transfer, deep learning, feature transform”

Netron: “is a viewer for neural network and machine learning models.”

Leveraging Machine Learning in iOS For Improved Accessibility

CoreMLHelpers: “Types and functions that make it a little easier to work with Core ML in Swift.”

Benchmarking Core ML — Estimating model runtimes on iOS

Detecting Whisky brands with Core ML and IBM Watson services

Swift for TensorFlow, Released & Why data scientists should start learning Swift

Lumina: “A camera designed in Swift for easily integrating CoreML models – as well as image streaming, QR/Barcode detection, …”

Lobe: Visual tool for deep learning models

Alex | June 19, 2017
