Under The Bridge

Core ML 3: How To Train Your Device Model

So things have certainly been moving right along in the iOS machine learning world since we last posted in 2017, haven’t they? Whilst we have our accustomed healthy skepticism of the frothily wild-eyed claims of The Universal Panacea Of Machine Learning you see floating around — 

CONCERNED PARENT: If all your friends jumped off a bridge, would you?

MACHINE LEARNING ALGORITHM: “YESSSS!!!!”

— this new ability to train models right on our devices, along with all the other new stuff in Core ML 3, is a bit of a tipping point from “curiosity” to “something to think of serious applications for”, we’d say!
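To give you a flavor of what that looks like in code, here’s a minimal sketch of the Core ML 3 update API; the function and URL names here are our own placeholders, and your model has to be exported as updatable for any of this to fly:

```swift
import CoreML

// A minimal sketch of Core ML 3 on-device training. Assumes a compiled model
// that was exported with update support; the URLs are placeholders.
func personalize(modelAt compiledModelURL: URL,
                 with trainingData: MLBatchProvider,
                 savingTo updatedModelURL: URL) throws {
    let task = try MLUpdateTask(
        forModelAt: compiledModelURL,
        trainingData: trainingData,
        configuration: nil
    ) { context in
        // context.model is the freshly retrained model; write it out so the
        // next launch loads the personalized weights instead of the bundled ones.
        try? context.model.write(to: updatedModelURL)
    }
    task.resume()
}
```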

If you haven’t taken much notice of the lingo so far and need some bringing up to speed, check out this course list from our friends at CourseDuck:

The World’s Best Machine Learning Courses & Tutorials in 2020

And if you want lots and lots and lots of cross-platform machine learning resources, check out the awesome list:

Awesome-Mobile-Machine-Learning

But we’re focusing on the device training today, and from what we can tell the reigning authority in that space is Matthijs Hollemans’ blog, most notably the four-part series On-device training with Core ML completed last month:

  1. Introduction to on-device training
  2. Rock, Paper, Scissors (Lizard? Spock?)
  3. k-Nearest Neighbors
  4. Training a Neural Network

For sure read that whole series, and check out the rest of the blog too; we particularly liked

Core ML and Combine

And now you have a Combine processing chain that, every time you send it a UIImage object with imagePublisher.send(image), will automatically run a Core ML model on that image and process the results. Pretty cool!
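Here’s a rough sketch of what such a chain might look like; the wiring is ours, not the article’s, and it assumes you’ve added Apple’s Inceptionv3 model to the project so Xcode generates the Inceptionv3 class for you:

```swift
import Combine
import UIKit
import Vision

// A rough sketch of a Combine chain that classifies every UIImage sent to it.
// Assumes Apple's Inceptionv3 model is in the project (Xcode generates the class).
let visionModel = try! VNCoreMLModel(for: Inceptionv3().model)
let imagePublisher = PassthroughSubject<UIImage, Never>()

let subscription = imagePublisher
    .compactMap { $0.cgImage }
    .map { cgImage -> [VNClassificationObservation] in
        // Run the Core ML model on each image that comes down the pipe.
        let request = VNCoreMLRequest(model: visionModel)
        try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
        return request.results as? [VNClassificationObservation] ?? []
    }
    .sink { observations in
        print("Top result: \(observations.first?.identifier ?? "none")")
    }

// Now imagePublisher.send(image) runs the model on that image automatically.
```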

And if you like those enough to pay for more, he’s got not just one but two books out: Machine Learning by Tutorials and Core ML Survival Guide — which we’re pretty sure makes him the go-to guru of iOS machine learning!

Other good introductions to new features and related goodies:

Working with Create ML’s MLDataTable to Pre-Process Non-Image Data

4 Techniques You Must Know for Natural Language Processing on iOS

Face Detection and Recognition With CoreML and ARKit, and Snapchat For Cats

Advancements in Apple’s Vision Framework: 2019 Year-in-Review

Text recognition on iOS 13 with Vision, SwiftUI and Combine

Build a Core ML Recommender Engine for iOS Using Create ML

MakeML’s Automated Video Annotation Tool for Object Detection on iOS

And here are some examples of classifiers and detectors we figure look useful, interesting, or just amusing:

Sound Classification on iOS Using Core ML 3 and Create ML

How to Build a Song Recommender Using Create ML MLRecommender

Building a Fake News Detector with Turicreate

Using Core ML and Natural Language for Sentiment Analysis on iOS

Detecting Pets with the iOS Vision Framework

iOS Build Cat Vs Dog Image Classifier Using Vision In 5 minutes

Photo Stacking in iOS with Vision and Metal

Using Sky Segmentation to create stunning background animations in iOS

And in that last category, our Machine Learning Rise Of Skynet Award goes to

Building a Face Detecting Robot with URLSessionWebSocketTask, CoreML, SwiftUI and an Arduino

Some time ago I created a little side project that involved an Arduino-powered servo motor that menacingly pointed at people’s faces with the help of CoreML, mimicking the Team Fortress 2 Engineer’s Sentry Gun. With iOS 13, I decided to re-write that using the new Socket APIs and SwiftUI…

“Mimicking,” he says now, but just you wait…
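If you haven’t played with URLSessionWebSocketTask yet, the new socket API he’s referring to boils down to something like this sketch; the endpoint and message format here are invented for illustration:

```swift
import Foundation

// A hedged sketch of the iOS 13 WebSocket API; endpoint and payload are made up.
let socketTask = URLSession.shared.webSocketTask(
    with: URL(string: "ws://sentry-arduino.local:8080")!)
socketTask.resume()

// Send the detected face's horizontal offset so the servo knows where to point.
socketTask.send(.string("{\"faceOffsetX\": 0.42}")) { error in
    if let error = error {
        print("WebSocket send failed: \(error)")
    }
}
```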

UPDATES:

How to make your iOS app smarter with sentiment analysis

Core MLagueña

Checked out the WWDC 2017 Roundup yet? OK then, let’s start digging into this new stuff a little deeper. And we’ll start with the one with the most buzz around the web,

Introducing Core ML

Machine learning opens up opportunities for creating new and engaging experiences. Core ML is a new framework which you can use to easily integrate machine learning models into your app. See how Xcode and Core ML can help you make your app more intelligent with just a few lines of code.

Vision Framework: Building on Core ML

Vision is a new, powerful, and easy-to-use framework that provides solutions to computer vision challenges through a consistent interface. Understand how to use the Vision API to detect faces, compute facial landmarks, track objects, and more. Learn how to take things even further by providing custom machine learning models for Vision tasks using Core ML.
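The face detection flow there boils down to something like this minimal sketch, with error handling trimmed for brevity:

```swift
import UIKit
import Vision

// A minimal sketch of Vision face detection; error handling trimmed.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNDetectFaceLandmarksRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            print("Face at \(face.boundingBox), has landmarks: \(face.landmarks != nil)")
        }
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```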

By “more intelligent” what do we mean exactly here? Why, check out

iOS 11: Machine Learning for everyone

The API is pretty simple. The only things you can do are:

  1. loading a trained model
  2. making predictions
  3. profit!!!

This may sound limited but in practice loading a model and making predictions is usually all you’d want to do in your app anyway…
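And in Swift, that entire API surface really is about this much code; a minimal sketch assuming Apple’s Inceptionv3 model is in your project (Xcode generates the Inceptionv3 class from the model file):

```swift
import CoreML
import CoreVideo

// A minimal sketch of the whole Core ML API surface, assuming Apple's
// Inceptionv3 model is in the project so Xcode generates the class for it.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let model = Inceptionv3()                              // 1. load a trained model
    let output = try model.prediction(image: pixelBuffer)  // 2. make predictions
    return output.classLabel                               // 3. profit!!!
}
```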

Yep, probably. Some people are very excited about that approach:

Apple Introduces Core ML

When was the last time you opened up a PDF file and edited the design of the document directly?

You don’t.

PDF is not about making a document. PDF is about being able to easily view a document.

With Core ML, Apple has managed to achieve an equivalent of PDF for machine learning. With their .mlmodel format, the company is not venturing into the business of training models (at least not yet). Instead, they have rolled out a meticulously crafted red carpet for models that are already trained. It’s a carpet that deploys across their entire lineup of hardware.

As a business strategy, it’s shrewd. As a technical achievement, it’s stunning. It moves complex machine learning technology within reach of the average developer…

Well, speaking as that Average Developer here, yep, this sure sounds like a great way to dip a toe into $CURRENT_BUZZWORD without, y’know, having to actually work at it. Great stuff!


Here are some models to try it out with, or you can convert your own built with XGBoost, Caffe, LibSVM, scikit-learn, and Keras:

  • Places205-GoogLeNet CoreML (Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.)
  • ResNet50 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
  • Inception v3 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)
  • VGG16 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.)


Also note NSLinguisticTagger, which is part of the new ML family here too.
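If you haven’t met it, here’s a quick sketch of part-of-speech tagging with it, using our own sample sentence:

```swift
import Foundation

// A quick sketch of part-of-speech tagging with NSLinguisticTagger.
let text = "Machine learning opens up opportunities for new experiences."
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
tagger.string = text

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    if let tag = tag, let wordRange = Range(tokenRange, in: text) {
        print("\(text[wordRange]): \(tag.rawValue)")  // e.g. "learning: Noun"
    }
}
```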

For further updates we miss, check out awesome-core-ml and Machine Learning for iOS!

UPDATES:

YOLO: Core ML versus MPSNNGraph

Why Core ML will not work for your app (most likely)

Blog: Getting Started with Vision

MLCamera – Vision & Core ML with AVCaptureSession Inceptionv3 model

Can Core ML in iOS Really Do Hot Dog Detection Without Server-side Processing?

Bringing Machine Learning to your iOS Apps 🤖📲

Creating A Simple Game With CoreML In Swift 4

Custom classifiers in iOS11 using CoreML and Vision

Using Vision Framework for Text Detection in iOS 11

Smart Gesture Recognition in iOS 11 with Core ML and TensorFlow

Awesome-CoreML-Models: “Largest list of models for Core ML (for iOS 11+)”

fantastic-machine-learning: “A curated list of machine learning resources, preferably CoreML”

Announcing Core ML support in TensorFlow Lite

Creating a Custom Core ML Model Using Python and Turi Create

DIY Prisma, Fast Style Transfer app — with CoreML and TensorFlow

Core ML Simplified with Lumina

Style Art: “is a library that process images using COREML with a set of pre trained machine learning models and convert them to Art style.”

Beginning Machine Learning with Keras & Core ML

How I Shipped a Neural Network on iOS with CoreML, PyTorch, and React Native

TensorFlow on Mobile: Tutorial

FastPhotoStyle: “Style transfer, deep learning, feature transform”

Netron: “is a viewer for neural network and machine learning models.”

Leveraging Machine Learning in iOS For Improved Accessibility

CoreMLHelpers: “Types and functions that make it a little easier to work with Core ML in Swift.”

Benchmarking Core ML — Estimating model runtimes on iOS

Machine Learning in iOS: IBM Watson and CoreML

Detecting Whisky brands with Core ML and IBM Watson services

Swift for TensorFlow, Released & Why data scientists should start learning Swift

Build a Taylor Swift detector with the TensorFlow Object Detection API, ML Engine, and Swift

Lumina: “A camera designed in Swift for easily integrating CoreML models – as well as image streaming, QR/Barcode detection, …”

Lobe: Visual tool for deep learning models

Building a Neural Style Transfer app on iOS with PyTorch and CoreML

What’s New in Core ML 2

Create ML: How to Train Your Own Machine Learning Model in Xcode 10

Create ML Tutorial: Getting Started

Training a Text Classifier with Create ML and the Natural Language Framework

Natural Language in iOS 12: Customizing tag schemes and named entity recognition

NSHipster’s NLLanguage​Recognizer

Creating a Prisma-like App with Core ML, Style Transfer and Turi Create

NSFWDetector: “A NSFW (aka porn) detector with CoreML”

Running Keras models on iOS with CoreML

Skafos “is machine learning for iOS developers”