Under The Bridge

Tag: core ml
Core ML 3: How To Train Your Device Model

So things have certainly been moving right along in the iOS machine learning world since we last posted in 2017, haven't they? While we maintain our accustomed healthy skepticism of the frothily wild-eyed claims of The Universal Panacea Of Machine Learning you see floating around —

CONCERNED PARENT: If all your friends jumped off a bridge, would you?

MACHINE LEARNING ALGORITHM: “YESSSS!!!!”

— this new ability to train models on our devices, along with all the other new stuff in Core ML 3, is a bit of a tipping point from "curiosity" to "something to plan serious applications around," we'd say!

If you haven’t taken much notice of the lingo so far and need some bringing up to speed, check out this course list from our friends at CourseDuck:

The World’s Best Machine Learning Courses & Tutorials in 2020

And if you want lots and lots and lots of cross-platform machine learning resources, check out the awesome list:

Awesome-Mobile-Machine-Learning

But we’re focusing on the on-device training today, and from what we can tell the reigning authority in that space is Matthijs Hollemans’ blog, most notably his four-part series On-device training with Core ML, completed last month:

  1. Introduction to on-device training
  2. Rock, Paper, Scissors (Lizard? Spock?)
  3. k-Nearest Neighbors
  4. Training a Neural Network
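To give a flavor of what that series covers: Core ML 3's on-device training revolves around `MLUpdateTask`, which takes an updatable compiled model plus a batch of training examples and produces retrained parameters without the data ever leaving the device. Here is a minimal sketch, assuming a hypothetical updatable model whose training input names match whatever your own `.mlmodel` declares:

```swift
import CoreML

// Sketch only: assumes an updatable model compiled to .mlmodelc; the feature
// names inside `samples` must match the model's declared training inputs.
func retrainOnDevice(modelURL: URL,
                     samples: [MLDictionaryFeatureProvider],
                     savingTo updatedURL: URL) throws {
    // Wrap the collected examples in a batch provider for the update task.
    let trainingBatch = MLArrayBatchProvider(array: samples)

    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingBatch,
                                configuration: nil) { context in
        // Training ran entirely on-device; persist the updated model so
        // subsequent predictions pick up the new parameters.
        try? context.model.write(to: updatedURL)
    }
    task.resume()
}
```

The k-NN variant in part 3 of the series is the lightweight case — "training" is just adding labeled examples — while part 4 shows the neural-network case where actual gradient updates run on device.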

For sure read that whole series, and check out the rest of the blog too; we particularly liked

Core ML and Combine

And now you have a Combine processing chain that, every time you send it a UIImage object with imagePublisher.send(image), will automatically run a Core ML model on that image and process the results. Pretty cool!
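The shape of that pipeline is roughly the following — a hedged sketch, not the post's actual code, with `classificationModel` standing in for whatever `VNCoreMLModel` you've loaded:

```swift
import Combine
import CoreML
import Vision
import UIKit

// Sketch: a Combine subject that runs a Vision/Core ML classification
// request on every UIImage pushed through it.
func makeImageClassifierChain(model: VNCoreMLModel)
    -> (publisher: PassthroughSubject<UIImage, Never>, subscription: AnyCancellable) {

    let imagePublisher = PassthroughSubject<UIImage, Never>()

    let subscription = imagePublisher
        .compactMap { $0.cgImage }          // UIImage → CGImage for Vision
        .sink { cgImage in
            let request = VNCoreMLRequest(model: model) { request, _ in
                guard let results = request.results as? [VNClassificationObservation],
                      let top = results.first else { return }
                print("\(top.identifier): \(top.confidence)")
            }
            try? VNImageRequestHandler(cgImage: cgImage).perform([request])
        }

    return (imagePublisher, subscription)
}

// Usage: every send() drives the whole chain.
// let (imagePublisher, _) = makeImageClassifierChain(model: classificationModel)
// imagePublisher.send(someImage)
```

Downstream operators (debouncing, dispatching to the main queue for UI updates, etc.) slot into the chain the same way, which is what makes the Combine pairing so tidy.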

And if you like those enough to pay for more, he’s got not just one but two books out: Machine Learning by Tutorials and Core ML Survival Guide — which we’re pretty sure makes him the go-to guru of iOS machine learning!

Other good introductions to new features and related goodies:

Working with Create ML’s MLDataTable to Pre-Process Non-Image Data

4 Techniques You Must Know for Natural Language Processing on iOS

Face Detection and Recognition With CoreML and ARKit, and Snapchat For Cats

Advancements in Apple’s Vision Framework: 2019 Year-in-Review

Text recognition on iOS 13 with Vision, SwiftUI and Combine

Build a Core ML Recommender Engine for iOS Using Create ML

MakeML’s Automated Video Annotation Tool for Object Detection on iOS

And here are some examples of classifiers and detectors we figure look useful, interesting, or just amusing:

Sound Classification on iOS Using Core ML 3 and Create ML

How to Build a Song Recommender Using Create ML MLRecommender

Building a Fake News Detector with Turicreate

Using Core ML and Natural Language for Sentiment Analysis on iOS

Detecting Pets with the iOS Vision Framework

iOS Build Cat Vs Dog Image Classifier Using Vision In 5 minutes

Photo Stacking in iOS with Vision and Metal

Using Sky Segmentation to create stunning background animations in iOS

And in that last category, our Machine Learning Rise Of Skynet Award goes to

Building a Face Detecting Robot with URLSessionWebSocketTask, CoreML, SwiftUI and an Arduino

Some time ago I created a little side project that involved an Arduino-powered servo motor that menacingly pointed at people’s faces with the help of CoreML, mimicking the Team Fortress 2 Engineer’s Sentry Gun. With iOS 13, I decided to re-write that using the new Socket APIs and SwiftUI…

“Mimicking,” he says now, but just you wait…

UPDATES:

How to make your iOS app smarter with sentiment analysis

Build a Touchless Swipe iOS App Using ML Kit’s Face Detection API