Extracting text from an image (OCR) can be very convenient for automating operations on user-provided content. iOS 13 brings a ton of improvements in this area via the Vision framework. To try it out, follow this recipe:
- Create a new playground
- Under the Resources folder, add two images that contain text
- Update the names in the code (image1, image2) or rename the files accordingly
- Run the playground and enjoy the console output.
As usual, the code is on GitHub.
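The recipe above can be sketched roughly as follows, using Vision's `VNRecognizeTextRequest` (available from iOS 13 / macOS 10.15). This is a minimal illustration, not the exact GitHub code: the image names `image1`/`image2` come from the recipe, and the `.png` extension and macOS playground setup (`NSImage`) are assumptions — on an iOS playground you would use `UIKit` and `UIImage` instead.

```swift
import Vision
import AppKit  // assuming a macOS playground; use UIKit/UIImage on iOS

// Recognize and print all text found in an image from the Resources folder.
func recognizeText(in imageName: String) {
    // Load the image and get a CGImage for Vision (extension is an assumption)
    guard let url = Bundle.main.url(forResource: imageName, withExtension: "png"),
          let image = NSImage(contentsOf: url),
          let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        print("Could not load \(imageName)")
        return
    }

    // The completion handler receives one observation per detected text region
    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // topCandidates(1) yields the most confident transcription
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)
            }
        }
    }
    request.recognitionLevel = .accurate  // trade speed for accuracy

    // Run the request synchronously on the image
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

recognizeText(in: "image1")
recognizeText(in: "image2")
```

Setting `recognitionLevel` to `.fast` instead of `.accurate` is worth trying if you process many images and can tolerate rougher transcriptions.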