At this year's WWDC, Apple opened up the CoreML toolkit on the artificial-intelligence front.
With some time to spare today, I decided to find out just how easy it really is.
import UIKit
import CoreML
import Vision
First, both of the new frameworks, CoreML and Vision, need to be imported.
If you are only reproducing Apple's official model example you can get by without Vision, but for image recognition it is best to go through Vision's APIs. (The reason will come up below.)
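For contrast, here is a minimal sketch of the non-Vision route. The Xcode-generated Resnet50 class exposes a prediction(image:) method directly, but it takes a CVPixelBuffer, so all of the resizing and format conversion falls on you (the classifyDirectly name is my own, not part of any API):

```swift
import CoreML

// Calling the generated model class without Vision: the caller must
// already have the image as a CVPixelBuffer of the size the model expects.
func classifyDirectly(buffer: CVPixelBuffer) throws -> String {
    let model = Resnet50()
    let output = try model.prediction(image: buffer)
    return output.classLabel // the top predicted label as a String
}
```

Vision's VNImageRequestHandler does that CVPixelBuffer preparation for you, which is exactly why it is the more convenient path.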
@IBAction func openLibrary(_ sender: Any) {
    if UIImagePickerController.isSourceTypeAvailable(UIImagePickerControllerSourceType.photoLibrary) {
        let imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = UIImagePickerControllerSourceType.photoLibrary
        imagePicker.allowsEditing = true
        self.present(imagePicker, animated: true, completion: nil)
    }
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
        imagePicked.contentMode = .scaleToFill
        imagePicked.image = pickedImage
    }
    picker.dismiss(animated: true, completion: nil)
}
This is the standard image-picking flow, so I won't explain it here.
@IBAction func saveImage(_ sender: Any) {
    let imageData = imagePicked.image?.cgImage
    let model = try! VNCoreMLModel(for: Resnet50().model)
    let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
    let handler = VNImageRequestHandler(cgImage: imageData!)
    try! handler.perform([request])
}
let model = try! VNCoreMLModel(for: Resnet50().model)//1
This line picks the model; I went with Apple's recommended Resnet50(). (With third-party model converters now available, compiling your own model is also quite easy.)
let handler = VNImageRequestHandler(cgImage: imageData!)//2
This wraps the image into the data type the model expects (CVPixelBuffer). There are other ways to do the conversion, but Vision's handler is by far the easiest.
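To show what that convenience buys us, here is a hypothetical helper (my own code, not from any framework) doing the manual conversion Vision spares us: resize a UIImage to the model's expected 224×224 input and copy it into a CVPixelBuffer by drawing through a CGContext.

```swift
import UIKit

// Manually convert a UIImage into a CVPixelBuffer sized for the model.
// This is the boilerplate that VNImageRequestHandler handles internally.
func pixelBuffer(from image: UIImage, width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
    var buffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32ARGB, attrs, &buffer)
    guard let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }

    UIGraphicsPushContext(context)
    // CoreGraphics coordinates are flipped relative to UIKit, so flip before drawing.
    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)
    image.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    return pixelBuffer
}
```

One Vision initializer replaces all of the above, which is the whole argument for using it.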
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)//3
This builds the request that runs the model and delivers the results to the myResultsMethod callback.
guard let results = request.results as? [VNClassificationObservation]//4
    else { fatalError("unexpected result type from VNCoreMLRequest") }
The returned results are stored in results, an array of VNClassificationObservation (one entry per class label, each with a confidence score).
for classification in results {
    print(classification.identifier, // the scene label
          classification.confidence)
}
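Assembled into one piece, the completion handler looks like this (a sketch built from the fragments above; printing only the top few results is my own tweak, since the observations arrive sorted by confidence):

```swift
import Vision

// Completion handler for the VNCoreMLRequest: unpack the classification
// observations and print the most confident labels.
func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation] else {
        print("unexpected result type from VNCoreMLRequest")
        return
    }
    for classification in results.prefix(5) {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
```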
And with that, an app with a touch of artificial intelligence is done.
Strictly speaking, in fewer than ten lines of code.