
I'm trying to use the Google Mobile Vision CocoaPod to recognize faces in my app. Looking through the documentation, I could only find it in Objective-C: https://developers.google.com/vision/ios/detect-faces-tutorial

Is there a Swift version, and if so, where can I find it? If not, how can I go about converting this code to Swift?

I don't want to do only facial detection; I also want to do landmark detection, which is why I'm not using the native iOS face detection API.


1 Answer


It seems Google Mobile Vision has no Swift documentation, but don't worry: all of its Objective-C methods and properties are automatically bridged to Swift, so you can call them directly without any wrapper code.

For example:

UIImage *image = [UIImage imageNamed:@"multi-face.jpg"];
NSArray<GMVFaceFeature *> *faces = [self.faceDetector featuresInImage:self.faceImageView.image
                                                          options:nil];

would become

let image = UIImage(named: "multi-face.jpg")
let faces = self.faceDetector.features(in: self.faceImageView.image, options: nil)

(In Swift 3 and later the compiler shortens featuresInImage:options: to features(in:options:); older Swift versions import it as featuresInImage(_:options:).)
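Since you also need landmark detection, here is a rough sketch of what the full flow from the tutorial might look like in Swift 3+. The names below (GMVDetector, GMVDetectorTypeFace, GMVDetectorFaceLandmarkType, features(in:options:), and the landmark properties on GMVFaceFeature) are the automatic Swift mappings of the Objective-C API shown in the tutorial, but I haven't verified them against every pod version, so double-check the exact spellings in GMVDetector.h and GMVFeature.h:

import UIKit
// Depending on your setup, the Mobile Vision types are visible either through a
// bridging header (see the note below) or through "import GoogleMobileVision".

// Ask the detector for facial landmarks as well as face bounding boxes.
// The Objective-C value is GMVDetectorFaceLandmarkAll; check GMVDetector.h
// if the Swift spelling differs in your pod version.
let options = [GMVDetectorFaceLandmarkType: GMVDetectorFaceLandmark.all.rawValue]
let faceDetector = GMVDetector(ofType: GMVDetectorTypeFace, options: options)

if let image = UIImage(named: "multi-face.jpg"),
   let faces = faceDetector.features(in: image, options: nil) as? [GMVFaceFeature] {
    for face in faces {
        // GMVFaceFeature exposes each detected landmark as a CGPoint.
        if face.hasLeftEyePosition { print("Left eye: \(face.leftEyePosition)") }
        if face.hasNoseBasePosition { print("Nose base: \(face.noseBasePosition)") }
        if face.hasMouthPosition { print("Mouth: \(face.mouthPosition)") }
    }
}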

Just follow the instructions from this question: How to use Objective-C Cocoapods in a Swift Project?
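For the setup itself: add the pod from the tutorial (pod 'GoogleMobileVision/FaceDetector') to your Podfile, run pod install, and expose the Objective-C headers to Swift. One common way is a bridging header; the file name below is just a placeholder, and this assumes your target has modules enabled, since it reuses the same @import the tutorial's Objective-C code uses:

// YourApp-Bridging-Header.h
// (placeholder name; point the "Objective-C Bridging Header" build setting at this file)
// Makes GMVDetector, GMVFaceFeature, etc. visible to your Swift code.
@import GoogleMobileVision;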