I am detecting text in a live camera feed using Swift, but the frames drawn are not lining up with the recognized words. The application works fine when I detect text in a still image: the rectangles are drawn exactly around each word.
For the live camera, what I did is create a capture video session and implement AVCaptureVideoDataOutputSampleBufferDelegate; then in didOutput I take the buffered image, convert it to a UIImage, and detect text in it. But the same strategy is not working correctly in this case.
Also, didOutput is called for every frame after the video session starts, and what I want is to only call my detection function when the user moves the camera or when text is found.
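To stop reacting to every single frame, one common approach is to drop frames while a detection is still in flight and add a simple time-based throttle. This is only a sketch of that idea; `isProcessing`, `lastDetection`, and `runTextDetection` are hypothetical members you would add to your capture delegate class, and the 0.5 s interval is an arbitrary choice:

```swift
import AVFoundation

// Hypothetical properties on the capture delegate class:
private var isProcessing = false          // true while a frame is being analyzed
private var lastDetection = Date.distantPast

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Skip this frame if a previous one is still being processed,
    // or if we detected text very recently (throttle to ~2 runs/sec).
    guard !isProcessing,
          Date().timeIntervalSince(lastDetection) > 0.5 else { return }

    isProcessing = true
    // runTextDetection is a placeholder for your own async detection;
    // reset the flag in its completion handler so at most one frame
    // is ever in flight at a time.
    runTextDetection(sampleBuffer) { [weak self] in
        self?.lastDetection = Date()
        self?.isProcessing = false
    }
}
```

Detecting "the user moved the camera" would need extra work (e.g. comparing consecutive frames or using Core Motion), but the flag-plus-throttle pattern above already avoids running detection on every frame.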
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    print("didOutput")
    // Grab the pixel buffer for this frame.
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        print("no image")
        return
    }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    liveCamImage = self.convert(cmage: ciImage)
    DispatchQueue.main.async {
        self.drawRectOnText(imagefromCam: self.liveCamImage)
    }
}
Is there any solution?

// convert function
func convert(cmage: CIImage) -> UIImage {
    // Note: creating a CIContext is expensive; for per-frame use it
    // should be created once and reused.
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(cmage, from: cmage.extent)!
    return UIImage(cgImage: cgImage)
}
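One thing I suspect is that the UIImage round trip loses the capture orientation, so the detected boxes end up rotated or mirrored relative to the preview. Below is a sketch of what I understand the alternative to be: feeding the CVPixelBuffer straight to Vision with an explicit orientation. The `.right` orientation is an assumption for a portrait app using the back camera; it would need adjusting for other orientations:

```swift
import Vision
import AVFoundation

// Sketch: run text recognition directly on the pixel buffer instead of
// converting to UIImage first. `.right` assumes portrait UI with the
// back camera (the buffer itself is landscape).
func detectText(in sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // boundingBox is normalized (0...1) with the origin at the
            // bottom-left; it must be converted to view coordinates
            // (e.g. via the preview layer) before drawing rectangles.
            print(observation.boundingBox)
        }
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .right,
                                        options: [:])
    try? handler.perform([request])
}
```

With this approach the boxes come back in the buffer's normalized coordinate space, so the remaining work is mapping them onto the preview view rather than onto a converted UIImage.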