Intro (the dependency versions I use):
implementation 'androidx.camera:camera-core:1.0.0-alpha10'
implementation 'androidx.camera:camera-camera2:1.0.0-alpha10'
implementation 'androidx.camera:camera-lifecycle:1.0.0-alpha10'
implementation 'androidx.camera:camera-view:1.0.0-alpha07'
implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
implementation 'com.google.firebase:firebase-ml-vision-barcode-model:16.0.2'
I solved this issue via FirebaseVisionImage.fromBitmap(bitmap), where bitmap is an image that I manually cropped and rotated according to the preview configuration.
The steps are:
- when you set up ImageAnalysis.Builder() and Preview.Builder(), obtain the on-screen rendered size of the androidx.camera.view.PreviewView element:

previewView.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED)
val previewSize = Size(previewView.width, previewView.height)

Then pass that size into your own ImageAnalysis.Analyzer implementation (I refer to it below as the viewFinderSize variable; a wiring sketch follows this step).
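A minimal wiring sketch for this step, assuming the previewSize from above is what ends up in the analyzer as viewFinderSize; the BarcodeAnalyzer class name and the single-thread executor are placeholders of mine, not something the original code prescribes:

import android.util.Size
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Hypothetical analyzer that receives the on-screen preview size.
class BarcodeAnalyzer(private val viewFinderSize: Size) : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy) {
        // crop + rotate + detect as described in the next step
        image.close() // always close the proxy so the next frame can be delivered
    }
}

val cameraExecutor = Executors.newSingleThreadExecutor()
val imageAnalysis = ImageAnalysis.Builder().build()
imageAnalysis.setAnalyzer(cameraExecutor, BarcodeAnalyzer(previewSize))
// bind imageAnalysis together with your Preview use case to the lifecycle owner as usual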
- when override fun analyze(image: ImageProxy) is called, do the manual crop of the received ImageProxy. I used the snippet from another SO answer about the distorted YUV_420_888 image: https://stackoverflow.com/a/45926852/2118862 (a rough sketch of that conversion is shown right below).
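For completeness, a naive sketch of such a YUV_420_888-to-Bitmap conversion is shown here, under the assumption that the planes are tightly packed; it ignores row/pixel strides and other edge cases that the linked answer handles, so treat it only as a starting point:

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import androidx.camera.core.ImageProxy
import java.io.ByteArrayOutputStream

// Naive YUV_420_888 -> NV21 -> JPEG -> Bitmap round-trip (no stride handling).
private fun ImageProxy.toBitmap(): Bitmap {
    val yBuffer = planes[0].buffer   // Y plane
    val vuBuffer = planes[2].buffer  // VU plane (NV21 interleaving on many devices)
    val ySize = yBuffer.remaining()
    val vuSize = vuBuffer.remaining()
    val nv21 = ByteArray(ySize + vuSize)
    yBuffer.get(nv21, 0, ySize)
    vuBuffer.get(nv21, ySize, vuSize)
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, width, height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 90, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}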
When you do the cropping, keep in mind that the image the ImageAnalysis use case receives is vertically centred relative to your Preview use case. In other words, after rotation the received image is vertically centred as if it were placed behind your preview area (even if the preview area is smaller than the image passed into analysis), so the crop area has to be calculated in both directions from the vertical centre: up and down.

The vertical size of the crop (its height) has to be derived from the horizontal size. When the image arrives in the analyzer, its full width corresponds to the full width of your preview area (100% of the width in the preview equals 100% of the width in the analysis image), so there are no hidden zones in the horizontal dimension. That makes it possible to calculate the vertical crop size. I did it with the following code:
var bitmap = ... // obtain the bitmap as suggested in the SO link above

// rotate the analysis image to portrait so it matches the preview orientation
val matrix = Matrix()
matrix.postRotate(90f)
bitmap = Bitmap.createBitmap(bitmap, 0, 0, image.width, image.height, matrix, true)

val cropHeight = if (bitmap.width < viewFinderSize!!.width) {
    // preview area is larger than the analysed image: scale the preview height down
    val koeff = bitmap.width.toFloat() / viewFinderSize!!.width.toFloat()
    viewFinderSize!!.height.toFloat() * koeff
} else {
    // preview area is smaller than the analysed image: scale the preview height up
    val prc = 100 - (viewFinderSize!!.width.toFloat() / (bitmap.width.toFloat() / 100f))
    viewFinderSize!!.height + ((viewFinderSize!!.height.toFloat() / 100f) * prc)
}

// crop symmetrically around the vertical centre of the rotated image
val cropTop = (bitmap.height / 2) - (cropHeight / 2)
bitmap = Bitmap.createBitmap(bitmap, 0, cropTop.toInt(), bitmap.width, cropHeight.toInt())
The final value in the bitmap variable is the cropped image, ready to be passed into FirebaseVisionImage.fromBitmap(bitmap).
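From there, a minimal detection sketch could look like this; the detector setup, the logging and the closing of the ImageProxy are my own additions, not part of the original answer:

import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

val visionImage = FirebaseVisionImage.fromBitmap(bitmap)
val detector = FirebaseVision.getInstance().visionBarcodeDetector
detector.detectInImage(visionImage)
    .addOnSuccessListener { barcodes ->
        // barcodes are detected only in the area that was visible inside the PreviewView
        barcodes.forEach { barcode -> Log.d("Scanner", "value=${barcode.rawValue}") }
    }
    .addOnFailureListener { e -> Log.e("Scanner", "Barcode detection failed", e) }
    .addOnCompleteListener { image.close() } // release the frame so the analyzer gets the next one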
PS. You are welcome to improve the suggested approach.