The problem I have is the following:
I have a custom board, with a screen resolution of 900x500 pixels. The board runs Android 8.0.
I receive a 1280x720 video stream (from Android Auto, when a cellphone is connected to the board), but the actual content inside the video matches my screen size of 900x500.
Basically, the full video contains a smaller image inside a larger frame, with black margins of (1280-900)/2 = 190 pixels on each side and (720-500)/2 = 110 pixels on the top and bottom, like the following image:
(I had an image to go here, but since it's my first post I'm not allowed to attach it, so I drew it in ASCII. EDIT: I realized a link is created for the attached image, but I'll leave the ASCII version just in case.)
                              1280
****************************************************************
* FULL VIDEO                   |                               *
*                             110                              *
*                              |                               *
*            **************************************            *
*            *                900                 *            *
*            *                                    *            *
*            *                                    *            *
*            *                                    *            *
*----190-----*      CONTENT          500          *----190-----* 720
*            *                                    *            *
*            *                                    *            *
*            *                                    *            *
*            **************************************            *
*                              |                               *
*                             110                              *
*                              |                               *
****************************************************************
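The margin arithmetic above generalizes to any pair of resolutions (as my real code needs). A small helper could compute them; the class and method names here are my own, just for illustration:

```java
// Computes the symmetric black margins around the content area.
// Assumes the content is centered inside the video frame.
final class MarginCalc {
    static int[] margins(int videoW, int videoH, int contentW, int contentH) {
        int marginX = (videoW - contentW) / 2;  // margin on the left and right
        int marginY = (videoH - contentH) / 2;  // margin on the top and bottom
        return new int[] { marginX, marginY };
    }

    public static void main(String[] args) {
        int[] m = margins(1280, 720, 900, 500);
        System.out.println(m[0] + " " + m[1]);  // 190 110
    }
}
```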
I'm using MediaCodec to decode the video stream, and a SurfaceView to render the video on the screen (touch events are also needed).
With this setup, the full 1280x720 video is scaled down to the 900x500 screen, so all of the content is visible (i.e. the entire 1280x720 frame is scaled down to 900x500). What I actually need is to crop the black margins from the sides and display only the 900x500 image inside.
This seemed like it would be easy enough to do, but I'm actually having trouble finding a solution.
NOTE: To keep the explanation simple and the code snippets understandable, I'm using hard-coded numbers for the resolutions described above, though the actual code is set up to handle different resolutions.
Things I have tried:
- Using the SurfaceView method setLayoutParams and passing it layout params with margins:
Code snippet:
ViewGroup.MarginLayoutParams layoutParams = (ViewGroup.MarginLayoutParams) mSurfaceView.getLayoutParams();
layoutParams.width = 900;
layoutParams.height = 500;
layoutParams.setMargins(190, 110, 190, 110);
mSurfaceView.setLayoutParams(layoutParams);
mSurfaceView.requestLayout();
This of course didn't work. The layout parameters belong to the SurfaceView itself, so calling setMargins applies them to the SurfaceView and makes it smaller: 900 - 190*2 = 520 and 500 - 110*2 = 280. I ended up with a 520x280 surface (not even the whole screen is used) with the full video shrunk to that size. So in the end no margins were cropped from the video itself.
What I tried next was to set up the SurfaceView with a width and height of 1280 and 720 respectively (a SurfaceView larger than the screen itself) and then crop it down to the screen size with setMargins. That would be basically what I need.
ViewGroup.MarginLayoutParams layoutParams = (ViewGroup.MarginLayoutParams) mSurfaceView.getLayoutParams();
layoutParams.width = 1280;
layoutParams.height = 720;
layoutParams.setMargins(190, 110, 190, 110);
mSurfaceView.setLayoutParams(layoutParams);
mSurfaceView.requestLayout();
This didn't work either: since the width and height are bigger than the screen size, Android doesn't make the view 1280x720, but rather the maximum size it can (the screen size), so I ended up with the same result as before.
- Using the SurfaceView methods setScaleX and setScaleY
Like I mentioned, the video is automatically scaled down when rendered onto the surface, so I tried to scale it back up. The width is scaled from 1280 down to 900 to fit the surface, so I scaled it back up by 1280/900 = 1.42222...; same idea for the height (720/500 = 1.44).
ViewGroup.MarginLayoutParams layoutParams = (ViewGroup.MarginLayoutParams) mSurfaceView.getLayoutParams();
layoutParams.width = 900;
layoutParams.height = 500;
mSurfaceView.setLayoutParams(layoutParams);
mSurfaceView.setScaleX(1280f / 900f);  // ~1.4222
mSurfaceView.setScaleY(720f / 500f);   // 1.44
mSurfaceView.requestLayout();
This was indeed the closest approach to what I wanted. However, the video got all messed up, particularly its colors. I suppose the main reason is that a factor like 1280/900 has a non-terminating decimal expansion, so that kind of scaling didn't work out well.
Also, the approach of scaling down first and then scaling back up didn't sound good from the beginning, because information can be lost. If the scaling had been done with nicer numbers (for example from 1000 down to 500 and back up to 1000) it might have given a better result, but it's still not a good approach because I want this to work with different resolutions.
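As a side note, the exact factors can be computed from the actual sizes instead of rounding by hand; a sketch with illustrative names of my own:

```java
// Computes the factors needed to scale the surface content back up to
// the original video size after it has been shrunk to fit the screen.
final class ScaleCalc {
    static float[] upscaleFactors(int videoW, int videoH, int surfaceW, int surfaceH) {
        return new float[] { videoW / (float) surfaceW, videoH / (float) surfaceH };
    }

    public static void main(String[] args) {
        float[] s = upscaleFactors(1280, 720, 900, 500);
        System.out.println(s[0] + " " + s[1]);  // width ~1.4222, height 1.44
    }
}
```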
- MediaCodec scale to fit and scale to fit with cropping
On the MediaCodec side, I tried both scaling modes:
mCodec.setVideoScalingMode(MediaCodec.VIDEO_SCALING_MODE_SCALE_TO_FIT);
and
mCodec.setVideoScalingMode(MediaCodec.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING);
VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING sounded promising, but it only means the video will be cropped as needed to fill the entire surface while maintaining its aspect ratio (so the content is not distorted).
It seems like it would be good if MediaCodec had another option like VIDEO_SCALING_MODE_NO_SCALING, but there isn't one; just the two options above.
NOTE: While testing, I made a simple app to experiment more easily: it has a SurfaceView, but instead of video I simply used a 1280x720 image and drew it with the SurfaceView.draw method. This had a different result: the image was not scaled, so only the content that fit on the screen was drawn (the first 900 pixels in width and the first 500 pixels in height). It would be a step forward if I could get the same behavior with MediaCodec, because then I would just need to center the image and the video's margins would be left out.
- Working with the MediaCodec inputBuffer.
Another thing I tried was to modify the video via the buffers of data that MediaCodec decodes. The idea was to pick up the inputBuffer, convert it to a Bitmap, remove the margins from the bitmap, and then convert the bitmap back to a ByteBuffer.
I ran into problems implementing this when converting the inputBuffer to a Bitmap. I started reading about how MediaCodec uses YUV rather than ARGB, and how the exact type of YUV depends on the source.
I ended up dropping this idea before getting any results, mainly because it seemed you needed to know which YUV format you were working with, and if it changed, the solution would break. It also looked like it would perform poorly, since the ByteBuffer -> Bitmap -> ByteBuffer conversion would run for every frame.
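For completeness, the cropping step itself (leaving aside the YUV-format question) would just be a row-by-row copy per plane. A sketch for a single 8-bit plane, with hypothetical names of my own; the chroma planes would need the same treatment at their own resolution, which depends on the exact YUV layout:

```java
import java.util.Arrays;

// Crops the centered contentW x contentH region out of a single 8-bit
// image plane of size rowStride x height (e.g. the Y plane of an
// I420/NV12 frame). Assumes the content is centered in the plane.
final class PlaneCrop {
    static byte[] cropCenter(byte[] plane, int rowStride, int height,
                             int contentW, int contentH) {
        int left = (rowStride - contentW) / 2;
        int top = (height - contentH) / 2;
        byte[] out = new byte[contentW * contentH];
        for (int row = 0; row < contentH; row++) {
            // Copy one row of the content region, skipping the margins.
            System.arraycopy(plane, (top + row) * rowStride + left,
                             out, row * contentW, contentW);
        }
        return out;
    }

    public static void main(String[] args) {
        // 4x4 plane; crop the centered 2x2 region.
        byte[] plane = {
            0, 0, 0, 0,
            0, 1, 2, 0,
            0, 3, 4, 0,
            0, 0, 0, 0,
        };
        System.out.println(Arrays.toString(cropCenter(plane, 4, 4, 2, 2)));  // [1, 2, 3, 4]
    }
}
```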
Anyway, I tried a few more things, but I don't want to make this any longer than it already is. Most of them were related to the first two approaches I mentioned: working with the SurfaceView.
Any idea on how I can resolve this?
NOTE: I can make any necessary changes, like replacing the SurfaceView with a TextureView, for example. Also, I'm working with a custom board and have full control of the AOSP, not just the app. In fact, the app is part of the AOSP build, so I can use hidden methods or even modify the AOSP itself. However, it seems like none of that should be necessary, and this could all be resolved in the app with regular SDK methods.
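To illustrate the TextureView route: my understanding is that a Matrix applied with setTransform could scale the frame back up and shift the margins off-screen. This is an untested sketch of what I have in mind, with the hard-coded numbers from above, not something I have working:

```java
// Untested sketch: stretch the 900x500 TextureView content back to the
// video's 1280x720 size, then shift it so the centered 900x500 content
// fills the view and the black margins fall outside it.
Matrix m = new Matrix();
m.setScale(1280f / 900f, 720f / 500f);  // undo the fit-to-view scaling
m.postTranslate(-190f, -110f);          // push the margins off-screen
mTextureView.setTransform(m);
```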
Thanks and sorry for the long question! This is my first one :P