Using Unity and the new 1.1 version of ARCore, the API exposes some new ways of getting the camera information. However, I can't find any good examples of saving that data to local storage as a file, a JPG for example.

The ARCore SDK includes a nice example of retrieving the camera data and doing something with it here: https://github.com/google-ar/arcore-unity-sdk/blob/master/Assets/GoogleARCore/Examples/ComputerVision/Scripts/ComputerVisionController.cs#L212 and there are a few examples of retrieving the camera data in that class, but nothing around saving it.

I've seen this: How to take & save picture / screenshot using Unity ARCore SDK?, which uses the older API's way of getting data and doesn't really go into detail on saving, either.

What I ideally want is a way to turn the data from `Frame.CameraImage.AcquireCameraImageBytes()` in the API into a JPG stored on disk, through Unity.

Update

I've since got it working, mainly by digging through this issue on the ARCore GitHub page: https://github.com/google-ar/arcore-unity-sdk/issues/72#issuecomment-355134812 and modifying Sonny's answer below, so it's only fair that his answer gets accepted.

In case anyone else is trying to do this, I had to do the following steps:

  1. Add a callback to the Start method to run your OnImageAvailable method when the image is available:

    public void Start()
    {
        TextureReaderComponent.OnImageAvailableCallback += OnImageAvailable;
    }
    
  2. Add a TextureReader (from the computer vision example provided with the SDK) to your camera and your script

  3. Your OnImageAvailable should look a bit like this:

    /// <summary>
    /// Handles a new CPU image.
    /// </summary>
    /// <param name="format">The format of the image.</param>
    /// <param name="width">Width of the image, in pixels.</param>
    /// <param name="height">Height of the image, in pixels.</param>
    /// <param name="pixelBuffer">Pointer to raw image buffer.</param>
    /// <param name="bufferSize">The size of the image buffer, in bytes.</param>
    private void OnImageAvailable(TextureReaderApi.ImageFormatType format, int width, int height, IntPtr pixelBuffer, int bufferSize)
    {
        if (m_TextureToRender == null || m_EdgeImage == null || m_ImageWidth != width || m_ImageHeight != height)
        {
            m_TextureToRender = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
            m_EdgeImage = new byte[width * height * 4];
            m_ImageWidth = width;
            m_ImageHeight = height;
        }
    
        System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, m_EdgeImage, 0, bufferSize);
    
        // Update the rendering texture with the sampled image.
        m_TextureToRender.LoadRawTextureData(m_EdgeImage);
        m_TextureToRender.Apply();
    
        var encodedJpg = m_TextureToRender.EncodeToJPG();
        var path = Application.persistentDataPath;
    
        File.WriteAllBytes(path + "/test.jpg", encodedJpg);
    }
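
One extra step that's an assumption on my part rather than something I strictly needed (standard C# event hygiene): unsubscribe from the callback when the script is destroyed, so the TextureReader doesn't keep invoking a destroyed component:

    public void OnDestroy()
    {
        // Mirror of the Start() subscription above
        TextureReaderComponent.OnImageAvailableCallback -= OnImageAvailable;
    }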
    
Jordan Robinson

2 Answers

In Unity, it should be possible to load the raw image data into a texture and then save it to a JPG using UnityEngine.ImageConversion.EncodeToJPG. Example code:

using System.IO;
using GoogleARCore;
using UnityEngine;

public class Example : MonoBehaviour
{
    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    private void Awake()
    {
        // width and height should match the dimensions of the camera image
        _texture = new Texture2D(width, height, _format, false);
    }

    private void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            // Load the data into a texture 
            // (this is an expensive call, but it may be possible to optimize...)
            _texture.LoadRawTextureData(image);
            _texture.Apply();
        }
    }

    public void SaveImage()
    {
        var encodedJpg = _texture.EncodeToJPG();
        File.WriteAllBytes("test.jpg", encodedJpg);
    }
}

However, I'm not sure whether that TextureFormat corresponds to the format returned by Frame.CameraImage.AcquireCameraImageBytes(). (I'm familiar with Unity but not ARCore.) See Unity's documentation on TextureFormat, and check whether it is compatible with ARCore's ImageFormatType.

Also, test whether the code is performant enough for your application.
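
If encoding every frame turns out to be too slow, one sketch (assuming a one-off capture is enough; `RequestCapture` is a hypothetical method you would wire to your own UI) is to keep the Update loop light and only encode when asked:

    private bool _captureRequested;

    // Hypothetical hook for a UI button
    public void RequestCapture()
    {
        _captureRequested = true;
    }

    private void LateUpdate()
    {
        if (!_captureRequested || _texture == null) return;
        _captureRequested = false;

        // EncodeToJPG and WriteAllBytes still run on the main thread,
        // but now only once per requested capture instead of every frame.
        var encodedJpg = _texture.EncodeToJPG();
        var path = System.IO.Path.Combine(Application.persistentDataPath, "capture.jpg");
        System.IO.File.WriteAllBytes(path, encodedJpg);
    }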

EDIT: As user @Lece explains, save the encoded data with File.WriteAllBytes. I've updated my code example above, as I originally omitted that step.

EDIT #2: For the complete answer specific to ARCore, see the update to the question post. The comments here may also be useful - Jordan specified that "the main part was to use the texture reader from the computer vision sdk example here".

sonnyb
  • This helps quite a bit, I had to take some of the code from here: https://github.com/google-ar/arcore-unity-sdk/issues/72#issuecomment-355134812 although it just saves a black square instead of the data now. – Jordan Robinson Mar 30 '18 at 22:03
  • Are you able to see the colour image on the screen with your `Texture2D`? I want to narrow down whether the issue is with the `AcquireCameraImageBytes()` step, `LoadRawTextureData()` & `Apply()` step, or the final `EncodeToJpg()` & `WriteAllBytes()` step. – sonnyb Mar 31 '18 at 12:03
  • @JordanRobinson have you resolved the problem with black squares? I am facing the same issue. – midnightcoffee Apr 04 '18 at 14:40
  • @midnightcoffee yeah, if you check the update in the original question, that's what I went with and it fixed the black squares. The main part was to use the texture reader from the computer vision SDK example here: https://github.com/google-ar/arcore-unity-sdk/tree/master/Assets/GoogleARCore/Examples/ComputerVision – Jordan Robinson Apr 04 '18 at 14:43
  • Yes, I have copy pasted your code and I am still getting black squares – midnightcoffee Apr 05 '18 at 08:19
  • I fixed it, not sure what the reason was, but it works now, thanks! Still, I am confused: aren't we getting images from ARCore in YUV format (_An ARCore camera image with its data accessible from the CPU in YUV-420-888 format._ as mentioned [here](https://developers.google.com/ar/reference/unity/struct/GoogleARCore/CameraImageBytes))? How does it work by just loading it into an RGBA32 texture without converting it? – midnightcoffee Apr 05 '18 at 09:35
  • Hmm, I'm not sure, but in the [ARCore GitHub issue page](https://github.com/google-ar/arcore-unity-sdk/issues/72) that was mentioned in the update to the opening post, they seem to use TextureFormat.RGBA32 as well... maybe `TextureReader` actually provides the data in RGB format, or maybe the shader is somehow handling it (check out [this post](https://forum.unity.com/threads/opengl-rgba-conversion-extensions.12354/) that seems relevant) – sonnyb Apr 05 '18 at 12:13
Since I'm not familiar with ARCore, I shall keep this generic.

  1. Load the byte array into your Texture2D using LoadRawTextureData() and Apply()
  2. Encode the texture using EncodeToJPG()
  3. Save the encoded data with File.WriteAllBytes(path + ".jpg", encodedBytes)
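
Put together, the three steps above might look like the following sketch (the helper name `SaveToJpg` is mine, and I'm assuming the raw bytes are RGBA32 — adjust the TextureFormat to match your source data):

    using System.IO;
    using UnityEngine;

    public static class TextureSaver
    {
        public static void SaveToJpg(byte[] rawRgba, int width, int height, string path)
        {
            // 1. Load the byte array into a Texture2D
            var texture = new Texture2D(width, height, TextureFormat.RGBA32, false);
            texture.LoadRawTextureData(rawRgba);
            texture.Apply();

            // 2. Encode the texture to JPG
            var encodedBytes = texture.EncodeToJPG();

            // 3. Save the encoded data
            File.WriteAllBytes(path + ".jpg", encodedBytes);
        }
    }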
Lece
  • I updated the example code in my own answer with your 3rd step, which I had forgotten. Thank you. – sonnyb Mar 30 '18 at 19:46
  • @sonny No problem. You submitted before I'd finished typing but I thought I'd post it anyway due to that last key part. It's the same general idea, so all good :) – Lece Mar 30 '18 at 20:00