
I am using CameraX version 1.0.0-alpha05. I am on this version because I am following a PyTorch object detection Android sample.

I know of the existence of ssh <username>@<hostname>.local -Y "mplayer tv://device=/dev/video0", but I do not know how to capture that video feed so it can be fed into the ImageAnalysis or Preview object.

Any help or clues are much appreciated!

final TextureView textureView = getCameraPreviewTextureView();
final PreviewConfig previewConfig = new PreviewConfig.Builder().build();
final Preview preview = new Preview(previewConfig);
preview.setOnPreviewOutputUpdateListener(output -> {
    textureView.setSurfaceTexture(output.getSurfaceTexture());
    Matrix m = new Matrix();
    m.postRotate(90, 1000, 350);
    textureView.setTransform(m);
});

final ImageAnalysisConfig imageAnalysisConfig =
        new ImageAnalysisConfig.Builder()
                .setTargetResolution(new Size(500, 500))
                .setCallbackHandler(mBackgroundHandler)
                .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
                .build();

imageAnalysis = new ImageAnalysis(imageAnalysisConfig);
imageAnalysis.setAnalyzer((image, rotationDegrees) -> {
    // Throttle: skip frames so analysis runs at most every 500 ms
    if (SystemClock.elapsedRealtime() - mLastAnalysisResultTime < 500) {
        return;
    }
    // This is the portion where PyTorch does its AI predictions
    final R2 result = analyzeImage(image, rotationDegrees);
    if (result != null) {
        mLastAnalysisResultTime = SystemClock.elapsedRealtime();
        runOnUiThread(() -> applyToUiAnalyzeImageResult(result));
    }
});

CameraX.bindToLifecycle(this, preview, imageAnalysis);
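The frame-skipping logic inside the analyzer above can be isolated into a small helper, which also makes the 500 ms throttle testable off-device. This is only a sketch: the class name `AnalysisThrottler` and the injected clock are my own assumptions (on Android you would pass `SystemClock::elapsedRealtime`), not part of any CameraX API.

```java
import java.util.function.LongSupplier;

// Sketch of the frame-throttling pattern from the analyzer:
// drop frames so that analysis runs at most once per interval.
public class AnalysisThrottler {
    private final long intervalMs;
    private final LongSupplier clock; // e.g. SystemClock::elapsedRealtime on Android
    private long lastRunMs = Long.MIN_VALUE;

    public AnalysisThrottler(long intervalMs, LongSupplier clock) {
        this.intervalMs = intervalMs;
        this.clock = clock;
    }

    // Returns true if enough time has passed since the last accepted
    // frame, and records this frame as accepted.
    public boolean shouldAnalyze() {
        long now = clock.getAsLong();
        if (lastRunMs != Long.MIN_VALUE && now - lastRunMs < intervalMs) {
            return false; // too soon: skip this frame
        }
        lastRunMs = now;
        return true;
    }
}
```

Note the original code only updates the timestamp when `analyzeImage` returns a non-null result; this sketch simplifies that by recording every accepted frame.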
I am no longer working on this issue, but if I were to come back to it, I would probably forgo most of the CameraX objects used here and instead take frames one by one via an HTTP connection from the RPi and feed them directly to the AI model. Because I am handling some lifecycles on my own, I would probably use Android's LiveData and be wary of the HTTP connection states too. – Ryan Aug 10 '21 at 04:12
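The frame-by-frame HTTP approach mentioned in the comment could start from splitting a raw MJPEG byte stream into individual JPEG frames. A minimal sketch, assuming the RPi serves MJPEG (e.g. via mjpg-streamer; the server and the class name `MjpegFrameSplitter` are assumptions): it scans for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers. Each extracted frame could then be decoded with `BitmapFactory.decodeByteArray` and handed to the model.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a raw MJPEG byte stream into individual JPEG frames
// by scanning for the SOI (FF D8) and EOI (FF D9) markers.
public class MjpegFrameSplitter {
    public static List<byte[]> splitFrames(byte[] stream) {
        List<byte[]> frames = new ArrayList<>();
        int start = -1;
        for (int i = 0; i + 1 < stream.length; i++) {
            if ((stream[i] & 0xFF) == 0xFF && (stream[i + 1] & 0xFF) == 0xD8) {
                start = i; // start of a JPEG frame
            } else if (start >= 0
                    && (stream[i] & 0xFF) == 0xFF && (stream[i + 1] & 0xFF) == 0xD9) {
                // copy the frame including the two-byte EOI marker
                byte[] frame = new byte[i + 2 - start];
                System.arraycopy(stream, start, frame, 0, frame.length);
                frames.add(frame);
                start = -1;
            }
        }
        return frames;
    }
}
```

This ignores the multipart boundary headers between frames, which is usually safe because JPEG byte-stuffs literal FF bytes in entropy-coded data, but a production reader should parse the multipart headers properly.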

0 Answers