
My goal is to set up video conferencing, run a TensorFlow model on top of the local stream, and temporarily save the results in the backend.

  • I have implemented both the video conferencing and the tflite model detection individually. The problem arises when I try to integrate the two.
  • I have tried two services, ConnectyCube and Agora, and both give me the same problem: whenever I initialize the video conferencing SDK, it takes over the camera stream. So when I run my tflite model through a CameraController, its stream stops as soon as the video conferencing starts.
  • Is there any way I can run both the video conferencing and the object detection together?

Thanks in advance.

1 Answer


Unfortunately, the flutter_webrtc plugin doesn't support this feature yet. The author plans to implement it (https://github.com/flutter-webrtc/flutter-webrtc/issues/959), but at the moment it is not available. You can look at this thread, https://github.com/flutter-webrtc/flutter-webrtc/issues/361, which contains some workarounds, but a proper solution will only be released later.
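
In the meantime, if the conferencing SDK is built on flutter_webrtc, one interim direction is to avoid opening a second camera session with the camera plugin and instead periodically snapshot the local WebRTC video track that the SDK already owns, then feed that frame to your TFLite interpreter. The rough sketch below is only an illustration of that idea, not an official API flow: it assumes a recent flutter_webrtc version where MediaStreamTrack.captureFrame() returns a JPEG-encoded frame, the tflite_flutter package, and placeholder input/output shapes; the LocalStreamDetector class name is made up.

```dart
// Sketch only: snapshot the local WebRTC track (no second CameraController)
// and run a TFLite interpreter on the frame. Verify captureFrame() against
// the flutter_webrtc version you actually use.
import 'dart:async';
import 'dart:typed_data';

import 'package:flutter_webrtc/flutter_webrtc.dart';
import 'package:tflite_flutter/tflite_flutter.dart';

class LocalStreamDetector {
  LocalStreamDetector(this._interpreter);

  final Interpreter _interpreter;
  Timer? _timer;

  /// Snapshot the local video track a few times per second and run inference.
  void start(MediaStream localStream) {
    final videoTrack = localStream.getVideoTracks().first;
    _timer = Timer.periodic(const Duration(milliseconds: 500), (_) async {
      // Assumption: captureFrame() returns the current frame as JPEG bytes.
      final ByteBuffer frame = await videoTrack.captureFrame();
      final Uint8List jpegBytes = frame.asUint8List();

      // Decode and resize jpegBytes to the model's input shape here
      // (e.g. with package:image); the shapes below are placeholders that
      // must match your model.
      final input = [
        List.generate(300, (_) => List.generate(300, (_) => List.filled(3, 0.0)))
      ];
      final output = [List.filled(10, 0.0)];
      _interpreter.run(input, output);
      // Post-process `output` (boxes/scores) exactly as in your existing
      // standalone detector.
    });
  }

  void stop() => _timer?.cancel();
}
```

This only gives you a few snapshots per second rather than a true per-frame pipeline, so treat it as a stopgap until the plugin exposes raw frame access.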