A couple of years back I implemented a mechanism for signaling, via a data channel message, that a remote user has muted their local video (i.e., set the track's enabled property to false), with the appropriate action then taken on the remote side (e.g., showing the remote user's avatar instead of the black video stream). Recently, while testing a not directly related function, I was looking at the states of the video tracks on the receive stream of the peer connection, and I noticed that the muted state of the remote video track sometimes fluctuates between true and false, even though there is no actual change to the remote stream itself.
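For context, this is roughly the shape of that mechanism (a minimal sketch; the channel label, message format, and element ids here are made up for illustration):

```ts
const pc = new RTCPeerConnection();
const signalChannel = pc.createDataChannel("mute-signal");
const videoEl = document.getElementById("remoteVideo") as HTMLVideoElement;
const avatarEl = document.getElementById("remoteAvatar") as HTMLElement;

// Sending side: disable the outgoing video track and notify the peer.
function setLocalVideoMuted(muted: boolean): void {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (sender?.track) sender.track.enabled = !muted; // a disabled track sends black frames
  signalChannel.send(JSON.stringify({ type: "video-mute", muted }));
}

// Receiving side: on the mute message, show the avatar instead of the black video.
pc.ondatachannel = (event) => {
  event.channel.onmessage = (msg) => {
    const data = JSON.parse(msg.data);
    if (data.type === "video-mute") {
      videoEl.style.display = data.muted ? "none" : "";
      avatarEl.style.display = data.muted ? "" : "none";
    }
  };
};
```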
It is hard to tell exactly when this occurs, but it MIGHT (I have no real evidence either way) correlate with not attaching the media to a sink (e.g., an HTML video element for playback) for a long period of time (e.g., 10 seconds); if the track is attached within a short period, the receiving side does not seem to report muted=true.
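For reference, this is roughly how I am observing the receive-side track state (a sketch; it assumes the same pc as above and simply logs the mute/unmute events the browser fires on the remote video track):

```ts
// Watch the muted attribute of the remote video track as it arrives on the
// peer connection; the browser fires mute/unmute when it flips that state.
pc.ontrack = (event) => {
  const track = event.track;
  if (track.kind !== "video") return;
  console.log("remote video track muted initially:", track.muted);
  track.onmute = () => console.log("muted -> true at", new Date().toISOString());
  track.onunmute = () => console.log("muted -> false at", new Date().toISOString());
};
```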
The W3C Media Capture and Streams spec (see https://w3c.github.io/mediacapture-main/#track-muted) says: "A MediaStreamTrack is muted when the source is temporarily unable to provide the track with data. A track can be muted by a user. Often this action is outside the control of the application. This could be as a result of the user hitting a hardware switch or toggling a control in the operating system / browser chrome. A track can also be muted by the User Agent." Beyond that, the spec does not seem to address what the causes of this are or might be.
In the case of WebRTC, can anyone provide some indication as to why the remote video track, as referenced from the RTCPeerConnection, might show a muted state of true while media from the remote peer is actually flowing? Also, what is the practical value or usage of the muted state on a remote video track when it reflects some local processing rather than the actual remote state?
Thanks for any thoughts on this.