I'm developing an Android app where a user can start live streaming using their Android camera. I have AWS & GCP resources at hand. What I understand after some reading is:
1. I have to stream/publish whatever the Android camera is picking up to some server over some protocol (RTMP, HLS, etc.).
2. I have to set up a server that will pull this input source, then package & store it in a form that can be streamed/consumed on mobile or in a web browser (basically, a URL), and I believe AWS's MediaLive, MediaPackage, etc. should do that.
3. I could use this URL as a MediaSource for players on Android (like ExoPlayer), as sketched below.
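To make the 3rd part concrete, this is roughly how I expect to consume the output URL on Android. It's only a sketch using ExoPlayer's MediaItem-based API; the stream URL, layout, and view IDs are placeholders, not a working setup:

```kotlin
// build.gradle (app): implementation "com.google.android.exoplayer:exoplayer:2.x.x"
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.android.exoplayer2.ExoPlayer
import com.google.android.exoplayer2.MediaItem
import com.google.android.exoplayer2.ui.PlayerView

class PlaybackActivity : AppCompatActivity() {

    private lateinit var player: ExoPlayer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_playback) // hypothetical layout containing a PlayerView

        // Placeholder: this would be the HLS URL produced by MediaPackage (or a CDN URL in front of it)
        val streamUrl = "https://example.com/out/v1/placeholder/index.m3u8"

        player = ExoPlayer.Builder(this).build()
        findViewById<PlayerView>(R.id.player_view).player = player

        // The default media source factory resolves an HLS source from the .m3u8 URI
        player.setMediaItem(MediaItem.fromUri(streamUrl))
        player.prepare()
        player.playWhenReady = true
    }

    override fun onDestroy() {
        super.onDestroy()
        player.release()
    }
}
```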
My problem is that I couldn't find good documentation on the 1st part. I found this, https://github.com/bytedeco/javacv, which doesn't appear to be production-level work. While trying out the 2nd part, creating a MediaLive channel on AWS, I was asked to point the channel to 2 destinations (I don't know what that means), which made me doubt my understanding of this process. I'm looking for a skeleton procedure with official documentation on how to achieve this.
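For reference, this is the kind of setup call I believe the console is making when it asks for those destinations. It's a rough, unverified sketch using the AWS SDK for Java v2 from Kotlin; the input name, stream names, region, and security group ID are all placeholders I made up:

```kotlin
// Gradle: implementation("software.amazon.awssdk:medialive:2.x.x")
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.medialive.MediaLiveClient
import software.amazon.awssdk.services.medialive.model.CreateInputRequest
import software.amazon.awssdk.services.medialive.model.InputDestinationRequest
import software.amazon.awssdk.services.medialive.model.InputType

fun main() {
    val mediaLive = MediaLiveClient.builder()
        .region(Region.US_EAST_1) // placeholder region
        .build()

    // The console asked me for 2 destinations; here they show up as the two stream names
    val request = CreateInputRequest.builder()
        .name("android-rtmp-input")
        .type(InputType.RTMP_PUSH)
        .inputSecurityGroups("1234567") // placeholder input security group id
        .destinations(
            InputDestinationRequest.builder().streamName("live/stream-a").build(),
            InputDestinationRequest.builder().streamName("live/stream-b").build()
        )
        .build()

    val response = mediaLive.createInput(request)
    // Each destination in the response carries an rtmp:// URL the app would publish to
    response.input().destinations().forEach { println(it.url()) }
}
```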
EDIT 1:
For the input-production part, I'm experimenting with this answer: https://stackoverflow.com/a/29061628/3881561
EDIT 2:
I've used https://github.com/ant-media/LiveVideoBroadcaster to send the video source to the RTMP server. I've created an RTMP push input in MediaLive and a channel with an Archive output (which stores .ts files in S3). Now that the flow is working, how can I modify this architecture to allow multiple users to create live streams?
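To make the multi-user question concrete, the direction I'm imagining on the backend is one RTMP push input (and presumably one channel) per broadcasting user, with the ingest URLs handed back to the app. Everything below is hypothetical, including the helper name and the placeholder security group; I haven't verified this against MediaLive's limits:

```kotlin
import software.amazon.awssdk.services.medialive.MediaLiveClient
import software.amazon.awssdk.services.medialive.model.CreateInputRequest
import software.amazon.awssdk.services.medialive.model.InputDestinationRequest
import software.amazon.awssdk.services.medialive.model.InputType

// Hypothetical backend helper: create one RTMP push input per broadcasting user
// and return the ingest URLs the Android client should publish to.
fun createIngestEndpointsForUser(mediaLive: MediaLiveClient, userId: String): List<String> {
    val request = CreateInputRequest.builder()
        .name("ingest-$userId")
        .type(InputType.RTMP_PUSH)
        .inputSecurityGroups("1234567") // placeholder input security group id
        .destinations(
            InputDestinationRequest.builder().streamName("live/$userId-a").build(),
            InputDestinationRequest.builder().streamName("live/$userId-b").build()
        )
        .build()

    return mediaLive.createInput(request).input().destinations().map { it.url() }
}
```

On the Android side I would then point LiveVideoBroadcaster at whichever URL comes back for that user, but I don't know whether one input/channel per user is the intended pattern here.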