I know this might be a fairly generic question, but I'm hoping someone can point me in the right direction...
I'm trying to build a live face recognition app using AWS Rekognition. I'm pretty comfortable with the API, and with performing facial recognition on static images uploaded to S3. What I'm trying to figure out now is how to stream live data into Rekognition. After reading the various articles and documentation that Amazon makes available, I understand the overall process, but I can't seem to get over one hurdle.
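For context, this is the kind of thing I already have working against static images (bucket, key, and collection names are placeholders; I'm on the v1 Java SDK):

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.*;

public class StaticImageSearch {
    public static void main(String[] args) {
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        // Search an existing face collection using an image already in S3
        SearchFacesByImageRequest request = new SearchFacesByImageRequest()
                .withCollectionId("my-face-collection")           // placeholder
                .withImage(new Image().withS3Object(new S3Object()
                        .withBucket("my-bucket")                  // placeholder
                        .withName("photos/face.jpg")))            // placeholder
                .withFaceMatchThreshold(90F);

        SearchFacesByImageResult result = rekognition.searchFacesByImage(request);
        result.getFaceMatches().forEach(match ->
                System.out.printf("Matched face %s (similarity %.1f%%)%n",
                        match.getFace().getFaceId(), match.getSimilarity()));
    }
}
```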
According to the docs, I can use Kinesis Video Streams to accomplish this. Seems pretty simple: create a Kinesis video stream, attach a Rekognition stream processor to it, and have a producer push the video data into the stream, and I'm golden.
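Here's the stream-processor setup I've pieced together from the docs for the Rekognition side (the processor name, role, ARNs, and collection are all placeholders):

```java
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.*;

public class CreateProcessor {
    public static void main(String[] args) {
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        // Input: the Kinesis video stream to analyze.
        // Output: a Kinesis data stream where Rekognition writes face-search results.
        CreateStreamProcessorRequest create = new CreateStreamProcessorRequest()
                .withName("my-stream-processor")                        // placeholder
                .withRoleArn("arn:aws:iam::123456789012:role/RekRole")  // placeholder
                .withInput(new StreamProcessorInput()
                        .withKinesisVideoStream(new KinesisVideoStream()
                                .withArn("arn:aws:kinesisvideo:...")))  // placeholder
                .withOutput(new StreamProcessorOutput()
                        .withKinesisDataStream(new KinesisDataStream()
                                .withArn("arn:aws:kinesis:...")))       // placeholder
                .withSettings(new StreamProcessorSettings()
                        .withFaceSearch(new FaceSearchSettings()
                                .withCollectionId("my-face-collection")
                                .withFaceMatchThreshold(85F)));

        rekognition.createStreamProcessor(create);
        rekognition.startStreamProcessor(
                new StartStreamProcessorRequest().withName("my-stream-processor"));
    }
}
```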
The problem I have is the producer. AWS provides a Java Producer library (https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/producer-sdk-javaapi.html). Great... seems simple enough, but how do I use that producer to capture the stream from my webcam and send the bytes off to Kinesis? The sample code AWS provides reads static image files from a directory; there's nothing showing how to integrate it with an actual live source like a webcam.
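For reference, this is roughly what that sample boils down to as I read it (condensed from the demo app; package and method names may differ slightly between SDK versions):

```java
import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
import com.amazonaws.kinesisvideo.client.KinesisVideoClient;
import com.amazonaws.kinesisvideo.java.client.KinesisVideoJavaClientFactory;
import com.amazonaws.kinesisvideo.java.mediasource.file.ImageFileMediaSource;
import com.amazonaws.kinesisvideo.java.mediasource.file.ImageFileMediaSourceConfiguration;
import com.amazonaws.regions.Regions;

public class ImageFileDemo {
    public static void main(String[] args) throws Exception {
        KinesisVideoClient client = KinesisVideoJavaClientFactory
                .createKinesisVideoClient(Regions.US_WEST_2,
                        new SystemPropertiesCredentialsProvider());

        // The demo's "media source" just replays pre-encoded H.264
        // frame files from a directory at a fixed frame rate
        ImageFileMediaSourceConfiguration configuration =
                new ImageFileMediaSourceConfiguration.Builder()
                        .fps(25)
                        .dir("src/main/resources/data/h264/")
                        .filenameFormat("frame-%03d.h264")
                        .startFileIndex(1)
                        .endFileIndex(375)
                        .build();

        ImageFileMediaSource mediaSource = new ImageFileMediaSource("my-stream");
        mediaSource.configure(configuration);

        client.registerMediaSource(mediaSource);
        mediaSource.start();
    }
}
```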
Ideally, I could just load my webcam as an input source and start streaming, but I can't seem to find any documentation on how to do that.
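The furthest I've gotten on the capture side is grabbing raw frames with the third-party webcam-capture library (com.github.sarxos:webcam-capture), which I'm assuming here; but that still leaves the gap of encoding the frames (H.264, I assume) and handing them to the producer as a media source:

```java
import com.github.sarxos.webcam.Webcam;
import java.awt.image.BufferedImage;

public class WebcamGrab {
    public static void main(String[] args) {
        Webcam webcam = Webcam.getDefault();
        webcam.open();

        BufferedImage frame = webcam.getImage(); // one raw RGB frame

        // This is the missing piece: how do I encode frames like this
        // and feed them into the Kinesis Video producer SDK, the way
        // ImageFileMediaSource does for files on disk?
        webcam.close();
    }
}
```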
Any help or direction would be greatly appreciated.