
I am working on an image stitching app that takes input from the camera, estimates an image transformation, and warps the input image by the estimated transformation. As shown in the figure below, the camera image is fed into two branches of filter chains. However, the image warping step depends on the result of the transformation estimation. My question is: how can I make branch 1 wait for the results of branch 2?

[Figure: camera output feeding two branches: branch 1 (image warping) and branch 2 (feature matching / transformation estimation)]

2 Answers


If you make your image warping filter a subclass of GPUImageTwoInputFilter, this synchronization is taken care of for you.

Target the GPUImageVideoCamera instance to your feature matching / transformation estimation filter and the image warping filter, then target your feature matching / transformation estimation filter to the image warping filter. This will cause your video input to come in via the first input image and the results of your feature matching and transformation estimation filter to be in the second image. GPUImageTwoInputFilter subclasses only process and output a frame once input frames have been provided to both their inputs.
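
A rough sketch of that wiring, assuming a hypothetical GPUImageTwoInputFilter subclass named ImageWarpingFilter and an estimation stage named TransformEstimationFilter (both names are placeholders for your own classes):

    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];

    TransformEstimationFilter *estimationFilter = [[TransformEstimationFilter alloc] init];
    ImageWarpingFilter *warpFilter = [[ImageWarpingFilter alloc] init]; // GPUImageTwoInputFilter subclass

    // The camera frame becomes the warp filter's first input...
    [videoCamera addTarget:warpFilter];
    // ...and also feeds the estimation branch.
    [videoCamera addTarget:estimationFilter];
    // The estimation result becomes the warp filter's second input; the warp filter
    // only renders once both inputs have a frame for the current timestamp.
    [estimationFilter addTarget:warpFilter];

    [videoCamera startCameraCapture];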

This should give you the synchronization you want, and be pretty straightforward to set up.

Brad Larson
  • Thank you, Brad. The problem is that the feature matching / transformation estimation is not really a filter, but CPU processing. The result is a 3x3 homography matrix, which needs to be fed into the image warping. My current solution is to initialize a GPUImageTextureInput from the GPUImageVideoCamera's textureForOutput when the 3x3 matrix is ready, and then add the image warping filter as a target of the GPUImageTextureInput. Do you think this will work? I haven't tested the code yet. – user3348157 Mar 26 '14 at 14:38
  • @user3348157 - If you use a GPUImageRawDataInput to feed in your homography matrix, it will act in the same way. The two-input filter will wait until it has both input frames to act, and you can provide that frame data via the raw data input if it's coming from CPU-bound processing. – Brad Larson Mar 26 '14 at 14:41
  • As I learned from your FilterShowcase example, if a GPUImageTwoInputFilter takes a GPUImageVideoCamera and a GPUImagePicture as inputs, then the GPUImagePicture only needs to call [GPUImagePicture processImage] once to unblock the GPUImageVideoCamera loop. According to your suggestion, GPUImageRawDataInput has different behavior: I need to call [GPUImageRawDataInput processData] on every iteration of the loop to unblock the GPUImageVideoCamera. Am I right? – user3348157 Mar 26 '14 at 15:00
  • @user3348157 - Hmm, yeah, the raw data input also uses an indefinite timestamp, which causes the two-input filter to not wait for it to provide new frames before running. -processData could probably be extended into -processDataForTimestamp: to allow that and enable proper synchronization. You'd just feed in the timestamp you got from the raw data output or whatever is triggering your CPU-side processing. – Brad Larson Mar 26 '14 at 16:11
  • This works after I added a -processDataForTimestamp: to GPUImageRawDataInput and passed a valid CMTime to it every time I needed to trigger the image warping filter (see the usage sketch after this comment thread). Thanks, Brad. – user3348157 Mar 27 '14 at 23:53
  • @user3348157 - If you want to toss that into the framework in a pull request, I'd be glad to accept it. Otherwise, I'll add in my own implementation of this. – Brad Larson Mar 28 '14 at 16:23
  • Hi, Brad. I have created the pull request; the # is 1486. – user3348157 Mar 28 '14 at 18:16
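
For reference, a rough usage sketch of what the comments above converge on: feeding the CPU-computed 3x3 homography in through a GPUImageRawDataInput and pushing it with a timestamp so the two-input warp filter stays in sync. The -processDataForTimestamp: call is the extension discussed in the thread, and the way the nine floats are packed into a small texture here is only an assumption; it has to match how the warp filter's shader samples the matrix.

    // 'warpFilter' is the GPUImageTwoInputFilter subclass from the answer above;
    // 'frameTime' is the CMTime of the camera frame the estimate belongs to.
    GLfloat homography[9];   // filled in by the CPU feature matching / estimation code

    GPUImageRawDataInput *homographyInput =
        [[GPUImageRawDataInput alloc] initWithBytes:(GLubyte *)homography
                                               size:CGSizeMake(3.0, 3.0)
                                        pixelFormat:GPUPixelFormatLuminance
                                               type:GPUPixelTypeFloat];
    [homographyInput addTarget:warpFilter];   // becomes the warp filter's second input

    // Each time the CPU estimation finishes for a camera frame:
    [homographyInput updateDataFromBytes:(GLubyte *)homography size:CGSizeMake(3.0, 3.0)];
    [homographyInput processDataForTimestamp:frameTime];   // triggers the warp for this frame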

I think you can try to use something like dispatch_semaphore_t. Look here.
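
For completeness, a minimal sketch of that idea using GCD, assuming the estimation runs on a background queue and the warping step waits for its result (estimateTransform, applyWarp, and currentFrame are placeholders for your own code):

    // Make branch 1 (warping) wait for branch 2 (estimation).
    dispatch_semaphore_t transformReady = dispatch_semaphore_create(0);
    __block GPUMatrix3x3 homography;   // 3x3 result of branch 2 (GPUMatrix3x3 is defined by GPUImage)

    // Branch 2: run the CPU feature matching / transformation estimation off the main queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        homography = estimateTransform(currentFrame);   // placeholder for your estimation code
        dispatch_semaphore_signal(transformReady);      // signal branch 1 that the matrix is ready
    });

    // Branch 1: block until the estimate arrives, then warp the frame with it.
    dispatch_semaphore_wait(transformReady, DISPATCH_TIME_FOREVER);
    applyWarp(currentFrame, homography);                // placeholder for the warping step

Note that blocking like this inside the camera's processing path will stall the frame pipeline while the estimate is computed, which is one reason the two-input filter approach in the other answer can be preferable.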

Andrew