
I am using Delphi 6 with DSPack for several operations involving audio and DirectX. I have the "input" side figured out: I assign one of the enumerated audio input devices to a TFilter object and connect that filter to a TSampleGrabber object, which gives me the audio buffers I need to send audio to Skype.

It is the logical inverse of that graph that I need to figure out. I receive audio buffers from Skype via a socket, so I need to build a graph containing a filter that is the complement of TSampleGrabber. In other words, a TFilter that, instead of delivering audio buffers in an event that fires when new audio is available (as TSampleGrabber does), fires a similar event when new audio is needed to feed the graph. At the tail end of this "output" graph would be a TFilter assigned to one of the enumerated audio output devices, its input pins connected to the output pins of this TSampleGrabber inverse doppelganger.

Does anyone know how to do this? Of course, I would prefer to avoid writing a custom filter COM object of my own. I'm hoping there is an existing TFilter that accepts custom audio buffers to be mixed into a DirectShow filter graph.

Robert Oschler

1 Answer


A common starting point for a data injection filter is the Push Source Filters Sample. It creates a filter with an output-only pin, which injects data into the DirectShow pipeline; the data can be of any type, though typically it is video or audio.

Since you mentioned Delphi and DSPack: the latter ships a port of this sample (see \Demos\D6-D7\Filters\PushSource).
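The shape of such a filter, stripped of all COM/DirectShow machinery, is a fill loop that *asks* the application for the next buffer instead of handing buffers to it, i.e. the inverse of TSampleGrabber's OnBuffer event. A minimal Python model of that inversion (all names here are mine, not DSPack's or DirectShow's; the real filter would implement this in its output pin's FillBuffer worker thread):

```python
import queue

class PushSourceModel:
    """Models a DirectShow push source: the graph *pulls* data by
    invoking on_buffer_needed, instead of delivering it via OnBuffer."""

    def __init__(self, on_buffer_needed):
        self.on_buffer_needed = on_buffer_needed  # app callback, returns bytes or None
        self.delivered = []                       # stands in for the downstream pin

    def fill_buffer_once(self):
        # In the real filter this is the FillBuffer call on the output
        # pin's worker thread; here we just call back and "deliver".
        data = self.on_buffer_needed()
        if data is None:
            return False                          # end of stream
        self.delivered.append(data)
        return True

# Application side: audio arriving from the Skype socket lands in a queue,
# and the filter drains it on demand.
incoming = queue.Queue()
for chunk in (b"\x00\x01", b"\x02\x03"):          # pretend these came from the socket
    incoming.put(chunk)
incoming.put(None)                                # end-of-stream marker

src = PushSourceModel(lambda: incoming.get())
while src.fill_buffer_once():
    pass
print(src.delivered)                              # -> [b'\x00\x01', b'\x02\x03']
```

The queue is the important design point: the socket thread producing Skype audio and the filter's worker thread consuming it run at different rates, so the event handler should drain a thread-safe buffer rather than touch the socket directly.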

Roman R.
  • Thanks @Roman R. Do you have any tips or caveats about connecting a filter to more than one output filter? That is, making multiple connect calls from more than one filter's output pins to the same filter's input pins, especially when it comes to setting media format types? – Robert Oschler Oct 03 '11 at 11:17
  • 1
    It is not quite like this. There will be a separate filter instance for every filter graph, and DirectShow itself provides no means to connect instances together. You can do it yourself, though. Any filter instance can only participate in one graph, and a pin can have only one peer pin per connection. So if you are planning a filter which takes multiple connections in multiple applications, the part that manages the internal connections is completely up to you. – Roman R. Oct 03 '11 at 11:45
  • I may have expressed my question badly @Roman R. I am not talking about mixing graphs, instead I want to connect the output pins of two filters in the same graph to the input pins of another filter in the same graph (many to one connection). For example, connecting the output pins of two push source filters to the same input pins on another filter, thereby mixing the audio from the two push source filters together. Is that possible and if so, any tips or caveats on doing that, especially when it comes to audio media format types? – Robert Oschler Oct 03 '11 at 12:07
  • 1
    It is possible and is basically fine - all multiplexing filters work this way. You need to take care of one thing though: you should carefully timestamp outgoing samples so that samples from 2+ sources match in time. – Roman R. Oct 03 '11 at 12:54
  • Timestamp? I'm only dealing with audio so as far as I know there aren't any timestamps, just raw audio buffers. I was under the impression that only video frames from something like an RTP server deal with timestamps. If I'm wrong on this and you have a link that talks about timestamps and audio, please let me know. – Robert Oschler Oct 03 '11 at 13:31
  • 1
    Audio data does have timestamps, and they are quite important. I can immediately think of a few cases where audio timestamps matter: (a) lip synchronization in video+audio playback: how else can you match a specific audio fragment to air exactly with a specific video frame? (b) detecting gaps: without timestamps a downstream peer would have to assume the data is continuous when it may not be; (c) rate matching, where a live source needs the original source timestamps. – Roman R. Oct 03 '11 at 16:13
  • I just checked TSampleGrabber's OnBuffer() event and you're right, it provides a timestamp parameter (SampleTime: double). I'm used to dealing with the legacy WaveIn API which did not provide a timestamp in its OnBuffer() event, only the audio buffer data. Thanks for pointing that out. – Robert Oschler Oct 03 '11 at 17:20
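The arithmetic behind the timestamping advice in the comments above is small. DirectShow stamps media samples in REFERENCE_TIME units of 100 ns, and the usual scheme for a PCM push source is to derive each buffer's start and stop times from a running sample count rather than from the wall clock, so consecutive buffers butt up against each other with no gaps. A sketch (the helper name and the 16 kHz / 20 ms framing are my assumptions, not anything Skype-specific):

```python
REFERENCE_TIME_PER_SECOND = 10_000_000   # DirectShow REFERENCE_TIME: 100-ns units

def buffer_times(samples_delivered, samples_in_buffer, sample_rate):
    """Start/stop timestamps for the next PCM buffer, derived from the
    running count of samples delivered so far."""
    start = samples_delivered * REFERENCE_TIME_PER_SECOND // sample_rate
    stop = (samples_delivered + samples_in_buffer) * REFERENCE_TIME_PER_SECOND // sample_rate
    return start, stop

# 16 kHz mono with 320-sample (20 ms) buffers, a common VoIP framing:
rate, frame = 16_000, 320
total = 0
for _ in range(3):
    start, stop = buffer_times(total, frame, rate)
    print(start, stop)   # 0 200000, then 200000 400000, then 400000 600000
    total += frame
```

Each 20 ms buffer spans exactly 200,000 units, and each buffer's start equals the previous buffer's stop, which is precisely the continuity a downstream peer checks for.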