I already answered a similar question here, but since Swift is involved this time I'll try to provide a more extended answer as far as I can.
First, I guess by referring to the Japanese tutorial you meant this. I have no experience with Swift/C bridging, but if the tutorial actually works, it seems that having a header file with the C imports (in this case Tutorials-Bridging-Header.h, which includes the actual ffmpeg headers) is enough. After that, at least according to the tutorial, you can use the ffmpeg data types and functions directly in your Swift code (that is what happens in Tutorial1.swift, which calls avcodec_decode_video2 and others directly).
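For reference, such a bridging header can be as small as the following sketch; it assumes the ffmpeg headers and static libraries are already on the target's search paths:

```c
/* Tutorials-Bridging-Header.h: everything #included here becomes
   visible to the Swift code in the same target. */
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
```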
If the Swift interop is as easy as it seems then:
1) You need an iOS build of ffmpeg. Either use a SourceForge/GitHub project that ships an Xcode project (although, if you only need RTSP and certain codecs, you may still have to tweak it for your needs, since for licensing reasons you may need to disable some encoders, H.264 in particular; see the configure sketch below), or take the ffmpeg sources and build them yourself with the iOS toolchain, which is actually not that hard (already mentioned in my previous post).
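To give an idea, here is a sketch of a cross-compile configure line for a single arm64 slice. Treat it as a starting point rather than a verified build recipe; the compiler invocation, install prefix and the exact codec/protocol selection depend on your project:

```sh
# Sketch only: one iOS slice, encoders/muxers disabled to sidestep the
# H.264 encoder licensing concern; decoders, demuxers (RTSP included)
# and protocols stay enabled by default.
./configure \
    --enable-cross-compile --target-os=darwin --arch=arm64 \
    --cc="xcrun -sdk iphoneos clang" \
    --prefix="$(pwd)/ios-arm64" \
    --disable-programs --disable-doc \
    --disable-encoders --disable-muxers
make && make install
```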
2) Then you need to link against ffmpeg, initialize it (all the av_register_all stuff you see in the tutorials) and feed it the stream:
2a) For RTSP, if you know the RTSP URL of the stream, Googling for avio_open is a good start: you can feed the URL straight to ffmpeg, whose transports and demuxers should take care of the actual connection, and then you can extract the data from the streams using av_read_frame, somewhat similar to this (there is a sketch after 2b).
2b) For ONVIF you will need to actually implement the XML requests that retrieve the stream URI; if it is RTSP, play it as a regular RTSP stream, and if it is an HTTP stream with a standard content type, avio_open should be able to handle it as well.
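To make 2a concrete, here is a minimal Swift sketch of the open-and-read loop, written against the ffmpeg 2.x-era API the tutorial uses and assuming the bridging header above. Note that it calls avformat_open_input rather than avio_open directly: that is the demuxer-level entry point which drives avio and the RTSP transport internally:

```swift
// Sketch: open an RTSP stream and pull raw packets from it.
func readPackets(from url: String) {
    av_register_all()        // registers demuxers/decoders (ffmpeg < 4.0)
    avformat_network_init()  // required for network protocols such as RTSP

    var formatCtx: UnsafeMutablePointer<AVFormatContext>? = nil
    guard avformat_open_input(&formatCtx, url, nil, nil) >= 0,
          avformat_find_stream_info(formatCtx, nil) >= 0 else {
        print("could not open \(url)")
        return
    }

    var packet = AVPacket()
    while av_read_frame(formatCtx, &packet) >= 0 {
        // packet.stream_index tells you which stream the packet belongs
        // to; hand video packets to the decoder from step 3.
        av_free_packet(&packet)  // av_packet_unref in newer ffmpeg
    }
    avformat_close_input(&formatCtx)
}
```

If the camera only speaks TCP-interleaved RTSP, you can pass an AVDictionary with rtsp_transport set to tcp as the last argument of avformat_open_input.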
3) Find the needed decoder in ffmpeg, decode the data obtained from av_read_frame and present it on your view, as sketched below.
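A sketch of that last step, again against the old avcodec_decode_video2 API the tutorial calls (newer ffmpeg replaces it with avcodec_send_packet/avcodec_receive_frame); the actual presentation, i.e. the YUV-to-RGB conversion with sws_scale and the drawing into a view, is left out:

```swift
// Sketch: locate the video stream, open its decoder, decode one packet.
// Uses the old per-stream AVCodecContext (stream.pointee.codec) that
// matches the ffmpeg 2.x tutorial code.
func openVideoDecoder(_ formatCtx: UnsafeMutablePointer<AVFormatContext>)
        -> UnsafeMutablePointer<AVCodecContext>? {
    for i in 0..<Int(formatCtx.pointee.nb_streams) {
        guard let stream = formatCtx.pointee.streams[i],
              let codecCtx = stream.pointee.codec,
              codecCtx.pointee.codec_type == AVMEDIA_TYPE_VIDEO else { continue }
        guard let codec = avcodec_find_decoder(codecCtx.pointee.codec_id),
              avcodec_open2(codecCtx, codec, nil) >= 0 else { return nil }
        return codecCtx
    }
    return nil
}

// frame should come from av_frame_alloc(); it can be reused between calls.
func decode(_ packet: inout AVPacket,
            with codecCtx: UnsafeMutablePointer<AVCodecContext>,
            into frame: UnsafeMutablePointer<AVFrame>) -> Bool {
    var gotFrame: Int32 = 0
    let consumed = avcodec_decode_video2(codecCtx, frame, &gotFrame, &packet)
    // When gotFrame is non-zero, frame holds a decoded picture (usually
    // YUV420P) ready for sws_scale and display.
    return consumed >= 0 && gotFrame != 0
}
```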