Protocols are configured elements in Libav that allow access to
resources which require the use of a particular protocol.
When you configure your Libav build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".
You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".
The option "-protocols" of the ff* tools will display the list of
supported protocols.

A description of the currently available protocols follows.
@section concat

Physical concatenation protocol.

Allows reading and seeking from many resources in sequence as if they
were a single resource.

A URL accepted by this protocol has the syntax:
concat:@var{URL1}|@var{URL2}|...|@var{URLN}

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.
For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @command{avplay} use the
command:
avplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg

Note that you may need to escape the character "|", which is special for
many shells.
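As an alternative to backslash-escaping, the whole URL can be quoted so
that the shell does not interpret the "|" characters:
avplay "concat:split1.mpeg|split2.mpeg|split3.mpeg"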
@section file

File access protocol. Allows reading from or writing to a file.

For example to read from a file @file{input.mpeg} with @command{avconv}
use the command:
avconv -i file:input.mpeg output.mpeg

The ff* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".
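The following two commands are therefore equivalent:
avconv -i file:input.mpeg output.mpeg
avconv -i input.mpeg output.mpeg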
@section hls

Read an Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.

The nested protocol is declared by specifying
"+@var{proto}" after the hls URI scheme name, where @var{proto}
is either "file" or "http".

hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8

Using this protocol is discouraged; the hls demuxer should work
just as well (if not, please report the issues) and is more complete.
To use the hls demuxer instead, simply use the direct URLs to the
M3U8 playlist files.
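For example, instead of the hls+http form shown above, the playlist URL
can be opened directly and handled by the hls demuxer:
avplay http://host/path/to/remote/resource.m3u8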
@section http

HTTP (Hyper Text Transfer Protocol).
@section mmst

MMS (Microsoft Media Server) protocol over TCP.
@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
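For example, to play a stream from a server (the host and path here are
only placeholders):
avplay mmsh://server.example.com/live/stream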
@section md5

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output, or to stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
avconv -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
avconv -i input.flv -f avi -y md5:

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.
@section pipe

UNIX pipe access protocol. Allows reading from and writing to UNIX pipes.

The accepted syntax is:
pipe:[@var{number}]

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.
For example to read from stdin with @command{avconv}:
cat test.wav | avconv -i pipe:0
# ...this is the same as...
cat test.wav | avconv -i pipe:

For writing to stdout with @command{avconv}:
avconv -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
avconv -i test.wav -f avi pipe: | cat > test.avi
Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.
@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
content across a TCP/IP network.

The required syntax is:
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
The accepted parameters are:

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (1935 by default).

@item app
The name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
the value parsed from the URI through the @code{rtmp_app} option, too.

@item playpath
The path or name of the resource to play with reference to the
application specified in @var{app}; it may be prefixed by "mp4:". You
can override the value parsed from the URI through the @code{rtmp_playpath}
option, too.
Additionally, the following parameters can be set via command line options
(or in code via @code{AVOption}s); an example follows the list:
@item rtmp_app
Name of the application to connect to on the RTMP server. This option
overrides the parameter specified in the URI.

@item rtmp_buffer
Set the client buffer time in milliseconds. The default is 3000.
@item rtmp_conn
Extra arbitrary AMF connection parameters, parsed from a string,
e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named by prefixing the type with 'N' and specifying the name before
the value (e.g. @code{NB:myFlag:1}). This option may be used multiple
times to construct arbitrary AMF sequences.
@item rtmp_flashver
Version of the Flash plugin used to run the SWF player.
@item rtmp_flush_interval
Number of packets flushed in the same request (RTMPT only).
@item rtmp_live
Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is @code{any}, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are @code{live} and
@code{recorded}.
@item rtmp_playpath
Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.
@item rtmp_swfurl
URL of the SWF player for the media. By default no value will be sent.
@item rtmp_tcurl
URL of the target stream.
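For example, the application and playpath can be supplied through these
options instead of being embedded in the URI; the following sketch is
equivalent to the playback command shown below:
avplay -rtmp_app vod -rtmp_playpath sample rtmp://myserver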
For example to read with @command{avplay} a multimedia resource named
"sample" from the application "vod" on an RTMP server "myserver":
avplay rtmp://myserver/vod/sample
@section rtmps

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming
multimedia content across an encrypted connection.
@section rtmpt

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
for streaming multimedia content within HTTP requests to traverse
firewalls.
@section rtmpts

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.
@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.
This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).
The required syntax is:
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the native RTMP protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.
For example, to stream a file in real-time to an RTMP server using
@command{avconv}:
avconv -re -i myfile -f flv rtmp://myserver/live/mystream

To play the same stream using @command{avplay}:
avplay "rtmp://myserver/live/mystream live=1"
@section rtsp

RTSP is not technically a protocol handler in libavformat; it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{http://github.com/revmischa/rtsp-server, RTSP server}).
The required syntax for an RTSP URL is:
rtsp://@var{hostname}[:@var{port}]/@var{path}

The following options (set on the @command{avconv}/@command{avplay} command
line, or set in code via @code{AVOption}s or in @code{avformat_open_input})
are supported:
Flags for @code{rtsp_transport}:

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item udp_multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing through proxies.
Multiple lower transport protocols may be specified, in which case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.
Flags for @code{rtsp_flags}:

@item filter_src
Accept packets only from the negotiated peer address and port.

@item listen
Act as a server, listening for an incoming connection.
When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the @code{max_delay} field of AVFormatContext).
When watching multi-bitrate Real-RTSP streams with @command{avplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a} (an example is included among
the command lines below).
Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
avplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4

To watch a stream tunneled over HTTP:
avplay -rtsp_transport http rtsp://server/video.mp4

To send a stream in realtime to an RTSP server, for others to watch:
avconv -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp

To receive a stream in realtime:
avconv -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
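To watch a multi-bitrate Real-RTSP stream, initially selecting, for
example, video stream 2 and audio stream 1 (the stream numbers and the
URL here are placeholders that depend on the source):
avplay -vst 2 -ast 1 rtsp://server/video.rm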
@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat; it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.
@subsection Muxer

The syntax for a SAP URL given to the muxer is:
sap://@var{destination}[:@var{port}][?@var{options}]

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:
@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on. It defaults to
9875 if not specified.
@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets.
@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:
avconv -re -i @var{input} -f sap sap://224.0.0.255?same_port=1

Similarly, for watching in avplay:
avconv -re -i @var{input} -f sap sap://224.0.0.255

And for watching in avplay, over IPv6:
avconv -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
sap://[@var{address}][:@var{port}]

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.
The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:
avplay sap://

To play back the first stream announced on the default IPv6 SAP multicast address:
avplay sap://[ff0e::2:7ffe]
@section tcp

Transmission Control Protocol.

The required syntax for a TCP URL is:
tcp://@var{hostname}:@var{port}[?@var{options}]

@item listen
Listen for an incoming connection.

avconv -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
avplay tcp://@var{hostname}:@var{port}
@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
udp://@var{hostname}:@var{port}[?@var{options}]

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The following options are supported:
@item buffer_size=@var{size}
Set the UDP buffer size in bytes.

@item localport=@var{port}
Override the local UDP port to bind with.

@item localaddr=@var{addr}
Choose the local IP address. This is useful e.g. if sending multicast
and the host has multiple interfaces, where the user can choose
which interface to send on by specifying the IP address of that interface.
@item pkt_size=@var{size}
Set the size in bytes of UDP packets.

@item reuse=@var{1|0}
Explicitly allow or disallow reusing UDP sockets.

@item ttl=@var{ttl}
Set the time to live value (for multicast only).
@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with @code{ff_udp_set_remote_url} later.
If the destination address isn't known at the start, this option can
be specified in @code{ff_udp_set_remote_url}, too.
This allows finding out the source address for the packets with
@code{getsockname}, and makes writes return with AVERROR(ECONNREFUSED)
if "destination unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@item sources=@var{address}[,@var{address}]
Only receive packets sent to the multicast group from one of the
specified sender IP addresses (see the last example below).

@item block=@var{address}[,@var{address}]
Ignore packets sent to the multicast group from the specified
sender IP addresses.
Some usage examples of the UDP protocol with @command{avconv} follow.

To stream over UDP to a remote endpoint:
avconv -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}

To stream in mpegts format over UDP using 188-byte UDP packets and a large input buffer:
avconv -i @var{input} -f mpegts "udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535"
To receive over UDP from a remote endpoint:
avconv -i udp://[@var{multicast-address}]:@var{port}
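To receive a multicast stream while accepting packets only from a
specific sender, using the @code{sources} option described above:
avconv -i udp://[@var{multicast-address}]:@var{port}?sources=@var{sender-address} @var{output}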