This can be used by the API user to figure out which HTTP features the server supports, based on the response received.
Signed-off-by: Aman Gupta <aman@tmm1.net>
This makes do_new_request fail early when dealing with an HTTP/1.0 server, avoiding unnecessary "reconnecting" warnings being shown to the user.
Signed-off-by: Aman Gupta <aman@tmm1.net>
This fixes a deadlock when using the hls demuxer's new http_persistent feature
to stream a YouTube live stream over HTTPS. The YouTube servers are HTTP/1.1
compliant, but return "Connection: close". Before this commit, the demuxer
would attempt to send a new request on the partially shut-down connection and
cause a deadlock in the tls protocol.
Signed-off-by: Aman Gupta <aman@tmm1.net>
This mimics logging that was added in 53e0d5d724 for security
purposes.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This will prevent improper use of ff_http_do_new_request() if the user
tries to send a request for a different host over a previously connected
persistent HTTP/1.1 connection.
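A minimal sketch of the kind of guard described above, with hypothetical helper and parameter names (the real check lives inside ff_http_do_new_request()):

    #include <strings.h>

    /* Hypothetical guard: only reuse a persistent connection when the new
     * request targets the same host:port the socket is connected to. */
    static int may_reuse_connection(const char *connected_host, int connected_port,
                                    const char *request_host, int request_port)
    {
        return connected_port == request_port &&
               strcasecmp(connected_host, request_host) == 0;
    }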
Signed-off-by: Aman Gupta <aman@tmm1.net>
Signed-off-by: Karthick J <kjeyapal@akamai.com>
Currently if you use the multiple_requests=1 option and try to
receive a chunked-encoded response, http_buf_read() will hang forever.
After this patch, once the final 0-byte chunk is received, a new flag is set
and EOF is emulated. This flag is reset in ff_http_do_new_request(), which is
used to make additional requests on the open socket.
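A hedged sketch of the flag-based EOF emulation described above; the struct and function names are illustrative, not the actual HTTPContext fields:

    #include <stdint.h>

    /* Illustrative state for a persistent connection reading a chunked body. */
    typedef struct ChunkedReader {
        int64_t chunk_left;   /* bytes remaining in the current chunk */
        int     end_chunked;  /* set once the final 0-byte chunk was seen */
    } ChunkedReader;

    /* Called after a chunk-size line has been parsed. */
    static void on_chunk_header(ChunkedReader *r, int64_t chunk_size)
    {
        r->chunk_left = chunk_size;
        if (chunk_size == 0)
            r->end_chunked = 1;   /* emulate EOF on subsequent reads */
    }

    /* Called from a hypothetical do_new_request() before reusing the socket. */
    static void reset_for_new_request(ChunkedReader *r)
    {
        r->end_chunked = 0;       /* the next response starts a fresh body */
        r->chunk_left  = 0;
    }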
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Aman Gupta <aman@tmm1.net>
The transfer_func variable passed to retry_transfer_wrapper is one of the
h->prot->url_read and h->prot->url_write functions. These need to return
EOF or another error properly: if a value >= 0 is returned, url_read/url_write
is retried until an error is returned.
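As an illustration of why this matters, here is a hedged sketch of the retry pattern (not the actual retry_transfer_wrapper): the loop only stops when the callback returns a negative value, so a url_read that signals end of stream with 0 instead of an error code would spin forever.

    /* Illustrative sketch only: transfer is url_read or url_write.  The loop
     * keeps retrying on any non-negative return value, so EOF must be reported
     * as a negative AVERROR code (e.g. AVERROR_EOF), not as 0. */
    typedef int (*transfer_func)(void *h, unsigned char *buf, int size);

    static int retry_transfer_sketch(void *h, unsigned char *buf, int size,
                                     transfer_func transfer)
    {
        int len = 0;
        while (len < size) {
            int ret = transfer(h, buf + len, size - len);
            if (ret < 0)                  /* error or EOF: stop retrying */
                return len ? len : ret;
            len += ret;                   /* ret >= 0: keep going */
        }
        return len;
    }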
Signed-off-by: Daniel Kucera <daniel.kucera@gmail.com>
This commit optimizes HTTP performance by reducing forward seeks, instead
favoring a read-ahead and discard on the current connection (referred to
as a short seek) for seeks that are within a TCP window's worth of data.
This improves performance because with TCP flow control, a window's worth
of data will be in the local socket buffer already or in-flight from the
sender once congestion control on the sender is fully utilizing the window.
Note: this approach doesn't attempt to differentiate between a newly opened
connection, which may not yet be fully utilizing the window due to congestion
control, and one that is. The receiver can't get at this information, so we
assume the worst case: that the full window is in use (we did advertise it,
after all) and that the data could be in-flight.
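A hedged sketch of the resulting decision (illustrative only; the actual logic lives in avio_seek()): a forward seek within the threshold is served by reading and discarding on the existing connection.

    #include <stdint.h>

    /* Illustrative: treat a forward seek as "short" when the target lies within
     * the receiver-window-sized threshold ahead of the current position, so the
     * data is likely already buffered locally or in flight. */
    static int is_short_seek(int64_t cur_pos, int64_t target,
                             int64_t short_seek_threshold)
    {
        return target >= cur_pos && target - cur_pos <= short_seek_threshold;
    }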
The previous behavior of closing the connection, then opening a new one with
a new HTTP Range value, results in massive amounts of discarded and re-sent
data when large TCP windows are used. This has been observed on macOS/iOS,
which starts with an initial window of 256KB and grows up to 1MB depending on
the bandwidth-delay product.
When we seek within a window's worth of data, close the connection, and then
open a new one within that same window, we discard data from the current
offset to the end of the window. On the new connection, the server then ends
up re-sending data from the new offset to the end of the old window.
Example (assumes full window utilization):
TCP window size: 64KB
Position: 32KB
Forward seek position: 40KB
          * position (32KB)               (next window)
    32KB |--------------| 96KB |---------------| 160KB
             * seek target (40KB)
            40KB |---------------| 104KB
Re-sent amount: 96KB - 40KB = 56KB
For a real-world test example, I have an MP4 file of ~25MB, of which ffplay
only reads ~16MB while performing 177 seeks. With current ffmpeg, this results
in 177 HTTP GETs and ~73MB worth of TCP data communication. With this patch,
ffmpeg issues 4 HTTP GETs and 3 seeks, for a total of ~22MB of TCP data
communication.
To support this feature, the short seek logic in avio_seek() has been
extended to call a function to get the short seek threshold value. This
callback has been plumbed to the URLProtocol structure, which now has
infrastructure in HTTP and TCP to get the underlying receiver window size
via SO_RCVBUF. If the underlying URL and protocol don't support returning
a short seek threshold, the default s->short_seek_threshold is used.
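A minimal sketch of querying the receive buffer size on a connected socket via POSIX getsockopt(); this is the value a TCP-backed protocol can report as its short seek threshold (not the exact libavformat plumbing):

    #include <sys/socket.h>

    /* Returns the kernel receive buffer size for fd, or -1 on failure. */
    static int get_recv_window_estimate(int fd)
    {
        int bufsize = 0;
        socklen_t optlen = sizeof(bufsize);

        if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, &optlen) < 0)
            return -1;
        return bufsize;
    }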
This feature has been tested on Windows 7 and macOS/iOS. Windows support
is slightly complicated by the fact that when TCP window auto-tuning is
enabled, SO_RCVBUF doesn't report the real window size, but it does if
SO_RCVBUF was manually set (disabling auto-tuning). So we can only use
this optimization on Windows in the latter case.
Signed-off-by: Joel Cunningham <joel.cunningham@me.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Reported-by: SleepProgger <security@gnutp.com>
Reviewed-by: Steven Liu <lingjiujianke@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Instead of silently ignoring the content_type option in listen mode,
apply its value to the provided "Content-Type:" header.
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Instead of silently ignoring the headers option in listen mode, use
the provided headers.
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Since all URLContexts have the same AVOptions, such AVOptions
will be applied on the outermost context only and removed from the
dict, while they probably make sense on all contexts.
This makes sure that rw_timeout gets propagated to the innermost
URLContext (to make sure it gets passed to the tcp protocol when
opening an HTTP connection, for instance).
Alternatively, such matching options could be kept in the dict
and only removed after the ffurl_connect call.
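For context, a hedged usage sketch: passing rw_timeout through the options dictionary when opening an HTTP URL, relying on the propagation described above (in current FFmpeg the option value is in microseconds, but verify against your version):

    #include <libavformat/avio.h>
    #include <libavutil/dict.h>

    /* Open an HTTP URL with a 5 second I/O timeout; the option should reach
     * the inner tcp:// URLContext thanks to the propagation described above. */
    static int open_with_rw_timeout(AVIOContext **pb, const char *url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "rw_timeout", "5000000", 0);   /* microseconds */
        ret = avio_open2(pb, url, AVIO_FLAG_READ, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }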
Signed-off-by: Martin Storsjö <martin@martin.st>
This commit also disables the async fate test, because it
used internal APIs in a non-kosher way, and those APIs no longer
exist.
* commit '2758cdedfb7ac61f8b5e4861f99218b6fd43491d':
lavf: reorganize URLProtocols
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Instead of a linked list constructed at av_register_all() time, store them
in a constant array of pointers.
Since no registration is necessary now, this removes some global state
from lavf. This will also allow callers of the urlprotocol layer to limit
the available protocols in a simple and flexible way in the following
commits.
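A hedged sketch of the data-structure change (names are illustrative, not the real libavformat table): a constant array of pointers, known at build time, replaces the linked list that registration used to build.

    /* Illustrative only: the set of protocols is fixed at build time, so no
     * runtime registration (and no mutable global list) is needed. */
    typedef struct ProtocolSketch {
        const char *name;
    } ProtocolSketch;

    static const ProtocolSketch http_protocol  = { "http" };
    static const ProtocolSketch https_protocol = { "https" };
    static const ProtocolSketch tcp_protocol   = { "tcp" };

    static const ProtocolSketch *const protocol_table[] = {
        &http_protocol,
        &https_protocol,
        &tcp_protocol,
        NULL,   /* sentinel, iteration stops here */
    };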
They allow reconnecting to endless live streams that fail with EOF.
Reviewed-by: Zhang Rui <bbcallen@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The assignment had incorrectly placed parentheses which resulted in ret
always being > 0.
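For illustration, a hypothetical example of the misplaced-parentheses pitfall being fixed here (not the exact line from http.c): the comparison binds before the assignment, so ret receives the boolean result of the comparison instead of the error code.

    #include <stdio.h>

    static int some_call(void) { return -1; }   /* stand-in for the real call */

    int main(void)
    {
        int ret;

        /* Buggy: parentheses placed so that '<' is evaluated first; ret is
         * assigned the 0/1 result of the comparison, never the error code. */
        ret = (some_call() < 0);
        printf("buggy ret = %d\n", ret);         /* prints 1 */

        /* Fixed: assign first, then compare what was assigned. */
        if ((ret = some_call()) < 0)
            printf("fixed ret = %d\n", ret);     /* prints -1 */

        return 0;
    }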
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '5ae178539b91d25710b7bb322d156c31aea9f8bf':
http: Add the trailing endlines if they are missing
Merged-by: Michael Niedermayer <michael@niedermayer.cc>
Send a footer to correctly close client sockets.
This fixes network errors in client applications.
Signed-off-by: Stephan Holljes <klaxa1337@googlemail.com>
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
txoffer (e.g. http://tori.aoi-chan.com/ ) redirects to the same URI on your
first request, and serves the actual file on the second. It's stupid, but AFAIK
technically compliant. We'd previously see the server not handing back a Range
header and return an error; now, instead, we see that there's a redirect and
keep track of the offset we want while trying again at the new URL.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
With this patch http can be used to listen for POST data to be used as an input stream.
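A hedged usage sketch (assuming the http protocol's "listen" option, which may be marked experimental in your FFmpeg version): open the URL for reading with listen enabled so that a client's POST body becomes the input stream.

    #include <libavformat/avio.h>
    #include <libavutil/dict.h>

    /* Wait for a client to POST data to the given local URL, e.g.
     * "http://0.0.0.0:8080", and expose its body as a readable stream. */
    static int open_http_post_listener(AVIOContext **pb, const char *local_url)
    {
        AVDictionary *opts = NULL;
        int ret;

        av_dict_set(&opts, "listen", "1", 0);    /* act as an HTTP server */
        ret = avio_open2(pb, local_url, AVIO_FLAG_READ, NULL, &opts);
        av_dict_free(&opts);
        return ret;
    }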
Signed-off-by: Stephan Holljes <klaxa1337@googlemail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
In http_open_cnx, the patch restores the AVDictionary if the connection needs
to be re-tried because of an authentication/redirect status code.
Previously, if a 401/407/30x status code was encountered, http_open_cnx would
restart at the redo label, but any options used by the underlying protocol
would be missing, because they were removed by the first attempt.
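A hedged sketch of the save/restore pattern described above, using the public AVDictionary API; the attempt callback and the retry condition are illustrative, not the actual http_open_cnx():

    #include <libavutil/dict.h>
    #include <libavutil/error.h>

    /* The lower-level connect consumes entries from *options, so keep a pristine
     * copy and restore it before retrying after an auth/redirect status code. */
    static int connect_with_retry_sketch(AVDictionary **options,
                                         int (*attempt)(AVDictionary **))
    {
        AVDictionary *saved = NULL;
        int ret;

        av_dict_copy(&saved, *options, 0);

        ret = attempt(options);                   /* first try strips used options */
        if (ret == AVERROR_HTTP_UNAUTHORIZED) {   /* e.g. 401: redo with auth */
            av_dict_free(options);
            av_dict_copy(options, saved, 0);      /* restore the full option set */
            ret = attempt(options);
        }

        av_dict_free(&saved);
        return ret;
    }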
Signed-off-by: Brandon Lees <brandon@n-hega.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Previously, AVERROR(EIO) was returned on failure of
http_open_cnx_internal(). Now the value is passed to the upper level, so it
is possible to distinguish ECONNREFUSED, ETIMEDOUT, ENETUNREACH, etc.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
int ff_http_averror(int status_code, int default_averror)
This helper function returns an AVERROR_* value derived from a 3-digit HTTP
status code.
The second argument, default_averror, is used if no specific AVERROR_* is
available. It is introduced because different return codes are used in
different places of the code: -1, AVERROR(EIO), AVERROR_INVALIDDATA.
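A hedged sketch of such a mapping, using the AVERROR_HTTP_* codes from libavutil/error.h (the actual table in http.c may differ in detail):

    #include <libavutil/error.h>

    /* Map a 3-digit HTTP status code to an AVERROR value, falling back to
     * default_averror when no specific code applies. */
    static int http_averror_sketch(int status_code, int default_averror)
    {
        switch (status_code) {
        case 400: return AVERROR_HTTP_BAD_REQUEST;
        case 401: return AVERROR_HTTP_UNAUTHORIZED;
        case 403: return AVERROR_HTTP_FORBIDDEN;
        case 404: return AVERROR_HTTP_NOT_FOUND;
        }
        if (status_code >= 400 && status_code <= 499)
            return AVERROR_HTTP_OTHER_4XX;
        if (status_code >= 500)
            return AVERROR_HTTP_SERVER_ERROR;
        return default_averror;
    }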
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The current code would use any unknown attribute-value pair
as the cookie value.
RFC 6265 states that the first key-value pair is the actual
cookie, and that the attribute-value pairs only start after it.
With the current code:
Set-Cookie: test=good_value; path=/; dummy=42
gives this:
Cookie: dummy=42
instead of this with the new code:
Cookie: test=good_value
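A minimal, illustrative sketch of the RFC 6265 rule being applied (keep only the leading name=value pair; everything after the first ';' is attributes); this is not the parser in http.c:

    #include <stdio.h>
    #include <string.h>

    /* Copy only the leading name=value pair of a Set-Cookie value into out. */
    static void first_cookie_pair(const char *set_cookie, char *out, size_t outlen)
    {
        const char *end = strchr(set_cookie, ';');
        size_t len = end ? (size_t)(end - set_cookie) : strlen(set_cookie);
        if (len >= outlen)
            len = outlen - 1;
        memcpy(out, set_cookie, len);
        out[len] = '\0';
    }

    int main(void)
    {
        char cookie[128];
        first_cookie_pair("test=good_value; path=/; dummy=42", cookie, sizeof(cookie));
        printf("Cookie: %s\n", cookie);   /* prints "Cookie: test=good_value" */
        return 0;
    }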
* commit '7ccb847f0f1f28199fa254847b91b6e50fb92832':
http: Reduce scope of a variable in parse_content_encoding()
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '7601f9412a2d3387617a45966b65b452a632c27a':
http: export icecast metadata as an option with name "metadata".
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Prior to 6a463e7fb, the cur_*auth_type variables were set before the
http_connect call; their sole purpose is to record the authentication type
used for the latest request, since parsing the HTTP response sets the new
type in the auth state.
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit '8bf3bf69ad7333bf0c45f4d2797fc2c61bc8922f':
http: Stop reading after receiving the whole file for non-chunked transfers
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Previously this logic was only used if the server didn't respond with
Connection: close; now it is used even in that case, as long as the server
response is non-chunked.
Originally the http code relied on Connection: close to close the socket
when the file/stream had been received - the http protocol code just kept
reading from the socket until the socket was closed.
In f240ed18 we added a check for the file size, because some
http servers didn't respond with Connection: close (and wouldn't
close the socket) even though we requested it, which meant that the
http protocol blocked for a long time at the end of files, waiting
for a socket level timeout.
When reading over tls, trying to read at the end of the connection,
when the peer has closed the connection, can produce spurious (but
harmless) warnings. Therefore always voluntarily stop reading when
the specified file size has been received, if not using a chunked
transfer encoding. (For chunked transfers, we already return 0
as soon as we get the chunk header indicating end of stream.)
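A hedged sketch of the end-of-body condition this describes, with illustrative names (not the actual http_buf_read()):

    #include <stdint.h>

    /* Once the announced Content-Length has been fully consumed on a non-chunked
     * response, report end-of-stream instead of reading again and possibly
     * triggering spurious TLS warnings at connection teardown. */
    static int body_fully_received(int64_t bytes_received, int64_t content_length,
                                   int is_chunked)
    {
        return !is_chunked && content_length >= 0 &&
               bytes_received >= content_length;
    }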
Signed-off-by: Martin Storsjö <martin@martin.st>
Split return value handling from the actual opening.
Incidentally fixes the https -> http redirect issue reported by
Compn on behalf of rcombs.
CC: libav-stable@libav.org
* commit '7bdd2ff6825951f7a6a6008303acfce7c2a63532':
http: Use a constant for the supported header size
Conflicts:
libavformat/http.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '389380c27915b0505fed538cd54c035c891fabd9':
http: Do move the class instantiation in the conditional block
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '28df1d24112c6ad0763985df2faeeb198cfbad69':
http: Provide an option to override the HTTP method
Merged-by: Michael Niedermayer <michaelni@gmx.at>