Re: bugs found in http.e and news.ex
- Posted by jimcbrown (admin) Sep 13, 2012
Some sites do not give a Content-Length in the HTTP header (hence, when news.ex used http.e, Yahoo never had a hit on *anything*), and some give bad values.
In HTTP/1.1, Content-Length is omitted in some cases. At a minimum, the Transfer-Encoding header (and its Chunked encoding) must be supported to fix this issue. See http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4
(Note: Technically, this could be considered a server bug. Content-Length is always mandatory in HTTP/1.0, so servers should not be sending HTTP/1.1 replies to HTTP/1.0 clients, as they will not be compatible; nonetheless, several major sites [Wikipedia] sometimes do this. Also, servers should not be sending Content-Length along with Transfer-Encoding, yet some do; in these cases, the Content-Length is bogus, as the length computed from the Transfer-Encoding supersedes it.)
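To make the fix concrete, here is a minimal sketch of chunked-body decoding in Euphoria. The names decode_chunked and hex_value are hypothetical, not part of http.e, and a real implementation would also need to handle trailers and malformed input:

    -- hypothetical sketch: decode a chunked HTTP/1.1 body held in a sequence
    constant CRLF = "\r\n"

    -- parse a chunk-size line such as "1a3f"
    function hex_value(sequence text)
        integer result = 0
        integer digit
        for i = 1 to length(text) do
            digit = text[i]
            if digit >= '0' and digit <= '9' then
                result = result * 16 + digit - '0'
            elsif digit >= 'a' and digit <= 'f' then
                result = result * 16 + digit - 'a' + 10
            elsif digit >= 'A' and digit <= 'F' then
                result = result * 16 + digit - 'A' + 10
            else
                exit -- ';' starts a chunk extension; anything else ends the size
            end if
        end for
        return result
    end function

    -- concatenate each chunk until the terminating zero-length chunk
    function decode_chunked(sequence raw)
        sequence body = ""
        integer pos = 1
        integer eol, size
        while 1 do
            eol = match(CRLF, raw[pos..length(raw)])
            if eol = 0 then
                exit -- malformed: no chunk-size line
            end if
            size = hex_value(raw[pos..pos+eol-2])
            if size = 0 then
                exit -- final chunk reached
            end if
            pos += eol + 1                -- step past the size line's CRLF
            body &= raw[pos..pos+size-1]  -- copy the chunk data
            pos += size + 2               -- step past the data and its CRLF
        end while
        return body
    end function

Note that Content-Length never enters into this at all; once the body is chunked, the chunk sizes alone determine its length.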
Ideally, http.e should be fully HTTP/1.1 compliant.
I disagree with you yet again; I say that, ideally, http.e should work.
If http.e were fully HTTP/1.1 compliant, then in these cases it would ignore the bad Content-Length header and use the other headers that provide correct information instead.
So, if http.e were fully HTTP/1.1 compliant ... then it would work. Best of both worlds. Or do you have a scenario where a fully HTTP/1.1 compliant version would also break?
At least one site uses 31-bit values, which roll over to negative values and are wrong from then on (Wikimedia, but others too).
I suspect you are running into the above issue, rather than rollover.
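For reference, this is what a genuine rollover would look like. A sketch, assuming the wrap happens in a 32-bit signed counter somewhere along the way (the exact width is an assumption; the figures eukat reports suggest values near -2^31):

    -- hypothetical illustration of a byte count wrapping a 32-bit signed counter
    atom true_length = 2200000000   -- a body larger than 2^31 - 1 bytes
    atom wrapped = true_length
    if wrapped > 2147483647 then
        wrapped -= 4294967296       -- two's-complement wrap: subtract 2^32
    end if
    ? wrapped                       -- prints -2094967296

A rolled-over Content-Length would be stuck at one bogus negative value per response; it would not wander down through zero and back up, which is why the misread-chunked-body explanation fits the symptoms better.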
Really?
Sounds plausible to me.
So the -216blahblah values I see should be treated how?
By ignoring them in favor of the Transfer-Encoding header and its chunked encoding, as HTTP/1.1 requires.
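A minimal sketch of that precedence rule, again in Euphoria; the header-list format and the helper get_header are assumptions for illustration, not http.e's actual internals:

    include std/text.e   -- lower()

    -- headers assumed to be a sequence of {name, value} string pairs
    function get_header(sequence headers, sequence name)
        for i = 1 to length(headers) do
            if equal(lower(headers[i][1]), lower(name)) then
                return headers[i][2]
            end if
        end for
        return ""
    end function

    function body_length_strategy(sequence headers)
        sequence te = get_header(headers, "Transfer-Encoding")
        if length(te) > 0 and not equal(lower(te), "identity") then
            -- RFC 2616 4.4: a non-identity Transfer-Encoding overrides
            -- Content-Length, so a bogus negative value is simply never read
            return "chunked"
        elsif length(get_header(headers, "Content-Length")) > 0 then
            return "content-length"
        else
            return "read-until-close"   -- last resort: read until the server closes
        end if
    end function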
And then they decrease to zero, go positive and keep increasing, and then go negative again. Which drug should I take to keep believing that BS?
eukat
I think that this statement speaks for itself.