[squid-users] squid with slow client
Amos Jeffries
squid3 at treenet.co.nz
Wed Apr 1 01:07:39 UTC 2015
On 1/04/2015 11:18 a.m., Hector Chan wrote:
> Hi all,
>
> How does squid behave when it is downloading a 5+GB file with a slow
> client? I see my client (curl) exited with error code 18 (
> CURLE_PARTIAL_FILE) when downloading a 5+GB file from squid. It was a
> cache miss, so the file was actually being fetched from the origin server.
> When it is cache hit, I don't see the curl error.
>
> What I observed was that squid was downloading from the origin server at
> about 750 KB/s, and the client was downloading from squid at about 10 to 50
> KB/s. The client was geographically far from squid.
Depends on several factors:
* whether the Content-Length, Transfer-Encoding, or neither is presented
by the server in headers, and
* what the cacheable object size limits are for Squid, and
* how far down into the object the transaction has reached, and
* whether collapsed forwarding is in effect, and
* whether ICAP is being used on it, and
* whether range processing is in effect
Squid has a 64KB server read buffer plus a client write buffer. Both
are capped by the readahead_gap directive if it is smaller. In general
that readahead_gap amount is what Squid has buffered for the client,
and Squid will not read more from the server than it has already sent
to the client plus that window (TCP buffering and I/O latency can push
it a bit higher - but for Squid's purposes that bit counts as already
"sent").
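For illustration, the directive (spelled read_ahead_gap in squid.conf)
looks like this - the value below is only a placeholder, not a
recommendation:

  # cap how far ahead of the client Squid will read from the server
  read_ahead_gap 64 KB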
If the size is known from Content-Length (or a Range offset) Squid can
select the appropriate place to store the data coming from the server
(as well as deliver it to the client). That could be cache_mem (RAM),
cache_dir (disk), or transient / not cached (RAM).
If the Content-Length header is absent (none sent, or Transfer-Encoding
used) every object starts out on the assumption that it can be cached
in memory, which is fast. Once it grows past
maximum_object_size_in_memory Squid pushes it out to disk (which is
quite a bit slower), and if it gets so large it cannot even be stored
there it should be moved to transient memory (fast again). In transient
memory the bits already sent to the client(s) are discarded and only
the readahead_gap window is retained in RAM.
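As a sketch of the size limits involved (the values below are
placeholders only, not tuning advice):

  # objects up to this size may be held in cache_mem
  maximum_object_size_in_memory 512 KB
  # objects up to this size may be written out to a cache_dir
  maximum_object_size 4 GB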
If Range processing is in effect the range_offset_limit and
quick_abort_* directives determine whether Squid fetches data from the
server before/after the client-requested range. That extra data is
fetched at the full speed of the connection between Squid and the
server.
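For example (placeholder values, listed only to name the directives):

  # how far beyond the client-requested range Squid may fetch
  range_offset_limit 0 KB
  # whether Squid finishes a fetch after the client aborts
  quick_abort_min 16 KB
  quick_abort_max 16 KB
  quick_abort_pct 95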
If collapsed forwarding is in effect the client which initiated the
first fetch is in control of how much is read and how fast. Other
clients are just tagging along receiving duplicate copies of what Squid
has available.
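If you want to experiment with that behaviour and your Squid version
supports it, the directive is simply:

  # share one server fetch between concurrent requests for the same URL
  collapsed_forwarding on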
If ICAP is being used there are also ICAP I/O buffers of 64KB
affecting delivery, and readahead_gap covers what has not yet been sent
to the client.
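A minimal ICAP setup looks roughly like the sketch below; the service
name and URL are made up for illustration:

  icap_enable on
  icap_service exampleService respmod_precache bypass=1 icap://icap.example.com:1344/respmod
  adaptation_access exampleService allow all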
>
> I am using the ufs disk cache:
>
> cache_replacement_policy lru
> minimum_object_size 1 bytes
> cache_dir ufs /data/squid/cache 130000 16 256 max-size=26843545600
I expect you would be better served by using the min-size= parameter
on the cache_dir line instead of setting the global Squid minimum
object size. HTTP/1.1 commonly has a fair number of 0-byte messages
(30x responses, for example) that are cacheable and can be stored in
cache_mem quite easily for good performance.
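In other words, something along these lines (keeping your existing
sizes; the min-size value is just an example):

  # drop the global "minimum_object_size 1 bytes" line, and instead:
  cache_dir ufs /data/squid/cache 130000 16 256 min-size=1 max-size=26843545600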
>
> Here is the error I found in cache.log:
>
> 2015/03/18 16:53:07.263| client_side_reply.cc(1185) replyStatus:
> clientReplyStatus: transfer is DONE
> 2015/03/18 16:53:07.263| client_side_reply.cc(1201) replyStatus:
> clientReplyStatus: client didn't get all it expected
> 2015/03/18 16:53:07.263| cbdata.cc(510) cbdataReferenceValid:
> cbdataReferenceValid: 0x1707a48
> 2015/03/18 16:53:07.263| client_side.cc(1917) stopSending: sending error
> (local=127.0.0.1:8443 remote=127.0.0.1:54359 FD 8 flags=1):
> STREAM_UNPLANNED_COMPLETE; old receiving error: none
I read this as something happened with the server communication that
aborted the delivery, e.g. the server disconnecting with a TCP
RST/abort instead of a normal connection closure.
The trace above only shows that the client had not finished receiving
when the disconnect happened; it gives no details on what caused the
disconnect.
Amos