[squid-users] pipeline_prefetch directive
Francesco Chemolli
gkinkie at gmail.com
Fri Jan 3 09:52:40 UTC 2025
On Fri, Jan 3, 2025 at 8:23 AM Jonathan Lee <jonathanlee571 at gmail.com>
wrote:
> Hello fellow Squid Users,
>
> I understand this directive is removed in Squid7 again I am still trying
> to understand more about what it did and does in the older versions of
> software.
>
> pipeline_prefetch historically was an on/off setting; today it takes a
> numerical value n.
>
> My question, after much trial and error: what is a good range to use for
> a 4GB memory system?
>
> I have tried many different n values, among them 100, 200, 300, 5 and 10.
> It appears to work well with 100, or maybe it was my other changes, such
> as setting read_ahead_gap 64 KB and testing 16 and 32.
>
> What is a good solid go-go juice number for pipeline_prefetch? I do notice
> massive improvements in Facebook loads when I have it at 100, but decreased
> performance with news websites. It feels like a can't-win directive for me.
> Finally I thought it best to ask.
HTTP is a request-response protocol, and HTTP/1.1 uses a single TCP
connection to issue several requests via the "keep-alive" feature.
The way it is supposed to work is:
- client opens TCP connection
- client sends request #1 on connection
- server sends response #1 on connection
- client sends request #2 on connection
- server sends response #2 on connection
- ...
- client or server closes TCP connection
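The sequential flow above can be sketched with a toy client and server in Python (this is an illustration, not Squid code; the host name and paths are made up). The client sends one request, waits for its response, then reuses the same TCP connection for the next request:

```python
# Toy sketch of sequential HTTP/1.1 keep-alive: two requests, one
# TCP connection, each response read before the next request is sent.
# "example.test" and the paths are hypothetical.
import socket
import threading

RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

def toy_server(listener):
    conn, _ = listener.accept()
    with conn:
        for _ in range(2):                    # serve two requests, in order
            req = b""
            while b"\r\n\r\n" not in req:     # read one request's header block
                req += conn.recv(1024)
            conn.sendall(RESPONSE)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=toy_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
responses = []
for path in (b"/1", b"/2"):
    # send request #N, then wait for response #N before sending #N+1
    client.sendall(b"GET " + path + b" HTTP/1.1\r\n"
                   b"Host: example.test\r\nConnection: keep-alive\r\n\r\n")
    buf = b""
    while len(buf) < len(RESPONSE):           # read exactly one response
        buf += client.recv(1024)
    responses.append(buf)
client.close()
print(len(responses), "responses over a single TCP connection")
```

The point of the sketch is only the ordering: at no time is there more than one outstanding request on the connection.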
There are several reasons why a connection might be closed: an explicit
decision by the client or server (for instance, wanting to terminate the
process before it leaks too much memory), an inactivity timeout, or the
server not knowing in advance how long a response is going to be, so that
the only way to terminate the response is by closing the connection.
Sometimes however the client knows in advance several resources it wants to
fetch, and sometimes it will perform these requests optimistically, without
waiting for the previous responses. The flow in this case can look like:
- client opens TCP connection
- client sends request #1 on connection
- client sends request #2 on connection
- server sends response #1 on connection
- server sends response #2 on connection
- ...
- client or server closes TCP connection
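The pipelined flow can be sketched the same way (again a toy illustration with a hypothetical host and paths, not Squid code): the client writes both requests back to back, and only then reads the responses, which must come back in request order:

```python
# Toy sketch of HTTP/1.1 pipelining: both requests leave the client
# before any response is read; the server answers them in arrival order.
# "example.test" and the paths are hypothetical.
import socket
import threading

HEADER = b"HTTP/1.1 200 OK\r\nContent-Length: 1\r\n\r\n"

def toy_server(listener):
    conn, _ = listener.accept()
    with conn:
        buf = b""
        for body in (b"1", b"2"):             # answer in arrival order
            while b"\r\n\r\n" not in buf:     # wait for one full request
                buf += conn.recv(1024)
            buf = buf.split(b"\r\n\r\n", 1)[1]  # drop the parsed request
            conn.sendall(HEADER + body)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=toy_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
for path in (b"/a", b"/b"):                   # pipeline: send both first
    client.sendall(b"GET " + path + b" HTTP/1.1\r\n"
                   b"Host: example.test\r\n\r\n")
data = b""
while len(data) < 2 * (len(HEADER) + 1):      # then read both responses
    data += client.recv(1024)
client.close()

# responses come back in request order: body "1" before body "2"
parts = data.split(b"\r\n\r\n")
print(parts[1][:1], parts[2])
```

This in-order constraint is exactly what makes pipelining awkward for an intermediary: response #2 cannot be delivered until response #1 has fully completed.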
pipeline_prefetch instructs Squid to try to detect these pipelined
requests and to optimistically start processing them before the previous
responses have completed. This can in theory reduce page rendering
latency, but it comes with several drawbacks:
- the feature triggers bugs in Squid - see
https://joshua.hu/squid-security-audit-35-0days-45-exploits
- the feature can actually harm performance: if Squid needs to close the
TCP connection at the end of response #1 but has already downloaded
uncacheable content for request #2, that content has to be discarded
- modern web clients adopt complex optimization strategies when picking
what resources to download, making this optimization less and less relevant
- that's the reason it was dropped as a feature: its benefit is
doubtful, its drawbacks are known and measurable, and the complexity it
adds to Squid is significant. We developers would rather clean the slate
and work on supporting HTTP/2, which makes this whole problem go away
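For reference, on older Squid versions that still support the directive, the settings discussed in this thread would look something like the fragment below (the values are illustrative, not recommendations; per the above, leaving pipeline_prefetch at its default of 0 is the safer choice):

```
# squid.conf fragment for Squid versions that still accept the directive.
# pipeline_prefetch N lets Squid start parsing up to N pipelined requests
# ahead of the response currently being served; 0 disables the feature.
pipeline_prefetch 0

# read_ahead_gap bounds how far Squid reads ahead of the client on the
# server-side connection (the poster experimented with 16, 32 and 64 KB).
read_ahead_gap 64 KB
```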
> I do understand it is no longer recommended. This is simply for speed,
> and the system is secure behind a firewall.
I'd just recommend not using it :)
--
Francesco