[squid-users] Cache digest vs ICP
veiko at linux.ee
Wed Sep 27 09:46:44 UTC 2017
We have a cluster of Squids running in reverse proxy mode. Each one is a
sibling to the others, and they all have the same origin servers as parents.
The siblings are configured with the proxy-only keyword so that they do not
cache what another sibling already holds. This is to minimize data-transfer
costs from the origin servers: whatever is in our cluster should never be
fetched from origin again, because it never changes. This is not a typical
web cache; it is a CDN for content that we create and control, and Squid is
just one of the internal parts, not exposed directly to the clients.
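For reference, one node's peering setup looks roughly like this (hostnames,
ports and file names are placeholders, not our real values):

```
# Hypothetical squid.conf fragment for one cluster node.
# Siblings are queried first; proxy-only keeps this node from storing
# a second copy of an object another sibling already has.
cache_peer sibling1.example.internal sibling 3128 3130 proxy-only
cache_peer sibling2.example.internal sibling 3128 3130 proxy-only

# Origin servers as parents, used only when no sibling has the object.
cache_peer origin1.example.internal parent 80 0 no-query originserver
```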
So far digest_generation has been set to off and only ICP has been used
between the siblings. Mostly because the digest statistics showed many
rejects (the digest not containing 100% of the cached objects), and because
the documentation about digests is confusing, up to statements that, while
rebuilding the digest, squid will stop.
Since we need more siblings, located farther from each other, the ICP
overhead (time spent on each query) is becoming an issue. A proper digest
that includes all of the objects in the cache_dir could be a better
solution, because it avoids the delay of the initial ICP request. The digest
documentation states that inclusion is based on refresh_pattern. That is a
problem, because to get Squid working the way we want, we had to set
offline_mode on.
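The digest-related knobs we have been looking at are roughly the following
(the values are illustrative, not a recommendation):

```
# Hypothetical digest settings.
digest_generation on           # build and serve a cache digest
digest_rebuild_period 1 hour   # how often the in-memory digest is recomputed
digest_rewrite_period 1 hour   # how often it is written out for peers to fetch
digest_bits_per_entry 5        # fewer false positives at the cost of digest size

# Since inclusion is said to depend on freshness, force everything fresh
# (one year here, matching our "content never changes" assumption):
refresh_pattern . 525600 100% 525600 override-expire ignore-reload
```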
* What is the relationship between cache digests and ICP? If both are
active, how are they used together?
* How are objects added to the digest when it is rebuilt? Does this involve
a lot of disk I/O, such as scanning all cache_dir files, or is it based on
swap.state?
* How can I see which objects are listed in the cache digest?
* Why does a sibling false positive result in sending the client a 504
instead of trying the next sibling or parent? (CD_SIBLING_HIT/192.168.1.52
TCP_MISS/504.) How can we make Squid proceed to the next cache_peer?
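On the last point: if I read the cache_peer documentation correctly, Squid
sends Cache-Control: only-if-cached to siblings, and the 504 is the
sibling's reply when the predicted hit turns out to be a miss. The
allow-miss option is documented to disable only-if-cached toward a sibling,
though that would let the sibling fetch on our behalf, which may conflict
with proxy-only:

```
# Hypothetical: let the sibling satisfy the request even on a false hit,
# instead of replying 504 (disables Cache-Control: only-if-cached).
cache_peer sibling1.example.internal sibling 3128 3130 proxy-only allow-miss
```

Is that the intended workaround, or is there a way to make Squid retry the
next peer after the 504?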