[squid-users] Using Digests to reduce traffic between peers, Parent - Sibling configuration question

Jester Purtteman jester at optimera.us
Tue Oct 27 16:41:15 UTC 2015


> -----Original Message-----
> From: squid-users [mailto:squid-users-bounces at lists.squid-cache.org] On
> Behalf Of Amos Jeffries
> Sent: Tuesday, October 27, 2015 5:57 AM
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] Using Digests to reduce traffic between peers,
> Parent - Sibling configuration question
> 
> On 27/10/2015 5:14 p.m., Jester Purtteman wrote:
> > So, here's my theory:  Set up <expensive-server> so that it caches
> > EVERYTHING, all of it, and catalogs it with this Digest.  It doesn't
> > expire anything, ever; the only way something gets released from that
> > cache is when the drive starts running out of room.  Its digest is
> > then sent to <cheap-server>, which doesn't cache ANYTHING, NOTHING.
> > When a request comes through from a client, <Expensive-Server> checks
> > the refresh rules, and if it isn't too stale it gets served just like
> > it does now, but if it IS expired, it then asks <Cheap-Server> "hey,
> > how expired is this?" and <Cheap-Server> (which has all the bandwidth
> > it could ever want) grabs the content, and digests it.  If the digest
> > for the new retrieval matches something in the digest sent by
> > <expensive-server>, then <cheap-server> sends up a message that says
> > "it's still fresh, the content was written by lazy people or idiots, carry on".
> 
> 
> You just described what HTTP/1.1 revalidation requests do. They appear in
> your logs as REFRESH_*. They do have to send HTTP headers around to get it
> to work, which is a little more expensive than the Digest would be, but
> the result is far more reliable and accurate.
> 
> 
> The Cache Digest is just a one-way hash of the URL entries in the cache
> index. It is for reducing ICP queries to a peer proxy (i.e. the frontend
> cheap server). If you don't have a cache at both ends of the connection it
> is not useful.
> And like ICP it is full of false matches when used with modern HTTP/1.1.
> 
> Amos
> 

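For anyone reading this in the archives: the revalidation Amos describes is the standard conditional-request exchange. A rough sketch of what it looks like on the wire (hostname, validator values, and dates invented for illustration):

```http
GET /image/logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "v2-abc123"
If-Modified-Since: Mon, 26 Oct 2015 09:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "v2-abc123"
```

The 304 carries no body, so only headers cross the link; that's the cheap "how expired is this?" check, done per-object and without the false matches a digest would give.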
What I'm hearing is: there are facilities for handling that already, so don't crack open digest*.cc anytime soon :)

Thank you, Amos, both for the response and for years of diligent effort.  I have probably read hundreds, if not thousands, of your responses by now; your efforts are appreciated by a quiet mob of people scratching their heads in the wilderness.  I have other questions, but they're unrelated, so I'll let this thread go.
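In case it helps the next person who searches the archives, here is a minimal sketch of the two-box arrangement the thread converged on: the bandwidth-poor box keeps the big cache and forwards misses and revalidations to the well-connected box, which caches nothing. Hostnames, ports, and the cache size are placeholders, not tested config:

```
# squid.conf on <expensive-server>: keep the large cache,
# route all upstream traffic through the well-connected peer
cache_dir aufs /var/spool/squid 100000 16 256
cache_peer cheap.example.net parent 3128 0 no-query no-digest
never_direct allow all

# squid.conf on <cheap-server>: pure pass-through, no local cache
cache deny all
```

With no cache on the cheap side there is nothing for a Cache Digest to index, which is why revalidation, not digests, does the work here.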
