[squid-users] Is it safe to resize a rock storage file?
Alex Rousskov
rousskov at measurement-factory.com
Tue Oct 17 05:24:07 UTC 2017
On 10/16/2017 09:35 PM, Amos Jeffries wrote:
> On 17/10/17 07:05, Alex Rousskov wrote:
>> On 10/16/2017 09:23 AM, Amos Jeffries wrote:
>>> The -z process for
>>> rock caches actively formats the file used for data storage into cells
>>> and blocks.
>> Not really. "squid -z" adds a small db header, but the rest of the
>> database is assumed to be nothing but zeros. Squid -z used to fill the
>> whole db file with zeros (see SLOWLY_FILL_WITH_ZEROS), but I believe we
>> stopped doing that (by default) and expect an "enlarging truncate" to
>> have the same side effect. The commit log may have more info about that.
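For context, that expectation rests on POSIX semantics: an enlarging
ftruncate() defines the bytes between the old and new end-of-file to
read back as zeros, with no explicit writes needed. A minimal
stand-alone sketch (not Squid code; "rock.db" is a placeholder path):

    #include <cassert>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        // "rock.db" is a placeholder path, not Squid's actual naming.
        int fd = open("rock.db", O_RDWR | O_CREAT, 0644);
        assert(fd >= 0);
        // Enlarging truncate: POSIX defines the bytes between the old
        // and the new end-of-file to read back as zeros.
        assert(ftruncate(fd, 1024 * 1024) == 0);
        char byte = 1;
        assert(pread(fd, &byte, 1, 512 * 1024) == 1);
        assert(byte == 0); // the extended region is all zeros
        close(fd);
        return 0;
    }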
> Oh, I thought each block had a TLV and checksum bits as well.
There is slot-specific metadata (and associated consistency checks), but
it is designed to work fine with all-zeros (i.e., initial or empty)
slots. Filling a 1TB disk cache with something just to "initialize" it
would be rather wasteful/annoying...
There are no true slot checksums.
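To illustrate why all-zeros slots are safe, here is a hypothetical
slot header (the real Rock on-disk layout differs) where zeroed
fields naturally parse as "empty":

    #include <cstdint>

    // Hypothetical slot header, only to illustrate the idea; the
    // real Rock on-disk layout differs.
    struct SlotHeader {
        uint64_t entryKey;    // 0 means "no entry" in this sketch
        uint32_t payloadSize; // bytes of cached data in this slot
        uint32_t nextSlot;    // 0 means "no continuation slot"
    };

    // An all-zeros slot (e.g., fresh space from an enlarging
    // truncate) parses as empty, so no initialization pass is needed.
    bool isEmpty(const SlotHeader &h) {
        return h.entryKey == 0 && h.payloadSize == 0 && h.nextSlot == 0;
    }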
>> * Removing a slot, whole or partial, will invalidate the cache entry
>> that was using that slot (if any). The affected entry will not be added
>> to the shared memory index. Skipping an entry should not lead to cache
>> corruption. It will lead to loss of cache data (and probably some
>> warnings) but that was explicitly allowed in the original question.
> This last one was what I was referring to as corruption.
AFAICT, OP has explicitly stated that some loss of cached data is fine
(i.e., it is not considered cache corruption). In the original question,
corruption was defined as "rubbish data delivered to clients".
> Though I expect
> that *removing* slots would lead to the memory index pointing to no
> longer valid locations in the rock database.
The in-memory index is created from the on-disk metadata. Bugs
notwithstanding, if on-disk metadata contains an incomplete entry (e.g.,
an entry slot pointer pointing beyond the current database boundary),
then Rock will not add that entry to the in-memory index, and, hence,
Squid will not know about that entry's existence.
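A hypothetical sketch of that startup boundary check (the names and
types are made up for illustration, not Squid's):

    #include <cstdint>
    #include <vector>

    // Illustrative types only; the real Rock structures differ.
    struct EntryMeta { uint64_t key; uint64_t firstSlotOffset; };
    struct Index {
        std::vector<EntryMeta> entries;
        void add(const EntryMeta &m) { entries.push_back(m); }
    };

    // Hypothetical startup scan: an entry whose slot pointer reaches
    // beyond the current database end is treated as incomplete and is
    // never added to the in-memory index, so Squid never serves it.
    void rebuildIndex(const std::vector<EntryMeta> &diskMeta,
                      const uint64_t dbSize, Index &index) {
        for (const auto &m : diskMeta) {
            if (m.firstSlotOffset >= dbSize)
                continue; // points past the (possibly shrunken) db
            index.add(m);
        }
    }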
> Is it harmless to access out-of-range offsets into a memmap'ed file -
> specifically ones which *were* valid when it was initially mapped?
Out-of-range accesses are probably deadly, but no such accesses should
be happening due to database size changes. Please note that the question
was about database size changes while Squid "is not running". When Squid
starts, it creates memory segments and builds its in-memory disk index
there from scratch, ignoring any bad on-disk entries.
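For completeness, a minimal demonstration of the failure mode being
asked about, assuming a POSIX system and a placeholder file name.
This is the situation Squid avoids by rebuilding the index at startup
instead of reusing a mapping from before the size change:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        // "rock.db" is a placeholder; assumes the file is >= 1 MB.
        int fd = open("rock.db", O_RDWR);
        const size_t oldSize = 1024 * 1024;
        char *base = static_cast<char *>(
            mmap(nullptr, oldSize, PROT_READ, MAP_SHARED, fd, 0));
        // Shrink the file while the old-size mapping is still live.
        ftruncate(fd, oldSize / 2);
        // Touching a mapped page that no longer backs any file data
        // raises SIGBUS on POSIX systems; deadly indeed.
        volatile char c = base[oldSize - 1];
        (void)c;
        munmap(base, oldSize);
        close(fd);
        return 0;
    }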
HTH,
Alex.