[squid-dev] Sad performance trend

Alex Rousskov rousskov at measurement-factory.com
Sat Aug 27 00:32:36 UTC 2016


    Factory ran a bunch of micro-level Polygraph tests, targeting
several Squid subsystems: header parsing, connection management, and SSL
(but no SslBump). We tested a few Squid versions going back to v3.1:

        W1  W2  W3  W4  W5  W6
  v3.1  32% 38% 16% 48% 16% 9%
  v3.3  23% 31% 14% 42% 15% 8%
  v3.5  11% 16% 12% 36%  7% 6%
  v4.0  11% 15%  9% 30% 14% 5%

Each W column represents a particular Polygraph workload (the workload
details are not important right now). Each percentage cell represents a
single test result for a given Squid version. Absolute numbers mean
little in these quick-and-dirty micro tests, but higher percentages are
better (100% would be comparable to a "wire" -- an ideal proxy with zero
overhead). Comparing numbers from different columns is virtually
meaningless because different workloads have different 100% levels.
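To illustrate the metric (not part of the original tests; the function name and throughput numbers below are hypothetical, and Polygraph computes its own statistics), each cell is simply proxy throughput expressed as a percentage of the ideal "wire" baseline for that workload:

```python
# Sketch of the wire-relative metric described above. The numbers are
# invented for illustration; they are not actual Polygraph results.

def wire_relative(proxy_tput: float, wire_tput: float) -> float:
    """Proxy throughput as a percentage of the zero-overhead 'wire' level."""
    return 100.0 * proxy_tput / wire_tput

# If the wire sustains 1000 req/s on some workload and the proxy 320 req/s,
# the table cell would read 32%.
print(f"{wire_relative(320, 1000):.0f}%")  # -> 32%
```

This also shows why cross-column comparisons are meaningless: each workload has its own wire baseline, so the same percentage corresponds to different absolute throughputs.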

If you follow each W column from top (v3.1 stabilizing in ~2010) to
bottom (current v4.0/trunk), you will notice that all studied Squid
subsystems are getting slower and slower. These micro tests exaggerate
the impact of specific subsystems and cannot predict real performance
degradation in a given environment, but things are clearly getting worse
with time. This is a sad trend!

Whether you blame it on an unfortunate initial code state, insufficient
project resources, lack of an official architect, mediocre development
skills, nearly absent quality controls, something else, or the
combination of several factors, we are clearly doing something wrong,
year after year after year...

And, arguably worse than the bad trend itself, we did not even _know_
about the problem (or at least its magnitude) until now.

I will not propose any solutions at this time, but will feed this
information into ongoing Squid Foundation discussions about revamping
Squid QA. Your feedback is welcomed, of course.

Thank you,

