[squid-users] Getting SSL Connection Reset Randomly but rarely

NgTech LTD ngtech1ltd at gmail.com
Mon Jan 31 04:34:59 UTC 2022


What version of Amazon Linux are you using, 1 or 2?
Version 2 has support for Squid 4.17.
There are a couple of possible causes for these resets, and not all of them
are on the Squid side.
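The `(104)` in those log lines is the kernel's `ECONNRESET` errno: the TCP peer aborted the connection with an RST during the TLS negotiation, so the reset can originate from the origin server, a middlebox, or the client rather than from Squid itself. A minimal sketch (Python standard library only, not Squid-specific) of how such a reset surfaces to an application:

```python
import errno
import socket
import struct
import threading

# A server that closes a connection abortively: SO_LINGER with a zero
# timeout makes close() send a TCP RST instead of a normal FIN.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def rst_close():
    conn, _ = srv.accept()
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    conn.close()  # sends RST

threading.Thread(target=rst_close, daemon=True).start()

cli = socket.create_connection(srv.getsockname())
caught = None
try:
    cli.recv(1024)  # blocks until the RST arrives, then raises
except ConnectionResetError as exc:
    caught = exc.errno  # ECONNRESET, i.e. 104 on Linux
```

On the Squid side, raising the TLS debug level (e.g. `debug_options ALL,1 83,5` in squid.conf, assuming the standard debug section numbering where 83 covers TLS) should show more context in cache.log around each failed negotiation.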

Eliezer

On Thu, Jan 27, 2022, 5:59, Usama Mehboob <
musamamehboob at gmail.com> wrote:

> Hi, I have Squid 3.5 running on Amazon Linux and it works fine for the most
> part, but sometimes my clients' webapp logs report connection timeouts.
> Upon checking the cache logs, I see these statements.
>
>
> 2022/01/23 03:10:01| Set Current Directory to /var/spool/squid
> 2022/01/23 03:10:01| storeDirWriteCleanLogs: Starting...
> 2022/01/23 03:10:01|   Finished.  Wrote 0 entries.
> 2022/01/23 03:10:01|   Took 0.00 seconds (  0.00 entries/sec).
> 2022/01/23 03:10:01| logfileRotate: daemon:/var/log/squid/access.log
> 2022/01/23 03:10:01| logfileRotate: daemon:/var/log/squid/access.log
> 2022/01/23 10:45:52| Error negotiating SSL connection on FD 170: (104)
> Connection reset by peer
> 2022/01/23 12:14:07| Error negotiating SSL on FD 139:
> error:00000000:lib(0):func(0):reason(0) (5/-1/104)
> 2022/01/23 12:14:07| Error negotiating SSL connection on FD 409: (104)
> Connection reset by peer
> 2022/01/25 01:12:04| Error negotiating SSL connection on FD 24: (104)
> Connection reset by peer
>
>
>
> I am not sure what is causing it. Is it because Squid is running out of
> resources? My instance has 16 GB of RAM and 4 vCPUs. I am using SSL Bump to
> run Squid as a transparent proxy within an AWS VPC.
>
> Below is the config file
> --------------ConfigFile-----------------------------------------
>
> visible_hostname squid
>
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7       # RFC 4193 local private network range
> acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> ###acl Safe_ports port 21 # ftp testing after blocking itp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> #http_access allow CONNECT SSL_ports
>
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
>
> # And finally deny all other access to this proxy
>
> # Squid normally listens to port 3128
> #http_port 3128
> http_port 3129 intercept
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> http_access allow SSL_ports #-- this allows every https website
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
>
> # Deny requests to proxy instance metadata
> acl instance_metadata dst 169.254.169.254
> http_access deny instance_metadata
>
> # Filter HTTP Only requests based on the whitelist
> #acl allowed_http_only dstdomain .veevasourcedev.com .google.com .pypi.org .youtube.com
> #acl allowed_http_only dstdomain .amazonaws.com
> #acl allowed_http_only dstdomain .veevanetwork.com .veevacrm.com .veevacrmdi.com .veeva.com .veevavault.com .vaultdev.com .veevacrmqa.com
> #acl allowed_http_only dstdomain .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
>
>
> # Filter HTTPS requests based on the whitelist
> acl allowed_https_sites ssl::server_name .pypi.org .pythonhosted.org .tfhub.dev .gstatic.com .googleapis.com
> acl allowed_https_sites ssl::server_name .amazonaws.com
> acl allowed_https_sites ssl::server_name .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
>
>
> connect_timeout 60 minute
> read_timeout 60 minute
> write_timeout 60 minute
> request_timeout 60 minute
>
> ## http filtering ###
> #http_access allow localnet allowed_http_only
> #http_access allow localhost allowed_http_only
> http_access allow localnet allowed_https_sites
> http_access allow localhost allowed_https_sites
> # And finally deny all other access to this proxy
> http_access deny all
>
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/spool/squid 100 16 256
>
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
>
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>
> ------------------------------------------------------------------------------------
> I will appreciate any help; I have been struggling with this for the last
> week. It is hard to reproduce, happens randomly, and re-running the failed
> job sometimes succeeds. Thanks
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>

