[squid-users] Getting SSL Connection Errors (Eliezer Croitoru)

Usama Mehboob musamamehboob at gmail.com
Fri Feb 25 13:31:55 UTC 2022


Hi Eliezer, I am running on an Amazon Linux 2 AMI, which I believe is based
on CentOS.
I ran the uname -a command and this is what I get:
Linux ip-172-24-9-143.us-east-2.compute.internal
4.14.256-197.484.amzn2.x86_64 #1 SMP Tue Nov 30 00:17:50 UTC 2021 x86_64
x86_64 x86_64 GNU/Linux

[ec2-user at ip-172-24-9-143 ~]$ openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

Thanks so much. Let me know the script and I can run it on this machine.
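In the meantime, I can also run a direct TLS check from this box, in case it
is useful (assuming openssl s_client is available; login.salesforce.com is
just an example endpoint):

```
# Direct TLS handshake to the Salesforce endpoint, bypassing Squid, to
# confirm the local OpenSSL can negotiate and verify the certificate
openssl s_client -connect login.salesforce.com:443 \
  -servername login.salesforce.com </dev/null 2>/dev/null | head -20
```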
Usama

On Fri, Feb 25, 2022 at 5:34 AM <squid-users-request at lists.squid-cache.org>
wrote:

> Send squid-users mailing list submissions to
>         squid-users at lists.squid-cache.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.squid-cache.org/listinfo/squid-users
> or, via email, send a message with subject or body 'help' to
>         squid-users-request at lists.squid-cache.org
>
> You can reach the person managing the list at
>         squid-users-owner at lists.squid-cache.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of squid-users digest..."
>
>
> Today's Topics:
>
>    1. Re: Getting SSL Connection Errors (Eliezer Croitoru)
>    2. Random trouble with image downloads (Dave Blanchard)
>    3. slow down response to broken clients ? (Dieter Bloms)
>    4. Re: getsockopt failures, although direct access to intercept
>       ports is blocked (Amos Jeffries)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 25 Feb 2022 07:01:12 +0200
> From: "Eliezer Croitoru" <ngtech1ltd at gmail.com>
> To: "'Usama Mehboob'" <musamamehboob at gmail.com>,
>         <squid-users at lists.squid-cache.org>
> Subject: Re: [squid-users] Getting SSL Connection Errors
> Message-ID: <006f01d82a04$b678b770$236a2650$@gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hey Usama,
>
>
>
> There are still some details missing about the system.
>
> If you provide the OS and Squid details, I might be able to provide a
> script that will pull most of the relevant details from the system.
>
> I don't know about this specific issue yet; it looks like an SSL-related
> issue, and it might not even be related to Squid.
>
> (@Alex or @Christos might know better than me)
>
>
>
> All The Bests,
>
>
>
> ----
>
> Eliezer Croitoru
>
> NgTech, Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1ltd at gmail.com <mailto:ngtech1ltd at gmail.com>
>
>
>
> From: squid-users <squid-users-bounces at lists.squid-cache.org> On Behalf
> Of Usama Mehboob
> Sent: Thursday, February 24, 2022 23:45
> To: squid-users at lists.squid-cache.org
> Subject: [squid-users] Getting SSL Connection Errors
>
>
>
> Hi, I have Squid running on a Linux box (about 16 GB RAM and 4 CPUs). It
> runs fine for the most part, but when I launch multiple jobs that connect
> to the Salesforce Bulk API, connections are sometimes dropped. It is not
> predictable and happens only when Squid is under heavy load. Can anyone
> shed some light on this? What can I do? Is it a file descriptor issue?
>
> I see only these error messages from the cache logs
> ```
> PeerConnector.cc(639) handleNegotiateError: Error (error:04091068:rsa
> routines:INT_RSA_VERIFY:bad signature) but, hold write on SSL connection on
> FD 109
> ```
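>
> (For reference, one way to check descriptor usage, assuming squidclient is
> installed and a non-intercept http_port, e.g. the commented-out 3128, is
> enabled so the cache manager answers on localhost:)
>
> ```
> # mgr:info reports both the maximum and the number currently in use
> squidclient mgr:info | grep -i 'file desc'
>
> # OS-level limit of the running Squid process
> grep 'open files' /proc/$(pidof squid | awk '{print $1}')/limits
> ```
>
> If the "in use" figure approaches the maximum under load, raising
> max_filedescriptors in squid.conf (and the matching OS ulimit) would be the
> first thing to try.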
>
> ----------------Config file ----------------
> visible_hostname squid
>
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7       # RFC 4193 local private network range
> acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> ###acl Safe_ports port 21 # ftp testing after blocking itp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> #http_access allow CONNECT SSL_ports
>
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
>
> # And finally deny all other access to this proxy
>
> # Squid normally listens to port 3128
> #http_port 3128
> http_port 3129 intercept
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> http_access allow SSL_ports #-- this allows every https website
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
>
> # Deny requests to proxy instance metadata
> acl instance_metadata dst 169.254.169.254
> http_access deny instance_metadata
>
> # Filter HTTP Only requests based on the whitelist
> #acl allowed_http_only dstdomain .veevasourcedev.com .google.com .pypi.org .youtube.com
> #acl allowed_http_only dstdomain .amazonaws.com
> #acl allowed_http_only dstdomain .veevanetwork.com .veevacrm.com .veevacrmdi.com .veeva.com .veevavault.com .vaultdev.com .veevacrmqa.com
> #acl allowed_http_only dstdomain .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
>
>
> # Filter HTTPS requests based on the whitelist
> acl allowed_https_sites ssl::server_name .pypi.org .pythonhosted.org .tfhub.dev .gstatic.com .googleapis.com
> acl allowed_https_sites ssl::server_name .amazonaws.com
> acl allowed_https_sites ssl::server_name .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
>
>
> connect_timeout 60 minute
> read_timeout 60 minute
> write_timeout 60 minute
> request_timeout 60 minute
>
> ## http filtering ###
> #http_access allow localnet allowed_http_only
> #http_access allow localhost allowed_http_only
> http_access allow localnet allowed_https_sites
> http_access allow localhost allowed_https_sites
> # And finally deny all other access to this proxy
> http_access deny all
>
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/spool/squid 100 16 256
>
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
>
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
>
>
>
> thanks
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 24 Feb 2022 23:14:30 -0600
> From: Dave Blanchard <dave at killthe.net>
> To: squid-users at lists.squid-cache.org
> Subject: [squid-users] Random trouble with image downloads
> Message-ID: <20220224231430.f046b84f7c376ed796c5f203 at killthe.net>
> Content-Type: text/plain; charset=US-ASCII
>
> OK, I've got Squid mostly working fine, but have noticed a problem with
> certain image downloads, which in at least one case are coming from
> storage.googleapis.com. (Profile images for a forum.) It's as if Squid
> sometimes randomly fails to download and correctly cache a given image, and
> instead caches a broken or zeroed file. If I try to open that image in a
> new browser tab, sometimes it will just be blank, and other times the
> browser reports ERR_EMPTY_RESPONSE "The server didn't send any data." In
> the former case the image access shows up in the Squid access log as
> TCP_REFRESH_UNMODIFIED, and in the latter case it doesn't show up at all.
> If I download it manually using wget with no proxy, it downloads fine. What
> could possibly be happening here?
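>
> (For comparison, one way to fetch the same image both through the proxy and
> directly, assuming an explicit proxy on 127.0.0.1:3128 -- the URL is a
> placeholder:)
>
> ```
> # Through Squid: -S prints the response headers as delivered by the proxy
> https_proxy=http://127.0.0.1:3128 wget -S -O /tmp/via-proxy.png \
>   https://storage.googleapis.com/BUCKET/image.png
>
> # Direct, no proxy, for comparison
> wget --no-proxy -S -O /tmp/direct.png https://storage.googleapis.com/BUCKET/image.png
>
> # Any difference points at the proxied copy being truncated or empty
> cmp /tmp/via-proxy.png /tmp/direct.png && echo identical
> ```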
>
> --
> Dave Blanchard <dave at killthe.net>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 25 Feb 2022 08:47:40 +0100
> From: Dieter Bloms <squid.org at bloms.de>
> To: squid-users at lists.squid-cache.org
> Subject: [squid-users] slow down response to broken clients ?
> Message-ID: <20220225074740.pclldvx4rtrpiifc at bloms.de>
> Content-Type: text/plain; charset=us-ascii
>
> Hello,
>
> Sometimes a client tries to reach a destination that is blocked at the
> proxy. The proxy responds with a 403, and the client then immediately
> tries again and again, making hundreds of requests per second. Is it
> possible to add an artificial delay here so that the proxy answers
> the client later? Ideally combined with a rate limit, so that the delay
> only becomes active once a certain number of 403 answers is exceeded.
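>
> For example, maybe an external ACL helper that just sleeps before returning
> a match could do it (untested sketch; "blocked_sites" stands for whatever
> ACL currently produces the 403, and the helper path and delay are examples):
>
> ```
> # squid.conf -- slow_deny is only evaluated once blocked_sites has matched,
> # so normal traffic never hits the helper
> external_acl_type delay_verdict ttl=0 negative_ttl=0 children-max=20 %SRC /usr/local/bin/slow-deny.sh
> acl slow_deny external delay_verdict
> http_access deny blocked_sites slow_deny
> ```
>
> ```
> #!/bin/sh
> # /usr/local/bin/slow-deny.sh: Squid writes one lookup per line; sleep,
> # then answer OK (= ACL matches), so the 403 goes out two seconds late
> while read line; do
>   sleep 2
>   echo OK
> done
> ```
>
> Each sleeping lookup occupies a helper child, so children-max caps how many
> delayed denials are in flight at once; it is a crude brake rather than a
> real rate limit.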
>
>
> --
> Regards
>
>   Dieter
>
> --
> I do not get viruses because I do not use MS software.
> If you use Outlook then please do not put my email address in your
> address-book so that WHEN you get a virus it won't use my address in the
> From field.
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 25 Feb 2022 23:30:47 +1300
> From: Amos Jeffries <squid3 at treenet.co.nz>
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] getsockopt failures, although direct access
>         to intercept ports is blocked
> Message-ID: <b75703c5-3be1-a645-7e05-3b557b5b1336 at treenet.co.nz>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> On 24/02/22 12:05, Andreas Weigel wrote:
> > Hi everyone,
> >
> > I had the following issue with Squid in transparent mode (with SSL
> > interception in splice mode). It works as expected, however after
> > multiple long-running (several seconds) anti-virus eCAP processes
> > have finished, I *sometimes* get the following in the log:
> >
> > 2022/02/23 14:56:40.668 kid1| 5,2| src/comm/TcpAcceptor.cc(224)
> > doAccept: New connection on FD 21
> > 2022/02/23 14:56:40.668 kid1| 5,2| src/comm/TcpAcceptor.cc(312)
> > acceptNext: connection on local=[::]:2412 remote=[::] FD 21 flags=41
> > 2022/02/23 14:56:40.668 kid1| 89,5| src/ip/Intercept.cc(405) Lookup:
> > address BEGIN: me/client= 192.168.180.1:2412, destination/me=
> > 192.168.180.10:48582
> > 2022/02/23 14:56:40.668 kid1| ERROR: NF getsockopt(ORIGINAL_DST) failed
> > on local=192.168.180.1:2412 remote=192.168.180.10:48582 FD 37 flags=33:
> > (2) No such file or directory
> > 2022/02/23 14:56:40.669 kid1| 89,9| src/ip/Intercept.cc(151)
> > NetfilterInterception: address: local=192.168.180.1:2412
> > remote=192.168.180.10:48582 FD 37 flags=33
> > 2022/02/23 14:56:40.669 kid1| ERROR: NAT/TPROXY lookup failed to locate
> > original IPs on local=192.168.180.1:2412 remote=192.168.180.10:48582 FD
> > 37 flags=33
>
>
> These can happen if the NAT table entries expire or otherwise get
> dropped by conntrack between the client initiating TCP SYN and Squid
> accept(2) receiving the connection.
>
> Your config looks good to me and the lack of regularity indicates the
> issue is likely this type of transient state situation.
>   Is this happening at times of unusually high client connections
> through the NAT?
>   Is eCAP processing blocking the Squid worker for all those seconds?
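>
> (A quick way to check whether conntrack is shedding state, assuming the
> conntrack tool and the usual netfilter sysctls are available:)
>
> ```
> # Non-zero drop/early_drop/insert_failed counters mean entries are lost
> conntrack -S | grep -E 'drop|insert_failed'
>
> # Table fill level vs. the configured maximum
> sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
>
> # TCP timeouts that decide how long a NAT entry survives
> sysctl -a 2>/dev/null | grep nf_conntrack_tcp_timeout
> ```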
>
>
> > 2022/02/23 14:56:40.669 kid1| 5,5| src/comm/TcpAcceptor.cc(287)
> > acceptOne: non-recoverable error: FD 21, [::] [ job2] handler
> > Subscription: 0x55edac3d08d0*1
> >
> > Sometimes this only appears on one of the two interception ports,
> > sometimes on both. After that, the Squid worker no longer polls the
> > intercept listen port, i.e. it stops working.
>
> That part is likely to be the issue recently worked around by
> <http://www.squid-cache.org/Versions/v6/changesets/squid-6-9fd3e68c3d0dfd6035db98ce142cf425be6c5fc1.patch>
>
>
> Amos
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
> ------------------------------
>
> End of squid-users Digest, Vol 90, Issue 38
> *******************************************
>

