[squid-users] Getting SSL Connection Errors (Eliezer Croitoru)
Usama Mehboob
musamamehboob at gmail.com
Sat Feb 26 05:58:08 UTC 2022
I think in my previous mailing-list message I pasted the whole content, so
this time I am sending my reply in a more condensed form. :)
Eliezer, I am running on an Amazon Linux 2 AMI, which I believe is based on
CentOS.
I ran the uname -a command and this is what I get:
Linux ip-172-24-9-143.us-east-2.compute.internal
4.14.256-197.484.amzn2.x86_64 #1 SMP Tue Nov 30 00:17:50 UTC 2021 x86_64
x86_64 x86_64 GNU/Linux
[ec2-user at ip-172-24-9-143 ~]$ openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
Thanks so much; send me the script and I can run it on this machine.
Usama
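
In the meantime, here is a minimal sketch of the kind of info-gathering
script being discussed (the commands below are standard; the exact script
Eliezer has in mind is not in this thread, so treat this as an assumption):

```shell
#!/bin/sh
# Minimal info-gathering sketch: OS, OpenSSL and Squid details,
# plus the per-process file-descriptor limit.
uname -a                                  # kernel and architecture
cat /etc/os-release 2>/dev/null || true   # distribution details, if present
openssl version                           # OpenSSL build
squid -v 2>/dev/null | head -n 3 || true  # Squid version and configure options
ulimit -n                                 # per-process file-descriptor limit
```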
>
> Message: 1
> Date: Fri, 25 Feb 2022 07:01:12 +0200
> From: "Eliezer Croitoru" <ngtech1ltd at gmail.com>
> To: "'Usama Mehboob'" <musamamehboob at gmail.com>,
> <squid-users at lists.squid-cache.org>
> Subject: Re: [squid-users] Getting SSL Connection Errors
> Message-ID: <006f01d82a04$b678b770$236a2650$@gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hey Usama,
>
>
>
> There are more missing details on the system.
>
> If you provide the OS and squid details I might be able to provide a
> script that will pull most of the relevant details on the system.
>
> I don't know about this specific issue yet. It seems like an SSL-related
> issue, and it might not even be related to Squid.
>
> (@Alex or @Christos might know better than me)
>
>
>
> All the best,
>
>
>
> ----
>
> Eliezer Croitoru
>
> NgTech, Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1ltd at gmail.com <mailto:ngtech1ltd at gmail.com>
>
>
>
> From: squid-users <squid-users-bounces at lists.squid-cache.org> On Behalf
> Of Usama Mehboob
> Sent: Thursday, February 24, 2022 23:45
> To: squid-users at lists.squid-cache.org
> Subject: [squid-users] Getting SSL Connection Errors
>
>
>
> Hi, I have Squid running on a Linux box (about 16 GB RAM and 4 CPUs). It
> runs fine for the most part, but when I launch multiple jobs that connect
> to the Salesforce Bulk API, connections are sometimes dropped. It is not
> predictable and happens only when there is heavy load on Squid. Can anyone
> shed some light on this? What can I do? Is it a file-descriptor issue?
>
> I see only these error messages from the cache logs
> ```
> PeerConnector.cc(639) handleNegotiateError: Error (error:04091068:rsa
> routines:INT_RSA_VERIFY:bad signature) but, hold write on SSL connection on
> FD 109
> ```
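
[On the file-descriptor question: one way to check from the proxy host is
to count the descriptors the Squid process has open against its limit. A
sketch using standard /proc paths; the pgrep-based PID lookup mentioned in
the comment is an assumption, adjust it to your init system:]

```shell
#!/bin/sh
# Count open file descriptors and show the limit for a process.
# Usage: sh fdcheck.sh [pid]; defaults to this shell's own PID ($$)
# so it can be tried anywhere. On the proxy host, pass Squid's PID
# (e.g. the output of: pgrep -o squid).
PID=${1:-$$}
echo "open fds: $(ls /proc/"$PID"/fd | wc -l)"
grep 'open files' /proc/"$PID"/limits
```

If the cache manager interface is enabled, `squidclient mgr:info` also
reports the maximum and available file-descriptor counts directly.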
>
> ----------------Config file ----------------
> visible_hostname squid
>
> #
> # Recommended minimum configuration:
> #
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7       # RFC 4193 local private network range
> acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> ###acl Safe_ports port 21 # ftp -- testing after blocking it
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
>
> #
> # Recommended minimum Access Permission configuration:
> #
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> #http_access allow CONNECT SSL_ports
>
> # Only allow cachemgr access from localhost
> http_access allow localhost manager
> http_access deny manager
>
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> #http_access deny to_localhost
>
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #
>
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
>
> # And finally deny all other access to this proxy
>
> # Squid normally listens to port 3128
> #http_port 3128
> http_port 3129 intercept
> https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
> http_access allow SSL_ports #-- this allows every https website
> acl step1 at_step SslBump1
> acl step2 at_step SslBump2
> acl step3 at_step SslBump3
> ssl_bump peek step1 all
>
> # Deny requests to proxy instance metadata
> acl instance_metadata dst 169.254.169.254
> http_access deny instance_metadata
>
> # Filter HTTP Only requests based on the whitelist
> #acl allowed_http_only dstdomain .veevasourcedev.com .google.com .pypi.org .youtube.com
> #acl allowed_http_only dstdomain .amazonaws.com
> #acl allowed_http_only dstdomain .veevanetwork.com .veevacrm.com .veevacrmdi.com .veeva.com .veevavault.com .vaultdev.com .veevacrmqa.com
> #acl allowed_http_only dstdomain .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
>
>
> # Filter HTTPS requests based on the whitelist
> acl allowed_https_sites ssl::server_name .pypi.org .pythonhosted.org .tfhub.dev .gstatic.com .googleapis.com
> acl allowed_https_sites ssl::server_name .amazonaws.com
> acl allowed_https_sites ssl::server_name .documentforce.com .sforce.com .force.com .forceusercontent.com .force-user-content.com .lightning.com .salesforce.com .salesforceliveagent.com .salesforce-communities.com .salesforce-experience.com .salesforce-hub.com .salesforce-scrt.com .salesforce-sites.com .site.com .sfdcopens.com .sfdc.sh .trailblazer.me .trailhead.com .visualforce.com
> ssl_bump peek step2 allowed_https_sites
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate step2 all
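
[For reference: the `error:04091068:rsa routines:INT_RSA_VERIFY:bad
signature` in the log above is OpenSSL reporting a failed RSA signature
check during the peeked handshake. The same class of failure can be
reproduced stand-alone with openssl; this is an illustration of the error
class, not Squid's actual code path:]

```shell
#!/bin/sh
# Sign a file, then verify it; tampering with the data afterwards makes
# RSA signature verification fail, the error class the Squid log reports.
openssl genrsa -out /tmp/demo.key 2048 2>/dev/null
openssl rsa -in /tmp/demo.key -pubout -out /tmp/demo.pub 2>/dev/null
echo 'hello' > /tmp/demo.txt
openssl dgst -sha256 -sign /tmp/demo.key -out /tmp/demo.sig /tmp/demo.txt
# Untampered data verifies cleanly:
openssl dgst -sha256 -verify /tmp/demo.pub -signature /tmp/demo.sig /tmp/demo.txt
# Tampered data fails verification:
echo 'tampered' > /tmp/demo.txt
openssl dgst -sha256 -verify /tmp/demo.pub -signature /tmp/demo.sig /tmp/demo.txt || true
```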
>
>
> connect_timeout 60 minute
> read_timeout 60 minute
> write_timeout 60 minute
> request_timeout 60 minute
>
> ## http filtering ###
> #http_access allow localnet allowed_http_only
> #http_access allow localhost allowed_http_only
> http_access allow localnet allowed_https_sites
> http_access allow localhost allowed_https_sites
> # And finally deny all other access to this proxy
> http_access deny all
>
> # Uncomment and adjust the following to add a disk cache directory.
> #cache_dir ufs /var/spool/squid 100 16 256
>
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
>
> #
> # Add any of your own refresh_pattern entries above these.
> #
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
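
[If descriptor exhaustion under load does turn out to be the cause, Squid's
own limit can be raised in squid.conf with the `max_filedescriptors`
directive; the value below is only an example, and the OS-level limit must
be raised to match (e.g. LimitNOFILE in a systemd unit, or ulimit -n in the
init script):]

```
# Example only -- requires a matching OS-level nofile limit
max_filedescriptors 16384
```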
>
>
>
> thanks
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://lists.squid-cache.org/pipermail/squid-users/attachments/20220225/9f94f144/attachment-0001.htm
> >
>
>
>
> ------------------------------
>
> End of squid-users Digest, Vol 90, Issue 38
> *******************************************
>
More information about the squid-users mailing list