[squid-dev] tcp_outgoing_address and HTTPS

Michael Pro michael.adm at gmail.com
Tue Mar 20 11:11:28 UTC 2018


Squid 5, master branch; no personal/private repository changes; we do not use cache_peer (and, if it matters, we do not use transparent proxying).

We have a set of rules (ACLs with url_regex) that classify content; depending on which rule matches, we pick the outgoing address, for example an address in the 10.10.1.xx to 10.10.6.xx range.
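
For reference, a minimal sketch of the kind of configuration in use (the ACL names, regex patterns, and exact addresses below are placeholders, not our real rules; the addresses mirror the ones that appear in the logs):
-----config sketch {{{ -----
# Placeholder ACLs: classify requests by URL pattern
acl apple_downloads url_regex -i \.ipa$
acl big_archives url_regex -i \.zip$

# Choose the outgoing (local) address based on the matched class
tcp_outgoing_address 10.10.5.11 apple_downloads
tcp_outgoing_address 10.10.5.120 big_archives

# Default outgoing address when nothing matches
tcp_outgoing_address 10.10.1.1
-----config sketch }}} -----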
-----log 1part {{{ -----
Acl.cc(151) matches: checked: tcp_outgoing_address 10.10.5.11 = 1
Checklist.cc(63) markFinished: 0x7fffffffe2b8 answer ALLOWED for match
FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist
destroyed 0x7fffffffe2b8
Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist:
destroyed 0x7fffffffe2b8
peer_select.cc(1026) handlePath: PeerSelector3438 found
local=10.10.5.11 remote=17.253.37.204:80 HIER_DIRECT flags=1,
destination #2 for http://iosapps.itunes.apple.com/...xxx...ipa
...
peer_select.cc(1002) interestedInitiator: PeerSelector3438
peer_select.cc(112) ~PeerSelector: http://iosapps.itunes.apple.com/...xxx...ipa
store.cc(464) unlock: peerSelect unlocking key
60080000000000001C0E000001000000 e:=p2IWV/0x815c09500*3
AsyncCallQueue.cc(55) fireNext: entering AsyncJob::start()
AsyncCall.cc(38) make: make call AsyncJob::start [call195753]
AsyncJob.cc(123) callStart: Comm::ConnOpener status in: [ job10909]
comm.cc(348) comm_openex: comm_openex: Attempt open socket for: 10.10.5.11
comm.cc(391) comm_openex: comm_openex: Opened socket local=10.10.5.11
remote=[::] FD 114 flags=1 : family=2, type=1, protocol=6
-----log 1part }}} -----
In the case of plain HTTP traffic, everything works fine, as it should.

In the case of HTTPS with traffic inspection (ssl_bump), we get the following picture:
-----log 2part {{{ -----
Acl.cc(151) matches: checked: tcp_outgoing_address 10.10.5.120 = 1
Checklist.cc(63) markFinished: 0x7fffffffe2b8 answer ALLOWED for match
FilledChecklist.cc(67) ~ACLFilledChecklist: ACLFilledChecklist
destroyed 0x7fffffffe2b8
Checklist.cc(197) ~ACLChecklist: ACLChecklist::~ACLChecklist:
destroyed 0x7fffffffe2b8
peer_select.cc(1026) handlePath: PeerSelector569 found
local=10.10.5.120 remote=23.16.9.11:443 PINNED flags=1, destination #1
for https://some.https.com/...xxx...zip
peer_select.cc(1027) handlePath: always_direct = DENIED
peer_select.cc(1028) handlePath: never_direct = DENIED
peer_select.cc(1029) handlePath: timedout = 0
peer_select.cc(1002) interestedInitiator: PeerSelector569
FwdState.cc(443) startConnectionOrFail: https://some.https.com/...xxx...zip
HttpRequest.cc(472) clearError: old error details: 0/0
FwdState.cc(886) connectStart: fwdConnectStart:
https://some.https.com/...xxx...zip
FwdState.cc(905) connectStart: pinned peer connection: 0x812c51018
client_side.cc(4082) borrowPinnedConnection: local=10.10.2.120:47901
remote=23.16.9.11:443 HIER_DIRECT FD 28 flags=1
client_side.cc(4057) validatePinnedConnection: local=10.10.2.120:47901
remote=23.16.9.11:443 HIER_DIRECT FD 28 flags=1
AsyncCall.cc(56) cancel: will not call
ConnStateData::clientPinnedConnectionRead [call20129] because
comm_read_cancel
AsyncCall.cc(56) cancel: will not call
ConnStateData::clientPinnedConnectionRead [call20129] also because
comm_read_cancel
ModKqueue.cc(174) SetSelect: FD 28, type=1, handler=0,
client_data=0x0, timeout=0
comm.cc(964) comm_add_close_handler: comm_add_close_handler: FD 28,
handler=1, data=0x8028bf398
AsyncCall.cc(26) AsyncCall: The AsyncCall SomeCloseHandler
constructed, this=0x802a456d0 [call20139]
comm.cc(975) comm_add_close_handler: comm_add_close_handler: FD 28,
AsyncCall=0x802a456d01
FwdState.cc(987) dispatch: local=127.0.0.1:20990
remote=127.0.0.120:59799 FD 26 flags=1: Fetching GET
https://some.https.com/...xxx...zip
AsyncJob.cc(34) AsyncJob: AsyncJob constructed, this=0x81258fe38
type=HttpStateData [job1763]
store.cc(439) lock: Client locked key 3F020000000000001C0E000001000000
e:=p2IWV/0x812b2df004
...
peer_select.cc(112) ~PeerSelector: https://some.https.com/...xxx...zip
store.cc(464) unlock: peerSelect unlocking key
3F020000000000001C0E000001000000 e:=p2IWV/0x812b2df004
AsyncCallQueue.cc(55) fireNext: entering AsyncJob::start()
AsyncCall.cc(38) make: make call AsyncJob::start [call20141]
AsyncJob.cc(123) callStart: HttpStateData status in: [ job1763]
http.cc(2838) sendRequest: local=10.10.2.120:47901
remote=23.16.9.11:443 HIER_DIRECT FD 28 flags=1, request 0x8125e88005,
this 0x81258fd18.
AsyncCall.cc(26) AsyncCall: The AsyncCall HttpStateData::httpTimeout
constructed, this=0x812492f80 [call20142]
comm.cc(554) commSetConnTimeout: local=10.10.2.120:47901
remote=23.16.9.11:443 HIER_DIRECT FD 28 flags=1 timeout 86400
http.cc(2204) maybeMakeSpaceAvailable: may read up to 131072 bytes
info buf(0/131072) from local=10.10.2.120:47901
remote=213.156.90.131:443 HIER_DIRECT FD 28 flags=1
AsyncCall.cc(26) AsyncCall: The AsyncCall HttpStateData::readReply
constructed, this=0x812493020 [call20143]
Read.cc(57) comm_read_base: comm_read, queueing read for
local=10.10.2.120:47901 remote=23.16.9.11:443 HIER_DIRECT FD 28
flags=1; asynCall 0x812493020*1
ModKqueue.cc(174) SetSelect: FD 28, type=1, handler=1,
client_data=0x80ce04728, timeout=0
AsyncCall.cc(26) AsyncCall: The AsyncCall HttpStateData::wroteLast
constructed, this=0x812493160 [call20144]
-----log 2part }}} -----

I understand that until the traffic has been analyzed and the real destination is known, we cannot steer the process further. The question: once we do know where the request should go, how can we break the established pinned connection (unpin it) on the old route and establish a new connection on the new route?

1. IN 127.0.0.1:443 (really 22.33.44.55:443 ???) ---> OUT 10.10.1.1
2. Catch 22.33.44.55:443/this/is/it.zip
3. Kill IN ... ??? OUT 10.10.1.1
4. Establish OUT 10.10.5.1 ---> 22.33.44.55:443/this/is/it.zip
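
One direction we are considering (a sketch only, not something we have verified): instead of unpinning, make the routing decision before the server connection is pinned, by keying tcp_outgoing_address on information already available at SslBump step1, such as the TLS SNI, rather than on the full URL. This assumes ssl::server_name is usable in tcp_outgoing_address's fast ACL check, and it only helps when the host name alone is enough to pick the route; it does not answer the question of re-routing after the full URL is known. The host and addresses below are the placeholder values from this message:
-----config sketch {{{ -----
# Peek at step1 to learn the SNI, then bump
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

# Route by server name known at peek time, so the server
# connection is opened (and later pinned) from the right address
acl zip_hosts ssl::server_name some.https.com
tcp_outgoing_address 10.10.5.1 zip_hosts
tcp_outgoing_address 10.10.1.1
-----config sketch }}} -----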

I am willing to pay a high price in extra traffic in this case, since the goal justifies it.

