[squid-dev] [PATCH] Happy Eyeballs: Use each fully resolved destination ASAP
Amos Jeffries
squid3 at treenet.co.nz
Wed May 24 15:04:19 UTC 2017
On 17/05/17 05:41, Alex Rousskov wrote:
> On 05/16/2017 03:31 AM, Amos Jeffries wrote:
>> On 16/05/17 16:33, Alex Rousskov wrote:
>>> The attached patch is the first in a short series of patches that
>>> improve Squid support for the Happy Eyeballs principle. The motivation
>>> for these patches has been discussed on the dns_wait_for_all thread:
>>> http://lists.squid-cache.org/pipermail/squid-dev/2016-September/006831.html
>> NP: This is not implementing the Happy Eyeballs *algorithm*
> The proposed patch contains changes necessary (but not sufficient!) to
> implement a Happy Eyeballs algorithm.
>
>
>> (it is an explicitly standardized algorithm name, not a principle).
> It is both (and more!). The term "Happy Eyeballs" and RFC 6555 cover:
>
> * a set of algorithm requirements (i.e., "principles" or "goals")
> * an abstract algorithm that satisfies those requirements
> * an example implementation of the algorithm in Chrome
>
>
>> "Happy Eyeballs" algorithm is specifically a DNS feature.
> The Happy Eyeballs algorithm is a combination of DNS and TCP (or other
> transport-layer) actions. You might want to reread RFC 6555 if you think
> that Happy Eyeballs is only (or even primarily) about DNS!
>
>
>>> With this change, Squid no longer waits for all request forwarding
>>> destinations to be resolved. Instead, it uses each fully resolved
>>> destination as soon as it is needed. More info is in the patch preamble.
>> More correctly this code does not even begin to resolve some
>> destinations until others are already started being connected to.
> I do not think this is a "more correct" statement. I find it distracting
> and misleading because "started being connected to" is too vague to
> be reliably interpreted in this context and, to be "correct", it has to
> be interpreted at such a low/literal level that it loses its relevance.
Okay, to be more accurate:
For the 1st destination, ps_state::handlePath() calls noteDestination(),
which results in the entire TunnelStateData::startConnecting() /
FwdState::startConnectionOrFail() sequence happening synchronously
before returning to ps_state::handlePath() and peerSelectDnsResults()
for the 2nd path.
The same sequence also happens for every Nth destination, with
unpredictable values of N.
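As a rough sketch of that shape (hypothetical names and signatures, not
the actual patch code):

    // sketch only: handlePath() cannot proceed to the next path until
    // the initiator's whole connect sequence for this one has returned
    void PeerSelectionSketch::handlePath(const Connection &path)
    {
        ++foundPaths;
        initiator->noteDestination(path); // synchronously runs the whole
                                          // startConnecting() /
                                          // startConnectionOrFail() chain
    }
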
... There do not seem to be any performance gains from this redesign for
transactions with fewer than three upstream servers, which is the common
case. The synchronous processing may let that one transaction reach its
connection faster, but it does so by taking CPU cycles from, and thus
slowing, other transactions.
It might be worth initiating that noteDestination() sub-path as an async
step to avoid the remaining delay (not asking for it to happen now;
maybe a TODO note).
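For example, a hypothetical sketch using a generic deferred-call queue
(FwdState and TunnelStateData are not AsyncJobs, so the real mechanism
would differ):

    // sketch only: queue the delivery so the DNS-results loop finishes
    // before any connect sequence starts
    void PeerSelectionSketch::handlePath(const Connection &path)
    {
        ++foundPaths;
        // a real version must protect 'initiator' against deletion
        // (e.g., via cbdata) before the deferred call runs
        eventQueue.defer([initiator = this->initiator, path] {
            initiator->noteDestination(path);
        });
    }
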
> To avoid wasting time on perfecting vague terminology, here is
> a realistic example with two destination domain names (A and B), each
> having two IP addresses (v4 and v6), with IPv6 preferred. To illustrate
> all possible steps, we consider the case where all HTTP transactions
> fail. The patch essentially changes these sequential steps:
>
> 1. Resolve A (getting A4 and A6 IPs).
> 2. Resolve B (getting B4 and B6 IPs).
> 3. Execute HTTP transaction with A6.
> 4. Execute HTTP transaction with A4.
> 5. Execute HTTP transaction with B6.
> 6. Execute HTTP transaction with B4.
>
> to these three groups of partially parallelized steps:
>
> 1. Resolve A (getting A4 and A6 IPs).
>
> 2. Resolve B (getting B4 and B6 IPs).
> 3. Execute HTTP transaction with A6.
>
> 4. Execute HTTP transaction with A4.
> 5. Execute HTTP transaction with B6.
> 6. Execute HTTP transaction with B4.
>
> where steps 2 and 3 now run in parallel. In extreme cases where B's
> resolution is slower than the HTTP transaction with A6, step 2 and step
> 4 now run in parallel as well!
>
> Whether the first DNS byte resolving B will be written to the network
> before or after the first TCP byte for A6 depends on many factors such
> as HTTP persistent connection availability, slow ACLs, and I/O socket
> readiness. Neither step 2 nor step 3 is an instantaneous action; both
> are relatively long activities with several asynchronous subtasks, so it
> is pointless to argue about how exactly to define their "start" (or
> "finish") times and whether step 2 will always "start" before step 3. It
> is sufficient to say that both will start shortly after step 1 and run
> in parallel.
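> 
> To illustrate with a toy example (plain C++ stand-ins, not Squid code),
> here resolving B overlaps with the transactions on A6/A4:
> 
>     #include <future>
>     #include <string>
>     #include <vector>
> 
>     using IP = std::string;
> 
>     // hypothetical stand-ins for the DNS and HTTP steps above
>     std::vector<IP> resolve(const std::string &name);
>     bool httpTransaction(const IP &ip); // true on success
> 
>     void forwardSketch()
>     {
>         const auto a = resolve("A");                   // step 1
>         auto b = std::async(std::launch::async, resolve,
>                             std::string("B"));         // step 2 starts...
>         for (const auto &ip : a)                       // ...while steps
>             if (httpTransaction(ip))                   // 3 and 4 run
>                 return;
>         for (const auto &ip : b.get())                 // steps 5 and 6
>             if (httpTransaction(ip))
>                 return;
>     }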
>
> Finally, we should eventually use asynchronous calls among the peer
> selection code, the FwdState/tunnel jobs, and the DNS resolution code.
> Some of those calls are already asynchronous, but I did not add more of
> them in this patch, primarily because FwdState and TunnelStateData are
> not AsyncJobs (and converting them to become AsyncJobs or equivalent is
> a large, out-of-scope project). When those asynchronous calls are in
> place, even the first code statements of steps 2 and 3 will be
> parallelized (to the extent such parallelization is possible in Squid).
>
>
>> It is a step back to non-deterministic forwarding timeouts.
> How do you define "non-deterministic forwarding timeouts"?
>
> I do not think the patch increases the number of timed out transactions
> in any significant or important way, but if it does, I would like to
> learn about it and either fix the code or disclose the problem. I hope
> the above specific example helps you withdraw this accusation.
I mean the forwarding timeouts behind the "Squid hangs for X" complaints
I deal with on squid-users. The X is no longer determined by which
component is being problematic. It was nice being able to point
squid-users questions in roughly the right direction without resorting
to logs very often.
>
>>> The next planned step is to deliver A and AAAA answers to the peer
>>> selection code independently and ASAP. When both changes are combined,
>>> the FwdState/tunneling code will receive IPvX addresses without waiting
>>> for IPvY addresses (and/or other destination names) to be resolved,
>>> making user eyeballs happier and, hence, facilitating IPv6 deployments.
>> *That* is the missing part of Happy Eyeballs.
> Yes, that too, and there is one more part not discussed in the preamble
> because it is currently not planned as a part of this patch series. From
> Squid's point of view, a Happy Eyeballs implementation needs at least
> four distinct parts:
>
> 1. Squid currently implements: Use parallel A and AAAA queries.
> 2. This patch: ASAP delivery of IPs from peer selection to FwdState.
> 3. The next step: ASAP delivery of IPs from DNS to peer selection.
> 4. A separate project should add: Use parallel TCP connections.
>
> (The above are very rough gists that identify critical Happy Eyeballs
> features, not precise definitions of algorithms.)
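> 
> For part 4, the gist is racing transport connections across address
> families. A rough standalone sketch (plain POSIX sockets and
> std::async, hypothetical addresses; not Squid code):
> 
>     #include <arpa/inet.h>
>     #include <netinet/in.h>
>     #include <sys/socket.h>
>     #include <unistd.h>
>     #include <chrono>
>     #include <cstdint>
>     #include <future>
>     #include <string>
> 
>     // blocking connect to one literal address; returns the fd or -1
>     static int connectTo(const std::string &ip, const int family,
>                          const uint16_t port)
>     {
>         const int fd = socket(family, SOCK_STREAM, 0);
>         if (fd < 0)
>             return -1;
>         sockaddr_storage ss = {};
>         socklen_t len = 0;
>         bool ok = false;
>         if (family == AF_INET6) {
>             auto *sin6 = reinterpret_cast<sockaddr_in6 *>(&ss);
>             sin6->sin6_family = AF_INET6;
>             sin6->sin6_port = htons(port);
>             ok = inet_pton(AF_INET6, ip.c_str(), &sin6->sin6_addr) == 1;
>             len = sizeof(sockaddr_in6);
>         } else {
>             auto *sin = reinterpret_cast<sockaddr_in *>(&ss);
>             sin->sin_family = AF_INET;
>             sin->sin_port = htons(port);
>             ok = inet_pton(AF_INET, ip.c_str(), &sin->sin_addr) == 1;
>             len = sizeof(sockaddr_in);
>         }
>         if (!ok || connect(fd, reinterpret_cast<sockaddr *>(&ss), len) != 0) {
>             close(fd);
>             return -1;
>         }
>         return fd;
>     }
> 
>     // race the families, giving IPv6 a head start (RFC 6555 suggests
>     // 150-250 ms); a real implementation must also close the losing
>     // socket and bound each attempt with its own timeout (note that a
>     // losing std::future's destructor blocks until its connect returns)
>     static int happyConnect(const std::string &v6ip,
>                             const std::string &v4ip, const uint16_t port)
>     {
>         using namespace std::chrono_literals;
>         bool v6done = false, v4done = false;
>         auto v6 = std::async(std::launch::async, connectTo, v6ip,
>                              AF_INET6, port);
>         if (v6.wait_for(250ms) == std::future_status::ready) {
>             v6done = true;
>             if (const int fd = v6.get(); fd >= 0)
>                 return fd; // IPv6 connected within its head start
>         }
>         auto v4 = std::async(std::launch::async, connectTo, v4ip,
>                              AF_INET, port);
>         while (!v6done || !v4done) { // first successful connect wins
>             if (!v6done && v6.wait_for(10ms) == std::future_status::ready) {
>                 v6done = true;
>                 if (const int fd = v6.get(); fd >= 0)
>                     return fd;
>             }
>             if (!v4done && v4.wait_for(10ms) == std::future_status::ready) {
>                 v4done = true;
>                 if (const int fd = v4.get(); fd >= 0)
>                     return fd;
>             }
>         }
>         return -1; // both attempts failed
>     }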
>
>
>> * s/Will aborting forwarding/Will abort forwarding/
> Fixed.
>
>
>> * psstate->lastError = NULL; // initiator owns the ErrorState object now
>> - please use nullptr on moved or new lines
> Fixed.
>
> The attached patch contains the above two fixes. No other changes.
>
>
>> * The call to noteDestinationsEnd() with an error from
>> peerSelectDnsResults() appears to abort *all* path connection attempts
>> by the initiator
> When peerSelectDnsResults() is called with a non-nil error, there could
> have been no connection attempts at all because a non-nil error means
> that no paths have been found (i.e., psstate->foundPaths is zero). Here
> is the corresponding calling code, with irrelevant lines removed:
>
> if (psstate->lastError && psstate->foundPaths)
> psstate->lastError = nullptr;
>
> initiator->noteDestinationsEnd(psstate->lastError);
>
>
> Please correct me if I am wrong.
Ah, you are right. I was mistaken about the peerSelectDns*() function.
Something is still bothering me about the call sequence when M of N
destinations are passed to FwdState and the Mth one goes through the
FwdState::complete() { if (reforward()) ... } path due to an explicit
upstream 4xx/5xx response. It seems that will end with a forwarding
error if the remaining peers do not yet have DNS results available in
serverDestinations.
e.g. the sequence for 3 possible destinations:
1) destination A gets found and noteDestination(A) called
1a) this triggers the async startConnectionOrFail()
1b) A gets connected and the request sent ...
2) destination B hits a DNS timeout issue
   - so B never gets passed to FwdState, and there is a long enough
     delay for (1c) to happen
1c) a server 5xx response arrives for A
   - the 5xx triggers the FwdState::complete() { if (reforward()) { ... } }
     path
   - which erases (1) from serverDestinations and calls
     startConnectionOrFail()
   - startConnectionOrFail() now has serverDestinations.empty() and an
     error to deliver
3) destination C gets abandoned because FwdState is now gone
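A hypothetical guard against that premature failure might look like this
(simplified names, not the actual patch code):

    // an empty serverDestinations list is only fatal once peer selection
    // has signalled noteDestinationsEnd()
    void FwdStateSketch::startConnectionOrFail()
    {
        if (!serverDestinations.empty()) {
            connectStart(serverDestinations.front());
        } else if (destinationsFinalized) {
            fail(lastError); // no more paths will arrive; give up
        } else {
            // B and C may still be resolving: wait for the next
            // noteDestination() or noteDestinationsEnd() instead of
            // delivering an error now
        }
    }
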
Amos