
[GNU/consensus] Fwd: An Update from the Name Resolution Trenches


From: hellekin
Subject: [GNU/consensus] Fwd: An Update from the Name Resolution Trenches
Date: Mon, 26 Jan 2015 11:34:39 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Icedove/31.4.0

A very interesting overview of what's going on with DNS these days.
This basic building block of the Internet is taken for granted, yet
it turns out to be far more complex the closer you look at it...

Kudos for mentioning the P2P Names draft and the analysis of DNS and
privacy.  I will respond to the suggestion of a single pTLD for all
six P2P Names in a follow-up message.

==
hk

-------- Forwarded Message --------
Subject: An Update from the Name Resolution Trenches
Date: Mon, 26 Jan 2015 12:27:57 +0000
From: Hugo Maxwell Connery
To: christian, hellekin, tor-talk

The text below is also attached as a file, for reading in a
potentially nicer viewer.

== An Update from the Name Resolution Trenches ==

Summary:

In the Internet name resolution space, the only real
solutions for privacy are going to come from the overlay
communities, like Tor and GNUnet.  In other words, DNS is too
big to fail (or to change significantly).  Plus a suggestion
below [P2P].

Verbiage:

In response to its own declaration that "pervasive monitoring
is an attack on the Internet" (RFC 7258 [1]), the IETF has
established various working groups to examine how to fit privacy
protection onto existing, heavily used protocols.  The DPRIVE [2]
working group is addressing DNS.

DNS could be described as the largest highly available,
globally distributed, hierarchical name/value lookup
database ever built.  It is the beginning of almost
any interaction on the net.  And its architecture,
in both governance and protocol, is a privacy nightmare.

I have been participating a little in, and watching a lot of,
the DPRIVE working group.  My expectation is that a year's work
by the leaders of the standards sphere will yield two proposals
(I may well be wrong).  They are:

A. Query minimization.  That is, instead of asking the root for
www.example.org and getting a referral to .org's name servers,
one asks the root only for what it can answer (the NS records
of .org).  This continues down the tree (ask .org for the NS
records of example.org), and only the last name servers are
asked the full question (give me an IP address for
www.example.org).  A sketch follows below.

B. Offers of encryption, probably in TLS style, between the
client and the local recursive resolver (also sketched below).
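
To make A concrete, here is a minimal sketch in Python.  It is
illustrative only, and my reading of the idea rather than the
DPRIVE WG's text: it prints the sequence of questions a
minimizing resolver would send while walking down the tree,
rather than performing real lookups.

    def minimized_questions(name, final_rtype="A"):
        # Walk from the root down, asking each level only for the
        # next zone cut, and save the full question for last.
        labels = name.rstrip(".").split(".")
        questions = []
        for i in range(len(labels) - 1, 0, -1):
            zone = ".".join(labels[i:]) + "."
            questions.append(("NS", zone))
        questions.append((final_rtype, name.rstrip(".") + "."))
        return questions

    for rtype, qname in minimized_questions("www.example.org"):
        print(rtype, qname)
    # prints:
    #   NS org.
    #   NS example.org.
    #   A www.example.org.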

Both solutions will preserve backwards compatibility
(or existing architectures will not need to change
for a long time).  This is because DNS is that important
and that large.
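
And a minimal sketch of what B could look like from the client
side.  It assumes the dnspython library (version 2.0 or later,
which provides dns.query.tls) and a resolver that accepts DNS
over TLS; the resolver address is a placeholder, and DPRIVE had
not settled on a mechanism at the time of writing.

    import dns.message
    import dns.query
    import dns.rdatatype

    RESOLVER = "192.0.2.53"  # placeholder local recursive resolver

    query = dns.message.make_query("www.example.org.",
                                   dns.rdatatype.A)
    # Query and response travel inside a TLS session (port 853 by
    # default), so an observer between client and resolver sees
    # only encrypted bytes.
    response = dns.query.tls(query, RESOLVER, timeout=5)
    for rrset in response.answer:
        print(rrset)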

The key missing ingredient is encryption between the local
resolver and the authoritative servers.  Why that is unlikely
(or technically unwise) is argued by Paul Vixie below
[END;TL;DR].

The end result of all this is that little is really done to
protect the privacy of the end user.  Consider the best case,
where the local resolver offers encryption, keeps no logs of
queries, and implements query minimization.  The end result is
still worse than using Tor: you have a fairly static community
whose queries are observed in clear text on the wire between the
local resolver and the authoritative servers, and there are no
routing changes (i.e. the same exit node the whole time).

A recently published academic article surveys the wider space
of the name resolution communities and what they offer [4].
I highly recommend reading it if you are interested.

Post RFC 7258, it was claimed by many that the best solutions
for online privacy preservation would come from the tech
community rather than from legislatures.

However, the larger systems are just not nimble enough, and this
can be seen from the above, possibly erroneous, analysis.

The real solutions are coming from the overlay network
communities like Tor and GNUnet.  They have their own threat
models and are implementing solutions to meet them.  The big
boys cannot implement a single solution that meets these
varying threat models.

A key piece of these solutions is reserving the pseudo top-level
domains (pTLDs) used by these overlay networks (e.g. .onion), a
process which is under way [3].  A sketch of what such a
reservation asks of resolver software follows below.
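
A minimal sketch of the resolver side of such a reservation:
names under a reserved pTLD must be handed to the overlay, or
refused, and never leaked to the global DNS.  The pTLD list is
taken from the draft [3]; the overlay hook is hypothetical.

    RESERVED_PTLDS = {"onion", "exit", "gnu", "zkey", "i2p", "bit"}

    def route_name(name):
        tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in RESERVED_PTLDS:
            # Hypothetical hook into the overlay's own resolution;
            # the one thing a compliant stub must NOT do is send
            # this query to the DNS root.
            raise NotImplementedError(
                "resolve %s via its overlay" % name)
        return "dns"  # ordinary names take the normal path

    print(route_name("www.example.org"))  # -> dns
    # route_name("example.onion")         # -> handled by overlay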

One could argue that the overlay communities should not care at
all about the IANA, and should the IANA not be supportive of the
above or some similar proposal, then of course the communities
would just continue.  However, acknowledgement by the IANA, with
the pTLDs reserved, would be an important political victory:
wider public legitimacy.

P2P:

I have a suggestion for the Tor, GNUnet, I2P, and other overlay
communities.  It seems likely to me that the IANA will not be
too happy about reserving all of .onion, .exit, .gnu, .zkey,
.i2p and .bit.

I suggest that you ask for ONLY ONE pTLD, for example .p2p, and
then put all your specifics inside that, e.g.

 .onion.p2p
 .gnu.p2p

etc.  This would require work on your part (the renaming itself
is trivial; see the sketch below), but if that is the price of
public legitimacy in the eyes of the IANA, I humbly suggest that
the price is cheap.
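
A minimal sketch of that renaming, assuming nothing beyond the
mapping suggested above:

    CURRENT_TO_P2P = {
        "onion": "onion.p2p", "exit": "exit.p2p",
        "gnu": "gnu.p2p", "zkey": "zkey.p2p",
        "i2p": "i2p.p2p", "bit": "bit.p2p",
    }

    def to_single_ptld(name):
        # Rewrite e.g. "example.onion" to "example.onion.p2p";
        # leave ordinary DNS names untouched.
        head, _, tld = name.rstrip(".").rpartition(".")
        suffix = CURRENT_TO_P2P.get(tld.lower())
        if suffix is None:
            return name
        return "%s.%s" % (head, suffix) if head else suffix

    print(to_single_ptld("example.onion"))    # example.onion.p2p
    print(to_single_ptld("www.example.org"))  # unchanged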


Sincerely,  Hugo Connery
--

References:

1. https://tools.ietf.org/rfc/rfc7258.txt
2. https://datatracker.ietf.org/wg/dprive/charter/
3. https://datatracker.ietf.org/doc/draft-grothoff-iesg-special-use-p2p-names/?include_text=1
4. "NSA's MORECOWBELL: Knell for DNS"
https://gnunet.org/sites/default/files/mcb-en.pdf


END;TL;DR

From: DNSOP on behalf of Paul Vixie
Sent: Monday, 26 January 2015 08:14
To: dnsop
Subject: Re: [DNSOP] Followup Discussion on TCP keepalive proposals

TL;DR: i'd like to only behave differently if the other side signals its
readiness for it. in a "big TCP" model where thousands or tens of
thousands of sessions remain open while idle (even if only for a few
seconds), we are asking for application, library, kernel, RAM, CPU, and
firewall conditions that are not pervasive in the installed base --
which includes tens of millions of responders who will never be
upgraded, and whose operators are not reading this mailing list, and
will not be reading any new RFCs on the topic.

if we want better TCP/53 behaviour than that required in RFC 1035 4.2.2,
then we should define signalling that requests it, and we should behave
differently if that request is granted.

that's what "first, do no harm" means in an installed base that's
literally the size and shape of The Internet.
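
a minimal sketch of that opt-in pattern, in Python, assuming the
dnspython library; option code 11 is the value later registered
for edns-tcp-keepalive, which was still a draft at the time of
this thread, and the server address is a placeholder.

    import dns.edns
    import dns.message
    import dns.query
    import dns.rdatatype

    EDNS_TCP_KEEPALIVE = 11  # option code later given to this
    SERVER = "192.0.2.53"    # placeholder responder

    # signal readiness: an empty edns-tcp-keepalive option.
    query = dns.message.make_query(
        "www.example.org.", dns.rdatatype.A, use_edns=0,
        options=[dns.edns.GenericOption(EDNS_TCP_KEEPALIVE, b"")])
    response = dns.query.tcp(query, SERVER, timeout=5)

    # behave differently only if the responder granted the
    # request; an old-style responder that never signals gets
    # old-style behaviour.
    granted = any(opt.otype == EDNS_TCP_KEEPALIVE
                  for opt in response.options)
    print("keep session open" if granted
          else "close per RFC 1035 4.2.2")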

longer version:

John Heidemann wrote on Sunday, January 25, 2015 9:10 PM:

> ...
>
> We are, I think, in the lucky place of having a new feature (multiple
> DNS queries over TCP with pipelining and reordering) with SOME level of
> responder support and basically zero initiator use.
> Do we really need new signaling?

yes, i think so. you're only talking about old-style initiators here.
there are problems on the responder side that i worry more about,
because of the impact that new-style initiators could indirectly but
pervasively have on old-style initiators, due to the behaviour of
old-style responders.


> ... The other question is harm on the
> responder side.  That's why I was trying to get to the bottom of the
> assertion that DNS-over-TCP is inherently a DoS.

there may not be a bottom. existing responders who follow RFC 1035 4.2.2
are extremely weak, but are in the critical path for existing initiators
responding to TC=1 (or, in other cases where a UDP response is unusable
or untrustworthy, which i'm loath to describe in public.)

if a new-style initiator prefers TCP and keeps a connection open longer
than the time it takes to send just the queries it has in hand, and if
the responder is old-style, then it causes significant problems for
old-style initiators. denying service to a by-the-book RFC 1035 4.2.2
TCP responder is child's play. we must not do it on purpose.


> I haven't seen
> evidence supporting that claim,

i am out of ideas as to what that might require.


> ... and I think we can all recognize the
> installed base of HTTP to show that at least someone can make TCP work
> at scale on the server side.

i have not, and i don't think anyone else has either, said that TCP
cannot be made to work at scale. however, TCP/53 as described in RFC
1035 4.2.2 is not part of making DNS-over-TCP work at scale; quite the
opposite.


bind responders, since 4.8, have accepted pipelining, but with ordered
responses until a currently unreleased patch was put in recently. bind
responders through bind 8 did not read the next (potentially pipelined)
request out of the tcp socket until after they had sent their response
to the previous request, so there was no parallelism of any resulting
cache miss activities.

> Most implementations whose TCP we've examined (bind 9.9 and unbound)
> have performance problems when running over TCP.  But performance
> problems can be fixed incrementally and in place, unlike correctness
> issues where people fail.

the problems we must avoid involve servers whose source code you can't
get access to.


> Yes, there are definitely performance problems that will need to be
> fixed.  But performance has very different deployment issues
> than correctness does.

the problems we must avoid involve servers who will never be upgraded.

> ...
>
> I haven't seen anyone assert that TCP should become *mandatory* for
> future DNS.  If it's encouraged, or at least not discouraged, then I
> suggest we can abide a multi-year rollout.

the problems we must avoid involve servers operated by people who do
not read this mailing list, or new RFCs.

--
Paul Vixie




