From: carlo von lynX
Subject: Re: [GNU/consensus] [SocialSwarm-D] Fwd: FYI: Securing the Future of the Social Web with Open Standards
Date: Thu, 25 Jul 2013 10:37:09 +0200
User-agent: Mutt/1.5.20 (2009-06-14)

On Thu, Jul 25, 2013 at 12:58:23AM -0700, elijah wrote:
> On 07/24/2013 11:01 AM, carlo von lynX wrote:
> > email needs to be discontinued in the long run. it doesn't serve any of
> > the purposes it was constructed for. it gives the attacker a full view
> > of the social network, a view into the content by default and it also
> > fails at delivering to many recipients promptly and to handle spam.
> 
> This is a straw man argument. Yes, email as currently practiced has
> problems, but there is no reason email cannot be reformed. There is

email has a >25 year track record of not being reformable. it is just
as entrenched in the internet as FTP or TELNET were. in 1991 FTP was the #1
protocol on the internet; now it exists only for aficionados. in 1993
it looked like ssh was the fringe paranoia technology of the free software
extremists while all normal people were using telnet and rsh. today
you have to explain that there was something before ssh came along.
facebook has already obsoleted millions of emails. if it's not us
taking email to the grave, somebody like facebook will.

> enough interest in secure email these days that I am certain the
> problems will be solved. The needed pieces are (1) opportunistic

secure email is fine, just don't use any of the broken old protocols
for it.

> encryption via automatic key discovery/validation, (2) enforced StartTLS

if the key isn't the address itself, there is no safe way to perform key
validation. x.509 is a failure; you can't trust it.
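to make the "key is the address" point concrete, here's a minimal python sketch (names and sizes are illustrative; the random bytes stand in for a real public key, the way tor onion addresses or gnunet identifiers work). validation degenerates into a hash comparison - no x.509 chain involved:

```python
# sketch only: the address is derived from the public key itself,
# so checking a key against an address needs no certificate authority.
import hashlib, base64, os

def address_from_key(pubkey: bytes) -> str:
    # derive a short, human-transferable address from the key
    digest = hashlib.sha256(pubkey).digest()
    return base64.b32encode(digest[:10]).decode().lower()

def key_matches_address(pubkey: bytes, address: str) -> bool:
    # either the hash matches or it doesn't - nothing to "trust"
    return address_from_key(pubkey) == address

pubkey = os.urandom(32)          # stand-in for a real public key
addr = address_from_key(pubkey)
assert key_matches_address(pubkey, addr)
assert not key_matches_address(os.urandom(32), addr)
```

anyone who hands you a different key for that address fails the check immediately, which is exactly the property x.509 can't give you.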
 
even if you use starttls, you are still making direct links from the sending
to the receiving server. there are two bugs here: (1) the path and metadata
are exposed, (2) servers keep an important role, which is bad because
servers are prone to prism contracts.

and don't call me paranoid, because these weeks we are finding out
the situation is WORSE than i thought when we last met in amsterdam!
so what i said back then WASN'T PARANOID ENOUGH!

> (3) meta-data resistant routing. There are a couple good proposals on
> the table for #1, postfix already supports #2 via DANE, and there are
> four good ideas for #3 (auto-alias-pairs, onion-routing-headers,
> third-party-dropbox, mixmaster-with-signatures [1]).

as long as it is backwards compatible with plain old unencrypted email
we are unnecessarily risking downgrade attacks. we are also exposing
our new safe mail system to the st00pid spam problems of the past.

email compatibility should at most go as far as speaking IMAP or POP3
to our localhost onion router.
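for illustration, assuming a hypothetical local daemon that exposes IMAP and SMTP only on the loopback interface (both port numbers are made up), the mail client configuration would never point at a remote provider at all:

```
# ~/.muttrc -- hypothetical local onion-router gateway; ports are made up
set folder   = "imap://127.0.0.1:1143/"    # mail is fetched from the local daemon
set smtp_url = "smtp://127.0.0.1:1025/"    # outgoing mail enters the overlay here
```

the client keeps its familiar IMAP/SMTP view while everything past localhost travels over the anonymizing network.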

people prefer facebook mail anyway, so i presume they'll be fine
with retroshare mail or i2p mail or whatever we come up with in the
next weeks.

> I do want to note that email has stood the test of time when it comes to
> many recipients. I remember hosting email lists with 100k subscribers
> and pushing millions of messages monthly on an ancient machine from
> 1998. Worked like a charm.

sure, but how many hours did it take to deliver all of those messages?
just because you can barely survive without multicast doesn't mean you
should make a habit of it and stick to things that just weren't bad
enough to get replaced. well, email is getting replaced today - and i
don't want to be on the side of the ones getting replaced.

> [1] details on ideas for meta-data resistant routing in a federated
> client/server architecture

fine, but the federated client/server architecture is unnecessary and
servers are always prone to getting tapped. if you make servers
sufficiently dumb then they're essentially just some more nodes in
the network and there is no technical reason to distinguish clients
and servers much.

> * Auto-alias-pairs: Each party auto-negotiates aliases for communicating
> with each other. Behind the scenes, the client then invisibly uses these
> aliases for subsequent communication. The advantage is that this is
> backward compatible with existing routing. The disadvantage is that the
> user's server stores a list of their aliases. As an improvement, you
> could add the possibility of a third party service to maintain the alias
> map.

sounds like a similar effort to setting up multicast trees, only trees
are more useful because they solve the distribution-to-many challenge.
gnunet provides this in the 'mesh' module.

> * Onion-routing-headers: A message from user A to user B is encoded so
> that the "to" routing information only contains the name of B's server.
> When B's server receives the message, it unwraps (unencrypts) a
> supplementary header that contains the actual user "B". Like aliases,

this, i think, is the default behaviour of tor and gnunet. gnunet in
particular lets you choose how many onion layers you need per message -
so you can trade off freely between paranoid data and low-security,
high-bandwidth or realtime data.
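a toy sketch of that layering, with dict-wrapping standing in for real encryption (no actual crypto here): each relay peels exactly one layer and learns only the next hop, and the sender picks how many layers each message gets.

```python
# toy model of per-message onion layers - wrapping stands in for encryption.
def wrap(message, path):
    # innermost layer holds the message; each outer layer names one hop
    layer = {"deliver": message}
    for hop in reversed(path):
        layer = {"next": hop, "payload": layer}
    return layer

def peel(layer):
    # what a single relay sees: the next hop and an opaque payload
    return layer["next"], layer["payload"]

# sender chose three layers for this message; it could choose fewer
onion = wrap("hello", ["relayA", "relayB", "recipient"])
hop, rest = peel(onion)
print(hop)   # relayA - and nothing about relayB or the recipient
```

the per-message knob is just `len(path)`: more hops for paranoid data, fewer for realtime traffic.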

> this provides no benefit if both users are on the same server. As an

both users should only be on the same node if they live in the same
flat sharing the same LAN.  ;)  anything else is an ideological
distortion of the topology which harms the security of the participants.

> improvement, the message could bounce around intermediary servers, like
> mixmaster.

the choice of such intermediary servers must not be left to the
servers themselves, or the attack vector is simple: impede the
origin server from communicating with any servers except the ones run
by the NSA -> the NSA finds out where your messages are going, no
matter how many onion slices you added.

> * Third-party-dropbox: To exchange messages, user A and user B negotiate
> a unique "dropbox" URL for depositing messages, potentially using a
> third party. To send a message, user A would post the message to the
> "dropbox". To receive a message, user B would regularly poll this URL
> to see if there are new messages.

"URL" is the wrong term here. a dropbox would be a node in the network,
so it is a public-key address. you only use a dropbox if the thing you
want to store isn't suitable for the distributed hash table. regular
P2P apps use the DHT because it is redundant and doesn't depend on the
"dropbox" staying up.
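a minimal sketch of that, assuming an in-memory stand-in for the DHT and an out-of-band shared secret between A and B (both assumptions, not real gnunet APIs): the "dropbox" is just a key derived from the secret, replicated across nodes, not a URL at a single host.

```python
# toy DHT: the dropbox address is a derived key, and several independent
# nodes hold the value, so no single "dropbox" server has to stay up.
import hashlib

class ToyDHT:
    def __init__(self, replicas=3):
        self.nodes = [dict() for _ in range(replicas)]

    def put(self, key: bytes, value: bytes):
        kid = hashlib.sha256(key).hexdigest()
        for node in self.nodes:       # replicate for redundancy
            node[kid] = value

    def get(self, key: bytes):
        kid = hashlib.sha256(key).hexdigest()
        for node in self.nodes:       # any surviving replica will do
            if kid in node:
                return node[kid]
        return None

shared_secret = b"negotiated-by-A-and-B"          # hypothetical, out of band
dropbox_key = hashlib.sha256(b"dropbox:" + shared_secret).digest()

dht = ToyDHT()
dht.put(dropbox_key, b"message from A")           # A deposits
print(dht.get(dropbox_key))                       # B polls: b'message from A'
```

nobody who lacks the shared secret can even compute which key to watch, and losing one replica node loses nothing.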

> * Mixmaster-with-signatures: Messages are bounced through a
> mixmaster-like set of anonymization relays and then finally delivered to
> the recipient's server. The user's client only displays the message if
> it is encrypted, has a valid signature, and the user has previously
> added the sender to an 'allow list' (perhaps automatically generated from
> the list of validated public keys).

i presume you want to let the mixmaster itself decide where things are
sent. this was okay in the past decade, but today it isn't safe,
considering the kind of attack i described above.

P2P technology has made huge steps forward in the past decade, and
understanding onion routing is not enough to grasp all the scientific
progress that has happened since. i, myself, am not an expert - i am
just reflecting some gotchas from reading gnunet's university papers,
and i bet christian or others can improve my critique of your idea of
retrofitting onion routing on top of SMTP.

other than that, gnunet already does operate over SMTP if necessary,
so although i wouldn't recommend it, you can already do this stuff.

since we have so many nice recipients of these emails, why don't we
also add libtech and unlike-us?  ;) 

-- 
»»» psyc://psyced.org/~lynX »»» irc://psyced.org/welcome
 »»» xmpp:address@hidden »»» http://my.pages.de/me


