From: Guido Witmond
Subject: Re: [GNU/consensus] [whistle] I.0 Looking Through The Prism
Date: Sat, 27 Jul 2013 13:26:57 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.12) Gecko/20130116 Icedove/10.0.12

On 26-07-13 21:37, hellekin wrote:
> On 07/26/2013 12:15 PM, Guido Witmond wrote:
> 
>> You should read Binding Chaos by Heather Marsh.... Plenty of
>> answers there.
>> http://georgiebc.wordpress.com/2013/05/24/binding-chaos/
> 
> *** I promised to translate it. I need to take the time.
> 
>> I wrote about it on the libtech list in: 
>> https://mailman.stanford.edu/pipermail/liberationtech/2013-July/010335.html
> 
> ***
> 
> Brilliant! Allow me to <quote>
> 
> The problem with the web is that it favours a central distribution
> model and forgoes geographical caching. For example, if I read an
> interesting blog and send the URL to a friend in the same room, the
> data that forms the blog has to travel all the way from the original
> site - over all the same paths - a second time for my friend. Just so
> he can have an identical copy.
> 
> He gets an identical copy of the important bits that matter: the blog.
> He might get different bits that don't matter: the advertisements.
> 
> If we had an easy way for me to transmit the blog to my friend, the
> important bits would have an almost zero cost of transport, while the
> unimportant bits would still need the expensive path.
> 
> </quote>
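
To make that quoted idea concrete, here is a minimal sketch in Python of
fetching content by hash, trying a peer in the same room before falling
back to the expensive long-haul path. The peer API and the discovery of
nearby peers are hypothetical stand-ins, not an existing library.

    import hashlib
    import urllib.request

    def fetch(content_hash, origin_url, nearby_peers):
        """Fetch a blob by its SHA-256 hex digest, preferring nearby
        peers over the long-haul path back to the origin server."""
        for peer in nearby_peers:
            data = peer.get(content_hash)  # hypothetical LAN-peer API
            if data is not None and hashlib.sha256(data).hexdigest() == content_hash:
                return data  # verified identical copy, near-zero transport cost
        # Fallback: the same bytes travel the whole way from the origin again.
        with urllib.request.urlopen(origin_url) as resp:
            return resp.read()

Because the copy is verified by hash, it doesn't matter which peer it
came from; only the unimportant, personalised bits (the advertisements)
would still need the origin.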
> 
> It reminded me of the model for bandwidth allocation on large trunks:
> if you're an ISP, your incentive is to maximize your available
> bandwidth, in order to be allocated more bandwidth faster. Again, a
> model that favors a few giant operators rather than many tiny ones.



> So there, you have it: cutting out the middleman by enforcing
> peer-to-peer distribution. Notice how the concepts of the cloud and
> "web apps" do exactly the opposite. Unless the app is UnHosted.

It's not that I want to cut out the middleman completely; I need those
long-haul links to read interesting blogs. My neighbours are nice people
but they don't write enough interesting blogs on cryptography ;-)

I want to avoid the waste of replicating all data all the time, and so
become less dependent on that middleman.


Creating such a decentralised system is hard. It's easier to throw more
hardware at it, which again favours the central model and makes
publishing expensive.



Freenet has an interesting sharing/replication model for this. It
replicates from the publishing node towards the readers, making popular
content spread out. That comes at the cost of deleting unpopular
content; it's the price of sender untraceability. With Freenet you don't
know what's in your cache, so there's a limit to how much of your
precious disk space you'll assign to Freenet's data store. You're not
rewarded for having a large cache.
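
To make that trade-off concrete, here is a minimal sketch (my own
illustration, not Freenet's actual mechanism) of a fixed-size node cache
where requested content stays and unpopular content is silently evicted:

    from collections import OrderedDict

    class NodeCache:
        """Fixed-size store: popular keys survive, unpopular ones go."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.store = OrderedDict()  # key -> data, least recently used first

        def get(self, key):
            if key not in self.store:
                return None  # not here; a real node would forward the request
            self.store.move_to_end(key)  # a request marks the key as popular
            return self.store[key]

        def put(self, key, data):
            if key in self.store:
                self.used -= len(self.store[key])
            self.store[key] = data
            self.store.move_to_end(key)
            self.used += len(data)
            while self.used > self.capacity and len(self.store) > 1:
                _, old = self.store.popitem(last=False)  # evict unpopular data
                self.used -= len(old)

Nothing in this scheme rewards a node for offering a bigger cache; a
larger store only delays eviction.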

I want something with a global distributed cache, like Freenet, but one
that allows me to set a 'Keep-flag' on a file. My cache won't expunge it
and I can access it like a file on disk. It is available to others too,
like a torrent seed. Popular content will be shared by many, giving me a
light load. Unpopular content gives a light load too. If I delete it, it
will eventually get purged from the cache.

This allows me to match my disk space with my caching needs.
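
Continuing the sketch above, the Keep-flag could simply be a set of
pinned keys that the eviction loop skips; again, the names and the API
are my own illustration, not an existing system:

    class PinnedNodeCache(NodeCache):
        """Keys with the Keep-flag set never leave my disk: readable
        like local files, seedable like a torrent."""

        def __init__(self, capacity_bytes):
            super().__init__(capacity_bytes)
            self.pinned = set()

        def keep(self, key):
            self.pinned.add(key)      # e.g. my own blog posts

        def release(self, key):
            self.pinned.discard(key)  # back to ordinary, evictable cache

        def put(self, key, data):
            if key in self.store:
                self.used -= len(self.store[key])
            self.store[key] = data
            self.store.move_to_end(key)
            self.used += len(data)
            # Evict only unpinned entries, least recently used first.
            # If everything left is pinned, I've simply chosen to spend
            # that much disk on publishing.
            for victim in list(self.store):
                if self.used <= self.capacity:
                    break
                if victim not in self.pinned:
                    self.used -= len(self.store.pop(victim))

Pinned bytes count against the same disk budget, which is exactly that
match between disk space and caching needs.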

My own blog ramblings will get stored on my disk (with the keep flag
set). When they get popular, they will spread out, making it possible
to reach a large audience with a small computer and a relatively thin
connection.


Cheers, Guido.


