<a href="http://groups.google.com/group/openmanufacturing/t/50f49ae67c75b034" target="_blank">RS: structured mapping of real p2p internet
infrastructure</a>
<ul><span style="font-weight: bold;">Giovanni Lostumbo <<a href="mailto:giovanni.lostumbo@gmail.com" target="_blank">giovanni.lostumbo@gmail.com</a>></span>
Jan 09 08:39AM -0600
<br>
Hello,<br>
<br>
As there are many ways to map an infrastructure, one idea I've thought<br>
about was having all p2p software and operating system packages either<br>
pre-installed on a computer and/or available on a GNU GPL distro (e.g.<br>
one using aspects of GNUnet). The attached illustration is based on a<br>
partial-mesh topology with 3 links/node in a 9-computer network (I<br>
merely included duplicate software lists at each PC to illustrate<br>
modular and compatible PCs in any network). This could be scaled up,<br>
but might need different software that can readily adapt to different<br>
topologies.<br>
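The partial mesh above can be sketched in a few lines. This is a minimal Python sketch, assuming each node initiates links to its next three ring neighbours (one plausible reading of "3 links/node"; the post does not specify the exact wiring), plus a breadth-first check that the mesh stays routable without any central hub.

```python
# 9-node partial mesh: node i initiates links to i+1, i+2, i+3 (mod 9).
# This wiring is an assumption for illustration, not taken from the post.
from collections import deque

N = 9                # computers in the network
LINKS_PER_NODE = 3   # links each node initiates

# Build an undirected adjacency list.
adj = {i: set() for i in range(N)}
for i in range(N):
    for k in range(1, LINKS_PER_NODE + 1):
        j = (i + k) % N
        adj[i].add(j)
        adj[j].add(i)

def connected(adj):
    """Breadth-first search: can node 0 reach every other node?"""
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)

print(connected(adj))  # True: the mesh is fully routable with no hub
```

With this wiring every node ends up with six neighbours (three initiated, three received), so any single machine can drop out and the rest still reach each other.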
<br>
Giovanni Lostumbo<br>
<br><p> </p></ul>
<span style="font-weight: bold;">Giovanni Lostumbo <<a href="mailto:giovanni.lostumbo@gmail.com" target="_blank">giovanni.lostumbo@gmail.com</a>></span>
Jan 09 03:36PM -0600
<br>
Also, a typo in the previous message: XMPP, not XMMP. Additionally,<br>
Thunderbird/Evolution mail clients could be included too. I am<br>
replacing 802.11s with Ronja to address health concerns. All parts<br>
should be modular, anyway. Platform compatibility is of course the<br>
issue to discuss.<br><a href="http://p2pfoundation.ning.com/forum/topics/ronja-open-source-lineofsight?xg_source=activity" target="_blank">http://p2pfoundation.ning.com/forum/topics/ronja-open-source-lineofsight?xg_source=activity</a><br>
My idea is to follow the evolution of the internet's broadest fields,<br>
track some of the latest technologies, and map and combine them for<br>
new functions (see the attachment called p2p convergence, on the right<br>
panel). The idea is inspired by Carl Woese's term "innovation sharing<br>
protocols"<br><a href="http://arxiv.org/abs/q-bio/0605036" target="_blank">http://arxiv.org/abs/q-bio/0605036</a><br>
and his "denim fence" model of evolution, as in the attached<br>
illustration.<br>
<br>
Two of last year's developments in HDD storage capacity and<br>
microprocessors are the advance beyond the 2.19-terabyte limit:<br><a href="http://www.extremetech.com/article2/0,2845,2373917,00.asp" target="_blank">http://www.extremetech.com/article2/0,2845,2373917,00.asp</a><br>
(though 64-bit OSs, GPT, and UEFI are part of the newer platform<br>
requirements that handle this by default)<br>
and recent developments in the Bose-Einstein condensate:<br><a href="http://science.slashdot.org/story/10/11/26/2059208/German-Scientists-Create-Bose-Einstein-Condensate-Using-Photons" target="_blank">http://science.slashdot.org/story/10/11/26/2059208/German-Scientists-Create-Bose-Einstein-Condensate-Using-Photons</a><br>
(which may extend Moore's Law further into the future). If so, it may<br>
be possible to combine these technologies so that a client does much<br>
more than a server would typically do.<br>
A third example, less of a breakthrough and more a sequential<br>
development in consumer microprocessor capabilities, is AMD's 16-core<br>
Bulldozer and Intel's Sandy Bridge 8-core/16-thread chips being<br>
released in 2011. So some of the obstacles here are both ideological<br>
and technological. Some available technologies could reproduce the<br>
internet along the lines of FidoNet, but the needs/wants of today's<br>
users either depend on or prefer heavy bandwidth for streaming and<br>
intensive processing. As Simon at ComputerworldUK says of some p2p<br>
software, "The field is immature, but there are exciting experiments<br>
in progress." Looking at the bottlenecks of current technologies and<br>
documenting what is needed for the "minimal infrastructure" is an<br>
exciting forefront.<br>
<br>
By combining some of these technologies, it may be possible to address<br>
problems such as Wikipedia's storage needs (mentioned recently; can<br>
someone provide a link?) by having p2p versions of those servers. The<br>
same could follow for Sourceforge.net. A theory I'm considering here:<br>
if the data needs of Wikipedia or another large site such as Amazon or<br>
an online store increase linearly, and breakthroughs in hard-drive<br>
capacity such as the link above extend consumer storage capabilities<br>
exponentially (e.g. multi-layer cells/3D disk writing), then, as some<br>
next-gen technologies physically "shrink", it may be possible to<br>
transmit enough internet application data through 1 Gbps or 10 Gbps^n<br>
fibre-optic wires, or wireless mesh protocols such as Ronja. If<br>
tomorrow's off-the-shelf computers can handle the needs of today's<br>
mega server farms, it's possible that the exponential increases in<br>
multi-core computers with enough storage will be able to manage the<br>
needs of a real p2p infrastructure's most commonly used web<br>
applications.<br>
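The linear-vs-exponential argument above can be made concrete with some toy arithmetic. All the numbers below are assumptions invented for illustration (the post gives no figures); the point is only that an exponential capacity curve eventually crosses a linear demand line.

```python
# Toy crossover model: a site's dataset grows linearly while consumer
# drive capacity doubles on a fixed cadence. Every figure here is an
# assumed placeholder, not a measurement.
site_tb = 100.0        # assumed dataset size today, in TB
site_growth_tb = 50.0  # assumed linear growth per year, in TB
drive_tb = 3.0         # assumed consumer drive capacity today, in TB
doubling_years = 2.0   # assumed capacity doubling period, in years

year = 0
while drive_tb < site_tb:
    year += 1
    site_tb += site_growth_tb
    drive_tb *= 2 ** (1 / doubling_years)

print(f"consumer drive overtakes the dataset after ~{year} years")
```

Under these made-up inputs a single consumer drive overtakes the whole dataset after about 17 years; with faster doubling or slower site growth the crossover arrives much sooner, which is the bet the paragraph above is making.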
<br>
Trends in server architecture include low-power, high-density<br>
processors: 512 Intel Atom chips running at less than 2 kW:<br><a href="http://www.anandtech.com/print/3768/" target="_blank">http://www.anandtech.com/print/3768/</a><br>
And potentially, ARM microprocessors applied to this technique, or<br>
Intel chips that run at 1-100 mW per core and are stacked similarly to<br>
SeaMicro's setup. Converging these trends, it's possible that a future<br>
consumer desktop would run at less than 1 watt (48 cores using 2 mW<br>
each) and could handle massively parallel p2p internet storage<br>
compression and decompression (or raw uncompressed data functions)<br>
with up to 18 exabytes of storage capacity using 64-bit processors and<br>
the accompanying RAM possible (16 exabytes).<br>
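The two exabyte figures above are in fact the same 64-bit limit quoted in different units, which a few lines of arithmetic confirm; the 48-core / 2 mW pairing is the estimate from the paragraph, not a real part.

```python
# A 64-bit address space caps out at 2**64 bytes, which is exactly
# 16 EiB (binary) or about 18.4 EB (decimal): the "16 exabytes" of RAM
# and "18 exabytes" of storage quoted above are one limit, two units.
address_space = 2 ** 64                   # bytes addressable by a 64-bit CPU
eib = address_space / 2 ** 60             # exbibytes (binary exabytes)
eb = address_space / 10 ** 18             # decimal exabytes

cores, mw_per_core = 48, 2                # assumed low-power core budget
total_watts = cores * mw_per_core / 1000  # total draw for the estimate

print(eib, round(eb, 1), total_watts)     # 16.0 18.4 0.096
```

So the hypothetical 48-core desktop would draw about 96 mW for its cores, comfortably under the 1-watt figure in the text.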
But in the near term, it may be possible to approach that as follows:<br>
an 8- or 16-core Zambezi/Bulldozer chip would allow faster compression<br>
and decompression of .rar/.zip, etc., archives of p2p Wikipedia<br>
discussion/page edits transmitted across networks, addressing<br>
bandwidth bottlenecks however possible. This isn't an advertisement,<br>
just an academic exercise in alternative network infrastructure<br>
theories.<br>
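As a minimal sketch of that compression step, the following uses Python's zlib (standing in for the .rar/.zip tools mentioned above; the page-edit text is an invented example) to round-trip an edit before it would be sent to peers.

```python
# Compress a Wikipedia-style page edit before transmitting it to peers,
# then verify the round-trip is lossless. The edit text is invented.
import zlib

edit = ("== Discussion ==\n"
        "Replaced 802.11s with Ronja in the topology diagram.\n") * 50

payload = zlib.compress(edit.encode("utf-8"), level=9)
restored = zlib.decompress(payload).decode("utf-8")

assert restored == edit  # lossless round-trip
print(f"{len(edit.encode('utf-8'))} bytes -> {len(payload)} bytes")
```

Highly repetitive discussion-page text like this shrinks dramatically, which is exactly the case where spending client CPU cycles to save bandwidth pays off.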
<br>
Giovanni<br>
<br>
<br clear="all"><br>-- <br>P2P Foundation: <a href="http://p2pfoundation.net">http://p2pfoundation.net</a> - <a href="http://blog.p2pfoundation.net">http://blog.p2pfoundation.net</a> <br>
<br>Connect: <a href="http://p2pfoundation.ning.com">http://p2pfoundation.ning.com</a>; Discuss: <a href="http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org">http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org</a><br>
<br>Updates: <a href="http://del.icio.us/mbauwens">http://del.icio.us/mbauwens</a>; <a href="http://friendfeed.com/mbauwens">http://friendfeed.com/mbauwens</a>; <a href="http://twitter.com/mbauwens">http://twitter.com/mbauwens</a>; <a href="http://www.facebook.com/mbauwens">http://www.facebook.com/mbauwens</a><br>
<br>Think tank: <a href="http://www.asianforesightinstitute.org/index.php/eng/The-AFI">http://www.asianforesightinstitute.org/index.php/eng/The-AFI</a><br><br><br><br><br>