In the late 1990s I was working on an OSI-like layering model for peer-to-peer networks. In the early 2000s, the Code Red worm hit the internet, and a person I hold in high regard, who was aware of some of my work, appealed to my sense of responsibility regarding the possible use of my algorithms in internet worms. After thorough consideration I decided to stop my efforts on a multi-layered pure-P2P stack and remove all information on the lowest-layer algorithm that I had come up with.
About half a decade later, when P2P and malware were starting to rather crudely come together, I discussed my algorithm with a security specialist at a conference, and he advised me to create a limited public paper on a hypothetical worm that would use this technology. Worms in those days were rather crude and, let's call it, 'loud', while my hypothetical Shai Hulud worm would be rather stealthy. I did some simulations and ended up sending a simple high-level textual description to some of my CERT contacts, so they would at least know what to look for. APT wasn't a thing yet in those days, so looking for low-footprint patterns that might hide a stealthy worm really was not a priority for anyone.
Now another half-decade has passed, and cryptocurrencies, Bitcoin, etc. have advanced P2P trust way beyond what I envisioned in the late 1990s. At the same time, the infosec community has evolved quite a bit, and a worm, even a stealthy one, should be within the scope of modern APT-focused monitoring. Further, I still believe the algorithms may indeed prove useful for the benign purposes that I initially envisioned them for: layered, trusted, pure P2P.
I won't, and in fact can't, give the exact details of the algorithm. Not only do I not want to make things too easy for malware builders; even if I wanted to, quite a bit of life happened between Code Red and now, and my original files were lost along the way, so I have to do things from memory (if anyone still has a copy of my original Shai Hulud paper, please drop me a message). As it is probably better to be vague than to be wrong, I won't give details I'm no longer completely sure about.
So let's talk about the two concepts that inspired the algorithm: the Van de Graaff generator and the Pac-Man video game. Many of you will know the Van de Graaff generator (picture above) as something purely fun and totally unrelated to IT. The metal sphere used in the generator, though, has a very interesting property: the electrons on a charged sphere end up spread perfectly evenly over its surface, and the mathematics that describe how this happens are basic high-school math. The first version of my algorithm was based on mapping the IPv4 address space onto a spherical coordinate system, and while the math was manageable, I ended up with the reverse of the problem you get when projecting a round globe onto a flat map: too many virtual IP addresses ended up at the poles.
Then something hit me: in the old Pac-Man game, if you went off the flat surface at the left edge, you came out on the right, and vice versa. If you take the IPv4 address space and use 16 bits for the X axis and 16 bits for the Y axis, you can create what basically is a Pac-Man sphere tile. If you then place the same tile around a center tile on all sides, you end up with a 3×3 square of copies of your Pac-Man tile. Now we get back to our electrons: it turns out that if we take a single electron on our central tile and take the position of that electron as the center of a new virtual tile, we can use that virtual tile as a window onto the electrodynamics relevant to that individual electron. Further, it turns out that if we disregard all but the closest N electrons, the dynamics of the whole Pac-Man sphere remain virtually the same.
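To make the wraparound concrete, here is a minimal sketch (my own illustration, not code from the original paper) of distance on such a Pac-Man tile: each axis is a 16-bit ring, so the shortest displacement between two coordinates may cross the edge of the tile.

```python
SIZE = 1 << 16  # 16 bits per axis, so the tile wraps at 65536


def toroidal_delta(a, b, size=SIZE):
    """Signed shortest displacement from a to b on a ring of the given size."""
    d = (b - a) % size
    if d > size // 2:
        d -= size  # going the other way around is shorter
    return d


def toroidal_distance(p, q, size=SIZE):
    """Euclidean distance between two (x, y) points on the Pac-Man tile."""
    dx = toroidal_delta(p[0], q[0], size)
    dy = toroidal_delta(p[1], q[1], size)
    return (dx * dx + dy * dy) ** 0.5
```

Note that (0, 0) and (65535, 0) are distance 1 apart, not 65535: the tile has no edges, which is exactly what makes the electron analogy work without poles.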
Now this was basically the core concept of part one of the algorithm. If we say that every node in a peer-to-peer network has two positions:
- A static position determined by its IP
- A dynamic position determined by the electrodynamic interaction with its peers.
A node would start off with its dynamic position set to its static position, and would start looking for peers by scanning random positions. If another node was discovered, information on the dynamic positions of the node itself and its closest neighbours would be exchanged. Each node would thus keep in contact only with its N closest peers in terms of virtual position. This interaction would make the connected nodes' virtual locations spread relatively evenly over the address space.
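The position update can be sketched roughly as below. This is my own illustration under stated assumptions: the inverse-distance force law, the step size, and the default N are all choices I made for the sketch, not details from the original algorithm.

```python
SIZE = 1 << 16  # each axis is a 16-bit ring


def wrap_delta(a, b):
    """Signed shortest displacement from a to b on one axis of the torus."""
    d = (b - a) % SIZE
    return d - SIZE if d > SIZE // 2 else d


def repulsion_step(pos, peer_positions, n_closest=8, step=0.1):
    """One iteration of the dynamic-position update: nudge `pos` away from
    its n_closest peers, like charges repelling on the Pac-Man sphere.
    Positions are (x, y) floats; force law and step size are illustrative."""
    deltas = [(wrap_delta(pos[0], p[0]), wrap_delta(pos[1], p[1]))
              for p in peer_positions]
    deltas.sort(key=lambda d: d[0] ** 2 + d[1] ** 2)
    fx = fy = 0.0
    for dx, dy in deltas[:n_closest]:
        r2 = float(dx * dx + dy * dy) or 1.0  # guard against coincident peers
        fx -= dx / r2  # repulsion: push away from the peer
        fy -= dy / r2
    return ((pos[0] + step * fx * SIZE) % SIZE,
            (pos[1] + step * fy * SIZE) % SIZE)
```

Iterating this step on every node is what spreads the dynamic positions out evenly, just as the electrons spread over the generator's sphere.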
Once stabilized, there would be N triangles between our node and its closest peers. Each individual triangle could be divided into 6 smaller triangles, which could then be divided among the nodes at its corners (two small triangles per corner node).
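One standard way to split a triangle into 6 smaller pieces is to join its centroid to the three vertices and the three edge midpoints; I am assuming that construction here for illustration (the original paper may have subdivided differently). Wraparound is ignored for clarity.

```python
def subdivide_triangle(a, b, c):
    """Split triangle ABC into 6 equal-area sub-triangles by connecting the
    centroid to the vertices and the edge midpoints. Points are (x, y) tuples."""
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    g = ((a[0] + b[0] + c[0]) / 3.0, (a[1] + b[1] + c[1]) / 3.0)  # centroid
    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    # walk around the boundary, pairing each half-edge with the centroid
    return [(a, mab, g), (mab, b, g), (b, mbc, g),
            (mbc, c, g), (c, mca, g), (mca, a, g)]
```

With this construction each corner node can claim the two sub-triangles touching its own vertex, which is how 6 pieces per triangle end up as 2 per node.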
Now comes the stealthy part of the algorithm, the part that had me and others so scared that I felt it important to keep this simple algorithm hidden for well over a decade: given that there is hardly any overlap between the triangles, each node would have exactly 2N small triangles worth of address space to scan for peers, and each of these triangles would be scanned by only one connected peer.
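Scanning an assigned triangle could then look something like this sketch: pick a uniform random point inside the triangle (the standard barycentric reflection trick) and map it back to an IPv4 address. The (X, Y)-to-address mapping shown, X as the high 16 bits and Y as the low 16 bits, is a hypothetical choice of mine, not a detail from the original paper.

```python
import random


def random_point_in_triangle(a, b, c, rng=random):
    """Uniformly sample a point inside triangle ABC (barycentric trick)."""
    u, v = rng.random(), rng.random()
    if u + v > 1:
        u, v = 1 - u, 1 - v  # reflect into the triangle half of the square
    x = a[0] + u * (b[0] - a[0]) + v * (c[0] - a[0])
    y = a[1] + u * (b[1] - a[1]) + v * (c[1] - a[1])
    return x, y


def scan_candidate(triangle):
    """Pick one IPv4 address inside one of this node's assigned triangles."""
    x, y = random_point_in_triangle(*triangle)
    x, y = int(x) & 0xFFFF, int(y) & 0xFFFF
    ip = (x << 16) | y  # hypothetical mapping: X = high 16 bits, Y = low 16
    return "%d.%d.%d.%d" % (ip >> 24 & 255, ip >> 16 & 255,
                            ip >> 8 & 255, ip & 255)
```

Because each triangle belongs to exactly one node, no address is probed twice by the connected swarm: that lack of duplicate probes is precisely the low-footprint pattern a defender would have to look for.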
The math for this algorithm is even significantly simpler than the simple high-school math needed for the 3-dimensional sphere. I hope this simple algorithm can prove useful for P2P design, and I trust that in 2014 the infosec community has grown sufficiently to deal with the stealthiness this algorithm would imply if it were used in malware. Further, I believe that advances in distributed trust have finally made the use of this algorithm in solid P2P architectures a serious option. I hope my timing in finally publishing this potentially powerful P2P algorithm is on the mark, and that I am publishing it neither before the infosec community is ready nor after the P2P development community needed it.