
Why HTTPS everywhere is a horrible idea (for now).


While privacy is a valuable thing, and while encryption in general helps improve privacy and can be used to help improve security, in this blog post I will discuss how more encryption can actually harm you when it is used with a fundamentally flawed public key infrastructure. Before we go on and discuss what the problems are, a bit of background.

When confidential communication is needed between, for example, your web browser and your bank, how do your browser and the bank's web server achieve this? The following steps take place (a minimal sketch of these three steps follows the list):

  • Your browser will ask the internet's Domain Name System (DNS) for the IP address of 'www.yourbank.com'. The Domain Name System will resolve the name and come back with an IP address.
  • Your browser will initiate a TCP connection to the IP address it got back from the Domain Name System.
  • Once the TCP connection is established, the client and server will initiate the ‘handshake’ phase of the Transport Layer Security protocol.
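For the concretely minded, here is a minimal Python sketch of those three steps using only the standard library (the host name is of course hypothetical):

```python
import socket
import ssl

host = "www.yourbank.com"  # hypothetical host name

# 1. Ask DNS for an IP address for the host name.
ip = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]

# 2. Establish a TCP connection to that IP address.
tcp = socket.create_connection((ip, 443))

# 3. Run the TLS handshake on top of the TCP connection.
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(tcp, server_hostname=host)  # certificate checked against the CA store
print(tls.version(), tls.cipher())
```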

After the handshake phase, everything should be fine and dandy, but is it? What would an adversary need to do to defeat the confidentiality of your connection? Given that even without encryption the adversary would need access to the transmitted data in order to read it, we shall assume the adversary is sniffing all network traffic. Now the first thing the adversary needs to do to defeat the above setup is to fool your browser into thinking it is your bank. It can do this quite easily, given that the Domain Name System (in its most basic form) runs on top of the User Datagram Protocol (UDP), a trivial connection-less protocol that can effortlessly be spoofed to make your browser believe your bank's server is running on the attacker's IP. So now, after the TCP connection has been established to what your browser believes is your bank, the TLS handshake begins. Our attacker could try to impersonate our bank, or it could, and this is the scenario we shall look at, take the role of 'man in the middle'. That is, next to making your browser think it is your bank, it will actually connect to your bank and relay content between your browser and your bank, either just so it can snoop on your traffic or until it is ready to strike by changing transaction content. But let's not get ahead of ourselves. Our client has connected to our attacker and our attacker has made a connection to our bank, so the attacker's machine can act as man in the middle. What attack vectors can it use?

  • It can launch a downgrade attack by offering an inferior subset of the ciphers offered by the real client to the real server. The cipher suite used in the connection can thus be made the weakest common denominator between the real client and the real server. This weakens the strength of the encryption, or forces the use of a cipher suite that can later be broken further by other man-in-the-middle tricks.
  • It can provide the client with a rogue certificate for 'www.yourbank.com'. This is harder to pull off, but doing so would leave the attacker with a fully decrypted stream of traffic.

The last scenario is often described as being relatively unlikely. Let me try to elaborate why it is not. The certificate offered by the attacker has some security in it: your browser won't accept it unless it is signed by a 'trusted' Certificate Authority (CA). Your browser will trust only about 50 or so CAs, so that sounds kind of OK, doesn't it? Well, there is another catch with the CA based public key infrastructure: not only will your browser trust ANY of these 50 CAs to sign ANY domain, it will also trust many sub-CAs to do so. In total there should be over 600 certificate authorities in over 50 countries that might be used to sign a rogue certificate for your domain. The problem with trusting ANY CA to sign ANY domain arises from the mathematical properties of probability calculus for such cases. What these properties basically boil down to is the following horrific fact:

The probability that none of 600 equally trustworthy CAs could somehow be persuaded, tricked or compromised by our attacker in a way that would allow it to get a rogue certificate signed is equal to the probability for a single one of these CAs, raised to the power of 600.

So if I can put 100% trust in each of these 600 CAs, the cumulative trust is 100%, fine. 99.99%? We are at 94%, which is still pretty decent. 99.9%? Now things start to crumble, as we are down to only 55% cumulative trust. 99%? All cumulative trust has basically evaporated, as we are down to 0.24%.
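If you want to check these numbers yourself, a tiny calculation suffices (a sketch, assuming 600 equally trustworthy CAs):

```python
def cumulative_trust(per_ca_trust, n_cas=600):
    # The system holds only if not a single CA can be persuaded, tricked or compromised.
    return per_ca_trust ** n_cas

for p in (1.0, 0.9999, 0.999, 0.99):
    print(f"{p:.2%} per CA -> {cumulative_trust(p):.2%} cumulative")
# prints 100.00%, ~94.18%, ~54.87% and ~0.24% respectively
```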

While these numbers are pretty bad, we must address the subject of attacker resources. If our attacker is our neighbour's 14 year old kid trying to mess with us on his computer, then 99.99% might well be a realistic number. A group of skilled and determined cyber criminals? 99.9% might well be realistic, and thus a real concern when communicating with our bank, at least from a technical perspective. There may be economic aspects that make this type of attack less likely for committing banking fraud. Now how about industrial espionage? Nation states? 99% sounds like a rather low estimate for those types of adversaries. Remember, 99% had us down to a cumulative trust factor of 0.24%, and that is assuming equal trustworthiness for all CAs.

So basically, with the current CA infrastructure we can safely say that HTTPS protects us from script kiddies and (some) non-targeted cybercrime attacks. It might even protect us to a certain level from mass surveillance. But that's it. It does not protect us from targeted attacks through organized criminal activity. It does not protect high-stakes intellectual property from industrial espionage, nor does it by itself protect the privacy of political dissidents and the like.

So you might say: some protection is better than none, right? Well, not exactly. There is one thing that HTTPS, and SSL in general, is perfectly good at protecting traffic from: YOU!


Remember Trojan horses? Programs that act like one thing but actually are another. Or how about malicious content on compromised websites that exploits vulnerabilities in your browser or your browser's Flash plugin? Nasty stuff running on your machine with access to all of your sensitive data. If it wants to get your data out of your computer and into someone else's, then HTTPS is a good way to do it. Now compare the situation of using HTTPS for all your web traffic to using HTTPS only for connecting to sites you a) visit regularly and b) actually need protecting. In the latter situation, unexpected malicious encrypted traffic will stand out. It's not my bank, it's not my e-mail, I'm not ordering anything, so why am I seeing encrypted traffic? When using HTTPS for every site that offers it, though, we are creating a situation where Trojans and other malware can remain under the radar.

But back to the issues with the CA based infrastructure. There is another issue: the issue of patterns of trust. When you hire a contractor to work on your shed, there is a common pattern of introduction that is the biggest factor in the trust involved in getting your shed fixed. The contractor will introduce you to the carpenter, and after that there will be a partial trust relationship between you and the carpenter that is a sibling of the trust relationship you have with the contractor. In modern web based technologies similar relationships are not uncommon, but the CA based architecture is currently the only mechanism available, and it is a mechanism that doesn't allow for the concept of secure introduction. While domain name based trust might be suitable for establishing trust with our contractor, introduction based trust is completely immune to the kind of problems that domain name based trust initiation suffers from. In order to establish the introduction based trust, the server equivalent of the contractor could simply send us a direct, unforgeable reference to the server equivalent of the carpenter. It's as if the contractor had issued a mobile phone to the carpenter for communication with clients and then given the phone number to the client. Not a normal phone number, but a maybe 60 digit long phone number that nobody could dial unless they had been handed it first. The client knows that the person answering the phone should be the carpenter. The carpenter knows that the person calling him on that number should be a client. No CAs or certificates needed, period. Unfortunately, the simple technology needed for the use of these webkeys currently isn't implemented in our browsers. With a webkey, no certificate or CA is needed, as the measures for validating the public key of the server are hard coded into the URL itself.
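To make the idea a bit more tangible, here is a rough Python sketch of what a webkey-style self-authenticating URL could look like. The URL format and the choice to fingerprint the whole certificate (rather than just the server's public key, as actual webkey proposals tend to do) are my own assumptions for illustration, not an existing standard:

```python
import base64
import hashlib
import socket
import ssl

def fingerprint(cert_der):
    """Unforgeable reference material: a hash of the server's credential."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.b32encode(digest).decode().rstrip("=").lower()

def make_webkey(cert_der, host):
    """Bake the fingerprint into the URL itself, like the 60-digit phone number."""
    return f"https://{host}/#webkey={fingerprint(cert_der)}"

def verify_webkey(url, host):
    """Connect, then check the presented certificate against the fingerprint in the URL.
    The regular CA check still runs here, but it is the fingerprint that actually pins the server."""
    expected = url.rsplit("=", 1)[1]
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname=host) as tls:
            return fingerprint(tls.getpeercert(binary_form=True)) == expected
```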

So on one side we have an outdated DNS system, a set of outdated legacy cipher suites and a dangerously untrustworthy CA infrastructure that undermine the positive side of using HTTPS, and on the other side we have untrustworthy programs with way too many privileges running on our machines, exposing us to the negative sides of HTTPS. The illusion of security offered by a mediocre security solution like this can be much worse than using no security at all, and the Trojan and malware aspects make it worse than that. Basically there are quite a lot of things that need fixing before using HTTPS everywhere stops being detrimental to security. The following list isn't complete, but gives an idea of the kind of things we need before HTTPS everywhere becomes a decent concept:

  1. Least authority everywhere. First and foremost, trojans and exploited clients should not by default be ‘trusted’ applications with access to our sensitive data.
  2. The current CA based PKI infrastructure with >600 ‘trusted’ CA’s in over 50 countries must be replaced urgently:
    1. NameCoin everywhere. Use of technology like that demonstrated by NameCoin is one possible path.
    2. DANE everywhere. The extensive use of DNSSEC with DANE could offer an alternative to CAs that is a significant improvement over the CA based infrastructure.
  3. TLS 1.2 everywhere. Older versions of TLS and many of the ciphers defined in those standards should be deprecated.
  4. DNSSEC everywhere.
  5. WebKeys everywhere. We really need webkey support in the browser.

I realize the above rant may be a tad controversial and not in line with the popular view that encryption is good medicine. I do however find it important to illuminate the dark side of HTTPS everywhere and to show that it's just one piece from the center of the puzzle, while it would be much better to start solving the puzzle with the edge pieces instead.


Security: debunking the ‘weakest link’ myth.


“The user is the weakest link”, “Problem Exists Between Keyboard And Chair”, “Layer 8 issue”. We have all heard these mentioned hundreds of times. Most security specialists truly believe it to be true, but in this blog post I will not only show that NO, the user is not the weakest link, I hope to also show that the 'belief' that the user is the weakest link may in fact be the reason that our information security industry appears to be stuck in the 1990s.

Harsh words? Sure, but bear with me as I try to explain. Once you understand the fallacy, and the impact, of the idea that the user is the weakest link in the security chain, you are bound to be shocked by what the industry is selling us today: that they are, in fact, selling shark cages as protection against malaria.

There are at least six major weak links in today's information security landscape. The first one we know about: the user. There is no denying the user is a weak link, especially when working with many of today's security solutions, but there are five other important weak links we need to look at. Links that arguably would all need to be stronger than the user in order for our user to be considered the weakest link. I hope to show that not one, but every single one of these other five links is in fact significantly weaker than our user. Here is the full list; I will explain each bullet later:

  • The user
  • Disregard for socio-genetic security-awareness
  • Identity centric security models
  • Single granularity abstractions
  • Public/global mutable state
  • Massive size of the trusted code-base


Let's first look at what our user is. Our user, at least in most cases, will be a member of the human race. We as humans share many millennia of history, and during all of those millennia we arguably have been a species that uses social patterns of cooperation as a way to accomplish great things. One of the pivotal concepts in these cooperative patterns has always been the concept of delegation. Imagine our human history with a severe restriction on delegation. We would probably still be living in caves, if we had not gone extinct, that is. Delegation is part of our culture; it's part of our socio-genetic heritage. We humans are 'programmed' to know how to handle delegation. Unfortunately however, free delegation is a concept that many a security architect feels to be an enemy of security. When users share their passwords, 99 out of 100 security people will interpret this as a user error. This while the user is simply acting in the way he was programmed to: he is delegating in order to get work done. So what do security people do? They try to stop the user from delegating any authority by coming up with better forms of authentication. Forms that completely remove the possibility of delegation of authority. Or they try to educate the user into not sharing his password, resulting in less efficient work processes. The true problem is that, lacking secure tokens of 'authority' that the user could use for delegation, the user opts to delegate the only token of authority he has: his 'identity'. We see that not only are we ignoring all of the user's strengths in his ability to use patterns of safe collaboration, we are actually fighting our own socio-genetic strengths by introducing stronger authentication that stops delegation. Worse, by training our users, we are forcing them to unlearn what should be their primary strength.

While we are on the subject of passwords, consider delegation of a username/password to equate to functional abdication, and consider that safe collaboration requires decomposition, attenuation and revocability. Now look at what happens when you want to do something on your computer that requires user approval. In many cases, you will be presented with a pop-up that asks you to confirm your action by providing your password. Now wait: if delegation of a password is potential abdication of all the user's powers, we are training our users into abdicating any time they want to get some real work done. Anyone heard of Pavlov and his dog? Well, our desktop security solutions apparently are in the business of supplying our users with all the Pavlovian training they need to become ideal phishing targets. Tell me again how the user is the weakest link!

If we realize that we can tap into the strengths of the user's socio-genetic security awareness by facilitating patterns of safe collaboration between the user and other users, and between the user and the programs he uses, it becomes clear that while passwords are horrible, they are horrible for a completely different reason than most security people think. The main problem is not that they are horrible tokens for authentication and that we need better authentication that stops delegation altogether. The problem is that they are horrible, single granularity, non-attenuable and non-decomposable tokens of authorization. Our security solutions are too much centered on the concept of identity, and too little on the concept of authority and its use in safe collaborative patterns.

Identity also is a single granularity concept. As malware has shown, the granularity of individual users for access control becomes meaningless once a trojan runs under the user's identity. Identity locks access control into a single granularity level. This while access control is relevant at many levels, going up as far as whole nations and down as deep as individual methods within a small object inside a process that is an instantiation of a program run by a specific user. Whenever you use identity for access control, you are locking your access control patterns into that one, rather coarse, granularity level. This while much of the access control in the cooperative, relatively safe interaction patterns between people is not actually that different from the patterns that are possible between individual objects in a running program. Single granularity abstractions such as identity are massively overused and are hurting information security.

It's not just identity, it's also how we share state. Global, public or widely shared mutable state creates problems at many granularities:

  • It makes composite systems hard to analyse and review.
  • It makes composite systems hard to test
  • It creates a high potential for violating the Principle Of Least Authority (POLA)
  • It introduces a giant hurdle for reducing the trusted code-base size.

We need only look at Heartbleed to understand how the size of the trusted code-base matters. In the current access control ecosystem, the trusted code-base is so gigantic that there simply aren't enough eyeballs in the world to keep up with everything. In a least authority ecosystem, OpenSSL would have been part of a much smaller trusted code-base that would never have allowed a big issue such as Heartbleed to stay undiscovered for as long as it did.

So let's revisit our list of weak links and ask which one could be identified as the weakest link.

  • The user
  • Disregard for socio-genetic security-awareness
  • Identity centric security models
  • Single granularity abstractions
  • Public/global mutable state
  • Massive size of the trusted code-base

While I'm not sure what the weakest link may be, it's pretty clear that the user won't become the weakest link until we've addressed each of the other five.

I hope the above has convinced you not only that the user is indeed not the weakest link, but also that many of our efforts to 'fix' the user have been not merely ineffective but extremely harmful. We need to stop creating stronger authentication and educating users not to share passwords until we have better alternatives for delegation. We need to stop overusing identity and subjecting our users to Pavlovian training that is turning them into ideal phishing victims. Once we start realizing this, the socio-genetic security awareness of our users becomes a large and almost untapped foundation for a significantly more secure information security ecosystem.

The relevance of Pacman and the van de Graaff generator to peer to peer networking in IPv4 networks.

In the late 1990s I was working on an OSI-like layering model for peer to peer networks. In the early 2000s, the Code Red worm hit the internet, and a person I hold in high regard, who was aware of some of my work, ended up appealing to my sense of responsibility regarding the possible use of my algorithms in internet worms. After thorough consideration I decided to stop my efforts on a multi layered pure-P2P stack, and to remove all information on the lowest layer algorithm that I had come up with.

About half a decade later, when P2P and malware were starting to rather crudely come together, I ended up discussing my algorithm with a security specialist at a conference, and he advised me to write a limited-circulation paper on a hypothetical worm that would use this technology. Worms in those days were rather crude and, let's call it, 'loud', while my hypothetical Shai Hulud worm would be rather stealthy. I did some simulations, and ended up sending a simple high level textual description to some of my CERT contacts, so they would at least know what to look for. APT wasn't a thing yet in those days, so looking for low footprint patterns that might hide a stealthy worm really was not a priority for anyone.

Now, another half decade has passed, and crypto currency, BitCoin, etc. have advanced P2P trust way beyond what I envisioned in the late 1990s. Next to that, the infosec community has evolved quite a bit, and a worm, even a stealthy one, should be within the scope of modern APT focused monitoring. Further, I still believe the algorithms may indeed prove useful for the benign purposes that I initially envisioned them for: layered trusted pure P2P.

[Image: Van de Graaff generator]

I won't give the exact details of the algorithm. Not only do I not want to make things too easy for malware builders; even if I wanted to, quite a bit of life happened between Code Red and now, in the course of which my original files were lost, so I have to do things from memory (if anyone still has a copy of my original Shai Hulud paper, please drop me a message). As it is probably better to be vague than to be wrong, I won't give details I'm not completely sure about anymore.

So let's talk about the two concepts that inspired the algorithm: the Van de Graaff generator and the Pac-Man video game. Many of you will know the Van de Graaff generator (picture above) as something purely fun and totally unrelated to IT. The metal sphere used in the generator though has a very interesting property. The electrons on a charged sphere end up perfectly evenly spaced over the surface of the sphere, and the mathematics that allow one to calculate how this happens are basic high school math. The first version of my algorithm was based on mapping the IPv4 address space onto a spherical coordinate system, and while the math was manageable, I ended up with the reverse of the problem that projecting a round globe onto a flat map gives: too many virtual IP addresses ended up at the poles.


Then something hit me: in the old Pac-Man game, if you went off the flat surface at the left edge, you came out on the right, and vice versa. If you take the IPv4 address space, use 16 bits for the X axis and 16 bits for the Y axis, you can create what basically is a Pac-Man sphere tile. If you then place the same tile around a center tile on all sides, you end up with a 3×3 square of copies of your Pac-Man tile. Now we get back to our electrons: it turns out that if we take a single electron on our central tile and take the position of that electron as the center of a new virtual tile, we can use that virtual tile as a window for the electrodynamics relevant to that individual electron. Further, it turns out that if we disregard all but the closest N electrons, the dynamics of the whole Pac-Man sphere remain virtually the same.

Now this was basically the core concept of part one of the algorithm. If we say that every node in a peer to peer network has two positions:

  • A static position determined by its IP
  • A dynamic position determined by the electrostatic-style interaction with its peers.

A node would start off with its dynamic position set to its static position, and would start looking for peers by scanning random positions. If another node was discovered, information on the dynamic positions of the node itself and its closest neighbours would be exchanged. Each node would thus keep in contact only with its N closest peers in terms of virtual position. This interaction makes the 'connected' nodes' virtual locations spread relatively evenly over the address space.
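A minimal Python sketch of the wrap-around geometry and the 'electron' position update, written from memory and with invented parameter values (the IP-to-position mapping and the step size are illustrative assumptions, not the original algorithm's exact choices):

```python
import math

SIZE = 1 << 16  # each axis of the Pac-Man tile wraps at 2^16

def ip_to_position(ip):
    """Static position: the high 16 bits of the IPv4 address become X, the low 16 bits Y."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    packed = (a << 24) | (b << 16) | (c << 8) | d
    return (packed >> 16, packed & 0xFFFF)

def wrap_delta(a, b):
    """Signed shortest difference along one wrapping (Pac-Man) axis."""
    d = (b - a) % SIZE
    return d - SIZE if d > SIZE // 2 else d

def distance(p, q):
    return math.hypot(wrap_delta(p[0], q[0]), wrap_delta(p[1], q[1]))

def repel(pos, closest_peers, step=2000.0):
    """Nudge our dynamic position away from the N closest peers, electron style."""
    fx = fy = 0.0
    for peer in closest_peers:
        dx, dy = wrap_delta(peer[0], pos[0]), wrap_delta(peer[1], pos[1])
        d = math.hypot(dx, dy) or 1.0
        fx += dx / d ** 2
        fy += dy / d ** 2
    return ((pos[0] + step * fx) % SIZE, (pos[1] + step * fy) % SIZE)
```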


Once stabilized, there would be a number of triangles between our node and its closest peers equal to the number N. Each individual triangle could be divided into 6 smaller triangles, which could then be divided between the nodes.


Now comes the stealthy part of the algorithm, the part that had me and others so scared that I felt it important to keep this simple algorithm hidden for well over a decade: given that there is hardly any overlap between the triangles, each node would have exactly 2N triangles worth of address space to scan for peers, and any of these triangles would be scanned by only one connected peer.

The math for this algorithm is even simpler than the simple high school math needed for the three dimensional sphere. I hope this simple algorithm can prove useful for P2P design, and I trust that in 2014 the infosec community has grown sufficiently to deal with the stealthiness that this algorithm would imply if it were to be used in malware. Further, I believe that advances in distributed trust have finally made the use of this algorithm in solid P2P architectures a serious option. I hope my insights in choosing this moment for finally publishing this potentially powerful P2P algorithm are on the mark, and that I am not publishing it before the infosec community is ready, or after the P2P development community needed it.

An engineer's approach to diet and work-outs (part 3)

This is the third post in my blog series on my attempts at fighting obesity by applying control theory to dieting and working out. Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs. In my first post in this series I discussed the Generic Body Health Index, based on body fat percentage and relative bodily strength. In my second post, I discussed the importance of working out, both as a way to improve health and as a way to measure the appropriateness of your diet. In this third post I shall be talking about our inputs, the macro nutrients: protein, carbohydrates and fat. Much of what I will tell you in this blog post will, combined with my previous posts, make some of you feel like I am advocating the absolute reverse of what you have been convinced dieting is about. I already told you that gaining weight can be a desired outcome. In this post I will tell you two more things that may sound like craziness if, like me, you have bought into the various diet fads in the past. The diet I'm proposing may feel like the opposite of a normal diet. A reverse diet.


I am going to tell you to eat both sugar and fat, and to possibly gain substantial weight while doing so. But please bear with me; it will all start to make sense soon, and you too may start to see that my reverse diet is something that can greatly benefit the body compositional aspects of your health.

We are setting out to apply control theory to exercise and diet and create a control system with your body at the center.

In order to keep our control system simple, and a system with 3 independent inputs is not that simple, we could make the mistake of thinking that calories are a usable simplification of our system. If we only look at the total calories, we could use that number as the only input and ignore which sources of calories are used. We could also try to simplify our system by, as some suggest, almost completely taking away one of the inputs (either fat or carbs, depending who you talk to), using a fixed amount of protein (somewhere between 1g and 3g per kg of fat free body mass, depending on who you talk to) and using the remaining macro nutrient (fat or carbs) as our single input variable.

We shall take neither of these strategies, and I'll start off by explaining why.

Let's start off by looking at two sets of processes in our bodies. Synthesizing fats from other types of molecules is called lipogenesis, and breaking down fat for fuel is called lipolysis. It's easy to think of dieting only in terms of these two processes, but there is also Muscle Protein Synthesis (MPS) and Muscle Protein Breakdown (MPB). Ideally we would like to combine lipolysis with MPS, exchanging body fat for new muscle. Given that we are creating a control system using our own body, it's important that our system has some stability in it. We don't want it to end up oscillating between lipolysis+MPB and lipogenesis+MPS.

For decades the lipid hypothesis had health experts convinced that fat was bad, and even today many people still advocate low fat diets for health reasons. These low fat diets need to get their calories from somewhere, and there is only so much protein the body uses, so the obvious candidate was carbohydrates. As many who, like me, have tried to lose weight on low fat diets can affirm, you can lose a lot of weight on a low fat diet. Problem is, you will also go straight into MPB on such a low fat, high carb diet. You will lose muscle mass, your body's BMR will go down, and unless you start starving your body even more, your weight will come back up, and in the end you will have traded muscle mass for fat mass, exactly the opposite of what you set out to do. After having tried to apply control theory on a low-fat high-carb diet, I came to the conclusion that, at least for me, my lipo equilibrium lies way below my muscle protein equilibrium. So the basic conclusion is: forget about low-fat high-carb for a control system. The best you could do with it would be to use it in a bulk/cut cycle like body builders do, but when aiming for a smooth curve, low fat is not sufficiently stable to work with.

On the other side we have the low-carb advocates. Carbs are basically all just sugars, sugars raise your insulin levels, and insulin will stop lipolysis and stimulate lipogenesis. So the low carb people have you eat more fat and a minimum amount of carbs. No sugar, no grains, and no potatoes or starchy vegetables. The low carb diet does absolute wonders for couch potatoes, but remember we are trying to combine diet with serious workouts. Using fat for fuel is a good idea, but your body does not really keep up when you are doing a 90 minute intense workout. Your muscles need sugars to fuel your workout. There are glycogen stores in your liver and muscles, but these won't last a full 90 or 100 minutes when you are trying to have a good workout. If you can't complete your workout at full strength, your strength will diminish and you may even trigger MPB. So strict low carb is not going to help us that much either.

So in the end, neither the low-fat nor the low-carb approach is going to be the one for us. We have to find some middle ground. After trying different relative percentages, I found the following to be the one that works best for me. I am able to complete my workouts, and the system does not suffer from oscillations.

We shall define our input in chunks of 10g of macro nutrients. Each chunk in this case represents 55 calories.

  • 3 grams of carbohydrates (12 calories)
  • 3 grams of fat (27 calories)
  • 4 grams of protein (16 calories)

Our single input variable shall be the amount of 10g chunks. But before we look at our control system and at just how many chunks to start with, we shall look at each of the 3 components separately.
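As a quick sanity check of the arithmetic behind a chunk, here is a small sketch (the 36-chunk figure in the usage line is just an example number, not a recommendation):

```python
CHUNK_GRAMS = {"carbs": 3, "fat": 3, "protein": 4}       # one 10 g chunk
CALORIES_PER_GRAM = {"carbs": 4, "fat": 9, "protein": 4}

def day_plan(chunks):
    """Grams and calories per macro nutrient for a given number of 10 g chunks."""
    grams = {m: g * chunks for m, g in CHUNK_GRAMS.items()}
    calories = {m: grams[m] * CALORIES_PER_GRAM[m] for m in grams}
    return grams, calories, sum(calories.values())

grams, calories, total = day_plan(36)   # 36 chunks -> 360 g of macro nutrients, 1980 kcal (36 * 55)
print(grams, calories, total)
```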

carbohydrates

Looking at the carb part of our chunks, we start off by stating that, as the low carb people advocate, there is absolutely no need for grains, potatoes or other starchy non-vegetables. Unlike what most dietary experts state, however, we shall be adding sugars to our diet. We shall look separately at our workout days and our non-workout days. On days we don't work out, we shall not waste any of our carb quota on sugary foods. No sugar, but most definitely also no fruit. Try to get most of your resting day carbs from:

  • nuts
  • vegetables

On workout days, you really need to save your carbs for your workout. Try to get enough of your carbs from:

  • berries (pre-workout)
  • isotonic drink (during your workout)

Fewer veggies and nuts on workout days, and no fruit on resting days. When eating fruit, try to avoid low fiber fruits. Berries, in my opinion, are probably your best bet.

fat

While you will be getting much of your calories from fat, in grams it's not really that much. While according to the low-carb school the lipid hypothesis has been falsified, it may still be a good idea to stick to non-suspect fats as much as possible, just in case. Avoiding trans fats should be obvious (not even the 0.1% on the label is OK; trans fats should be considered toxic). Avoid vegetable oils, sunflower oil, and fatty meat. Fatty fish is great, olive oil is great, and so are most nuts. Dairy is a tricky one: pick high protein dairy products like Parmesan cheese, and watch out for too much lactose eating away at your carb quota.

  • Fatty fish
  • Olives and olive oil
  • Nuts and peanut oil
  • High protein dairy products like Parmesan cheese.

protein

Now for our protein. We have seen with our ideal GBHI curve that we may want our GBHI to move in one of 3 general directions or quadrants:

  • major decrease of body fat, slight decrease in strength
  • major increase of relative strength, slight increase of body fat
  • moderate decrease in body fat, moderate increase of relative strength.

Each of these 3 goals will call for extra focus on different amino acids. I will discuss these in another post; for now let's just state that we should use a wide range of protein sources to get sufficient amounts of the different types of amino acids:

  • fish and other sea food.
  • nuts
  • beef
  • high protein dairy
  • eggs

how much

In a follow up post I will be working with you on how to implement your control system based on the 10 gram chunks of macro nutrients described above. When you have started with your workouts on a regular basis, I would advise you to start by just making sure you are taking your macro nutrients in the proper relative proportions. Try to listen to your body, don't eat unless you feel like it, and never allow yourself to feel hungry. Try to find a good starting level based on what feels comfortable for you. Give yourself two weeks to figure out a level you feel you could stick with. Once you have your starting level down, we can start with our first 12-workout period. Don't change your diet in this period. We are going to use our 12 workouts to measure how well we are doing and to determine afterwards what to adjust. Don't try to intervene prematurely; your body is adjusting and needs some time to show conclusive measurements. Stick to the levels that felt comfortable. The workouts should all end with one of the big 3 strength training exercises. At each workout, take the following measurements (a small logging sketch follows the list):

  • Your body weight
  • Your body fat percentage
  • Your top big-3 performance for that day.
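A minimal sketch of such a measurement log; the field names and the summary function are my own invention, and the interpretation of the numbers is the subject of the next post:

```python
from dataclasses import dataclass

@dataclass
class WorkoutMeasurement:
    """One data point per workout during a 12-workout measurement block."""
    lift: str              # which of the big 3 closed the workout
    top_lift_kg: float     # best performance for that lift today
    body_weight_kg: float
    body_fat_pct: float

def block_change(block):
    """Raw start-to-end change over a 12-workout block."""
    first, last = block[0], block[-1]
    return {
        "weight_kg": last.body_weight_kg - first.body_weight_kg,
        "body_fat_pct": last.body_fat_pct - first.body_fat_pct,
    }
```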

In my next post we shall be discussing how to interpret your measurements and how to adjust your diet accordingly.

I hope that after reading this you see at least some sense in my reverse diet. A diet that tells you to eat relatively fatty food, tells you to consume sugar, and tells you that gaining weight in many cases is quite OK.

An engineer's approach to diet and work-outs (part 2)

This is the second post in my blog series on my attempts at fighting obesity by applying control theory to dieting and working out. Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs.

In my first post in this series on applying control theory to dieting and working out, I proposed an alternative to the use of the Body Mass Index (BMI) as the primary output for our dieting and exercise efforts. Given that a basic control system has an output, a feedback loop and an input, we still need to look at the input and the feedback loop. I will come to those later in this series. In this blog post I shall try to elaborate on the Body Strength Index (BSI) component of the Generic Body Health Index (GBHI) that I described in my first post, and I will explain why strength training and power lifting complement a good and healthy diet in our attempt at a healthier body composition.

As I described in my first post, the GBHI is a value in the complex plane made up of two components. The first component is the Body Fat Index (BFI), which describes how close our body is to the leanest we can get without going into under-fat conditions. It's sufficient for health purposes to get out of the over-fat range, but as most of us also want to look good on the beach and all, we should not mind overshooting that goal as long as we stay out of the under-fat range. It's important to once more make the distinction between losing weight and getting leaner. We are going to seriously work out and gain muscle mass to get a healthier body composition. This means we may or may not lose any weight while getting leaner. It might even mean that we are going to gain weight as a result of getting leaner. This may be a cognitive challenge to many of us. The concept of 'losing weight' as a way to get healthier has been so pervasively entrenched in our collective perception that it takes quite a mental leap to abandon it and to accept that gaining weight while getting leaner is something to be happy about.

The second component is the Body Strength Index (BSI), which describes how close our body is to the strongest we can get without getting into professional power-lifting. There are several reasons why adding this component to the GBHI, and getting serious about our body strength, makes sense:

  • By working out all your muscles, you are telling your body that you are “using” all your muscles. Given that it's a 'use it or lose it' game, this is essential to keep your body from eating your muscles while leaving your fat mass untouched.
  • When you get stronger you will actually gain muscle mass.
  • Muscle mass consumes calories, and not just the (mostly sugar-fueled) calories burned during a workout: it consumes calories 24/7. So when you increase muscle mass you increase your base metabolic rate.
  • Muscle mass acts as a sugar store for your body. This helps absorb carbohydrate spikes in your diet that would otherwise go straight to increasing your fat mass.
  • Higher muscle mass reduces the body's relative fat mass.
  • Focusing on muscle strength rather than muscle mass gives most of the above advantages without your body getting too bulky. You will end up with compact muscles with a high metabolic increase per kg of muscle mass, rather than with body-building muscles with a relatively low metabolic increase per kg.
  • Negative changes to your power stats are a good indication that your diet is off, this allows us to react relatively quickly to an unbalanced diet.

Again we run into our psychological cognitive wall. By increasing our muscle mass we are deliberately increasing our weight. If we aren't losing fat at at least the same rate, this means we are gaining weight. What's more, as I described in the previous post, if both your body strength and your body fat are relatively low, you will want to focus first on balance rather than on body fat. During such an initial period, you will slightly increase your body fat and significantly increase your muscle and total mass. A phase that bodybuilders and power lifters often refer to as bulking. You could easily gain 5kg or even 10kg in such a period, and when getting in shape this increase in body mass can be quite a psychological burden.

I've said it a couple of times before: your body weight and BMI are not that relevant for your progress. It's essential that you get used to the idea:

Changes in your total body weight are in no way indicative of changes in your body health.


So now that we have established the importance of working out and of using strength training as a tool for improving our body composition, let's have a look at what our workout schedule should ideally look like:

  • Work out every muscle at least once a week and at most twice a week.
  • Make sure you work out for a total of at least 5 hours per week, double that if you can manage it.
  • Give every major muscle group at least two resting days between workouts.
  • Start each exercise with an 8-10 rep set, then increase the weight progressively up to the point where you can only manage 1 or 2 reps.
  • Make sure the big 3 (squat, bench-press, dead-lift) are part of your weekly routine, preferably on different days.
  • If you must do cardio, do high-intensity cardio and do it at the end of your workout. Avoid using muscles during cardio you also used during the strength part of your workout.
  • Try to work out around the same time on every day you work out. So if you work out in the evening on week days also try to work out in the evening in the weekends.


The big 3 that I just mentioned are going to be our primary measuring tool for calculating the BSI. If you are very strong and aren't doing professional power lifting, benching plus squatting plus dead-lifting a total of seven times your own body weight should be quite an impressive accomplishment, especially if at the same time we are striving for a low total body fat. The BSI puts this 'seven times your own body weight' forward as the ultimate strength goal to strive for (just as the low end of the healthy body fat percentage range is the ultimate fat percentage level to strive for). Working out and eating healthily and sufficiently is going to help us towards this goal. Eating sufficiently however isn't, in the end, going to get us towards the goal of truly getting leaner, so we will need to strike a balance and find a path between the caloric deficit that helps us quickly lose body fat and the caloric surplus that helps us quickly get stronger and gain muscle mass. One major help in striking that balance on a caloric level lies in picking the right macro nutrients and in picking the right time to consume them. In my next post in this series I shall elaborate on what I found to be a good ratio for the different macro nutrients, and on how to time our intake of these macro nutrients relative to our workouts.

An engineer's approach to diet and work-outs (part 1)

This is the first in what I hope will be an interesting blog post series on my attempts at fighting obesity by applying control theory to dieting and working out. Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behaviour of dynamical systems with inputs. My first attempts at trying to harness my body's use of nutrients with control theory failed miserably, in a way not dissimilar to how my first attempt at creating an amplifier while studying electronics failed miserably. In follow-up posts I will talk about the different aspects that went wrong, but in this post I shall focus on the most essential aspect of applying control theory to any system: picking the proper parameters to use in the feedback loop.

As many people coping with obesity do at first, I too made the horrible mistake of focusing on my scale and my Body Mass Index (BMI), thinking these numbers were somehow indicative of my health. The body mass index is basically an index that describes the relative weight for someone of a certain height. The problem is that body weight is composed of multiple components, including:

  • Muscle mass
  • Fat mass
  • Water mass

What we mostly care about with respect to obesity is not the total mass, but the size of the fat mass compared to the body's total mass, or the total body fat percentage (TBFP). The current use of the BMI by nutritional professionals, throughout the medical profession and throughout society stems from the statistically significant correlation between BMI and TBFP within populations. The problem however is that 'improving' one's BMI does not necessarily imply any improvement to the TBFP. You could, for example, under specific conditions lose weight by basically eating your muscle mass while actually gaining fat mass, or you could lose weight by dehydration, both leading to a higher TBFP.

In the end, and I realize this is difficult as the idea is so deeply rooted, we should stop believing that weight is a useful measure for individual body compositional and health goals. Instead of the BMI we need to look at different numbers. So what parameters are a good measure of our general health and of a healthy or unhealthy body composition? As stated, the TBFP is an important and relatively undiluted number. Let's start by creating a simple scale that will most likely yield a number between 0 and 10 for just about anyone who is struggling with obesity tendencies. Let's define the Body Fat Index as:

BFI = \frac{TBFP - LBFP}{5}

That is, we take your total body fat percentage (TBFP), subtract from it the lowest body fat percentage from the dark green section of the chart below (LBFP), and divide the result by 5.

[Chart: healthy body fat percentage ranges]

So a 43 year old male with a body fat percentage of 46% would end up with:

BFI = \frac{46 - 11}{5} = 7

There is a second dimension we need to look at regarding a healthy and stable body composition. You may have heard the phrase “use it or lose it”; well, basically that's how your body works when you start starving yourself, especially if you are also eating the wrong things while doing so. If you don't exercise all of your muscles regularly, are on a calorie deprived diet, but at the same time are bombarding your body with insulin by getting much of your calories from fruit juices, you leave your body no other option than to start consuming muscle mass. You weren't using those muscles, and the fructose induced insulin spikes will make sure you won't be using your body's fat as an energy source, so your body will basically start eating your muscles. And to make things worse, with less muscle your body will burn fewer calories, further reducing your chances of losing fat.

You need to use these muscles, grow them if possible so they help out at burning calories, and you need to monitor them to make sure you aren't eating them by starving yourself. The best way to do the latter is by keeping track of your strength. Carbs are bad if you don't work out, but if you are getting into sports, you will need sufficient pre-workout carbs to fuel your workout. If you eat too few calories altogether, or too little protein for muscle repair, your strength will suffer. If you start cutting faster than your body can keep up with, your strength will suffer. If you eat well, you will get progressively stronger from your exercises. As such, your strength is a good indication of how well your body is doing. So in addition to our BFI above, we shall define a Body Strength Index (BSI) that should also, for most of us, have a value between 0 and 10. We define:

BSI = 14 - 2\frac{S + B + D}{W}

That is, we first add up your squat, your bench and your dead lift strength and divide that sum by half your body weight. Then we subtract that number from 14. So if for example you weigh 100kg, your bench is 125, your dead lift 225 and your squat 275, the result would be:

BSI = 14 - 2\frac{125 + 225 + 275}{100} = 14 - 2\frac{625}{100} = 14 - 12.5= 1.5

With the BSI, your strength training becomes a measuring tool for how well your body is doing: how well your diet is working, and whether you aren't taking your diet beyond the point where it is helping you.

Now we come to the interesting part: how do we combine the BFI and the BSI in a useful way that can help us apply control theory to our workout and dieting routine? We combine the two by defining a Generic Body Health Index that is a complex number:

GBHI = BFI + BSI i

The absolute value of the GBHI is defined by Pythagoras's theorem:

|GBHI| = \sqrt{BFI^2 + BSI^2}
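Putting the three formulas together with the worked examples from this post, a small sketch (Python's built-in complex type stands in for the complex plane):

```python
def bfi(tbfp, lbfp):
    """Body Fat Index: how far above the leanest healthy body fat percentage, in steps of 5."""
    return (tbfp - lbfp) / 5.0

def bsi(squat, bench, deadlift, weight):
    """Body Strength Index: 14 minus the big-3 total divided by half the body weight."""
    return 14.0 - 2.0 * (squat + bench + deadlift) / weight

def gbhi(tbfp, lbfp, squat, bench, deadlift, weight):
    """Generic Body Health Index as a point in the complex plane: BFI real, BSI imaginary."""
    return complex(bfi(tbfp, lbfp), bsi(squat, bench, deadlift, weight))

g = gbhi(tbfp=46, lbfp=11, squat=275, bench=125, deadlift=225, weight=100)
print(g, abs(g))   # (7+1.5j) and |GBHI| = sqrt(7^2 + 1.5^2) ≈ 7.16
```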

While this absolute value is the value we are ultimately aiming to reduce, if there is a large difference between the two components, it is probably wisest to first focus on the component that contributes most to the absolute value. If we subscribe to the idea that it's best not to focus too much on either component but to balance the two, a way to find a good balance in our projected goals for body improvement would be to define a circle segment that starts at the point in the complex plane defined by the GBHI and that ends at 0 + 0i at an angle of exactly 45 degrees.

 

[Figure: example GBHI path towards 0 + 0i in the complex plane]

As the above example shows, our ideal path may require one of the two components to suffer slightly in order to more effectively address the one that needs the most attention, and, just as importantly, to allow us to define a smooth line suitable for critical damping. In this case our individual is rather strong and extremely fat, so he or she should allow a little loss of strength in order to lose fat first. Other individuals may need to allow gaining some fat to more easily allow for gaining substantial strength. The basic idea is that we define a circle segment that aims both for a balance between strength and leanness and for providing a smooth path to an ultimately attainable goal.

I hope this post has shown how my GBHI makes sense as an alternative to the overused BMI, and how projecting a circle segment on the complex plane defines a desirable path towards a healthier, stronger and leaner body. In part two of this series I'll try to address how and why combining a basically low-carb diet with substantial complementary pre-workout carbs seems to be a good basis for our control system input. How low-fat high-carb destabilizes the BFI part of our control system, while low-carb high-fat interferes with progress on the BSI part. Basically, both the low-carb and the low-fat approaches lead to sub-optimal results at best. My personal experience with applying control theory to my diet has led me to what I think is a reasonable yet somewhat cumbersome middle ground, where the timing of the different calorie sources is essential. I've been able to trace back every lapse I had to a failure to apply strict timing discipline. More on that in my second post in this series.

Killing the goose that lays the golden eggs (infosec)

We all know there is a lot of money being spent on information security products, services, training, etc., and we all know there is still a lot of damage from cyber-crime and other types of information security breaches. But only when we add up the numbers, and look at what free market principles have turned the information security industry into, does it become clear that there is something very, very wrong with information security today. Many of my best friends work in this industry, so I imagine I might have a few fewer friends after posting this blog post, but I feel that the realisation I have about the industry just screams to be shared with the world. Please don't kill the messenger, but feel free to set me straight if you feel my assertions below are in any way unfair.

If we look at information security, we see that the market size of the information security industry is somewhere around 70 billion USD per year. If we look at what this bus-load of money is supposed to protect us from, cyber-crime and other information security related damages, we see that there is a lot of room for improvement. The total yearly global damage from cyber-crime and other information security failure related incidents currently seems, according to different sources, to be somewhere between 200 billion and 500 billion USD. If we take it to be somewhere in the middle, we can estimate the total yearly cost of information security products, services and failures of information security at roughly 420 billion USD per year. To put this into perspective: with 7 billion people on this planet and a worldwide GDP per capita of about 10,000 USD, that adds up to about 0.6% of the world's total GDP. If we scale this to the GDP per capita of some western countries like the US or the Netherlands, we end up with every US man, woman and child on average paying $300 a year for information security related cost, or about €200 for every man, woman and child in the Netherlands. For a US family of four, for example, this would add up to about $200 for information security products and services and $1000 for damages, or about $100 a month.
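The back-of-the-envelope arithmetic behind these numbers, with the GDP-per-capita figures as rough assumptions:

```python
market = 70e9                     # yearly infosec products and services, USD
damages = 350e9                   # midpoint of the 200-500 billion USD damage estimates
total = market + damages          # ~420 billion USD per year

world_gdp = 7e9 * 10_000          # 7 billion people at ~10,000 USD GDP per capita
share = total / world_gdp         # ~0.6% of global GDP

us_gdp_per_capita = 50_000        # rough assumption, used only for scaling
per_person = share * us_gdp_per_capita   # ~300 USD per person per year
family_of_four = 4 * per_person          # ~1200 USD per year, roughly 100 USD per month
```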

Information security apparently is both relatively inefficient and relatively expensive.  So what’s the problem with information security? Can’t we fix it to at least be more effective?

As anyone who has been reading my blog before will probably know, I'm very much convinced that by using different techniques and paradigms to either reduce the size of the trusted code-base, or to sync information security models with our socio-genetic security awareness, it should be possible to greatly improve the integrity of information technology systems and, more importantly, to reduce the impact and cost of security breaches. I'm pretty much convinced that with the right focus we could make information security about an order of magnitude more effective, potentially at an order of magnitude less cost.

If we were to translate this to the numbers above, we should be able to reduce the damage done by cyber-crime and other infosec security breaches for our US family of four to about $100. That would be about 1% of the total global IT spending, while at the same time reducing the global cost of information security related products and services for our family to about $20,-, or about 0.2% of the total global IT spending.

Sounds good, right? Well no, at least not from an investor's point of view, apparently. While to most of us this should sound like a desirable cost reduction, it apparently isn't a realistic idea. When, half a decade ago, I was attempting to get investors to buy into an info-sec product I wanted to build a start-up around, it turned out that potential investors don't really like the idea of reducing the information security market size by an order of magnitude, or even the idea of making information security significantly more effective. To them, doing so would be the equivalent of killing the goose that lays the golden eggs.


So if investors aren’t going to allow the infosec industry to become the lean and mean information technology protection machine that we all want it to be, how can we kill the goose without solid investments?

From a commercial perspective, and this is basically my personal interpretation of the feedback I got from my talks with what I thought would be potential investors or partners, information security products should:

  • Not significantly reduce revenues from other information security investments by the same investors.
  • Never saturate the market with one-time sales, so either it should require periodic updates or it should generate substantial consulting and/or training related revenues.
  • Allow the arms-race to continue. Keep it interesting and economically viable for the bad guys to invest in breaking today's security products, so that tomorrow we can create new products and services to sell.

In contrast, for the people buying them, information security products should:

  • Reduce the total cost of IT system ownership.
  • Be low-maintenance.
  • Be cognitively compatible with (IT) staff.
  • Make it economically uninteresting for the bad-guys to continue the arms-race.

So do economic free market principles make it impossible to move information security into the realm that allows the second list of desirables to be satisfied? In the current IT landscape it seems that they do. Information security vendors are rather powerful and very capable of spreading the fear, uncertainty and doubt (FUD) that is needed to scare other parties away from reducing the need for their services and products. This seems especially obvious in the case of operating system vendors. The OLPC BitFrost project, for example, has shown the world what is possible security-wise with the simple concept of mutually exclusive privileges for software. It would be trivial for Google to implement such a scheme for Android, effectively eradicating over 90% of today's Android malware and making additional AV software lose most of its worth. Apple introduced the concept of a PowerBox based flexible jail to its desktop operating system, potentially effectively eradicating the need for AV. A bit later, AV vendors launched a media offensive claiming Apple was years behind its main competitor regarding security and stating they were willing to help Apple clean up the mess. And such claims stick, given that most of us think that infosec vendors know more about infosec than OS vendors do, especially given the earlier track record of what used to be the OS-market monopolist. Infosec vendors, especially AV vendors, know very well how to play the FUD game with the media in such a way that they effectively keep OS vendors from structurally plugging the holes they need for selling their outdated technology. I'm pretty sure that Microsoft, Google and Apple are perfectly capable of finding solutions that make their OSes significantly more secure without AV products than they would ever be with any upcoming generation of add-on AV protection. OLPC's BitFrost has shown what is possible without the need for backward compatibility, while HP Labs' Polaris and, I dare claim, my MinorFS project have shown that very much is possible in the realm of retrofitting least authority onto existing operating systems. OS vendors are making small steps, but given that they are rightfully scared of the media power that FUD spreading AV companies can apparently command, they cannot be expected to kill the goose that lays the golden eggs.

So how about open source? Forget about Linux, at least the kernel related stuff: much of the development on Linux is being done by companies with a large interest in infosec services, and the companies that aren't have much to fear from AV-company-induced FUD in the media. But the concept of open source goose killing is quite an interesting one. We are trying to reduce global infosec related cost by many, many billions, while a handful of projects, each requiring the equivalent of just mere millions in man-hours, would likely be sufficient, combined, to make such a major impact on the technical level. Investors won't help; it's not in their interest. OS vendors have too much to lose when they pick a fight with AV vendors, and openly investing in goose killing would be an outright declaration of war against the AV industry. While spare-time open source projects can produce great products, spare time is scarce and for most of us open source spare-time development is a relatively low priority. So to make any impact, at least part of the people working on such projects should have the development of these products as a source of income. Volunteers are invaluable, but we can't work with volunteers alone if we want to overthrow the information security industry. We also don't want to fall into the same trap that infosec vendors and investors have fallen into: any commercial interest in the end product would be contrary to the goals we are trying to achieve. So how could we fund these developers?

The best position, and the only one that has a slight chance of success, would seem to be that of a non-profit charity organisation. A charity organisation free from commercial ties to infosec and OS vendors and service providers. Such an organisation could act with the purpose of:

  • Funding the partially paid development of free and open-source initiatives that show promise of both reducing the global IT security related cost and increasing the integrity, confidentiality, privacy and availability of computing devices and IT infrastructure.
  • Coordinating contact between volunteer developers and new projects, and handling the procedures that allow talented volunteers to go from volunteer to (part-time) paid developer.
  • Marketing these projects.
  • Defending all such projects against legal and media FUD campaigns by the AV industry.

Could this become reality? I think with the right people it could. I know I could not play more than a small role in the creation, but I would definitely put private time and money into such an organisation and if and when others would do likewise, we would have a great place to start from. I think its important in order for the information security field to progress that we kill the goose that lays the golden eggs. OS vendors used to be what was holding back infosec, now however its the information security industry itself, most notably the AV industry that has almost become a media variant a protection racket scheme.