
MinorFs2

People who have read my blog, read my article, or been at any of my public talks know about the problems of the Unix $HOME and $TEMP facilities. MinorFs, a set of least-authority file systems, aimed to solve this problem in a relatively ‘pure’ way. That is, it was a single-granularity, single-paradigm solution that gave pseudo-persistent processes their own private storage, which could be decomposed and delegated to other pseudo-persistent processes.

A few years after I released MinorFs, and a few years after I wrote a Linux Journal article about it, although it’s painful to admit, it’s time to conclude that my attempts to make the world see the advantages of the purity of model that the AppArmor/MinorFs/E stack provided have failed.

At the same time, the problem of a $HOME and $TEMP directory that are shared between all the programs a user may run is becoming bigger and bigger. On one side we have real value being stored in the user’s $HOME dir by programs like Bitcoin. On the other side we have HTML5, which relies more and more on the browser being able to reliably store essential parts and information of rich internet applications.

The realization of these two facts made me come to an important conclusion: it’s time for major changes to MinorFs. Now I had two options: patch a bunch of changes on top of the existing Perl code base, or start from scratch. In the past I had tried to get MinorFs accepted as an AppArmor companion package in Ubuntu, and ran into the problem that MinorFs depended on rather exotic Perl modules. So if I ever want a new version of MinorFs to be accepted as a companion package for AppArmor, I would have to rewrite it quite a bit to drop those exotic dependencies. Add to this the major changes needed to make MinorFs as practical as it can be without compromising security, and I had to conclude that there was little to no benefit in re-using the old MinorFs code. This changed my earlier assertion to: it’s time to write a new version of MinorFs from scratch.

So what should this new version of MinorFs do in order to be more practical? And what should I do to help get it packaged in the major distributions that package AppArmor? The second question is easy: be careful with dependencies. The first question turned out to be less simple.

I have a very persistent tendency to strive for purity of model and purity of design. But after a few years of seeing that such purity can lead to failure of adoption, I had to take a major leap and convince myself that where purity gets in the way of the likelihood of being adopted, purity has to make way.

After a lot of thinking I managed to concentrate all of the impurity into a single place. A place that can be configured in such a way that purity-sensitive users, packagers and administrators can create a relatively pure system by way of the config, while practically inclined users, packagers and administrators can ignore purity altogether. The place where I concentrated the impurity is the persistence-id service. This service, which didn’t exist in the old MinorFs, maps process ids to persistence ids, but it does so in a way where one process might get a persistence id that implies a whole different level of granularity than the persistence id another process maps to. Where the old MinorFs had only one level of granularity (the pseudo-persistent process), MinorFs2 allows different processes to exist at different granularity levels, according to the needs and possibilities of their code bases.
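
To make the idea concrete, here is a minimal sketch of what such a persistence-id service could look like. Everything in it is hypothetical: the granularity names, the config mapping and the helper functions are my own illustration of one pid-to-persistence-id mapping handing out ids at different granularities, not the actual MinorFs2 code.

    import hashlib
    import os

    # Hypothetical per-executable config: which granularity a program lives
    # at. Unlisted programs fall back to the coarsest, most practical level:
    # one storage area per user, much like $HOME today. The real MinorFs2
    # levels and config format may differ.
    CONFIG = {
        "/usr/bin/bitcoin-qt": "executable",
        "/usr/bin/rune": "pseudo-persistent-process",
    }

    def persistence_id(pid):
        """Map a process id to a persistence id at the configured granularity."""
        exe = os.readlink(f"/proc/{pid}/exe")   # the running executable
        uid = os.stat(f"/proc/{pid}").st_uid    # the owning user
        level = CONFIG.get(exe, "user")
        if level == "user":
            token = f"user:{uid}"               # one id per user
        elif level == "executable":
            token = f"exe:{uid}:{exe}"          # one id per (user, program)
        else:
            # Finest level: derive the id from the chain of executables from
            # init down to this process, so a restarted instance of the same
            # chain maps back to the same private storage.
            token = f"proc:{uid}:{call_chain(pid)}"
        return hashlib.sha256(token.encode()).hexdigest()

    def call_chain(pid):
        """Walk parent pids up to init, collecting executable paths."""
        chain = []
        while pid > 1:
            chain.append(os.readlink(f"/proc/{pid}/exe"))
            with open(f"/proc/{pid}/stat") as f:
                # Field 4 of /proc/<pid>/stat is the parent pid. This naive
                # split breaks on comm fields containing spaces; a real
                # service would have to parse around the parentheses.
                pid = int(f.read().split()[3])
        return "|".join(reversed(chain))

The key point is that the mapping, and thus all of the impurity, lives in the config: map everything to the user level and you get something $HOME-like, or map individual programs to finer levels as their code bases allow.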

This is the basis of the first of my practical approaches. It allows one program that requires the existing user-level granularity to co-exist with, for example, another program that benefits from living at the finest granularity level that MinorFs2 provides. I tried to include every potentially usable granularity level I could come up with. In practice some of these levels might turn out to be useless or unneeded, but from a practical viewpoint it’s better to have too many than to be missing the one that would be the best fit for a particular program.

So what would be the most important practical implication of allowing multiple granularities, all the way down to user-level granularity? The big goal would be: allow the normal $HOME and $TEMP to be simply and effectively replaced by MinorFs2.

We should make it possible to mount MinorFs2 file systems at /tmp and /home and have all software function normally, but without users having to worry about malware, or hackers with access to the same user id, gaining access to their secrets or being able to compromise their integrity.

This practical goal completely ignores the benefits of decomposition and delegation, but it does make your computer a much safer place, while in theory still allowing application developers an upgrade path to fine-grained least authority.

Another practical choice I had to make was replacing the use of symbolic links with overlay file systems for minorfs2_home_fs and minorfs2_temp_fs, and disallowing ‘raw’ access to minorfs2_cap_fs for unconfined processes. I won’t get into the details of what this entails, but basically I had to abandon the golden rule that states: ‘don’t prohibit what you can’t enforce’. Unconfined processes have access to the guts of the processes running under the same uid. This makes them capable of stealing the sparse capabilities that MinorFs uses. I took the practical approach, illustrated in the sketch after this list, to:

  • Limit the use of raw sparse caps to delegation and attenuation (less to steal)
  • Disallow unconfined processes from directly using sparse caps
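
As a sketch of how that second point might be enforced: the kernel exposes a process’s AppArmor confinement label under /proc/<pid>/attr/current, and an unconfined process reads back as ‘unconfined’. The helpers below are my own illustration, not MinorFs2 code; in a FUSE file system the pid of the calling process is available from the request context, so a check like this can run on every attempt to use a sparse cap directly.

    def apparmor_label(pid):
        """Return the AppArmor confinement label of a process, e.g.
        'unconfined' or '/usr/bin/someprofile (enforce)'."""
        try:
            with open(f"/proc/{pid}/attr/current") as f:
                return f.read().rstrip("\x00").strip()
        except OSError:
            return "unconfined"   # no AppArmor support: treat as unconfined

    def may_use_raw_sparse_cap(pid):
        """Illustrative policy: only confined processes may use
        minorfs2_cap_fs sparse caps directly."""
        return not apparmor_label(pid).startswith("unconfined")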

This is another practical issue that makes things a bit impure. In a pure world there would be no unconfined processes, and thus no way to steal sparse caps from the guts of another process. So what do we do? We break a golden rule and close the gap as well as we can, knowing that:

  • If an unconfined malicious process can convince a confined process to proxy for it, it will be able to use the stolen sparse cap.
  • If a non-malicious unconfined process wants to use a sparse cap, it can’t.

It hurts having to make such impure design decisions; it feels like I’m doing something bad, badly plugging a hole by breaking legitimate use cases. I hope that the pain of the practical approach will turn out to be worth it, and that I’ll be able to create something with a much higher adoption rate than the old MinorFs.


Dividing by uncertainty

I have a great deal of respect for the work done by ISECOM on their OSSTMM. It is overall a great and accessibly written document on doing security audits thoroughly and methodically. It is a document, however, that suffers from the same, let’s call it deterministic optimism, that is paramount in the information security industry. That is, an optimism that results from a failure to come to grips with the nature of uncertainty. While in this post I talk about the OSSTMM and how its failure to deal with uncertainty makes it overly optimistic about the True Protection that it helps to calculate, the OSSTMM is probably the best thing there is in this infosec subfield, so if even the OSSTMM doesn’t get this right, the whole subfield may be in for a black-swan event that will prove the point I am trying to make here.

So what is this uncertainty I talk about? People tend to prefer hard numbers to fuzzy concepts like stochastics, but in many cases, certainly in information security, hard numbers are mostly impossible to get at, even when using a methodical approach like the one OSSTMM provides. This means we can do one of two things:

  1. Ignore the uncertainty of our numbers.
  2. Work the uncertainty into our model.

A problem is that without an understanding of uncertainty, it is hard to know when it is safe to opt for the first option and when it is not. If a variable, for example OSSTMM’s OpSec(sum) variable, has a level of uncertainty, a better representation than a single number would be a probability density function. A simplified variant of the probability density function is a simple histogram like the one below.
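
Such a histogram is easy to work with in code. Below is a minimal sketch, assuming a plain mapping from possible value to probability; the concrete numbers are only an illustration of a distribution whose expected value is 10.

    # A discrete stand-in for a probability density function:
    # each possible value mapped to its probability.
    opsec_sum = {8: 0.1, 9: 0.2, 10: 0.4, 11: 0.2, 12: 0.1}

    def expected_value(histogram):
        """Probability-weighted mean of a histogram."""
        return sum(value * prob for value, prob in histogram.items())

    def apply_to(f, histogram):
        """Push a histogram through a function f, merging values that
        f maps to the same result."""
        result = {}
        for value, prob in histogram.items():
            y = f(value)
            result[y] = result.get(y, 0.0) + prob
        return result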

So instead of the hard number (10) we have a histogram of numbers and their probabilities. So when does working with such histograms yield significantly different results than working with the hard numbers? Addition yields the same results, and multiplication yields results that are generally close enough, but there is one operation where the histogram can give you a wildly different result, depending on your level of uncertainty and on the proximity of your possible values to the dreaded zero value: division.

In OSSTMM, for example, the True Protection level is calculated by subtracting a security limit, SecLimit, from a base value. This means that if we underestimate SecLimit we will end up being too optimistic about the true protection level. And how is SecLimit calculated? Exactly: by dividing by an uncertain value. Worse, by dividing by an uncertain value that was itself calculated by dividing by an uncertain value.

To understand why this dividing by uncertainty can yield such different results when we forget to take the uncertainty into account, we can devise an artificial histogram that shows how it happens. Let’s say we have an uncertain number X with an expected value of 3. Now let’s say the probability density histogram looks as follows:

  • 10% probability of being 1
  • 20% probability of being 2
  • 40% probability of being 3
  • 20% probability of being 4
  • 10% probability of being 5

Now let’s take the formula Y = 9 / (X^2). When working with the expected value of 3, the result would be 9/(3*3) = 9/9 = 1. Let’s look at what happens if we apply this same formula to our histogram:

  • 10% probability of being 9
  • 20% probability of being 2.25
  • 40% probability of being 1
  • 20% probability of being 0.5625
  • 10% probability of being 0.36

Looking at the expected value of the result from the histogram, we see that it ends up being almost twice the value we got when not taking the uncertainty into account: roughly 1.9 instead of 1. The lower input values with low probabilities become the dominant factors in the result.
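
With the hypothetical histogram helpers from the sketch above, the whole example takes a few lines and confirms the arithmetic:

    x = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2, 5: 0.1}

    naive = 9 / expected_value(x) ** 2        # 9 / 3^2 = 1.0
    y = apply_to(lambda v: 9 / v ** 2, x)     # Y = 9 / X^2, value by value
    print(naive, expected_value(y))           # 1.0 versus roughly 1.9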

Note that I am in no way suggesting that the numbers will be anywhere near as bad for the OSSTMM SecLimit variable; that depends greatly on the level of uncertainty of the input variables and their proximity to zero. But the above example does illustrate that not taking uncertainty into account when doing divisions can have big consequences. In the case of OSSTMM, these consequences could make the calculated true protection level overly optimistic, which could in some cases lead to not implementing the level of security controls that this uncertainty would warrant. This example teaches us a very important and simple lesson about uncertainty: when dividing by an uncertain number, unless the uncertainty is small, be sure to include the uncertainty in your model, or be prepared to get a result that is dangerously incorrect.