Archive for March, 2011

The Double Inverted Inconsistency Principle

Tuesday, March 15th, 2011

In a discussion, when you point out an inconsistency in your opponent’s position by citing two of their views that you disagree with, remember that the same point can be turned against you simply by inverting the two examples, plus or minus some rhetorical decoration.

Here is an easy one, taken from a discussion on Facebook:

“Liberals make no sense at all. On the one hand, they complain about the war killing innocent people, and on the other hand, they are okay with killing innocent unborn children.”

This is the plain inverted version:

“Conservatives make no sense at all. On the one hand, they complain about killing innocent unborn children, and on the other hand, they are okay with the war killing innocent people.”

Of course, this requires some adjusted rhetorical decoration:

“Conservatives make no sense at all. On the one hand, they complain about abortion, and on the other hand, they are okay with the war killing innocent women and children.”

Here is another one, from Twitter (source not given to protect the individual):

“Don’t criticize German nuclear power plants, because Japan is far away”, say people who demand total surveillance in Germany whenever Bin Laden farts.

This one is full of rhetorical devices that need to swap sides. You could say something like:

“Ignore terrorist threats in Germany, because New York, London and Madrid are far away”, say people who want to immediately turn off 22% of Germany’s electricity because magnitude 8.9 earthquakes and tsunamis happen there, too!

Leave security to security code. Or: Stop fixing bugs to make your software secure!

Monday, March 7th, 2011

If you read about operating system security, it seems to be all about how many holes are discovered and how quickly they are fixed. If you look inside an OS vendor, you see lots of code auditing taking place. This assumes that all security holes can be found and fixed, and that they can be eliminated more quickly than new ones are added. Stop fixing bugs already, and take security seriously!

Recently, German publisher heise.de interviewed Felix “fefe” von Leitner, a security expert and CCC spokesperson, on Mac OS X security:

heise.de: Apple has put protection mechanisms like Data Execution Prevention, Address Space Layout Randomization and Sandboxing into OS X. Shouldn’t that be enough?

Felix von Leitner: All these are mitigations that make exploiting the holes harder but not impossible. And: The underlying holes are still there! They have to close these holes, and not just make exploiting them harder. (Das sind alles Mitigations, die das Ausnutzen von Lücken schwerer, aber nicht unmöglich machen. Und: Die unterliegenden Lücken sind noch da! Genau die muss man schließen und nicht bloß das Ausnutzen schwieriger machen.)

Security mechanisms make certain bugs impossible to exploit

A lot is wrong with this statement. First of all, the term “harder” is used incorrectly in this context. Making “exploiting holes harder but not impossible” would mean that an attacker has to put more effort into writing exploit code for a certain security-relevant bug, but achieves the same result in the end. This is true for some special cases (with DEP on, certain code execution exploits can be converted into ROP exploits), but the whole point of mechanisms like DEP, ASLR and sandboxing is to make certain bugs impossible to exploit (e.g. directory traversal bugs can be made impossible with proper sandboxing), while other bugs are unaffected (DEP can’t help against trashing of globals through an integer overflow). So mechanisms like DEP, ASLR and sandboxing make it harder to find exploitable bugs, not harder to exploit existing bugs. In other words: every one of these mechanisms makes certain bugs non-exploitable, effectively decreasing the number of exploitable bugs in the system.

As a consequence, it does not matter whether the underlying bug is still there: it cannot be exploited. Imagine you have an application in a sandbox that restricts all file system accesses to /tmp. Is it a bug if the application doesn’t check all user filenames redundantly? Does the US President have to lock the bedroom door in the White House, or can he trust the building to be secure? Of course, a case can be made for multiple levels of barriers in high-security systems where a single breach can be disastrous and fixing a hole can be expensive (think: Xbox), but if you have to set priorities, it is smarter for the President to have security around the White House than to lock every door behind himself.
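As a minimal sketch of what such a /tmp sandbox could look like on a POSIX system (using chroot; the unprivileged user ID and the filename are made up for illustration):

	/* Sketch: confine a process to /tmp before it touches any
	 * user-supplied filenames. Assumes a POSIX system; chroot()
	 * requires root, so privileges are dropped right afterwards. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
	    /* Jail the process: after this, "../../etc/passwd" cannot escape /tmp. */
	    if (chroot("/tmp") != 0 || chdir("/") != 0) {
	        perror("chroot");
	        return EXIT_FAILURE;
	    }

	    /* Drop root; 65534 ("nobody") is just an illustrative UID. */
	    if (setgid(65534) != 0 || setuid(65534) != 0) {
	        perror("drop privileges");
	        return EXIT_FAILURE;
	    }

	    /* From here on, a missing filename check is not a security hole:
	     * every path the code can open resolves inside /tmp. */
	    FILE *f = fopen("user-supplied-name.txt", "r");
	    if (f != NULL)
	        fclose(f);
	    return EXIT_SUCCESS;
	}

Whether the application then validates filenames redundantly is a robustness question, not a security one.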

Symmetric and asymmetric security work

When an operating system vendor has to decide how to allocate its resources, it needs to be aware of symmetric and asymmetric work. Finding and fixing bugs is symmetric work: you are roughly as efficient at finding and fixing bugs as attackers are at finding and exploiting them. For every hour you spend fixing bugs, attackers have to spend roughly one more hour searching for them. Adding mechanisms like ASLR is asymmetric work. It may take you 1000 hours to implement, but over time it will waste more than 1000 hours of your attackers’ time, or make attackers realize that it is too much work and not worth attacking the system.

Leave security to security code

Divide your code into security code and non-security code. Security code needs to be written by people with a security background, who keep the design and implementation simple and maintainable and are aware of common security pitfalls. Non-security code is code that never deals with security; it can be written by anyone. If a non-security project requires a small module that deals with security (e.g. one that verifies a login), push it into a separate process, which is then security code.
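A minimal sketch of that idea, assuming a hypothetical check_password() routine and a plain pipe as the boundary between the two processes (a real system would use a proper IPC mechanism and sandbox the child):

	/* Sketch of privilege separation: the login check runs in its own
	 * small process (the "security code"); the main program only learns
	 * a yes/no answer. check_password() is a hypothetical placeholder. */
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	/* Hypothetical verifier; in real security code this would live in
	 * its own, carefully reviewed module. */
	static int check_password(const char *user, const char *pass)
	{
	    return strcmp(user, "alice") == 0 && strcmp(pass, "secret") == 0;
	}

	/* Returns 1 if the credentials are valid, 0 otherwise. */
	static int verify_login(const char *user, const char *pass)
	{
	    int fd[2];
	    if (pipe(fd) != 0)
	        return 0;

	    pid_t pid = fork();
	    if (pid < 0) {
	        close(fd[0]);
	        close(fd[1]);
	        return 0;
	    }
	    if (pid == 0) {
	        /* Child: the only process that ever sees the credential logic.
	         * A real version would also sandbox itself here. */
	        char result = check_password(user, pass) ? '1' : '0';
	        write(fd[1], &result, 1);
	        _exit(0);
	    }

	    /* Parent: read the one-byte verdict, nothing more. */
	    close(fd[1]);
	    char verdict = '0';
	    read(fd[0], &verdict, 1);
	    close(fd[0]);
	    waitpid(pid, NULL, 0);
	    return verdict == '1';
	}

	int main(void)
	{
	    printf("login %s\n", verify_login("alice", "secret") ? "ok" : "denied");
	    return 0;
	}

The point of the split is that a memory-corruption bug in the big, untrusted parent never gets its hands on the verification logic or the secrets it works with.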

Imagine, for example, a small server application that simply serves some data from your disk publicly, and attackers have exploited it to serve arbitrary files from disk or to spread malware. Should you fix your application? Mind you, your application by itself has nothing to do with security. Why spend time adding a security infrastructure to it, fixing some of the holes, ignoring others, and adding new ones, instead of properly partitioning responsibilities and having every component do what it does best? The kernel can sandbox the server so that it can only access a single subdirectory and cannot write to the filesystem, and the server can stay simple instead of being bloated with security checks.
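As a sketch of what the kernel-enforced part could look like on Mac OS X (the platform the interview is about), assuming the seatbelt API of that era: sandbox_init() with the predefined no-write profile denies all filesystem writes; confining reads to the single served subdirectory would additionally need a custom sandbox profile, or a chroot as in the earlier sketch.

	/* Sketch: the server enters a kernel-enforced sandbox before it
	 * touches any untrusted input. Assumes the Mac OS X sandbox (seatbelt)
	 * API available around 2011. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <sandbox.h>

	int main(void)
	{
	    char *err = NULL;

	    /* Deny all filesystem writes from here on. */
	    if (sandbox_init(kSBXProfileNoWrite, SANDBOX_NAMED, &err) != 0) {
	        fprintf(stderr, "sandbox_init: %s\n", err);
	        sandbox_free_error(err);
	        return EXIT_FAILURE;
	    }

	    /* ... serve files; any attempt to write to the filesystem,
	     * whether intended by the code or injected by an attacker,
	     * now fails in the kernel. */
	    return EXIT_SUCCESS;
	}
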

How many bugs in 1000 lines of code?

A lot of people seem to assume they can find and fix all bugs, but every non-trivial program contains at least one bug. The highly security-critical first-stage bootloader of the original Xbox was only 512 bytes in size and consisted of about 200 hand-written assembly instructions. It contained one bug in the design of its crypto system as well as two code execution bugs, one of which could be exploited in two different ways. In the revised version, one of the code execution bugs was fixed, and the crypto system had been replaced with one that had a different exploitable bug. Now extrapolate this to a 10+ MB binary like an HTML5 runtime (a.k.a. a web browser) and think about whether looking for holes and fixing them makes a lot of sense. And keep in mind that, unlike the bootloader, a web browser is not security-critical assembly carefully handwritten by security professionals.

Conclusion

So stop looking for and fixing holes; it won’t make much of an impact. If hackers find one, then instead of researching how this could happen and educating the programmers responsible for it, construct a system that mitigates these attacks without the worker code having to be security-aware. Leave security to security code.