Archive for the ‘whines’ Category

Racism in Monstropolis

Tuesday, June 21st, 2011

Sometimes, freeze-frame fun provides not fun, but sadness.

In the Pixar movie Monsters Inc., you can see the following file of a child at 12 min 40 sec:

Monsters scare children at night, and this is how they keep track of them. The file comes with a blueprint of the room, a list of date stamps, business-critical notes like “scared of snakes”, and the standard data like name, gender, age, and… uh, race??

Albert Lozano, age 8, seems to be “hispanic”, and for monsters, this is apparently an important piece of information to keep on file.

Oh well, that’s Monstropolis, a world inhabited by monsters that scare little children. Modern societies, on the other hand, have understood that “race” is a detail that is just as useful to track as shoe size. Oh wait.


Leave security to security code. Or: Stop fixing bugs to make your software secure!

Monday, March 7th, 2011

If you read about operating system security, it seems to be all about how many holes are discovered and how quickly they are fixed. If you look inside an OS vendor, you see lots of code auditing taking place. This assumes that all security holes can be found and fixed, and that they can be eliminated more quickly than new ones are added. Stop fixing bugs already, and take security seriously!

Recently, German publisher heise.de interviewed Felix “fefe” von Leitner, a security expert and CCC spokesperson, on Mac OS X security:

heise.de: Apple has put protection mechanisms like Data Execution Prevention, Address Space Layout Randomization and Sandboxing into OS X. Shouldn’t that be enough?

Felix von Leitner: All these are mitigations that make exploiting the holes harder but not impossible. And: The underlying holes are still there! They have to close these holes, and not just make exploiting them harder. (Das sind alles Mitigations, die das Ausnutzen von Lücken schwerer, aber nicht unmöglich machen. Und: Die unterliegenden Lücken sind noch da! Genau die muss man schließen und nicht bloß das Ausnutzen schwieriger machen.)

Security mechanisms make certain bugs impossible to exploit

A lot is wrong about this statement. First of all, the term “harder” is used incorrectly in this context. Making “exploiting holes harder but not impossible” would mean that an attacker has to put more effort into writing exploit code for a certain security-relevant bug, but can achieve the same result in the end. This is true for some special cases (with DEP on, certain code execution exploits can be converted into ROP exploits), but the whole point of mechanisms like DEP, ASLR and sandboxing is to make certain bugs impossible to exploit (e.g. directory traversal bugs can be made impossible with proper sandboxing), while other bugs are unaffected (DEP can’t help against the trashing of globals through an integer exploit). So mechanisms like DEP, ASLR and sandboxing make it harder to find exploitable bugs, not harder to exploit existing bugs. In other words: Every one of these mechanisms makes certain bugs non-exploitable, effectively decreasing the number of exploitable bugs in the system.

As a consequence, it does not matter whether the underlying bug is still there. It cannot be exploited. Imagine you have an application in a sandbox that restricts all file system accesses to /tmp – is it a bug if the application doesn’t redundantly check all user-supplied filenames? Does the US President have to lock the bedroom door in the White House, or can he trust the building to be secure? Of course, a point can be made for multiple levels of barriers in high-security systems where a single breach can be disastrous and fixing a hole can be expensive (think: Xbox), but if you have to set priorities, it is smarter for the President to have security around the White House than to lock every door behind himself.

Symmetric and asymmetric security work

When an operating system company has to decide how to allocate its resources, it needs to be aware of symmetric and asymmetric work. Finding and fixing bugs is symmetric work: you are roughly as efficient at finding and fixing bugs as attackers are at finding and exploiting them. For every hour you spend fixing bugs, attackers have to spend roughly one more hour searching for them. Adding mechanisms like ASLR is asymmetric work. It may take you 1000 hours to get it implemented, but over time, it will waste more than 1000 hours of your attackers’ time – or make the attackers realize that it’s too much work and not worth attacking the system.

Leave security to security code

Divide your code into security code and non-security code. Security code needs to be written by people with a security background, who keep the design and implementation simple and maintainable and are aware of common security pitfalls. Non-security code is code that never deals with security; it can be written by anyone. If a non-security project requires a small module that deals with security (e.g. one that verifies a login), push it into a different process – which is then security code.
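
As a sketch of what that separation can look like in practice (the helper path /usr/libexec/check-login and its one-line protocol are made up for illustration, not taken from any real system), the non-security part of a program never implements the verification logic itself – it hands the credentials to a small helper process and only learns the verdict:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Returns 1 if the (hypothetical) helper accepts the credentials, 0 otherwise.
 * The calling program is non-security code: it never contains the password
 * checking logic, it only learns yes/no from the helper process. */
int check_login(const char *user, const char *password) {
    int fd[2];
    if (pipe(fd) != 0)
        return 0;
    pid_t pid = fork();
    if (pid < 0)
        return 0;
    if (pid == 0) {                              /* child: this is the security code */
        dup2(fd[0], STDIN_FILENO);               /* password arrives on stdin        */
        close(fd[0]);
        close(fd[1]);
        execl("/usr/libexec/check-login", "check-login", user, (char *)NULL);
        _exit(127);                              /* exec failed                      */
    }
    close(fd[0]);
    write(fd[1], password, strlen(password));    /* hand the secret to the helper    */
    write(fd[1], "\n", 1);
    close(fd[1]);
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}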

Imagine, for example, a small server application that just serves some data from your disk publicly. Attackers have exploited it to serve anything from disk or to spread malware. Should you fix your application? Mind you, your application by itself has nothing to do with security. Why spend time adding a security infrastructure to it, fixing some of the holes, ignoring others, and adding more, instead of properly partitioning responsibilities and having everyone do what they do best? The kernel can sandbox the server so that it can only access a single subdirectory and cannot write to the filesystem, and the server can stay simple instead of being bloated with security checks.
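
A minimal sketch of that partitioning, assuming a POSIX system and made-up paths and IDs; a real deployment would use the platform’s sandboxing facility (which could also deny writes), but even plain chroot plus privilege dropping illustrates the idea:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Confine the server before it touches any untrusted input: after this,
 * only /srv/public is reachable, and the process no longer runs as root. */
static void enter_sandbox(void) {
    if (chroot("/srv/public") != 0 || chdir("/") != 0) {
        perror("chroot");
        exit(1);
    }
    if (setgid(65534) != 0 || setuid(65534) != 0) {   /* drop root -> nobody */
        perror("drop privileges");
        exit(1);
    }
}

int main(void) {
    enter_sandbox();
    /* ... accept connections and serve files as before; a path traversal
     *     bug in this code can no longer reach anything outside /srv/public ... */
    return 0;
}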

How many bugs in 1000 lines of code?

A lot of people seem to assume they can find and fix all bugs. Every non-trivial program contains at least one bug. The highly security-critical first-stage bootloader of the original Xbox was only 512 bytes in size. It consisted of about 200 hand-written assembly instructions. There was one bug in the design of its crypto system, as well as two code execution bugs, one of which could be exploited in two different ways. In the revised version, one of the code execution bugs was fixed, and the crypto system had been replaced with one that had a different exploitable bug. Now extrapolate this to a 10+ MB binary like an HTML5 runtime (a.k.a. a web browser) and think about whether looking for holes and fixing them makes a lot of sense. And keep in mind that a web browser is not all security-critical assembly carefully handwritten by security professionals.

Conclusion

So stop looking for and fixing holes; it won’t make an impact. If hackers find one, instead of researching “how this could happen” and educating the programmers responsible for it, construct a system that mitigates these attacks without the worker code having to be security-aware. Leave security to security code.

Comparing Digital Video Downloads of Interlaced TV Shows

Monday, November 29th, 2010

In the days of CRT monitors, TV shows used to be broadcast in interlaced mode, which is unsupported by modern flat-panel displays. All online streaming services and video stores provide progressive video, so they must deinterlace the data first. This article compares the deinterlacing strategies of Apple iTunes, Netflix, Microsoft Zune, Amazon VoD and Hulu by comparing their respective encodings of a Futurama episode.

If you have dealt with video formats before, you probably know about interlacing, a 1930s trick to achieve both high spatial and temporal resolution at half the (analog) data rate: In NTSC countries, there are 60 fields per second (PAL: 50), and every field is half the vertical resolution of a full frame. When film footage at 24 frames per second has to be played at 30 fps (NTSC), every frame has to be shown 1.25 times – in other words, every fourth frame has to be shown twice. This introduces jerky motion (judder), but it can be improved by using the 60 Hz temporal resolution: Frame A gets shown for 2 fields, frame B for 3 fields, frame C for 2 fields, and so on. This way, every source frame gets shown for 2.5 fields, i.e. 1.25 frames – this method is called a telecine 2:3 pulldown.
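
A tiny sketch of that cadence (the code is illustrative only and glosses over actual field handling): mapping 24 fps frames to 60 Hz fields just means alternating between repeating a frame for 2 fields and for 3 fields:

#include <stdio.h>

/* Print which source frame ends up in which 60 Hz field under 2:3 pulldown:
 * every 4 source frames (24 fps) become 10 fields (60 Hz), i.e. 5 video frames. */
int main(void) {
    int field = 0;
    for (int frame = 0; frame < 8; frame++) {
        int fields_for_this_frame = (frame % 2 == 0) ? 2 : 3;   /* 2, 3, 2, 3, ... */
        for (int i = 0; i < fields_for_this_frame; i++, field++)
            printf("field %2d (%s) <- source frame %d\n",
                   field, (field % 2 == 0) ? "top" : "bottom", frame);
    }
    return 0;
}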

A lot of TV material is produced at 24 fps and telecined, for several reasons: Standard movie cameras can be used instead of TV cameras, 24 fps can be converted to 25 fps PAL more easily than 30 fps NTSC, and for cartoons, this means that only 24 (or 12) frames have to be drawn for every second.

Unfortunately, interlacing only works with ancient CRT TVs – modern LCD screens can only show progressive video. And while DVDs are specified to encode interlaced video, more modern formats like MPEG-4/H.264 and VC-1 usually carry progressive data. So when playing DVDs, the DVD player or the TV has to deal with the interlacing problem, and in the case of modern file formats, it’s the job of the converter/encoder.

The naive way of converting an interlaced source to progressive is to combine every two fields into a frame. This works great if the original source material was progressive at the full frame rate (which is rare for NTSC but common for PAL), but for telecined video, where two out of every five frames combine fields from two different source frames, this leads to ugly combing effects.
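
In code, this naive “weave” deinterlacing is just an interleaving of rows; the buffer layout below (one byte per pixel, separate top and bottom field buffers) is an assumption made for the sketch:

#include <stdint.h>
#include <string.h>

/* Combine two fields (height/2 rows each) into one full frame by copying
 * the top field into the even rows and the bottom field into the odd rows.
 * Works fine for progressive sources, combs on telecined ones. */
void weave(const uint8_t *top, const uint8_t *bottom,
           uint8_t *frame, int width, int height)
{
    for (int y = 0; y < height / 2; y++) {
        memcpy(frame + (2 * y)     * width, top    + y * width, width);
        memcpy(frame + (2 * y + 1) * width, bottom + y * width, width);
    }
}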

If the source material was 24 fps, an inverse telecine can be done, recovering the original 24 frames per second. Unfortunately, it is not always this easy, since interlaced video may switch between methods, and sometimes use different methods at the same time, e.g. overlaying 30 fps interlaced captions on top of a 24 fps telecined picture, or compositing two telecined streams with a different phase. “Star Trek: The Next Generation” is a famous offender in this category – just single-step through the title…
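
For a clean, known-phase 2:3 pattern, the inverse telecine step itself is simple. The sketch below assumes top-field-first material and hypothetical per-field pointers, and it deliberately skips the hard part – detecting the cadence and its breaks, which is exactly what the exceptions above make difficult:

#include <stdint.h>

typedef struct {
    const uint8_t *top;      /* pointer to the top field's rows    */
    const uint8_t *bottom;   /* pointer to the bottom field's rows */
} Frame;

/* Reduce five telecined frames  A  B  (Bt+Cb)  (Ct+Db)  D
 * back to the four film frames  A  B  C  D.
 * The two combed frames together contain both fields of C. */
void ivtc_group(const Frame in[5], Frame out[4]) {
    out[0] = in[0];                     /* A: already progressive             */
    out[1] = in[1];                     /* B: already progressive             */
    out[2].top    = in[3].top;          /* C: its top field sits in frame 4   */
    out[2].bottom = in[2].bottom;       /*    its bottom field sits in frame 3 */
    out[3] = in[4];                     /* D: already progressive             */
}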

In the following paragraphs, let us look at an episode of Futurama and how the deinterlacing was done by the different providers of the show. Futurama was produced in 24 fps and telecined. Some of the editing seems to have been done on the resulting interlaced video, so the telecine pattern is not 100% consistent.

NTSC DVD

The NTSC DVD is basically just an MPEG-2-compressed version of the US CCIR 601 broadcast master. It encodes 720×480 anamorphic pixels (which can be displayed as 640×480 or 720×540) and has all the original interlacing intact. This is a frame at 640×480 and properly inverse telecined:

DVD

Hulu

Hulu

Hulu (480p version) took the original image without doing any cropping on the sides. You can clearly see that this picture has only half the vertical resolution, meaning one of the fields got discarded. This seems to have been Hulu’s deinterlacing strategy, since throughout the entire video, everything is at half the vertical resolution, whether there is motion or not. This also keeps the video at 30 fps, effectively showing every fourth frame twice and introducing stronger judder.

iTunes

iTunes

iTunes crops the picture to get rid of the black pixels in the overscan area and scales it to 640×480. They run a full-blown 60 Hz deinterlace filter on the video. Such a filter is meant to take a live television signal, with a temporal resolution of 60 Hz, as its input. While this looks fine on frames with little or no motion, vertical resolution is halved as soon as there is motion. Basically, it is the wrong filter. Like Hulu, iTunes preserves the 30 fps, introducing stronger judder. (The video encoding is H.264 at 1500 kbit/sec.)

Netflix

Netflix

Netflix seems to do the same as iTunes – maybe they even got the data from iTunes? The image is cropped and scaled to 640×480, they run a deinterlace filter and retain the 30 fps, leading to halved resolution when there is motion, and stronger judder.

Amazon Video on Demand

Amazon Video on Demand

Amazon Video on Demand with its horribly inconvenient Unbox Player (Windows only, requires 1 GB of extra downloads and two reboots) did a better job. Like Netflix and iTunes, they cropped the picture and scaled it to 640×480, but they actually did a real inverse telecine. In some segments (like the end credits), the algorithm failed because of inconsistencies of the original telecine, so it reverted to half the vertical resolution. And like the others, Amazon also encodes at 30 fps, i.e. judder. (The video encoding is VC-1 at 2600 kbit/sec.)

Zune

Zune

Microsoft’s Zune Store provides a cropped video at 640×480 at the original 24 fps and with a bitrate of 1500 kbit/sec (VC-1). Looking through it frame by frame reveals that they used a brilliant detelecine/deinterlace algorithm. On the DVD, the panning at the beginning of the “Robot Hell” song is very tricky: It breaks the standard telecine pattern (PPPIIPPPII becomes PPPIPPPI); it seems every fifth frame was removed.




The pan consists of a pattern of three progressive frames, and then one interlaced frame, which is composed of the previous frame and the current frame. Consequently, every fourth frame has half its resolution wasted by the repeated lines of the previous frame, i.e. every fourth frame only exists at half resolution in the DVD master material.

Hulu discards half the vertical resolution for every frame anyway, and the deinterlacing algorithms of iTunes and Netflix discard half the resolution whenever there is motion. The Amazon algorithm does a good job when the telecine pattern is correct, but in this case, it gets confused and encodes all frames of the pan in half resolution. The Zune algorithm does a brilliant job here: The progressive frames stay at full resolution, and it extracts the half-resolution picture out of every fourth frame:




This is the fourth picture at full size – you can see half the vertical resolution is missing (it was never there in the first place!), but the algorithm did a very good interpolation job:

Robot Hell (Zune)

The Zune video is almost perfect. It recombines all fields correctly and recovers all single fields, scaling them up so that it is hardly visible that information is missing. If you ignore the 720 vs. 640 horizontal pixels, the resulting 24 fps video contains all the information of the DVD version, but with all interlacing removed, and with zero judder. Too bad it’s not H.264, but DRMed and only plays on Windows (XP+), Zune and Windows Phone 7.

Summary

Provider    | Cropping | Resolution | Deinterlacing      | fps | Encoder    | Bitrate (kbit/sec)
NTSC DVD    | no       | 720×480    | none               | 30  | MPEG-2     | 6500
Hulu        | no       | 640×480    | discard            | 30  | H.264?     | ?
iTunes      | yes      | 640×480    | 30 Hz deinterlace  | 30  | H.264      | 1500
Netflix     | yes      | 640×480    | 30 Hz deinterlace  | 30  | H.264/VC-1 | ?
Amazon VoD  | yes      | 640×480    | detelecine+decomb  | 30  | VC-1       | 2600
Zune        | yes      | 640×480    | fuzzy detelecine   | 24  | VC-1       | 1500

Note: H.264 and VC-1 compress significantly better than MPEG-2; a rule of thumb is to divide the MPEG-2 bitrate by 2.3 to get a comparable H.264/VC-1 bitrate, so the DVD’s 6500 kbit/sec correspond to roughly 2800 kbit/sec of H.264/VC-1. Amazon’s 2600 kbit/sec are therefore fine, and the video is about the same quality (sharp picture, no compression artifacts) as the DVD, but the iTunes and Zune versions at 1500 kbit/sec are not (artifacts can be seen on single frames).

It is scary how little effort seems to go into video conversion/encoding at major players like iTunes, Netflix and Hulu. Amazon did a kind-of-okay job converting the source material properly, and only Microsoft did an excellent job. The NTSC DVDs still give you the maximum quality – but of course, if you watch them on an LCD, the burden of deinterlacing is on your side. Handbrake with “detelecine” (for the bulk of it) and “decomb” (for the exceptions) turned on, and with a target framerate of “same as source”, will generate a rather good MP4 video similar to Amazon’s, but without the judder.

Are there any stores I missed? Can someone check the PAL DVD as well as digital PAL and NTSC broadcasts? What is the magical detelecine/deinterlace program Microsoft uses?

See also: Comparing Bittorrent Files of Interlaced TV Shows

The Intel 80376 – a Legacy-Free i386 (with a Twist!)

Tuesday, November 16th, 2010

25 years after the introduction of the 32 bit Intel i386 CPU, all Intel compatibles still start up (and wake up!) in 16 bit stone-age mode, and they have to be switched into 32/64 bit mode to be usable.

Wouldn’t it be nice if a modern i386/x86_64 CPU started at least in 32 bit protected mode? Can’t they make a legacy-free CPU that does not support 16 bit mode at all? Such a CPU exists, well, existed. It’s the 1989-2001 Intel 80376, an embedded version of the Intel i386.

The datasheet describes all the interesting differences. The 80376 does not support any 16 bit mode, so the “D” bit in segment descriptors must be set to 1 (page 25), forcing 32 bit code and data segments. 286-style descriptors are not supported either (page 27). (The 0x66 and 0x67 opcode prefixes still exist, so code can work on 16 bit registers and generate 16 bit addresses (page 14), just like an i386 in 32 bit mode.)

Since the CPU does not support 16 bit modes, it cannot do real mode, so CR0.PE is always 1. Consequently, an 80376 starts up in 32 bit protected mode, but otherwise, startup is just like on the i386 (page 19): EIP is 0x0000FFF0, CS is 0xF000, CS.BASE is 0xFFFF0000, CS.LIMIT is 0xFFFF, and the other segment registers are 0x0000, with a base of 0x00000000 and a limit of 0xFFFF. No GDT is set up, and in order to get the system into a sane state, loading a GDT and reloading the segment registers is still necessary. Too bad they didn’t set all bases to 0, all limits to 0xFFFFFFFF and EIP to 0xFFFFFFF0.

The 80376 is designed to be forward-compatible with the i386, so unsupported features are documented as “reserved” or “must be 0/1”, and legacy properties like the garbled segment descriptors are unchanged. All (properly written) 80376 software should also run on an i386 (page 1) – except for the first few startup instructions of course. Intel provides the following code sequence (page 20) that is to be executed directly after RESET to distinguish between the 80376 and the i386:

smsw bx
test bl, 1
jnz is_80376

This tests for CR0.PE, which is hardcoded to 1 on the 80376 and is 0 on RESET on an i386. The three instructions are bitness agnostic, i.e. the encoding is identical in 16 and 32 bit mode.

Sounds like the perfect CPU? Well, here comes the catch: The 80376 doesn’t do paging. CR2 and CR3 don’t exist (it is undocumented whether accessing them causes an exception), CR0.PG is hardcoded to 0 (page 8) and the #PF exception does not exist (page 17). A man can dream though… a man can dream.

For Lisa, the World Ended in 1995

Wednesday, October 27th, 2010

If you try to set the clock in Lisa OS 3.1 to 2010, you’re out of luck:

You can only enter years from 1981 to 1995. That’s a span of 15 years – why? And what happens if the clock runs past the end of 1995?

Well, it wraps around to 1 Jan 1980.

But why does it not allow entering 1980 then? Here’s why:

Whenever the clock is set to 1980, Lisa OS thinks the clock has not been set up properly. So the year is apparently stored as a 4 bit offset from 1980: the values 1 through 15 map to 1981 through 1995, and 0 means “not set”. Too bad – a 5 bit counter could have made it into 2011, and we all know that’s way more than ever needed.

Name that Ware

Tuesday, October 26th, 2010