Category Archives: security

Copy Protection Traps in GEOS for C64

Major GEOS applications on the Commodore 64 protect themselves from unauthorized duplication by keying themselves to the operating system's serial number. To protect this mechanism from tampering, the system contains some elaborate traps, which will be discussed in this article.

GEOS Copy Protection

The GEOS boot disk protects itself with a complex copy protection scheme, which uses code uploaded to the disk drive to verify the authenticity of the boot disk. Berkeley Softworks, the creators of GEOS, found it necessary to also protect their applications like geoCalc and geoPublish from unauthorized duplication. Since these applications were running inside the GEOS "KERNAL" environment, which abstracted most hardware details away, these applications could not use the same kind of low-level tricks that the system was using to protect itself.

Serial Numbers for Protection

The solution was to use serial numbers. On the very first boot, the GEOS system created a 16-bit random number, the "serial number", and stored it in the KERNAL binary. (Since the system came with a "backup" boot disk, the system asked for that disk to be inserted, and stored the same serial in the backup's KERNAL.) Now whenever an application was run for the first time, it read the system's serial number and stored it in the application's binary. On subsequent runs, it read the system's serial number and compared it with the stored version. If the serial numbers didn't match, the application knew it was running on a different GEOS system than the first time – presumably as a copy on someone else's system: Since the boot disk could not be copied, two different people had to buy their own copies of GEOS, and different copies of GEOS had different serial numbers.
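In pseudocode, the keying scheme described above might look like this (a minimal Python sketch with invented names and serial values, not actual GEOS code):

```python
# Sketch of the serial-number keying scheme; class and values are invented.
class KeyedApplication:
    def __init__(self):
        self.stored_serial = None          # fresh, unkeyed binary

    def run(self, system_serial):
        if self.stored_serial is None:
            # First run: store the system's 16-bit serial in the binary.
            self.stored_serial = system_serial
            return True
        # Later runs: only accept the GEOS system we were keyed to.
        return self.stored_serial == system_serial

app = KeyedApplication()
assert app.run(0x1234)       # first run on system A: keys itself
assert app.run(0x1234)       # still on system A: runs fine
assert not app.run(0x5678)   # copied to system B: refuses (or sabotages)
```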

Serial Numbers in Practice

The code to verify the serial number usually looked something like this:

.,D5EF  20 D8 C1    JSR $C1D8 ; GetSerialNumber
.,D5F2  A5 03       LDA $03   ; read the hi byte
.,D5F4  CD 2F D8    CMP $D82F ; compare with stored version
.,D5F7  F0 03       BEQ $D5FC ; branch if equal
.,D5F9  EE 18 C2    INC $C218 ; sabotage LdDeskAcc syscall: increment vector
.,D5FC  A0 00       LDY #$00  ; ...

If the highest 8 bits of the serial don't match the value stored in the application's binary, it increments the pointer of the LdDeskAcc vector. This code was taken from the "DeskTop" file manager, which uses this subtle sabotage to make loading a "desk accessory" (a small helper program that can be run from within an application) unstable. Every time DeskTop gets loaded, the pointer gets incremented, and while LdDeskAcc might still work by coincidence the first few times (because it only skips a few instructions), it will break eventually. Other applications used different checks and sabotaged the system in different ways, but they all had in common that they called GetSerialNumber.

(DeskTop came with every GEOS system and didn't need any extra copy protection, but it checked the serial anyway to prevent users from permanently changing their GEOS serial to match one specific pirated application.)
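A toy model shows why this sabotage is subtle (the instruction lengths are invented, not the real LdDeskAcc routine): the vector normally points at an instruction boundary, and bumping it one byte per run sometimes lands on another boundary by coincidence.

```python
# Toy model of the INC-vector sabotage; instruction lengths are invented.
instruction_lengths = [2, 3, 1, 3, 2]     # a hypothetical LdDeskAcc routine
valid_entry_offsets = {0}
pos = 0
for length in instruction_lengths:
    pos += length
    valid_entry_offsets.add(pos)          # byte offsets of instruction starts

# Each failed serial check executes INC on the vector, entering one byte later:
results = [offset in valid_entry_offsets for offset in range(1, 6)]
# Offsets 2 and 5 happen to hit instruction boundaries, so LdDeskAcc still
# "works by coincidence" (it merely skips instructions); offsets 1, 3 and 4
# enter mid-instruction and execute garbage.
assert results == [False, True, False, False, True]
```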

A Potential Generic Hack

The downside of this scheme is that all applications are protected the same way, and a single hack could potentially circumvent the protection of all applications.

A generic hack would change the system's GetSerialNumber implementation to return exactly the serial number expected by the application, by reading the saved value from the application's binary. The address where the saved value is stored is different for every application, so the hack could either analyze the instructions after the GetSerialNumber call to detect the address, or come with a small table of these addresses for all major applications.
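The instruction-analysis variant could be as simple as scanning the loaded application for the byte pattern around the check. This sketch uses real 6502 opcodes and the addresses from the listings in this article, but it assumes the application calls straight through the public $C196 vector – it is an illustration, not an actual period hack:

```python
# Sketch: find where an application stores its cached serial by scanning for
# JSR $C196 / LDA $03 / CMP $xxxx. Opcodes are real 6502 encodings; the
# assumption that the call goes directly through $C196 is mine.
def find_stored_serial_address(mem):
    """Return the absolute address operand of the CMP, or None."""
    for i in range(len(mem) - 7):
        if (mem[i] == 0x20 and mem[i+1] == 0x96 and mem[i+2] == 0xC1  # JSR $C196
                and mem[i+3] == 0xA5 and mem[i+4] == 0x03             # LDA $03
                and mem[i+5] == 0xCD):                                # CMP abs
            return mem[i+6] | (mem[i+7] << 8)                         # lo, hi
    return None

code = bytes([0x20, 0x96, 0xC1,   # JSR $C196  ; GetSerialNumber
              0xA5, 0x03,         # LDA $03    ; hi byte of serial
              0xCD, 0x2F, 0xD8])  # CMP $D82F  ; compare with stored copy
assert find_stored_serial_address(code) == 0xD82F
```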

GEOS supports auto-execute applications (file type $0E) that will be executed right after boot – this would be the perfect way to make this hack available at startup without patching the (encrypted) system files.

Trap 1: Preventing Changing the Vector

Such a hack would change the GetSerialNumber vector in the system call jump table to point to new code in some previously unused memory. But the GEOS KERNAL has some code to counter this:

                                ; (Y = $FF from the code before)
.,EE59  B9 98 C0    LDA $C098,Y ; read lo byte of GetSerialNumber vector
.,EE5C  18          CLC
.,EE5D  69 5A       ADC #$5A    ; add $5A
.,EE5F  99 38 C0    STA $C038,Y ; overwrite low byte GraphicsString vector

In the middle of code that deals with the menu bar and menus, it uses this obfuscated code to sabotage the GraphicsString system call if the GetSerialNumber vector was changed. If the GetSerialNumber vector is unchanged, these instructions are effectively a no-op: The lo byte of the system's GetSerialNumber vector ($F3) plus $5A equals the lo byte of the GraphicsString vector ($4D). But if the GetSerialNumber vector was changed, then GraphicsString will point to a random location and probably crash.

Berkeley Softworks was cross-developing GEOS on UNIX machines with a toolchain that supported complex expressions, so they probably used source code like this:

    ; Y = $FF
    lda GetSerialNumber + 1 - $FF,y
    clc
    adc #<(_GraphicsString - _GetSerialNumber)
    sta GraphicsString + 1 - $FF,y

In fact, different variations of GEOS (like the GeoRAM version) were separate builds with different build-time arguments, and because of the different memory layouts, they used different ADC values here.
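The byte arithmetic of this trap is easy to verify (a quick Python check of the wrap-around addition described above):

```python
# The trap's byte arithmetic: the lo byte of the real GetSerialNumber vector
# ($F3) plus the constant $5A wraps around to the lo byte of GraphicsString
# ($4D), so the STA rewrites the vector with the value it already has.
def patch_graphicsstring_lo(get_serial_vector_lo):
    # LDA vector_lo / CLC / ADC #$5A / STA graphicsstring_lo
    return (get_serial_vector_lo + 0x5A) & 0xFF

assert patch_graphicsstring_lo(0xF3) == 0x4D  # unchanged vector: a no-op
assert patch_graphicsstring_lo(0x00) != 0x4D  # hooked vector: GraphicsString
                                              # now points somewhere random
```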

Note that the pointers to the GetSerialNumber and GraphicsString vectors are obfuscated, so an attacker who has detected the trashed GraphicsString vector won't be able to find the sabotage code by searching for either address.

Trap 2: Preventing Changing the Implementation

If the hack can't change the GetSerialNumber vector, it could put a JMP instruction at the beginning of the implementation to the new code. But the GEOS KERNAL counters this as well. The GetSerialNumber implementation looks like this:

.,CFF3  AD A7 9E    LDA $9EA7 ; load lo byte of serial
.,CFF6  85 02       STA $02   ; into return value (lo)
.,CFF8  AD A8 9E    LDA $9EA8 ; load hi byte of serial
.,CFFB  85 03       STA $03   ; into return value (hi)
.,CFFD  60          RTS       ; return

At the end of the system call function UseSystemFont, it does this:

.,E6C9  AD 2F D8    LDA $D82F ; read copy of hi byte of serial
.,E6CC  D0 06       BNE $E6D4 ; non-zero? done this before already
.,E6CE  20 F8 CF    JSR $CFF8 ; call second half of GetSerialNumber
.,E6D1  8D 2F D8    STA $D82F ; and store the hi byte in our copy
.,E6D4  60          RTS       ; ...

And in the middle of the system call function FindFTypes, it does this:

.,D5EB  A2 C1       LDX #$C1
.,D5ED  A9 96       LDA #$96  ; public GetSerialNumber vector ($C196)
.,D5EF  20 D8 C1    JSR $C1D8 ; "CallRoutine": call indirectly (obfuscation)
.,D5F2  A5 03       LDA $03   ; read hi byte of serial
.,D5F4  CD 2F D8    CMP $D82F ; compare with copy
.,D5F7  F0 03       BEQ $D5FC ; if identical, skip next instruction
.,D5F9  EE 18 C2    INC $C218 ; sabotage LdDeskAcc by incrementing its vector
.,D5FC  A0 00       LDY #$00  ; ...

So UseSystemFont makes a copy of the hi byte of the serial, and FindFTypes compares the copy with the serial – so what's the protection? The trick is that one path goes through the proper GetSerialNumber vector, while the other one calls into the bottom half of the original implementation. If the hack overwrites the first instruction of the implementation (or managed to disable the first trap and changed the system call vector directly), calling through the vector will reach the hack, while calling into the middle of the original implementation will still reach the original code. If the hack returns a different value than the original code, this will sabotage the system in a subtle way, by incrementing the LdDeskAcc system call vector.

Note that this code calls a KERNAL system function that will call GetSerialNumber indirectly, so the function pointer is split into two 8 bit loads and can't be found by just searching for the constant. Since the code in UseSystemFont doesn't call GetSerialNumber either, an attacker won't find a call to that function anywhere inside KERNAL.
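The two-path check can be boiled down to a few lines (a sketch with invented names; the real code paths are the UseSystemFont and FindFTypes routines listed above):

```python
# Sketch of the two-path trap: one path calls through the public vector, the
# other enters the middle of the original implementation, so a hook on the
# vector makes the two values disagree. Names and values are invented.
SERIAL_HI = 0x12

def original_implementation_bottom_half():   # like calling into $CFF8
    return SERIAL_HI

get_serial_vector = original_implementation_bottom_half  # public $C196 vector

def use_system_font_copy():
    # UseSystemFont caches the hi byte via the *implementation*, not the vector.
    return original_implementation_bottom_half()

def find_ftypes_check(copy):
    # FindFTypes reads the hi byte through the *vector* and compares.
    return get_serial_vector() == copy   # False -> increment LdDeskAcc vector

copy = use_system_font_copy()
assert find_ftypes_check(copy)           # unpatched system: no sabotage

get_serial_vector = lambda: 0x99         # a generic hack hooks the vector...
assert not find_ftypes_check(copy)       # ...and the trap fires
```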

Summary

I don't know whether anyone ever created a generic serial number hack for GEOS, but it would have been a major effort – in a time before emulators allowed for memory watchpoints. An attacker would have needed to suspect the memory corruption, compare memory dumps before and after, and then find the two places that change code. The LdDeskAcc sabotage would have been easy to find, because it encodes the address as a constant, but the GraphicsString sabotage would have been nearly impossible to find, because the trap uses neither the verbatim GraphicsString address nor the GetSerialNumber address.

The usual effective hacks were much more low-tech: they "un-keyed" applications, i.e. removed the cached serial number from their binaries, reverting them to their original, out-of-the-box state.

Wikileaks Movie “The Fifth Estate” pirated my “Xbox Hacking” Slides

Xbox hacking has made it to the silver screen, and Felix Domke and I (Michael Steil) are movie stars! …and so are at least 14 of my presentation slides!

This is a picture from the Julian Assange and Wikileaks movie The Fifth Estate (2013), starring Benedict Cumberbatch and Daniel Brühl, directed by Bill Condon:

“Linux is Inevitable”? Sounds like something I would say. In fact, looks like a slide from my presentation at the 24th Chaos Communication Congress in Berlin in December 2007:

Coincidence? Let’s get some context. The scene in the movie is indeed set at the 24th Chaos Communication Congress (24C3), where Julian Assange (Cumberbatch) presents his vision about Wikileaks in the break between two talks. From the movie’s screenplay (ironically leaked by Wikileaks):

                      OTTO 
           I'm afraid the small conference
           rooms are all booked.

                       DANIEL
                 (off the schedule)
           What about the auditorium? It's
           empty 'til the X-Box Security talk.

At the 24C3, Felix Domke and I indeed presented Why Silicon-Based Security is still that hard: Deconstructing Xbox 360 Security on day 2 at 16:00 in the main auditorium. (In reality, Julian Assange did not present in the main auditorium, but in a workshop area.)

 To one side of the stage, THE NEXT SPEAKER sets up a few
 deconstructed X-BOX 360s beside a CORKBOARD covered with
 exhibits for his talk: 'Deconstructing Xbox 360 Security.'


To be clear: These are pictures from the movie, not actual pictures from the conference. They really deconstructed an Xbox 360 for the movie!

 Daniel, also on stage, watches as Julian pulls out a WAD of
 TWINE, moves to the corkboard.

 The X-Box guy looks up, CONCERNED, as Julian wraps the twine
 around a PUSH PIN holding up part of the X-Box exhibit,
 STRINGING THE TWINE to another pin holding up another part.

 Julian POINTS to the two pins and the twine. ILLUSTRATING.

                       JULIAN 
                 (CONTINUED) 
           Two people and a secret. The
           beginning of any conspiracy, of all
           corruption. As it grows...

 Daniel watches, RIVETED, as Julian STRETCHES the twine to
 another pin. And another. And another...

                       JULIAN 
                 (CONTINUED) 
           More people... and more secrets.

 CLOSE ON THE BOARD as Julian RAPIDLY RUNS TWINE AROUND PINS,
 ensnaring more of the exhibit in his web. It's MESMERIZING.

                       JULIAN 
                 (CONTINUED) 
           But. If we can find one moral man,
           one whistleblower...

 Julian focuses on a PIN at the CENTER of his web of twine.
 CLOSE ON THE PIN as he LOOPS MORE TWINE AROUND IT...

                       JULIAN 
                 (CONTINUED) 
           Someone willing to expose these
           secrets --

 Julian PULLS THE TWINE TAUT... then YANKS THE CENTER PIN...
 PULLING THE TWINE SO THAT ALL THE PINS POP OUT OF THE BOARD.
 THE FLYERS FALL, SCATTERING ALL AROUND JULIAN.

                       JULIAN 
                 (CONTINUED) 
           That man... can topple the most
           repressive of regimes.

 Julian pins a WIKILEAKS FLYER on the NOW EMPTY CORKBOARD.

                       X BOX GUY 
           Was zur hoelle!

                       JULIAN 
           And there's the problem.
           Retribution.

                       X BOX GUY 
           Otto, my talk is in ten minutes.

 The X-Box guy, PISSED, looks to Otto who's with a CUTE
 HACKER GIRL in back of the EMPTY AUDITORIUM, maybe A DOZEN
 HACKERS.

In the movie, the “Xbox Guy” actually shouts “What the hell!” with a German accent.

I assume the person with the voltmeter is supposed to be Felix, and the angry German with the long hair (called “Game Console Hacker” in the credits, played by Christoph Franken) is supposed to be me.

On the corkboard in the movie, there are about two dozen printed slides.

This is a reconstructed version created from many individual frames of the scene:

It looks like the “Xbox Guy” is named “Denis Schnegg”, and the name of the talk is “Hacking Game Consoles”.

Most slides I could decipher are direct copies from slides from either our 24C3 talk, or our Google Tech Talk in 2008. Here are the screen captures of the slides in the movie, and the corresponding slides from our talks:

1 24c3, slide 6
2 Google, slide 6
3 24c3, slide 8
4 24c3, slide 7
5 Google, slide 24
6 Google, slide 26
7 Google, slide 28
8 Google, slide 36
9 Google, slide 8
10 Google, slide 9
11 Google, slide 15
12 Google, slide 11
13 Google, slide 14
25 24c3, slide 2

For reference, here are the full presentations the scene in the movie is based on:

Why Silicon-Based Security is still that hard: Deconstructing Xbox 360 Security (24C3)

The Xbox 360 Security System and its Weaknesses (Google TechTalk)

And since the producers of the movie consider it fair use to copy 14 of my slides without giving me credit, it must also be fair use to quote the scene of the movie here:

Leave security to security code. Or: Stop fixing bugs to make your software secure!

If you read about operating system security, it seems to be all about how many holes are discovered and how quickly they are fixed. If you look inside an OS vendor, you see lots of code auditing taking place. This assumes that all security holes can be found and fixed, and that they can be eliminated more quickly than new ones are added. Stop fixing bugs already, and take security seriously!

Recently, German publisher heise.de interviewed Felix “fefe” von Leitner, a security expert and CCC spokesperson, on Mac OS X security:

heise.de: Apple has put protection mechanisms like Data Execution Prevention, Address Space Layout Randomization and Sandboxing into OS X. Shouldn’t that be enough?

Felix von Leitner: All these are mitigations that make exploiting the holes harder but not impossible. And: The underlying holes are still there! They have to close these holes, and not just make exploiting them harder. (Das sind alles Mitigations, die das Ausnutzen von Lücken schwerer, aber nicht unmöglich machen. Und: Die unterliegenden Lücken sind noch da! Genau die muss man schließen und nicht bloß das Ausnutzen schwieriger machen.)

Security mechanisms make certain bugs impossible to exploit

A lot is wrong about this statement. First of all, the term “harder” is used incorrectly in this context. Making “exploiting holes harder but not impossible” would mean that an attacker has to put more effort into writing exploit code for a certain security-relevant bug, but achieves the same in the end. This is true for some special cases (with DEP on, certain code execution exploits can be converted into ROP exploits), but the whole point of mechanisms like DEP, ASLR and sandboxing is to make certain bugs impossible to exploit (e.g. directory traversal bugs can be made impossible with proper sandboxing) – while other bugs are unaffected (DEP can’t help against trashing of globals through an integer exploit). So mechanisms like DEP, ASLR and sandboxing make it harder to find exploitable bugs, not harder to exploit existing bugs. In other words: Every one of these mechanisms makes certain bugs non-exploitable, effectively decreasing the number of exploitable bugs in the system.

As a consequence, it does not matter whether the underlying bug is still there. It cannot be exploited. Imagine you have an application in a sandbox that restricts all file system accesses to /tmp – is it a bug if the application doesn’t check all user filenames redundantly? Does the US President have to lock the bedroom door in the White House, or can he trust the building to be secure? Of course, a point can be made for multiple layers of barriers in systems of high security, where a single breach can be disastrous and fixing a hole can be expensive (think: Xbox), but if you have to set priorities, it is smarter for the President to have security around the White House than to lock every door behind himself.
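The sandboxing point can be made concrete: if every file access is resolved against a fixed root, a directory traversal "bug" in the application simply stops being exploitable. This is a minimal illustration (the root path and function are invented, and a real sandbox would be enforced by the kernel, not the application):

```python
# Minimal path-confinement sketch: resolve every request against a fixed
# root and reject anything that escapes it. Illustration only; a real
# sandbox is enforced by the kernel, outside the application.
from pathlib import Path

SANDBOX_ROOT = Path("/tmp/app").resolve()   # hypothetical sandbox root

def open_sandboxed(user_path):
    # Resolve relative to the root, then verify the result never escapes
    # it -- even for hostile inputs like "../../etc/passwd".
    target = (SANDBOX_ROOT / user_path.lstrip("/")).resolve()
    if SANDBOX_ROOT not in target.parents and target != SANDBOX_ROOT:
        raise PermissionError(user_path)
    return target

assert open_sandboxed("data/report.txt") == SANDBOX_ROOT / "data/report.txt"
try:
    open_sandboxed("../../etc/passwd")    # traversal attempt
    assert False, "should have been blocked"
except PermissionError:
    pass                                  # the "bug" is unexploitable
```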

Symmetric and asymmetric security work

When an operating systems company has to decide on how to allocate its resources, it needs to be aware of symmetric and asymmetric work. Finding and fixing bugs is symmetric work. You are as efficient finding and fixing bugs as attackers are finding and exploiting them. For every hour you spend fixing bugs, attackers have to spend one more hour searching for them, roughly speaking. Adding mechanisms like ASLR is asymmetric work. It may take you 1000 hours to get it implemented, but over time, it will waste more than 1000 hours of your attackers’ time – or make the attacker realize that it’s too much work and not worth attacking the system.

Leave security to security code

Divide your code into security code and non-security code. Security code needs to be written by people with a security background, who keep the design and implementation simple and maintainable, and who are aware of common security pitfalls. Non-security code is code that never deals with security. It can be written by anyone. If a non-security project requires a small module that deals with security (e.g. one that verifies a login), push it into a different process – which is then security code.

Imagine for example a small server application that just serves some data from your disk publicly. Attackers have exploited it to serve anything from disk or to spread malware. Should you fix your application? Mind you, your application by itself has nothing to do with security. Why spend time adding a security infrastructure to it, fixing some of the holes, ignoring others, and adding more, instead of properly partitioning responsibilities and having every component do what it does best? The kernel can sandbox the server so that it can only access a single subdirectory and cannot write to the filesystem, and the server can stay simple instead of being bloated with security checks.

How many bugs in 1000 lines of code?

A lot of people seem to assume they can find and fix all bugs. Every non-trivial program contains at least one bug. The highly security-critical first-stage bootloader of the original Xbox was only 512 bytes in size. It consisted of about 200 hand-written assembly instructions. There was one bug in the design of the crypto system in the code, as well as two code execution bugs, one of which could be exploited in two different ways. In the revised version, one of the code execution bugs was fixed, and the crypto system had been replaced with one that had a different exploitable bug. Now extrapolate this to a 10+ MB binary like an HTML5 runtime (a.k.a. web browser) and think about whether looking for holes and fixing them makes a lot of sense. And keep in mind that a web browser is not all security-critical assembly carefully handwritten by security professionals.
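As a back-of-envelope version of that extrapolation: the bootloader numbers come from the text, but treating a 10 MB browser binary as ten million lines of equally bug-prone code is a deliberately crude assumption of mine.

```python
# Naive linear extrapolation of the bug density above. Bootloader numbers
# are from the text; the browser size-to-lines conversion is a deliberately
# crude assumption for illustration.
bootloader_instructions = 200
bootloader_bugs = 3           # 1 crypto design bug + 2 code execution bugs
bug_rate = bootloader_bugs / bootloader_instructions   # 0.015 bugs per line

browser_lines = 10_000_000    # very rough stand-in for a 10+ MB HTML5 runtime
estimated_bugs = int(browser_lines * bug_rate)
assert estimated_bugs == 150_000   # no audit finds them all
```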

Conclusion

So stop looking for and fixing holes, this won’t make an impact. If the hackers find one, instead of researching “how this could happen” and educating the programmers responsible for it, construct a system that mitigates these attacks without the worker code having to be security aware. Leave security to security code.

Playstation 3 Hacking – Linux Is Inevitable

In the talk “Why Silicon Security is still that hard” by Felix Domke at the 24th Chaos Communication Congress in 2007 (in which he described how he hacked the Xbox 360, and bushing had a cameo at the end explaining how they hacked the Wii), I had a little part, in which I argued that “Linux Is Inevitable”: If you lock down a system, it will eventually get hacked. In the light of the recent events happening with PlayStation 3 hacking, let’s revisit them.

This is the original slide from 2007:

device | year | security | hacked after | for | effect
PS2 | 1999 | ? | ? | piracy | -
dbox2 | 2000 | signed kernel | 3 months | Linux | pay TV decoding
GameCube | 2001 | encrypted boot | 12 months | Homebrew | piracy
Xbox | 2001 | encrypted/signed bootup, signed executables | 4 months | Linux, Homebrew | piracy
iPod | 2001 | checksum | <12 months | Linux | -
DS | 2004 | signed/encrypted executables | 6 months | Homebrew | piracy
PSP | 2004 | signed bootup/executables | 2 months | Homebrew | piracy
Xbox 360 | 2005 | encrypted/signed bootup, encrypted/signed executables, encrypted RAM, hypervisor, eFuses | 12 months | Linux, Homebrew | leaked keys
PS3 | 2006 | encrypted/signed bootup, encrypted/signed executables, hypervisor, eFuses, isolated SPU | not yet | - | -
Wii | 2006 | encrypted bootup | 1 month | Linux | piracy
AppleTV | 2007 | signed bootloader | 2 weeks | Linux | Front Row piracy
iPhone | 2007 | ? | 1 month | Homebrew | international SIM-Lock revenue

The table shows the relationship between the quality of a device’s security system and the time it took to hack it, as well as the original motivation for hacking and the side effects (collateral damage) it caused.

Correlation security/time to hack

There is a pretty clear correlation between the quality of a security system and the time required to hack it – the notable exception being the GameCube, which had rather weak security; but since its release coincided with the much more powerful Xbox, much of the hacker community neglected the GameCube until the Xbox was done. What can also be seen is that recently, devices tend to get hacked more quickly, probably simply because more and more people are interested in hacking.

Correlation Linux/time to hack

The other exception is the PlayStation 3, which was not hacked until about three and a half years after its introduction. I argued that this was because there was very little motivation to hack it: Sony shipped the devices with the “Other OS” option and even sponsored a port of Linux to it, allowing any user to install Linux if they wanted. Although Linux ran on top of a hypervisor and did not have access to all of the features of the device, this seems to have been enough to drain the hacker community's motivation to hack it.

Linux/homebrew is the primary motivation

This is supported by the fact that the motivation for hacking every system in the table was either homebrew (i.e. running unauthorized hobbyist applications) or Linux. Hackers seem to love converting their devices into Linux computers to run a big library of existing software, or hacking the device to run ports of existing emulators and games on the native OS.

Piracy is a side effect

None of the hacks in the table was done with the motivation to allow running copied games – but whenever the point of the security system was to prevent piracy, hacking it inevitably enabled piracy as a side effect. Some security systems protected other things like pay TV keys and SIM-locks; these also fell as side effects.

2010 update

In September 2009, Sony started shipping the “slim” model of the PlayStation 3, with the “Other OS” feature removed. With firmware 3.21 in April 2010, the feature was also removed from existing original models that users chose to upgrade – which was required for using any of the online features. The missing “Other OS” feature on the slim model motivated George Hotz (geohot) to hack into hypervisor mode (Jan 2010), but this approach did not lead to a working hack of the security system. In August 2010, the Australian company OzMods announced the commercial “PSJailbreak” USB dongle that hacks into non-hypervisor mode, allowing piracy and homebrew (“Backup Manager” says “backups and homebrew”).

Although this is the first time a commercial company was first to hack a system, and the first time piracy seems to have been a key motivation, the removal of “Other OS” might have been another motivation, and geohot’s previous attempts might have served as an entry point for this hack.

Usually, an open hacker community develops a hack, and commercial companies convert it into modchips. This time, a company developed a hack and a modchip, and the community reverse engineered it and ported the exploit code onto several other devices, allowing people to hack the PlayStation 3 without a dedicated device. And I’m sure Linux will soon be adapted to run in the new environment.

Conclusion

What do we learn from this? Linux is inevitable. Or maybe it should be “Homebrew is inevitable”. In the history of mankind, there has yet to be a popular system that is locked down to only allow certain software to run, but does not get hacked to run arbitrary code. I still dare to say that if Sony had not removed “Other OS”, the PlayStation 3 would have been the first system to not get hacked. At all.

(Here is an updated 2010 version of the table:)

device | year | security | hacked after | for | effect
PS2 | 1999 | ? | ? | piracy | -
dbox2 | 2000 | signed kernel | 3 months | Linux | pay TV decoding
GameCube | 2001 | encrypted boot | 12 months | Homebrew | piracy
Xbox | 2001 | encrypted/signed bootup, signed executables | 4 months | Linux, Homebrew | piracy
iPod | 2001 | checksum | <12 months | Linux | -
DS | 2004 | signed/encrypted executables | 6 months | Homebrew | piracy
PSP | 2004 | signed bootup/executables | 2 months | Homebrew | piracy
Xbox 360 | 2005 | encrypted/signed bootup, encrypted/signed executables, encrypted RAM, hypervisor, eFuses | 12 months | Linux, Homebrew | leaked keys
PS3 | 2006 | encrypted/signed bootup, encrypted/signed executables, hypervisor, eFuses, isolated SPU | 4 years | Piracy, Homebrew | -
Wii | 2006 | encrypted bootup | 1 month | Linux | piracy
AppleTV | 2007 | signed bootloader | 2 weeks | Linux | Front Row piracy
iPhone | 2007 | signed/encrypted bootup/executables | 11 days | Homebrew, SIM-Lock | piracy
iPad | 2010 | signed/encrypted bootup/executables | 1 day | Homebrew | piracy

Dangerous Xbox 360 Update Killing Homebrew

On Tuesday, Microsoft released an Xbox 360 software update that overwrites the first-stage bootloader of the system. Although there have been numerous software updates for Microsoft’s gaming console in the past, this is the first one to overwrite the vital boot block. Any failure while updating it will break the Xbox 360 beyond repair. Statistics from other systems have shown that about one in a thousand bootloader updates goes wrong, and unless Microsoft has a novel solution to this problem, this puts tens of thousands of Xboxes at risk.

It seems that this update is being done to fix a vulnerability already known to the Free60 Project. This vulnerability has been successfully exploited to run arbitrary code, and a complete end-user-compatible hack has been in development for some time and is planned to be released on free60.org shortly. It will allow users to take back control of their Xboxes and run arbitrary code like homebrew applications or Linux right after turning on the console and without the need of a modchip, finally opening up the Xbox 360 to the same level of hacking as the original Xbox.

Because of the danger posed by the update and the homebrew lockout, the Free60 Project advises all Xbox 360 users not to update their systems to the latest software version. The Project website at http://free60.org/ will provide the latest information on this ongoing topic, including the final hack software.

Free60 (www.free60.org) is a project that aims to enable Xbox 360 users to run homebrew applications and operating systems like Linux on their consoles. The effort is headed by Felix Domke and Michael Steil, who have a background in dbox2, Xbox and GameCube hacking, and who have spoken at various conferences about their findings. Two years ago, Free60 released a hack that allowed arbitrary code execution using a game (“King Kong Hack”) as well as an adapted version of Linux, but this possibility has been disabled by Microsoft in subsequent updates of the Xbox 360 software.

Felix and Michael have repeatedly argued that game console manufacturers should open up their platforms to Linux and homebrew, similar to what Sony has done with the PlayStation 3.

(Felix Domke, Michael Steil, Free60 Project; 11 August 2009)

A Lot of Security

I happened to drive through Cupertino, CA, USA last Wednesday and ended up in this situation:

Oh-oh, they got me. But they were not after me, they escorted two vans onto some company’s campus.


Five police cars, two police motorcycles, and lots of people with suits and sunglasses. For some reason, this outfit doesn’t have the same effect on me any more since “The Matrix”.



The people in the vans went into the building through a side door:




Here are some details:



A blonde woman in white, a woman in a red dress, a man in a brown uniform with a suitcase, and many more men in suits. The vans had license plates from Maryland.

The question of today’s security puzzle is: Who is the very important person?