
The History of OS Migration

Operating system vendors face this problem once or twice a decade: They need to migrate their user base from their old operating system to a very different new one, or they need to switch from one CPU architecture to another, while enabling users to run old applications unmodified and helping developers port their applications to the new OS. Let us look at how this has been done over the last three decades on DOS/Windows, Macintosh, Amiga and Palm.

CP/M to PC-DOS/MS-DOS

CP/M was an 8 bit operating system by Digital Research that ran on all kinds of Intel 8080-based systems. Seattle Computer Products’ “86-DOS”, which later became MS-DOS (called “PC-DOS” on IBM machines), was a clone of CP/M, but for the Intel 8086, much like DR’s own CP/M-86 (which later became DR-DOS).

While not binary compatible, the Intel 8086 was “assembly source compatible” with the 8080, which meant that it was easily possible to convert 8 bit 8080/Z80 assembly source into 8086 assembly, since the two architectures were very similar (backward-compatible memory model; one register set could be mapped onto the other) and only the instruction encoding was different.

Since MS-DOS implemented the same ABI and memory map, it was source-code compatible with CP/M. On a CP/M system, which could access a total of 64 KB of memory, the region from 0x0000 to 0x0100 (the “zero page”) was reserved for the operating system and contained, among other things, the command line arguments. The running application was located from 0x0100 up, and the operating system sat at the top of memory, with the application stack growing down from just below the start of the OS. The memory model of the 8086 partitions memory into (overlapping) chunks of contiguous 64 KB, so one of these segments is basically a virtual 8080 machine. MS-DOS “.COM” files are executables below 64 KB that are loaded at address 0x0100 of such a segment. The region 0x0000-0x0100 is called the Program Segment Prefix (PSP) and is very similar to the CP/M zero page; the stack grows down from the end of the code segment, and the operating system resides in a different segment.
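As a rough illustration, here is the start of the PSP expressed as a C struct; the offsets are the well-known ones, but the struct and its field names are mine:

#include <stdint.h>

struct psp {
    uint8_t  exit_code[2];   /* 0x00: INT 20h instruction (CP/M: JMP to warm boot) */
    uint16_t mem_top;        /* 0x02: first segment beyond the allocated memory */
    uint8_t  reserved[0x7C]; /* 0x04: FCBs, the CP/M-style "CALL 5" entry, etc. */
    uint8_t  cmd_len;        /* 0x80: length of the command tail */
    uint8_t  cmd[0x7F];      /* 0x81: command tail, at the same offset as on CP/M */
};

The whole structure is exactly 256 bytes, i.e. the size of the CP/M zero page.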

Because of the high compatibility of the CPUs and the ABIs, a port of a program from CP/M to DOS was pretty painless. Such a direct port could only support up to 64 KB of memory (like WordStar 3.0 from 1982), but it was also possible to maintain a single source base for both CP/M and MS-DOS just by using a few macros and two different assemblers.

DOS 2.0 then introduced more powerful APIs (file handles instead of FCBs, subdirectories, relocatable .EXE files), obsoleting most of the CP/M API – but DOS kept CP/M compatibility until the last version.

CP/M to PC-DOS/MS-DOS
Change: New CPU, new OS codebase
Running new applications: Native
Running old applications: Not supported
Running old drivers: Not supported
Porting applications: High level of source/ABI compatibility

DOS to Windows

Microsoft Windows was first architected as a graphical shell on top of MS-DOS: All device and filesystem access was done by making DOS API calls, so all MS-DOS drivers ran natively, and Windows could use them. DOS applications could still be used by just exiting Windows.

Windows/386 2.1 changed this model, as it was a real operating system kernel that ran a number of “virtual 8086 mode” (V86) virtual machines side by side: One for the MS-DOS operating system and one per Windows application. The DOS VM was used by Windows to call out to device drivers and the filesystem, so it was basically a driver compatibility environment running inside a VM. The user could start any number of additional DOS VMs to run DOS applications, and each of these contained a copy of DOS. Windows hooked memory accesses to screen RAM as well as some system calls to route them to the Windows graphics driver or through the “root” DOS VM.

Windows 3.x started using Windows-native drivers that replaced calls into the DOS VM, and had the DOS VM call up to Windows for certain device accesses. The standard Windows 95 installation didn’t use the DOS VM for drivers or the filesystem at all, but could do so if necessary.

DOS was not only a compatibility environment for old drivers and applications, it was also the command line of Windows, so when Windows 95 introduced long file names, it trapped DOS API calls to provide a new interface for this functionality to command line tools.

MS-DOS to Windows
Change: New OS
Running new applications: Native
Running old applications: Virtual machine with old OS
Running old drivers: Virtual machine with old OS
Porting applications: No migration path

DOS to Windows NT

Windows NT was never based on DOS, but still allowed running MS-DOS applications since its first version, NT 3.1. Like non-NT Windows, it runs DOS applications in V86 mode. But instead of running a copy of MS-DOS, using its logic and trapping its device accesses, NT just runs the application in V86 mode and traps all system calls and I/O accesses and maps them to NT API calls. It is not a virtual machine: V86 mode is merely used to provide the memory model necessary to support DOS applications.
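Conceptually, the approach looks like the following sketch; the dispatch structure and all names are hypothetical, only the INT 21h, AH=3Dh “open file” calling convention is actual DOS:

#include <stdint.h>

struct v86_regs { uint16_t ax, ds, dx; };

/* hypothetical native implementation backing the DOS call */
static uint16_t nt_open_file(const char *path) { (void)path; return 5; }

/* translate a segment:offset pair into a pointer into the V86 address space */
static const char *v86_ptr(uint16_t seg, uint16_t off)
{
    return (const char *)(uintptr_t)((uint32_t)seg * 16 + off);
}

/* called whenever the DOS program executes INT n in V86 mode */
void handle_software_interrupt(int vector, struct v86_regs *r)
{
    if (vector == 0x21) {          /* the DOS system call interrupt */
        switch (r->ax >> 8) {      /* AH selects the DOS function */
        case 0x3D:                 /* open file, DS:DX points to the path */
            r->ax = nt_open_file(v86_ptr(r->ds, r->dx));
            break;
        /* ... every other DOS function is mapped the same way ... */
        }
    }
}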

One common misconception is that the Windows NT command line is a “DOS box”: The command line interpreter and its support tools are native NT applications, and a Virtual DOS Machine (NTVDM) is not started until a real DOS program is launched from the command line.

DOS to Windows NT
Change: New OS
Running new applications: Native
Running old applications: API reimplementation
Running old drivers: Not supported
Porting applications: No migration path

Windows 3.1 (Win16) to Windows 95 (Win32)

Since the release of Windows NT 3.1 in 1993, it was clear that NT would eventually replace classic Windows, but although it had the same look and feel, good Win16 compatibility, and decent DOS compatibility, the current version of NT always required quite high-end hardware for its time. The migration from Windows to Windows NT was done by slowly making Windows more like Windows NT, and when the two were similar enough, and even low-end computers were powerful enough to run NT well, switching the users to the new codebase.

The big step to make Windows more like Windows NT was supporting NT’s 32 bit Win32 API: The first step was the free “Win32S” update to Windows 3.1, which provided a subset (thus the “S”) of the Win32 API on classic Windows. Win32S extended the Windows kernel to create a single 32 bit address space shared by all 32 bit applications (NT had a separate address space for each application). It also provided ported versions of some new NT libraries (e.g. RICHED32.DLL), as well as 32 bit DLLs that accepted the low-level Win32 API calls (“GDI” and “USER”) and forwarded them to the Win16 system (“thunking”).

Windows 95 included this functionality by default, ran 32 bit applications in separate address spaces, supported more of the Win32 API and included several 32 bit core applications (like Explorer), but a good chunk of the core system was still 16 bit. With Windows 95, most developers switched to writing 32 bit applications, making them instantly available as native applications on Windows NT.

Windows 3.1 (Win16) to Windows 95 (Win32)
Change: New CPU mode/bitness
Running new applications: Thunking
Running old applications: Native
Running old drivers: Native
Porting applications: High level of source compatibility

Windows 9X to Windows NT

The second step in the migration from 16 bit Windows to Windows NT was the switch from Windows ME to the NT-based Windows XP in 2001. Windows NT (2000/XP/…) was a fully 32 bit operating system with the Win32 API, but it also allowed running 16 bit Windows applications by forwarding their Win16 API calls to the Win32 libraries (thunking).

The driver models of Windows NT 3.1/3.5/4.0 (“Windows NT Driver Model”) and classic Windows (“VxD”) were different, so Windows 98 (the successor of Windows 95) and Windows 2000 (the successor of Windows NT 4.0) both supported the new “Windows Driver Model”. A single driver could now work on both operating systems, but each OS continued to support its original driver model.

When Microsoft switched the home users to the NT codebase, most current applications, games and drivers worked on Windows XP as well. It was only the system tools that had to be rewritten.

Windows 9X to Windows NT
Change: New OS
Running new applications: Native
Running old applications: Win16: thunking; Win32: native
Running old drivers: Providing the same API for the old OS
Porting applications: High level of source compatibility; providing the same API for the old OS

Windows i386 (Win32) to Windows x64/x86_64 (Win64)

The switch from 32 bit Windows to 64 bit Windows is currently in progress: Windows XP was the first version to be available for AMD64/Intel64, and both Windows Vista and Windows 7 are available in 32 bit and 64 bit editions. On the 64 bit edition, the kernel is 64 bit native, and so are all libraries and most applications. The 32 bit API is still supported using the “WOW64” (Windows-on-Windows 64-bit) subsystem: A 32 bit application links against all 32 bit libraries, but the low-level API calls it makes get translated by the WOW64 DLL into their 64 bit counterparts.

Since drivers run in the same address space as the kernel, 32 bit drivers could not be easily supported on 64 bit Windows, and thus are not. Support for DOS and Win16 applications was dropped on 64 bit Windows.

Windows i386 (Win32) to Windows x64/x86_64 (Win64)
Change: New CPU mode/bitness
Running new applications: Native
Running old applications: Thunking
Running old drivers: Not supported
Porting applications: High level of source compatibility

Macintosh on 68K to Macintosh on PowerPC

Apple switched their computers from using Motorola 68K processors to Motorola/IBM PowerPC processors between 1994 and 1996. Since the Macintosh operating system, called System 7 at that time, was mostly written in 68K assembly, it could not be easily converted into a PowerPC operating system. Instead, most of the system was run in emulation: The new “nanokernel” handled and dispatched interrupts and did some basic memory management to abstract away the PowerPC, and the tightly integrated 68K emulator ran the old operating system code, which was modified to hook into the nanokernel for interrupts and memory management. So System 7.1.2 for PowerPC was basically a paravirtualized operating system running inside emulation on top of a very thin hypervisor.

The first version of Mac OS for PowerPC ran most of the operating system inside 68K emulation, even drivers, but some performance-sensitive code was native. The executable loader detected binaries with PowerPC code in them and could run them natively inside the same context. Most communication to the OS APIs went back through the emulator. Later versions of Mac OS replaced more and more of the 68K code with PowerPC code.

Macintosh on 68K to Macintosh on PowerPC
Change: New CPU
Running new applications: Thunking
Running old applications: Paravirtualized old OS in emulator
Running old drivers: Paravirtualized old OS in emulator
Porting applications: High level of source compatibility

Classic Mac OS to Mac OS X

Just like Microsoft switched from Windows to Windows NT, Apple switched from Classic Mac OS to Mac OS X. While Classic Mac OS was a hacky OS with cooperative multitasking and without memory protection that still ran some of the OS code in 68K emulation, Mac OS X was based on NEXTSTEP, a modern UNIX-like operating system with a completely different API.

When Apple decided to migrate towards a new operating system, they ported the system libraries of Classic Mac OS (“Toolbox”) to Mac OS X, omitting the calls that could not be supported on the modern OS (and replacing them with alternatives), and called the new API “Carbon”. They provided the same API for (Classic) Mac OS 8.1 in 1998, so developers could already update their applications for OS X, while maintaining compatibility with Classic Mac OS. When Mac OS X was introduced in 2001, binaries of “carbonized” applications would then run unmodified on both operating systems. This is similar to the “make Windows more like Windows NT” approach by Microsoft.

But since not all applications were expected to exist as carbonized versions with the introduction of OS X, the new operating system also contained a virtual machine called “Classic” or “Blue Box” in which the unmodified Mac OS 9 was run together with any number of legacy applications. Hooks were installed inside the VM to route network and filesystem requests to the host OS, and window manager integration allowed the two desktop environments to blend almost seamlessly together.

Classic Mac OS to Mac OS X
Change: New OS
Running new applications: Native
Running old applications: Classic: virtual machine with old OS; Carbon: intermediate API for both systems
Running old drivers: Virtual machine with old OS
Porting applications: Intermediate API for both systems

Mac OS X on PowerPC to Mac OS X on Intel

In 2005, Apple announced that they would switch CPUs a second time, this time away from the PowerPC towards the Intel i386 architecture. Being a modern operating system mostly written in C and Objective-C, Mac OS X could easily be ported to i386 – actually, Apple claims to have maintained i386 versions of the whole operating system since the first release.

In order to run legacy applications that had not yet been ported to i386, Apple included the emulator “Rosetta” with the operating system. This time, it was not tightly integrated into the kernel as with the 68K to PowerPC switch; instead, the kernel merely gained support for launching an external recompiler with the application as a parameter whenever a PowerPC application was started. Rosetta translated all application code as well as the libraries it linked against, and interfaced to the native OS kernel.
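The mechanism can be pictured like the following sketch; the detection logic is simplified, and the path of the translator binary is an assumption on my part:

#include <unistd.h>

/* If the image contains PowerPC code, re-route the exec through the
   user mode emulator, passing the original binary as an argument;
   otherwise, execute the image directly. (Argument and environment
   passing are simplified here.) */
int exec_image(const char *path, char *const argv[], int is_powerpc)
{
    if (is_powerpc) {
        char *const emu_argv[] =
            { "/usr/libexec/oah/translate", (char *)path, NULL };
        return execv(emu_argv[0], emu_argv);
    }
    return execv(path, argv);
}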

Mac OS X on PowerPC to Mac OS X on Intel
Change: New CPU
Running new applications: Native
Running old applications: User mode emulator
Running old drivers: Not supported
Porting applications: High level of source compatibility

Mac OS X (32 bit) to Mac OS X (64 bit)

The next switch for Apple was the migration from 32 bit Intel (i386) to 64 bit (x86_64) in Mac OS X 10.4 in 2006. Although the whole operating system could have been ported to 64 bit, as was done with Windows, Apple decided to take an approach closer to the Windows 95 one: The kernel stayed 32 bit, but gained support for 64 bit user applications. All applications and drivers on the system were still 32 bit, but some system libraries were also available as ported 64 bit versions. A 64 bit application thus linked against 64 bit libraries, and made 64 bit syscalls that were converted to 32 bit calls inside the kernel.

Mac OS X 10.5 then provided all libraries in 64 bit versions, but the kernel remained 32 bit. OS X 10.6 will be the first version with a 64 bit kernel, requiring new 64 bit drivers.

Mac OS X (32 bit) to Mac OS X (64 bit)
Change: New CPU mode/bitness
Running new applications: Thunking
Running old applications: Native
Running old drivers: Native
Porting applications: Carbon: not supported; Cocoa: high level of source compatibility

AmigaOS on 68K to AmigaOS on PowerPC

The Amiga platform ran the same OS on the same 68K CPU architecture throughout the Commodore days, between 1985 and 1994, but third-party manufacturers offered PowerPC CPU upgrade boards starting in 1997. These third parties could not port the closed source operating system to PowerPC, so AmigaOS continued to run on the 68K CPU, and an extension in the binary loader detected PowerPC code and handed it off to the other CPU. All system calls then went through a thunking library back to the 68K.

AmigaOS 4 (2006) is a native port of AmigaOS to the PowerPC, which was a major effort, since a lot of operating system code had to be converted from BCPL to C first. 68K application support is done by emulating the binary code and interfacing it to the native API.

AmigaOS on 68K to AmigaOS on PowerPC (3.x)
Change: New CPU
Running new applications: Thunking (new CPU)
Running old applications: Native (old CPU)
Running old drivers: Native
Porting applications: High level of source compatibility

AmigaOS on 68K to AmigaOS on PowerPC (4.0)
Change: New CPU
Running new applications: Native
Running old applications: User mode emulator
Running old drivers: Not supported
Porting applications: High level of source compatibility

Palm OS on 68K to Palm OS on ARM

Palm switched from 68K processors to the ARM architecture with Palm OS 5 in 2002. The operating system was ported to ARM, and the “Palm Application Compatibility Environment” (“PACE”) 68K emulator was included to run old applications. But Palm discouraged developers from switching to ARM code and did not even provide an environment in the OS to run native ARM applications. They claimed that most applications on Palm OS did most of their work in native operating system code anyway, so they would not see a significant speedup.

But for applications that were heavily CPU bound and contained compression or crypto code, Palm provided a way to run small chunks of native ARM code inside a 68K application. These “ARMlets” (later called “PNOlets” for “Palm Native Object”) could be called from 68K code and provided a minimal interface with a single integer for input and output, so the developer had to pass extra parameters in structs and take care of endianness and alignment manually. ARM code could neither call back to 68K code, nor could it call the operating system API directly.
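The marshalling burden looks roughly like this on the ARM side; the struct layout and the entry point name are hypothetical (this is not the actual Palm OS PNOlet API), the point is the manual byte swapping:

#include <stdint.h>

struct args68k {             /* filled in by big-endian 68K code */
    uint32_t input_be;
    uint32_t output_be;
};

static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0xFF00) |
           ((v << 8) & 0xFF0000) | (v << 24);
}

/* ARM-side entry point: a single pointer-sized value in and out */
uint32_t pnolet_main(void *param)
{
    struct args68k *a = param;
    uint32_t input = swap32(a->input_be);  /* convert from big-endian */
    uint32_t result = input * 2;           /* the actual native work */
    a->output_be = swap32(result);         /* convert back for the 68K caller */
    return 0;
}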

Sticking with 68K code for most applications practically meant having a virtual architecture for user mode programs, not unlike Java or .NET. The switch to ARM was mostly unnoticed by developers, and this approach could have allowed Palm to switch architectures again in the future with little effort.

Palm OS on 68K to Palm OS on ARM
Change: New CPU
Running new applications: Not supported (PNOlet for subroutines)
Running old applications: User mode emulation
Running old drivers: Not supported
Porting applications: Not supported (PNOlet for subroutines)

Summary

Let us summarize the OS and CPU switches and how the different vendors approached their respective problems.

Bitness

Switching to a new CPU mode is the easiest change in an operating system, since old application code can run natively, and API calls can be translated. An operating system can either stay in the old bitness and convert calls from new applications to the old system, or move up to the new bitness and convert calls from old applications. There are also two places to hook the calls: An operating system could hook high-level API calls like creating a GUI window, but this is hard to do, since the high-level API is typically very wide, and it is very hard to get a converter for so many calls correct and compatible. Alternatively, the OS can convert low-level system calls; with this solution, the interface is quite narrow. But since all old applications link against the old libraries and new applications against the new libraries, equivalent libraries will end up twice in memory if the user runs old and new applications concurrently.
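As a toy example of kernel-level thunking, here is what widening a 32 bit system call for a 64 bit kernel could look like; all names are made up, only the pattern matters:

#include <stdint.h>

struct timespec32 { int32_t tv_sec; int32_t tv_nsec; };
struct timespec64 { int64_t tv_sec; int64_t tv_nsec; };

/* the one native 64 bit implementation (stubbed out here) */
static long sys_nanosleep64(const struct timespec64 *req)
{
    (void)req;
    return 0;
}

/* entry point for 32 bit callers: widen the arguments, then call
   the native implementation */
long compat_sys_nanosleep(const struct timespec32 *req32)
{
    struct timespec64 req64 = {
        .tv_sec  = req32->tv_sec,
        .tv_nsec = req32->tv_nsec,
    };
    return sys_nanosleep64(&req64);
}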

New CPU mode/bitness
OS          Old mode  New mode  Thunking direction  Thunking level
Windows     16 bit    32 bit    new to old          library
Windows NT  32 bit    64 bit    old to new          kernel
Mac OS X    32 bit    64 bit    new to old          kernel

For the 16 to 32 bit switch in Windows, the operating system stayed 16 bit and converted 32 bit calls into 16 bit calls at the API level. When Windows NT switched from 32 bit to 64 bit, the whole OS became 64 bit, and low-level kernel calls were converted for old applications. The same switch was done differently on Mac OS X: The OS stayed 32 bit, and 64 bit calls were translated at the kernel level.

The solutions of Windows NT and Mac OS X are quite similar, as they both run all 32 bit code with 32 bit libraries, and all 64 bit code with 64 bit libraries, and it is just the kernel that is different. For Windows, this has the advantage of having access to more than 4 GB in kernel mode, as well as some speedup from the new registers in x86_64 long mode, and for Mac OS X, it has the advantage of running old 32 bit drivers unmodified. (In a second step, Mac OS X later switched to a 64 bit kernel.)

CPU

It is harder to switch to a new CPU, because the new CPU just cannot run the old application code any more, and some operating systems cannot be easily adapted to a new CPU.

New CPU
OS         Old CPU   New CPU  Running old apps             Thunking level
CP/M, DOS  8080/Z80  8086     Developer has to recompile   -
Macintosh  68K       PowerPC  Run OS and app in emulation  -
Mac OS X   PowerPC   i386     User mode emulation          kernel
Amiga      68K       PowerPC  Dual-CPU                     library
Palm       68K       ARM      User mode emulation          library

Mac OS X and Palm OS were written in a sufficiently platform-independent way that they could be ported to the new architecture; both included recompilers that ran the old code. This is the easy way. AmigaOS could not be ported, because the source code was not available, so systems had both CPUs: the original operating system code ran on the old CPU, and new applications ran on the new CPU, switching back to the old CPU for system calls.

For Classic Macintosh (68K to PowerPC), the OS source code was available, but could not be ported easily, so it was done similarly to the Amiga, although with a single CPU: Most of the old operating system ran inside emulation, and new applications ran natively, calling back into the emulator for system calls.

DOS was a reimplementation of the old OS by a different company and did not support running old binary code; instead, developers had to recompile their code.

OS

Switching to a new operating system, but keeping your users and developers is the hardest of all switches.

New OS
Old OS      New OS      Running old apps
CP/M        DOS         Compatible API
DOS         Windows     Virtual machine with old OS
DOS         Windows NT  API emulation
Windows 9X  Windows NT  Compatible API
Mac OS      Mac OS X    Classic: virtual machine with old OS; Carbon: compatible API

The approach to take depends on the vendor’s plans for the API of the old operating system. If the API is good enough to be worth supporting in the new OS, the new OS should just provide the same API. This was the case for the CP/M to DOS and the Windows 9X to Windows NT migrations. In a way, this was also true for Classic Mac OS to Mac OS X, but in this case, Carbon was not the main API of the new OS, but one of three APIs (Carbon, Cocoa, Java) – and everything but Cocoa is pretty much deprecated today.

If the old API is not worth maintaining on the new OS, but it is important that old applications run very well, it makes sense to run the old operating system in a virtual machine, together with its applications. This was done by Windows to run DOS applications as well as Mac OS X to run old Mac OS applications.

If the OS interface of the old operating system is relatively small and simple, or perfect accuracy is not necessary, the best solution might be API emulation, i.e. hooking the system calls of the old application and mapping them onto the new operating system. This was done by Windows NT to run DOS applications, and was only moderately compatible.

Conclusion

It is interesting how different the solutions for all these OS migrations were: hardly any two instances followed the same approach. The reason might be that the situations were all subtly different, and a lot of time was spent working out the perfect solution for each specific problem.

But there is a trend: As systems get more modern, solutions tend to get less hacky, and migrations tend to happen in many small steps instead of a few big ones. Modern operating systems like Windows NT and Mac OS X can be ported to new architectures quite easily, emulators help run old applications, and thunking can be used to interface with the native syscall interface. Because of the abstractions in a system, an operating system can be ported to a new architecture or a new CPU bitness in steps, with some parts in the new system and others still in the old one. These abstractions also allow developers to swap out complete subsystems or rearchitect parts of the operating system without much user impact. It is getting more and more convenient for OS developers – but unfortunately, it’s also getting less exciting.


The Easiest Way to Reset an i386/x86_64 System

Try this in kernel mode:

/* Point the IDT at a zero-length table, then raise a breakpoint
   exception: the CPU can dispatch neither the exception nor the
   resulting double fault, so it triple-faults and resets. */
uint64_t null_idtr = 0;
asm("xor %%eax, %%eax; lidt %0; int3" :: "m" (null_idtr));

This can be quite helpful when doing operating system development on an i386/x86_64 system. You can use it for the regular restart case, or when a kernel panic is supposed to restart the machine immediately and you cannot make any assumptions about what is still working in the system.
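Wrapped up as a helper function, it could look like this (a minimal sketch; the name reset is what the debugging snippet below assumes):

#include <stdint.h>

/* Triple-fault the CPU: with a zero-limit IDT, neither the breakpoint
   exception nor the resulting double fault can be dispatched, so the
   processor resets. */
static inline void reset(void)
{
    static const uint64_t null_idtr = 0;
    asm volatile("lidt %0; int3" : : "m" (null_idtr));
    for (;;);   /* not reached */
}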

You can also use this for debugging very low-level code if you don’t have a serial port or even an LED to report the most basic information: First make sure your code is reached by putting the reset code there. Then remove it again and put this code in:

if (condition)
    reset();
else
    for(;;);

The system will either hang or reset, depending on the condition.

LodgeNet Reverse Engineering

Many hotels (at least in the USA) equip their room TVs with a “LodgeNet” entertainment system. The TV shows regular free television channels, but also has an interactive channel controlled by the remote that features video on demand and video games.


The setup consists of a regular TV, a set-top-box and a Nintendo 64 controller.

If you play with the system a bit, it’s quite easy to reverse engineer the architecture:

All TVs in the hotel are connected to the same internal TV cable. The cable carries one channel per room, and the TV is locked to that channel. This gives the TV a point-to-point channel to a multiplexer in the server room.

This analog multiplexer has the free-to-air channels as inputs as well as “channel 00”, which is the entertainment system. Another multiplexer switches channel 00 between the “welcome” screen and one of the servers. It is the servers that provide the image as soon as the user browses through the menu or selects video on demand or a game.


The remote is custom. The volume buttons will control the TV directly, and all other buttons are sent to the set-top-box.


The Nintendo 64 controller has extra buttons and a custom (RJ) connector.


The set-top-box has an IR receiver at the front, and sits on the cable between the multiplexer in the server room and the TV (“CABLE IN”, “TV”), in order to be able to send the commands from the remote. “DATA” connects to the Nintendo 64 controller. It is unknown what “IR” and “MTI” are for.


If you switch channels, the set-top-box tunnels the command to the multiplexer. The on-screen-display is generated by the multiplexer. Channel “00” is the entertainment system, which by default is connected to the “welcome” channel, a series of static screens that are hotel-global.


The volume button on the remote targets the TV directly. You can tell that the on-screen-display is generated by the TV and not the multiplexer.


Switching channels with the buttons on the TV itself does not work; otherwise you could watch your neighbor’s channel.


If you press the MENU key, the second-level multiplexer will switch you away from the global “welcome” channel and connect you to one of the servers. Since most hotel guests do not use the entertainment system most of the time, the number of servers is only a fraction of the number of rooms. While the multiplexer is finding a free server, it displays a “Please Wait” picture for about a second.


The user is connected to the server that is now dedicated to his TV. Key presses on the remote (except volume and channel) will be sent to the server.

A very obvious attack on the system would be to connect the cable to a TV receiver that allows switching channels.

Does anyone know more about this system? How are the games done – emulation or dedicated N64 hardware? Are the movies streamed from disk? Is there a dedicated storage server? How is the system updated with new movies? What operating system are the servers running?

Bringup History of Mac OS X

The heritage of different operating systems has been discussed many times. Mac OS X includes code from Mach and BSD, AmigaOS is based on TRIPOS, MS-DOS is a CP/M clone and Windows NT is modeled after VMS. But what machines and operating systems were used for cross-compilation and bringup of these systems? In order to find this out about Mac OS X, I talked to a few people who worked at NeXT and Apple, and people who worked on Mach and BSD.

Currently, Apple only ships Intel-based machines. Mac OS X for Intel was released in 2006. The Intel version had been “leading a secret double life” since 2000, i.e. Mac OS X existed for Intel all the time, but was not released. In that time, Mac OS X was never self-hosting; instead, it was cross-compiled on PowerPC Macs. The first released version of Intel Mac OS X was a version of 10.4 for the Pentium 4 based “Developer Transition Kit” in 2005.

The first version of what would later be Mac OS X was “Rhapsody DR1”, released in 1997. It ran on PowerPC 604 Macintoshes (the 603 was not supported because it lacked a hardware pagetable walker) and was cross-compiled from OpenStep 4.2 running on Intel Pentium II CPUs. Rhapsody, which was basically OpenStep 5.0, also continued to run on Intel, but as mentioned before, Intel became a second-class architecture. Actually, somewhere between Rhapsody and OS X 10.0, there was a time when the GUI was not built for Intel.

NeXTSTEP 3.1 from 1993 was the first version of NeXTSTEP/OpenStep to support Intel CPUs (and also PA-RISC and SPARC) next to the existing Motorola 68K support. Bringup was done on a 25 MHz 68040 NeXTstation running NeXTSTEP 2.1, and the target CPU was a 486DX50.

At NeXT, the systems used for bringup of the original 68030 NeXT Computer were Sun machines running SunOS 3.0, a BSD derivative. The NeXT engineers ported the Mach 2.0 kernel from VAX to 68K. So this port was not only cross-architecture, but cross-OS.

Mach 1.0 was first written for the VAX and was brought up on VAX-based 4.2BSD machines at Carnegie Mellon University. Coincidentally, 4.2BSD evolved into SunOS 1.0 (and 4.2BSD/VAX machines were used for its bringup), and the BSD codebase also ended up in NeXTSTEP.

Now that we have arrived at BSD, we could trace the history back further through UNIX, which I am not going to do at this point.

So, to summarize: BSD/VAX was used to develop Mach/VAX, SunOS/68K was used to port Mach to 68K, NeXTSTEP/68K was used to port itself to i386, the i386 version was used to port itself to PowerPC, and then the PowerPC version was used to maintain the i386 version. Today, Mac OS X for Intel is self-hosting again.

What I would love to see now is the same kind of bringup history for Linux (Minix, …), AmigaOS (SunOS 1.x, …), BeOS, Copland (Mac OS?), Windows NT (VMS? DOS?), OS/2 (DOS? AIX?), MS-DOS (CP/M?), and of course the common bringup ancestors of so many systems, BSD and UNIX. It would also be interesting to extend this information with the respective compilers used. So if you can contribute something in the comments here or on your blogs, that would be great.

cbmbasic 1.0 with Plugins

I moved cbmbasic development to SourceForge and released version 1.0, which has the following added features:

  • RDTIM/SETTIM support (George Talbot)
  • LOAD"$" on Win32 (Lorenzo)
  • RND() is now random (Wolfram Sang)
  • The C code now hooks into the cbmbasic plugin infrastructure. This lets developers add additional statements, functions etc. Right now, you can turn this on with “SYS 1” (and off with “SYS 0”), and use the new statements LOCATE y,x (set cursor position), SYSTEM string (run a command line command) and the extended WAIT port,mask, which implements the Bill Gates easter egg.

Amiga/Lorraine Mugs

Every touristy place has them: souvenirs with given names on them. If you have an uncommon name, or a friend with an uncommon name, you might look through the whole collection – and notice that they have generic ones like “#1 FRIEND” (in case you really don’t find your friend’s name), and, sometimes, generic ones in Spanish.

Who can resist a Las Vegas souvenir mug with “AMIGA” on it? Especially if you can get “AMIGA” and “LORRAINE” together at twice the price?

Note to self: I travel too much.

Zuse Z1 at the Deutsches Technikmuseum

My last blog post showed the Zuse Z3 (1939-1941), the world’s first working digital Turing-complete computer. Let’s go back two more steps: The Zuse Z1 (1936-1938) shared its design with the Z3: It read its program from punched film and used floating point as its internal representation of numbers. But since it was all mechanical, it never worked reliably.

Just like the Z3, the Z1 was destroyed in the second world war. The Deutsches Technikmuseum in Berlin has a replica built by Zuse in the 1980s – but it’s not working either.


(MP4/H.264 Video, 384×288, 00:42 min, 12.5 MB)



In 1925, Zuse received an award document for his shovel excavator from the firm Walthers Metallbaukasten.



Patent application for mechanical memory (1937)

The Z1 in the living room of Zuse’s parents in Kreuzberg, Berlin (1937).