August 17, 2018
by James Mawson
The flu took me completely out of action recently. It hit me pretty hard.
And, as tends to happen with these things, I ended up binge watching more TV and movies in two weeks hidden under a blanket than in two years as a member of wider society.
In the most delirious moments, the vicious conspiracy of fever and painkillers gave me no choice but to stick to bad 80s action movies.
When I was a little more lucid, though, I got really stuck into some documentaries around the early days of desktop computing: Computerphile episodes, Silicon Cowboys, Micro Men, YouTube interviews, all sorts of stuff.
Here are the big things that have stuck with me from it:
There was an established computing industry in the 1970s – but these companies played very little direct role in what was to come.
Xerox’s Palo Alto Research Centre had an important role to play in developing desktop technologies – with absolutely zero intention of ever commercialising anything. The whole thing was funded out of Xerox’s publicity budget.
But for the most part, computers were sold to universities and enterprises. These were large, expensive machines guarded by a priesthood.
The smallest and most affordable machines in use here were minicomputers like the DEC PDP-11. “Mini” is, of course, a relative term. These were the size of several fridges and cost several years’ worth of the average wage.
So what if you wanted a computer of your own? Were you totally stranded? Not quite. You could always buy a bunch of chips and build and program the whole damn thing yourself.
This had become increasingly accessible, thanks to the development of the microprocessor, which condensed the separate components of a CPU into a single chip. As the homebrew computer scene grew, hobby electronics companies started offering kits.
It was out of this scene that the desktop computing industry actually grew – both Apple and Acorn Computers were founded by hobbyists. Their first commercial products evolved from what they’d built at home.
Businesses that catered to the electronics hobbyist market, like Tandy’s Radio Shack stores, were also some of the earliest to enter the market.
While the first desktop computers were a massive leap forward in terms of bringing computing to ordinary people, they were still fairly primitive. We’re talking beeps, monochrome graphics, and a 30 minute wait to load your software from cassette tape.
And the only way to steer them was from the command line. It’s definitely much more accessible than building and programming your own computer from scratch, but it’s still very much in nerd territory.
By 1987, you’ve got most of what we’re familiar with: point and click interfaces, full colour graphics, word processors, spreadsheets, desktop publishing, music production, 3D gaming. The floppy drives had made loading times insignificant – and some machines even had hard drives.
Your mum could use it.
Things still got invented after that. The internet has obviously been a game changer. Screens are completely different. And there are any number of new languages.
For the most part, though, desktop computers came together in a decade. Since then, we’ve just been making better and better versions of the same thing.
Back in the ’90s, it seemed a fairly ubiquitous part of computer geek culture that Bill Gates was kind of a dick. In magazines, on bulletin boards and the early internet, it was just taken for granted that Microsoft dominated the market not with a superior product but with sharp business practices.
I was too young to really know if that was true, but I was happy to go along with it. It turns out that there was actually plenty of truth in that. An MS-DOS PC was hardly the best computer of the 1980s.
The Acorn Archimedes, for instance, had the world’s fastest processor in a desktop computer, many times faster than the 386, and an operating system so far ahead of its time that Microsoft shamelessly plagiarised it 8 years later for Windows 95.
And the Motorola 68000 series of CPUs, used in many machines such as the Apple Macintosh and Commodore Amiga, were also faster, and vastly better for input/output-intensive work such as graphics and sound.
So how did Microsoft win?
Well, they had a head start piggybacking on IBM, who very successfully marketed the PC as a general purpose business computer. The PC quickly became one of the first platforms that many software developers would write for.
Then, as what was then known as the “IBM clone” market began and grew, Bill Gates was very aggressive about getting MS-DOS onto as many machines as possible by licensing it on very generous terms to companies like Compaq and Amstrad. This was a short term sacrifice of profits in pursuit of market share. It also helped the PC to become the affordable choice for families.
As this market share grew, the PC became the more obvious platform to first release your software on. This created a snowball effect, where more software support made the PC the more obvious computer to buy, increasing market share and attracting more software development.
In the end, it didn’t matter how much better your computer was when all the programs ran on MS-DOS.
At first glance, Gates looks like the consummate monopolist. Actually, he did a lot more to open up access to new players and foster innovation and competition.
In the early days of desktop computing, every manufacturer more or less maintained its own proprietary platform, with its own hardware, operating system and software support. That meant if you wanted a certain kind of computer, there was one company who built it so you bought it from them.
By selling the operating system to anyone who wanted it and backing industry standards that anyone could build to, Microsoft opened the PC market to new entrants – and PC makers had to compete directly on price and performance.
Apple still have the old model of a closed, proprietary platform, and you’ll pay vastly more for an equivalent machine – or perhaps one whose specs haven’t improved in 3 years.
It was also great for software developers not to have to port their software across so many platforms. I had first hand experience of this growing up. When I was really young, there were more than a dozen computers scattered around the house, because Dad was running his software business from home, and when he needed to port a program to a new machine, he needed the machine. But by the time I was finishing primary school, it was just the Mac and the PC.
Handling the compatibility problem, throwing Windows on top of it, and offering it on machines at all price points did so much to bring computing to ordinary people.
At this point, I’m pretty sure someone in the audience is saying “yeah, but we could have done that open source”. Look, I like Linux for what it’s good for, but let’s be real here. Linux doesn’t really have a GUI environment – it has dozens of them, each with different quirks to learn.
One thing that they all have in common though is that they’re not really proper operating system environments, more just nice little boxes to stick your web browser and word processor. The moment you need to install or configure anything, guess what? It’s terminal time. Which is rather excellent if you’re that way inclined, but realistically, that’s a small fraction of humanity.
If Bill Gates never came up with an everyman operating system that you could run on an affordable machine, would someone else have? Probably. But he’s the guy that actually did it.
The deal that really made Microsoft is also the deal that eventually cost IBM their entire market share of the PC platform they created and of the desktop computer market as a whole.
Because IBM were in a hurry to bring their PC to market, they built almost all of it from off-the-shelf components. Bill Gates got the meeting to talk operating systems because his mother sat on a charity board with IBM’s chairman. IBM offered to buy the rights to the operating system, but Gates offered instead to license it.
There was really no reason that IBM had to take that deal. There was nothing all that special about MS-DOS. They could have bought a similar operating system from someone else. I mean, that’s exactly what Gates did: he went to another guy in Seattle, bought the rights to a rip-off of CP/M that worked on the Intel 8086, and tweaked it a bit.
To be fair to IBM, in 1980, it wasn’t obvious yet how crucial it would be to hold a dominant operating system. That came later. At that point, the OS was kind of just a bit of code to run the hardware – a component. It was normal for every computer manufacturer to have its own. It was normal for developers to port their products across them.
But it’s also likely they just weren’t inclined to take a skinny twenty-something seriously.
Compaq famously reverse engineered the BIOS, and other manufacturers followed them into the market. IBM now had competition, but were still considered the market leaders and standard setters – it was their platform and everyone else was a “clone”.
They were still cocky.
So when Intel released the 386, IBM decided they weren’t in any hurry to do anything with it. The logic was that they already held the manufacturing rights to the 286, so they might as well get as much value out of that as they could. This was crazy: the 386 was more than twice as fast at the same clock speed, and it could go to much higher clock speeds.
Compaq jumped on it. Suddenly IBM were the slowpokes in their own market.
Having lost all control and leadership in the PC market, they fought back with a new, totally proprietary platform: the PS/2. But it was way too late. The genie was out of the bottle. The PS/2 had the same software support problems that were working against every other company with their own closed, proprietary platform. It didn’t last.