Enterprise = Overly Complex and Slow
A lot of the “enterprise” hardware and software I’ve been exposed to these last few years is really badly designed. Take, for example, those Dell rack-mount servers I see everywhere. On the surface, they are really nice machines – dual CPUs, RAID with hot-swap drives, all kinds of fancy diagnostic tools. Right off the bat, though, they take forever to reboot. And of course, everything you have to do on them (working on the RAID, upgrading Windows, installing just about any software) requires a reboot. The management software is awful (several different programs, all of them bad) and a waste of time. When I was working on getting Ghost to run (and why is that such a pain?), I saw lots of people who had run into the same hardware RAID problems I have had with 2650 servers at customer sites. Nobody ever gets any real answers from Dell: the answer is either so easy you could have figured it out yourself, or there is no answer. Why do you have to wait hours after creating a new RAID array for the drives to be “cleared” or “scrubbed”? I can understand doing this when rebuilding a failed drive, but on a new RAID array there is no data to rebuild. There is no reason to write zeros to every byte on the drive (or to copy whatever old junk is there). When the OS creates partitions, it’s going to assume the space needs to be overwritten anyway. Or, like the inexpensive RAID vendors do, they could let you start with one drive and build the RAID mirror after you get everything installed. But no, this is “enterprise”, where everybody likes kicking back while the servers reboot, build RAID arrays and do other busywork, I guess.
Then there are problems with the “enterprise” software running on the servers. Exchange Server is a well-known offender (at least in my book). It’s getting slightly better with each release (I don’t miss the M: drive, do you?), but they still have fundamental problems with their database. Why do you have to rebuild the database (and lose data) whenever the server doesn’t get shut down cleanly? I run lots of databases (and file systems that work like databases) that don’t have this problem. Why is it such a pain to back up and restore? By the time Exchange Server acknowledges receipt of a message from another email server, it should have already stored that message in its database and flushed it to disk, so that no message ever gets lost – even if the power fails. That’s the way our big UNIX email servers work. Microsoft rushed Exchange Server out the door, then layered on features without ever cleaning up the engine.
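Just to show this isn’t asking for rocket science, here is a minimal sketch of the write-then-acknowledge discipline I mean – in Python, with a made-up log file and function name, not Exchange’s actual internals. The message gets appended to the store and forced to disk, and only then does the acknowledgment go back to the sending server.

    import os

    MESSAGE_LOG = "message-store.log"   # hypothetical append-only message store

    def receive_message(raw_message: bytes) -> str:
        """Durably store an incoming message before acknowledging it."""
        with open(MESSAGE_LOG, "ab") as log:
            log.write(len(raw_message).to_bytes(4, "big"))  # simple length prefix
            log.write(raw_message)
            log.flush()                # push Python's buffer down to the OS
            os.fsync(log.fileno())     # force the OS to put it on the platters
        # If the power fails after this point, the message can be replayed
        # from the log; the sender only hears "OK" once the data is safe.
        return "250 OK"

Crash before the acknowledgment and the sender retries; crash after and the message is already on disk. Either way, nothing is silently lost.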
Now, we add Blackberry on top of an already fragile Exchange Server. If these guys were smart, they would make their software extremely tolerant of any possible problem with Exchange Server. But no, they have to use every MAPI bell and whistle to access the mailboxes (making their software picky about what other software is installed on the server – there is an entire chapter in the install guide about getting Calendar synchronization to work), store half the data in their own MSDE database (one more database to manage) and the other half in Exchange Server mailboxes (so now we have to keep three data stores synchronized – the Exchange mailbox, the MSDE database and the Blackberry mailbox). Why doesn’t the activation email work? I guess the support guy knew that it wouldn’t… he said something like: “Well, the activation password in the email should work, but I always set the activation password manually.” The number of versions, service packs and hot fixes is mind-boggling. Do you really need four versions of the software to figure out that users want to synchronize contacts, calendars, messages and to-do lists quickly, reliably and with a minimum of fuss?
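And “extremely tolerant” doesn’t have to mean anything exotic, either. A rough sketch of the kind of defensiveness I have in mind (the names are invented, and this is not how BES actually works): wrap every call into the flaky back end in a retry-with-back-off loop, so a hiccup in Exchange doesn’t take the whole sync service down with it.

    import time

    def call_with_retries(operation, attempts=5, initial_delay=1.0):
        """Run `operation`, retrying with exponential back-off if it fails.

        `operation` is any zero-argument callable that talks to the flaky
        back end (a MAPI call, a database write, whatever).
        """
        delay = initial_delay
        for attempt in range(1, attempts + 1):
            try:
                return operation()
            except Exception as error:      # real code would catch specific errors
                if attempt == attempts:
                    raise                   # give up and surface the failure
                print(f"attempt {attempt} failed ({error}); retrying in {delay:.0f}s")
                time.sleep(delay)
                delay *= 2                  # back off so we don't hammer the server

Then every mailbox access goes through something like call_with_retries(lambda: fetch_mailbox(user)), where fetch_mailbox stands in for whatever the real call is.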
If I were designing this stuff, the world would be a different place. In fact, I have designed and written large enterprise software packages that solved complex business problems. At a users’ conference, one of the users of a package I wrote told me it was the most bug-free piece of software he had ever used. That wasn’t by accident. I carefully minimized my dependencies (so the install wouldn’t be fragile), used an ultra-reliable database engine (it didn’t have as many features as some, but we never had database corruption problems), shipped a super-reliable install/uninstall program (I had to write my own to get the quality level I wanted) and carefully tested on every platform we supported (we had automated testing tools).
Server hardware has gotten inexpensive and gigabit Ethernet networks are really fast. The opportunity is there for fast, ultra-reliable enterprise applications that run on clusters of inexpensive servers. Failed hard drive, motherboard, CPU, RAM chip, power supply? No problem – the other servers in the cluster keep serving requests, and we fix and restore the failed server at our leisure. Applications running a little slow since you added 20 new users? Just add another server to the cluster. None of the “enterprise” vendors are taking advantage of this capability, though. Instead, they continue to create software that is overly complicated (internally and for end users) and fragile.
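The client side of this doesn’t need anything exotic, either. Here is a rough sketch of the failover idea (Python again, with invented server names – not any vendor’s actual API): keep a list of the cluster members and move on to the next one when the current one stops answering.

    import http.client

    # Hypothetical cluster members; in real life this list would come from
    # configuration or some kind of directory service.
    CLUSTER = ["app1.example.com", "app2.example.com", "app3.example.com"]

    def fetch(path, timeout=2.0):
        """Try each server in the cluster until one of them answers."""
        last_error = None
        for host in CLUSTER:
            try:
                conn = http.client.HTTPConnection(host, timeout=timeout)
                conn.request("GET", path)
                return conn.getresponse().read()    # first healthy server wins
            except (OSError, http.client.HTTPException) as error:
                last_error = error                  # remember it, try the next host
        raise RuntimeError(f"no server in the cluster answered: {last_error}")

Add 20 new users? Add another host name to the list. Lose a motherboard? The loop skips right past the dead machine.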