Welcome to the Future!

So if you’re reading this message, you’re on the new server (the old one had a message talking about the server migration).

The main reason for the move was the crappy service that I’ve been experiencing at DreamHost for the last year and a half or so. After reading the incident reports, it’s become blatantly obvious that they’ve outgrown their capacity. The issues have ranged from networking problems crippling all of the DNS servers (best practice is to have DNS set up at different locations on different networks so that if one goes down, at least the others can still serve IPs), to failing hardware taking out multiple sites at a time (in which case, you should fail over the disks to another machine while the troublesome machine is being worked on), to file servers going down due to bad firmware updates or bad hardware. Downtime has lasted hours if not days at a time, which is unacceptable.
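If you want to check that DNS claim for any host yourself, `dig` (from the BIND utilities) makes it a quick sanity test; the nameserver names below are placeholders for whatever `dig +short NS` returns:

```shell
# List the domain's nameservers, then resolve each one.
# If all the addresses share the same prefix, the nameservers are
# likely sitting on the same network -- a single point of failure.
dig +short NS example.com
dig +short A ns1.example.com
dig +short A ns2.example.com
```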

While it’s true that you get what you pay for, based on the symptoms of the problems and in my professional opinion, it’s obvious that disaster recovery at the old place just isn’t a high priority.

I’ve found a new host. After reading their rationale for offering shared hosting (they never did before) and seeing how things are set up (essentially segregating the storage from the hardware and keeping backup hardware next to it, so if one part fails, they can fail over to the spare; downtime should be considerably less than at the old place), I decided to give them a chance. I’ve also been hosting a couple of VPSes with these guys for the last couple of years, and the only downtime I’ve experienced on those has been scheduled. So based on that track record, I’m willing to give their shared hosting service a shot. I figure their ability to fail over after hardware problems will be a heck of a lot quicker than the old place’s, so all I really have to worry about is networking issues.

Will this fix the availability problems? Time will tell. I will say that I’m paying almost 70% less than what I was paying at the old place, and if this works out, it’ll be a steal as I’m essentially getting more for less (and probably with better service to boot). If not, then worst case, I’ll get the same service level as I did at the old place and will be able to use the money I save on at least nine Slurpees a month while I wait for them to complete any fixes! Win/win for everyone, methinks 😛

Anyways, what’s next on the list is to wait for the DNS propagation to complete, then ensure that all my settings have been migrated so the new site looks and operates just like the old one. Once done, I’ll get to what I said I needed to do in the last post, which is to upgrade WordPress and fix commenting.

Gentoo and OpenOffice 2.3.0

OpenOffice 2.3 made it into Portage today. Trying to emerge it brought up errors. Here is a list of things I tried to get it working; it’s still unknown whether all of the steps are necessary.

The first was:

1 module(s):
need(s) to be rebuilt
ERROR: error 65280 occurred while making /var/tmp/portage/app-office/openoffice-2.3.0/work

I noticed that I didn’t have dev-java/sax installed, so I tried emerging that, but it made no difference; it died in the same spot with the same error.

Looking at the Gentoo Forums, I noticed someone had an error regarding xulrunner. I didn’t have that installed so I tried emerging it.

I then got a similar error message as above, but this time it was also complaining about the jfreereports module needing to be rebuilt. The closest thing in Portage was jfreechart, so I emerged that and tried again.

OpenOffice finally compiled.

I’m not sure what exactly my system was missing. I know that doing both the above allowed it to compile successfully, but I’m not sure if sax, for example, is really needed (or if xulrunner is really needed with some tweaks to the ebuild). I won’t complain though.
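For the record, here’s the sequence that got it building for me, as a sketch (the package atoms are from memory, so double-check them with `emerge --search` before running anything):

```shell
# Packages that were missing on my system before OpenOffice would build
emerge dev-java/sax
emerge xulrunner
emerge jfreechart

# Then retry the build
emerge app-office/openoffice
```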

Note that for some users, the installation would fail if gperf wasn’t installed, so I guess we know for sure that gperf is required. An updated ebuild with gperf listed as a dependency was recently posted to Gentoo’s Bugzilla; it’ll probably make its way into Portage soon.

Suffice to say, I wouldn’t recommend updating to OpenOffice 2.3 until all the bugs are ironed out. I’d definitely wait until it gets marked as stable. 2.2 is plenty stable and should work absolutely fine.

Not feeling the Xen here…

So I’ve pretty much given up on Xen for the time being.

Since I wanted to use a more current version for the newer kernel (with the newer drivers needed for the motherboard running my Core 2 Quad setup), I used the Xen overlay from overlays.gentoo.org, which has Xen 3.1.0 (the main Portage tree only has 3.0.x).

While I got everything to compile and was able to successfully boot into dom0, I couldn’t get networking to work in domU as it wouldn’t detect or bridge properly through my eth0.

I’m pretty sure it has something to do with how I’ve configured things, and with just a little more hacking I could probably figure it out, but I’m not sure it’s worth it, especially if you listen to what this guy has to say. It pretty much meshes with what I’ve read while doing my research, and with the rumblings of some in the Xen community who have become frustrated at the direction Xen is going and how disorganized the code base has become. Couple that with Citrix’s purchase of XenSource, and Citrix’s close ties to Microsoft, and, well, conspiracies abound…

KVM has been integrated into the Linux kernel and looks like the most ideal and scalable approach going forward, so I’m going to stick with that. Since the KVM client program is based on a modified version of QEMU, I can create virtual machines using QEMU now and then switch to KVM whenever it appears in the main Portage tree, without having to rebuild those machines.

Suffice to say, I’ve started (and am continuing) to use QEMU, along with the KQEMU kernel module, which lets guest user and kernel code run directly on the host CPU for a nice speed boost, and it fulfills my needs for now. It was also dirt easy to set up, although it would be nice if there were a sexy graphical management tool in Gentoo that I could use to manage it all. Hopefully, once KVM takes off, I’ll be able to access other hardware directly instead of just the CPU. I would love it if stuff could render directly on my sound card and video card; that would mean 3D gaming through a Windows virtual machine could be possible without much overhead. For now, QEMU emulates a Sound Blaster 16 sound card and a really weak video card, which works, but sucks for any Windows applications that benefit from (or require) hardware acceleration.
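For anyone curious just how easy the setup is, a basic guest boils down to two commands (the image name, size, and memory here are made up for illustration):

```shell
# Create a 10 GB growable disk image for the guest
qemu-img create -f qcow2 guest.img 10G

# Boot the installer CD with 512 MB of RAM;
# -soundhw sb16 is the emulated Sound Blaster 16 mentioned above
qemu -hda guest.img -cdrom install.iso -boot d -m 512 -soundhw sb16
```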

In any case, I might revisit Xen at a future date when the Xen 3.1 overlay gets updated with new packages (or Xen 3.1 makes it into the main Portage tree as stable). For now, I’ll stick with QEMU as it gets the job done for what I need and I won’t have to worry about any possible vendor lock-in in the future.

Growing Pains

So Gentoo pushed out GNOME 2.18 into the main Portage tree today.

I figured that before making a major system change like Xen, I should first make sure that a major system change like GNOME 2.18 was stable.

So, I figured a quick emerge -uD gnome would be simple enough, right?

Nope. Of course, when is a major update to any major system package easy?

For future generations, I’ll document the hell I had to go through (i.e. the changes I needed to make) here so that others who are in my position don’t have to look too far for answers.

First, control-center was blocking the installation of some packages. To get around this, I had to run:

emerge -C control-center && emerge -uD world

Things went happily along for a while, but then borked when compiling vte.

The error that appeared was:

checking for XML::Parser... configure: error: XML::Parser perl module is required for intltool

!!! Please attach the following file when filing a report to bugs.gentoo.org:
!!! /var/tmp/portage/x11-themes/gnome-icon-theme-2.18.0/work/gnome-icon-theme-2.18.0/config.log

!!! ERROR: x11-themes/gnome-icon-theme-2.18.0 failed.
Call stack:
ebuild.sh, line 1632: Called dyn_compile
ebuild.sh, line 983: Called qa_call 'src_compile'
ebuild.sh, line 44: Called src_compile
ebuild.sh, line 1322: Called gnome2_src_compile
gnome2.eclass, line 70: Called gnome2_src_configure
gnome2.eclass, line 66: Called econf
ebuild.sh, line 586: Called die

!!! econf failed
!!! If you need support, post the topmost build error, and the call stack if relevant.
!!! A complete build log is located at '/var/tmp/portage/x11-themes/gnome-icon-theme-2.18.0/temp/build.log'.

To fix this, I had to run:

emerge --unmerge dev-perl/XML-Parser && emerge dev-perl/XML-Parser

which was weird because I already had that package installed. Oh well.

Things went merrily along again. And then it borked on building shared-mime-info.

This time, the error was:

INTLTOOL_EXTRACT=../intltool-extract srcdir=. ../intltool-update --gettext-package shared-mime-info --pot
WARNING: This version of gettext does not support extracting non-ASCII
strings. That means you should install a version of gettext
that supports non-ASCII strings (such as GNU gettext >= 0.12),
or have to let non-ASCII strings untranslated. (If there is any)
/usr/bin/xgettext: error while loading shared libraries: libexpat.so.0: cannot open shared object file: No such file or directory
ERROR: xgettext failed to generate PO template file. Please consult
error message above if there is any.
make[1]: *** [shared-mime-info.pot] Error 1
make[1]: Leaving directory `/var/tmp/portage/x11-misc/shared-mime-info-0.21-r1/work/shared-mime-info-0.21/po'
make: *** [check-recursive] Error 1

!!! ERROR: x11-misc/shared-mime-info-0.21-r1 failed.
Call stack:
ebuild.sh, line 1632: Called dyn_compile
ebuild.sh, line 983: Called qa_call 'src_compile'
ebuild.sh, line 44: Called src_compile
shared-mime-info-0.21-r1.ebuild, line 32: Called die

!!! emake failed.
!!! If you need support, post the topmost build error, and the call stack if relevant.
!!! A complete build log is located at '/var/tmp/portage/x11-misc/shared-mime-info-0.21-r1/temp/build.log'.

It’s failing to find libexpat.so.0. The closest package matching that name is dev-libs/expat, which I already had installed. I tried the same trick with it as with XML-Parser, but that didn’t work either.

So I tried re-emerging gettext in the hopes that it would relink against the new expat, since gettext was the one complaining that it couldn’t find the library. Luckily, that did the trick.

Next one to crash? gnome-desktop. Big surprise there. However, compiling it a second time worked. Weird.

Up next, epiphany failed. I tried re-emerging it in the hopes that what worked for gnome-desktop would work here. That failed. Since I was writing this in Epiphany, I tried closing all my browser windows and re-emerging it to see if that would help. It didn’t.

The root error message was the same in all cases:

/usr/bin/dbus-binding-tool: error while loading shared libraries: libexpat.so.0: cannot open shared object file: No such file or directory
make[2]: *** [stamp-ephy-dbus-server-bindings.h] Error 127
make[2]: *** Waiting for unfinished jobs....
/usr/bin/dbus-binding-tool: error while loading shared libraries: libexpat.so.0: cannot open shared object file: No such file or directory
make[2]: *** [stamp-ephy-dbus-client-bindings.h] Error 127

Ah, libexpat. You piece of garbage.

There’s no package called dbus-binding-tool, so I decided to try re-emerging dbus. That didn’t work. Next likely candidate: dbus-glib.

That did it.

And it finished all the way through!

Unfortunately, a lot of packages that had been built against that pesky libexpat library needed to be rebuilt. Running revdep-rebuild showed more than 20 packages needing recompilation, including OpenOffice (sigh).
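If you hit the same wall, revdep-rebuild (from app-portage/gentoolkit) is the tool that hunts down and rebuilds these broken linkages:

```shell
# Install gentoolkit if you don't already have it
emerge app-portage/gentoolkit

# Scan for packages linked against missing libraries
# (like libexpat.so.0) and re-emerge them
revdep-rebuild
```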

Three hours later, and I finally have a working system again!

I’m on GNOME 2.18 now. I found that to keep my sexy 3D desktop effects I had to recompile compiz, but that’s no biggie. I don’t see much of a difference over 2.16; I heard that 2.18 has built-in compositing features, but I haven’t found them yet. I’m going to play around for a little bit before I attempt the Xen project. I have a feeling that will be painful too.

Why is it never easy?

I primarily use VMware for development purposes, but I recently found out that, for some reason, VMware won’t boot any images on my sexy new quad-core machine; it borks when probing VGA devices. That’s probably because I’m using an ASRock 4CoreDual-VSTA board, which has both a PCI-E slot and an AGP slot (and I’m currently using an AGP video card). Suffice to say, such a configuration is a freak of nature and probably shouldn’t exist, but it does, and it’s probably confusing the hell out of VMware.

So I figured I’d try some alternative virtualization software. It looks like I get to choose between QEMU and Xen. Fortunately, both can take advantage of the CPU’s hardware virtualization extensions (Intel VT/AMD-V).

QEMU (with the optional, though highly recommended, KQEMU accelerator) is a nice option, as the base emulator doesn’t need any kernel modules. However, QEMU in Portage needs to be compiled with GCC 3. Since I don’t want to switch between compilers whenever I update my system (it makes emerge -uD world difficult to automate, as I’d have to eyeball for changes in QEMU and update it manually), this solution loses a lot of its attractiveness.
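For reference, the compiler juggling looks something like this on Gentoo (the profile numbers vary per system, so check `gcc-config -l` first):

```shell
gcc-config -l               # list installed compiler profiles
gcc-config 1                # select the gcc-3.x profile (number varies)
source /etc/profile
emerge app-emulation/qemu   # build QEMU against GCC 3

gcc-config 2                # switch back to gcc-4.x afterwards
source /etc/profile
```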

Xen looks very feature-rich, but it also looks like a pain to set up. However, with my sexy VT-enabled quad-core CPU, I should be able to run unmodified Xen guests at near-native speeds with almost full hardware support (features that I believe you currently have to pay for if you use VMware).

So, masochist that I am, which option do you think I’ll take? 🙂

I’ll let you know how it goes.

Fedora 7 is like Windows ME…

…in the sense that it is an operating system in transition.

You see, Fedora 7 is the first release where the development was one hundred and one percent in the community. In fact, they hope that when they “look back at Fedora 7 one or two years down the road, the decisions that [they] made for this release will have proven to be as impactful as anything [they’ve] done in the Fedora space since the start of the Fedora Project.”

Of course, with any kind of massive overhaul, there are bound to be a few bugs and hiccups.

Now, with work being what it is, I’ve found myself taking on more of a Sys/Net Admin role as of late, as opposed to the operations/logistics/HR/supply chain management stuff I have been doing for the last year (which came after a developer/technical/analyst role; I feel like I’m bouncing all over the place).

Anyways, since I don’t want to gong any live servers, I’ve turned to VMware for all my developmental needs. By setting up a bunch of virtual machine environments in one computer, I can do all the testing I want without fear of borking something on a live server, and without having to bust out and lug computers all over the place just to set up a test lab (yeah, I’m lazy. I’ve gotten tired of lugging 30-70 pound boxes up and down stairs at work. Sue me.).

So when it came time to set up a virtual development machine that could administer a bunch of other virtual servers within my locally enclosed virtual NAT environment, I figured I’d give Fedora 7 a try, seeing how it came out about a week and a half ago.

On a side note, I used to be all about Red Hat/Fedora, but I really hate how aggressive the Fedora release schedule has become. After less than a year and a half, support for their current distribution ends and if you want any software updates (either for feature enhancements, bug fixes or security improvements), you have to upgrade to the latest major version. In the past, bad things have happened to the point where if I were to do that, I’d just rather wipe everything clean and start from scratch. However, if you’ve been using your workstation to the point where it’s critical, that might not be an option and doing an upgrade might break things as well.

Anyways, now I run Gentoo Linux and I haven’t looked back. They call themselves a “metadistribution”, which means you can upgrade individual components as you see fit; there’s no concept of “major distribution releases” where you have to upgrade every single thing or nothing at all (like going from Fedora Core 6 to Fedora 7; yes, they changed their name again). It’s a source-based distribution as well, which takes longer to upgrade since you have to compile everything (and I mean everything) from scratch. The trade-off is that you can pass options to the compiler tailored to your specific CPU architecture, which can give you a 10-30% increase in performance for free, since you’re creating binaries that take advantage of modern instruction sets and don’t have to be backward-compatible enough to run on an ancient machine like a 486.
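Those compiler options live in /etc/make.conf. A purely illustrative example for a 64-bit Core 2 box might look like this (the flags are hypothetical, not a recommendation; pick ones suited to your own CPU):

```shell
# /etc/make.conf
CFLAGS="-O2 -march=nocona -pipe"   # tune generated code for the local CPU
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j5"                     # parallel build jobs (cores + 1)
```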

Of course, Gentoo isn’t for the faint of heart. If you’re looking to try out Linux for the first time, I would highly recommend giving the Ubuntu LiveCD a try. It lets you boot onto Linux and run it directly from the CD, so you don’t erase anything on your hard drive. If you wanted to take the plunge later on and switch, you could easily install it as well. Ubuntu is the only distribution I’ve found that can fully activate all of the hardware on my Dell Inspiron 700m without having to jump through any hoops to set it all up (ex. finding and downloading wireless drivers, getting DVD movies to work, etc). I was quite impressed. If you’re looking for long term stability and software support (say, 5 years or so), CentOS is another option, which is based off of Red Hat’s Enterprise level product (if it’s good enough for big business, it’ll probably be good enough for you).

Back to the story, I run VMware on a 64-bit Gentoo Linux system which has been optimized up the wazoo (so I know the back end is running as efficiently as possible leaving the maximum amount of CPU and other system resources to handle all the virtualization I want to do). Up until now, I haven’t had any problems with VMware at all. It’s taken everything I’ve thrown at it. I’m currently running three CentOS 5 64-bit systems (i.e. my developmental server cluster), an office customized DiscoverStation (i.e. our software), Windows XP (yes, you heard me right) and even Mac OS X Tiger (don’t ask). It’s so easy to set up in VMware as well. Just accepting the defaults gets you a new virtual machine in just minutes.

Or so I thought.

So I was quite surprised to have encountered my first issue installing something in VMware: no hard drive detected.

Well, that sucked.

For most guest systems that you create with a ‘SCSI’ hard drive, VMware defaults to emulating an LSI Logic SCSI controller. Apparently, something changed in Fedora 7 that exposed a bug (so they say) in VMware’s LSI Logic SCSI emulation.

The workaround is to emulate a BusLogic SCSI controller instead and create the virtual hard disk using that.

Finding this option, however, was a little difficult. If you choose most of the logical Guest environments (ex. Red Hat Enterprise, SuSE Linux, Other Linux kernel 2.6.x, Windows XP, etc.), they’ll default to LSI Logic emulation without giving you a choice.

The trick is to choose either the “Other Linux” or “Other Linux 64-bit” option and configure the machine based on that. When it comes to creating the hard disk, it will then give you the choice of which SCSI adapter to emulate. Once the hard disk is created, you can go back and change the Guest environment to whatever you want (I changed mine to Other Linux kernel 2.6.x 64-bit). Then you can proceed to install the virtual machine as normal, and everything seems fine.

Another trick is to change your emulated CD-ROM to use a SCSI controller instead of an IDE one (leave it in IDE mode during installation, though; it seems the VMware BIOS won’t recognize a SCSI CD-ROM on boot), which will get rid of some ATA errors that may show up at run time.
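If you’d rather skip the wizard dance entirely, the controller can be flipped directly in the guest’s .vmx file; a sketch of the relevant lines (scsi0 assumes the first SCSI controller, and the device numbering here is illustrative):

```shell
# Emulate a BusLogic controller instead of LSI Logic
scsi0.present = "TRUE"
scsi0.virtualDev = "buslogic"
```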

The system is currently installing as I write this; we’ll see if this Fedora 7 is any good.

Countdown to Destruction

So everyone at work pretty much knows what I do. Here’s a quick snippet:

Operations & logistics, office coordination, human resources, supply chain management, marketing, technical writing & documentation, business development, product enhancements & refinement, software development, system and network administration, technical support, customer service, Magic & Miracles.

Just to name a few.

So let’s just say that there may be something coming down on the horizon that may require me to put on the ol’ SysAdmin hat again.

Looks like I’m going to have to re-teach myself all about DNS, Firewalls, Network Security, Mail, and probably a whole whack of other stuff all in just a day. Fun times, especially since I haven’t touched that stuff in over a year.

‘Cuz it isn’t like I have anything better to do.

Nope. Nothing at all.


Now, where was that magic wand again?

Dragon whips its tail…

So I managed to catch 18 Fingers of Death on DVD the other night. I had no idea what it was about and was expecting a campy kung-fu flick. Boy was I wrong. Like Jesus Christ: Vampire Hunter, it takes a certain mindset to truly appreciate the movie that they were trying to make. All in all, it was decent, although I would have preferred seeing a real kung-fu flick instead.

Ubuntu seems to be working really well (I’m writing this post on it now). Everything seems to work out of the box, and anything that didn’t was just an “apt-get” away. It’s quite obvious that they’ve put a lot of work into the user experience. However, I’m still trying to set up a development environment in it, and with my lack of knowledge of how Debian administration works, I’m finding it a little difficult to get all the development libraries I need loaded, as I don’t know what the package names are (build-essential for gcc? How intuitive is that?). That, and I’m not quite familiar with how the advanced features of dpkg and apt-get/aptitude work just yet.

The main thing I liked about Fedora was that it could install everything you could ever want right from the install CD if you wanted. Ubuntu just installs the bare essentials that you need to get started as a regular computer user so tracking down extra stuff like dev libraries, various services, etc, is difficult if you’ve never done it before and have no idea what they’re called. I suppose that’ll all be solved with experience eventually.
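For other Debian newbies in the same boat, the handful of commands that cover most of this hunting (the search keyword is just an example):

```shell
sudo apt-get update                   # refresh the package lists first
apt-cache search compiler             # search package names and descriptions
sudo apt-get install build-essential  # gcc, g++, make, and the libc dev headers
```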

“I am what I am because of who we all are”

So I got tired of listening to my cousin tell me of the glory of Ubuntu, and I finally decided to switch my laptop from Fedora Core 5 to Ubuntu. We’ll see how this goes. I’m writing this entry from inside it, and it seems to work fine. The more immediate problems to overcome are getting my wireless card and the full 1280×800 resolution working. Everything after that should be trivial.

Crack that whip (although if you hit something you’re not responsible for, you could be held liable)!

It occurs to me that a spam filter is much like a Tamagotchi: it tries its best, but it really needs some TLC before it can reach its full potential.

I recently added some spam filtering to our work’s email server, as I had gotten sick and tired of the deluge of spam hitting us and was finally able to find some spare time to actually do it (a rarity in my job, as I keep getting more and more responsibility while still being responsible for all of my prior duties). It worked for the most part, but was still letting things through.

However, a few hours of training the Bayesian classifier later, I found that it was catching about 98% of the spam we were getting (I anticipate that with more training on a broader sample set, I can push that figure even higher). Suffice to say, I was quite impressed. Yet it should come as no surprise that some people still weren’t satisfied, wanting submissions from the web forms on our web site to come through even though the entries were themselves spam (which is why I continue to let that stuff through even after it’s been marked as spam, but I guess some people just don’t appreciate the subtleties of what I do). Oh well, can’t please everyone.
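For the curious, with a filter like SpamAssassin (to pick one example; I’m not naming names), the training boils down to feeding hand-sorted mailboxes to the learner; the paths here are placeholders:

```shell
# Teach the Bayesian classifier from hand-sorted mail
sa-learn --spam /path/to/spam.mbox
sa-learn --ham  /path/to/ham.mbox

# Show how many messages and tokens it has learned so far
sa-learn --dump magic
```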

Of course, it wasn’t until later that I found out that all the non-company stuff on that server (all part of personal favors handed out by upper management) was possibly broken. I’ve tried my best to fix it, but really, I don’t think I should be doing it for free, and here’s my line of reasoning why:

  1. Stuff that has nothing to do with our company isn’t my responsibility
  2. Stuff that has nothing to do with our company is essentially stuff that has to do with another company
  3. If I do work for another company while not being employed by them, that would make me a consultant
  4. If I’m a consultant, then I should be paid a consulting fee or compensated proportionately to the added responsibility to ensure that they get quality work from me

I know it sounds like I’m complaining, but really. If something really, really bad happened, it would be my head on the line which sucks since I don’t have access to the resources needed to test things that are outside of the company’s scope. With all of my responsibilities at work now (which almost rivals Ben’s in terms of the sheer number of roles and scope), I really don’t think I should be touching anything non-company related, as I really don’t have the time right now to clean up any messes that may result from it.

That, plus I’m a lazy bastard. Whatever.

Anyways, I know what I’m going to do: I’m going to move our mail server off that machine onto a local one, and let all the other non-company stuff on that machine fend for themselves. At least this way, I’ll finally learn how to set up a mail server from scratch, which is the closest thing that relates to my career path that I’ll be able to do in this company in a very long time.