Friday, May 26, 2006

Lapping it up

My intention to learn some new tricks has been going quite well.

I've been working with PHP on a little project of mine. It's a system monitoring application - data about the health of systems is inserted into a database and reports displayed on a web page.

It was written using JSP, and that works extremely well up to a point. But it doesn't really need any specific technology - there's nothing in JSP that is especially suited to the task, which is fairly simple. Rewriting it in PHP therefore looked fairly easy, and having a real project is a good way to learn something new.

And it worked very well. PHP is very easy to develop in, and I was able to implement some new features as well.

So how would I compare PHP and JSP? Each has advantages and disadvantages:

PHP is quicker to get going in, so putting together simple prototypes and building on them is slightly easier.

I found it easier to make silly mistakes in PHP - the language is a bit more forgiving, so that some typos that would have generated a failure in JSP get through and you end up with something that doesn't quite work right. You need to impose a bit more discipline with PHP.

PHP doesn't need a huge java process to run a servlet engine. This has always bothered me - the overhead of a servlet container, just to dynamically create a few web pages, is considerable.

The really big advantage of PHP, though, is that it's very easy to create images on the fly. The GD extension is pretty much standard, and works (although it's not capable of drawing a horizontal dashed line - doh!). I've tried various ways of creating graphs using JSP and, while it's possible, it's neither easy nor pleasant. There's clearly an opening here for a dead-easy JSP image generator to be created (or, if it already exists, to come out from wherever it's been hiding).

The downside to PHP was the length of time it took to put the stack together. Tomcat is a zip file: unzip it and you're basically done. PHP makes the common mistake of using autotools, and as a result is a right pain in the neck to configure and build correctly. (Note that it's not PHP's fault; it's autoconf.)
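For the record, the build itself is just the usual autotools dance; this is only a sketch, and the prefix and extension flags here are choices of mine rather than anything PHP mandates:

```shell
# Illustrative PHP 5 build; the prefix and extension flags are examples only
./configure --prefix=/usr/local/php5 \
    --with-gd \
    --with-mysql \
    --with-zlib
gmake
gmake install
```

The pain isn't in the commands themselves, but in getting configure to find everything correctly on Solaris.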

So that project is almost done, and while I wouldn't claim to be fluent in PHP, at least I'm capable of asking directions in the language. Now on to ruby on rails when I get some more free time...

OpenSolaris contributions

As Jim has just noted, some of my contributions to OpenSolaris recently got putback.

This is great. It's great that Sun - as an organization - gives external contributors like myself the opportunity to put changes into Solaris. It's great that individuals within Sun (thanks Dave!) put in the time and effort required to integrate the changes into the codebase.

It's taken a while to get going. I was starting to work on fixes during the pilot stage of the OpenSolaris project, over a year ago. The fixes just putback aren't those (although I am looking at getting that work integrated too). This set of fixes to the install consolidation was actually prompted by something I was trying to do at work, when pkgchk wasn't cooperating. In the old days, I could have spent days fighting with support trying to persuade them that (a) it was a problem, and (b) it might be worth fixing. Instead, I can now get in there and fix the thing properly.

I have some other fixes planned for the Solaris package tools as well, with several possible performance improvements having been identified.

Actually getting the fixes into OpenSolaris isn't hard (the Sun sponsor does most of the legwork). It would do no harm to start off simple, just to run through the process (I didn't, of course, and it was the first external contribution to the install consolidation as well, so it was a learning experience for us).

Monday, May 15, 2006

Old dog, new tricks

I've decided it's time to learn a few new tricks.

The first trick I'm trying to learn is PHP. I'm used to java and JSP pages (although these modern new-fangled frameworks do nothing for me), but there seem to be lots of web-based applications out there using PHP, and if I'm going to install and maintain some of them, I ought to understand how the language works.

The second trick is Ruby on Rails. The promise of simplicity is very appealing to me. That agrees with my systems philosophy - KISS.

(More generally, the more completely you can understand a system, the better the chance you have of making it work properly.)

After a few trials and tribulations, I've now got both software stacks up and running. Off to try a few examples...

Flimsy fans?

Anyone having trouble with the main fans ("rear fan 1") in a W2100z?

We've got 3 W2100z boxes at work, from two different batches. Even the oldest one is only a year old.

Now all 3 have had the main fan die, needing hardware replacement. The system is dead while waiting for a new fan to show up, which is somewhat annoying.

(No, it's not the old BIOS problem.)

Still, my old SunBlade 2000 had its disk die today, so sparc boxes aren't immune. This means I'm temporarily without a system running a nevada build - I shall have to go root around in the dumpster tomorrow to see if there are any spare FCAL drives left.

dumb

Every so often I have it reinforced just how stupid using autoconf can be.

I maintain my own software stack in /usr/local - applications that don't come with the operating system, or different versions of ones that do.

I was just updating gzip. So the ./configure script, when invoked with --help, says:

By default, `make install' will install all the files in
`/usr/local/bin', `/usr/local/lib' etc. You can specify
an installation prefix other than `/usr/local' using `--prefix',
for instance `--prefix=$HOME'.


OK, so I do ./configure, make, make install.

What the heck?!?!?!

It's installed it in /usr, overwriting the system copies!

Looking at the configure output in more detail, it did say:

checking for prefix by checking for gzip... /usr/bin/gzip


I know I can't blame autoconf itself for this one, because the tool has been misused, but using a tool and then violating both its common conventions (a default prefix of /usr/local) and its own self-documented behaviour is really bad.
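The defensive habit this teaches is to never trust the default and always spell the prefix out, even when the documentation claims it isn't necessary:

```shell
# Pass the prefix explicitly rather than trusting the documented default
./configure --prefix=/usr/local
make
make install
```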

And one last thing. It installs itself as zcat, not gzcat. On Solaris, zcat isn't even the same beast.

Thursday, May 11, 2006

MyISAM vs InnoDB

I've been using MySQL for the database component of a number of projects over the years.

Usually, I've used the MyISAM storage engine. It's fast (that's the reason for using MySQL in the first place), and generally reliable.

MyISAM isn't robust against system failure though. System crashes, reboots, and power cuts tend not to be handled very well. (Yeah, I know, they shouldn't happen in the first place, but this is the real world.)

I don't need the transactional capabilities of InnoDB (from the functional point of view even MyISAM is overkill for most of what I do), but something more robust would help.
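Mechanically, at least, switching engines is trivial; a sketch, with the database and table names made up for illustration:

```shell
# Convert a (hypothetical) table to InnoDB, then check which engine it uses
mysql -e "ALTER TABLE mondb.samples ENGINE=InnoDB"
mysql -e "SHOW TABLE STATUS FROM mondb LIKE 'samples'"
```

The interesting question is what it costs at runtime, not how to do it.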

So I thought I would do a quick check of the impact of using InnoDB on the system. This isn't a benchmark, it's completely unscientific, and all that, but it told me what I needed to know.

Running InnoDB, one minute's worth of disk activity looks like:

r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.1 7.0 0.8 34.1 0.0 0.1 1.6 12.4 1 5 d0

whereas with MyISAM it looks like:

r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.1 2.3 0.8 3.0 0.0 0.0 3.5 14.5 1 1 d0
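For reference, output in that format is what Solaris iostat produces; an invocation along these lines would generate it, although I'm not recording the exact flags I used:

```shell
# Two 60-second samples of extended device statistics; the first sample
# reports averages since boot, so it's the second interval that matters
iostat -xn 60 2
```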


OK, so InnoDB generates 3 times as many writes, and 10 times as much data transfer, as MyISAM. And the mysqld process consumes correspondingly more cpu time (although it's still very small - much less than 1% of a processor), so the system load average is a bit lower with MyISAM too.

I don't think this rules out InnoDB, although it does indicate that there is a significant cost to changing, and scaling up by a factor of 10 (which I'm going to need to do, and then some) would likely cause problems if I were using InnoDB. If I go down that route, I need to do more optimisation of the system design and the database client interactions.

Wednesday, May 10, 2006

Sinking ship?

Is this the end for SGI?

It's been a long painful slide, but inexorably downwards, and I can't really see any way back (short of a government funded rescue package).

I remember sitting through a sales presentation from SGI some years ago. We were told of their grand plans to throw away their well-respected workstation technology and become a shifter of (not quite standard or compatible with anybody else's) PCs running Windows NT; to throw away IRIX and adopt Linux; and to throw away their own RISC chips and go down with Itanium. (Itanium wasn't even shipping at the time.)

We regarded this as doubly suicidal. Not only were the products entirely unattractive to us (as existing customers with investments in their hardware and software platforms, incompatibility is a big turn-off), but we could see the plan failing and were thus reluctant to invest in products from a company that - in a single presentation - had gone from a front-runner to certainly doomed.

Sunday, May 07, 2006

Outdated libraries

Every so often, the ugly interaction of Solaris and the undisciplined open software world completely breaks a project I'm working on.

Latest case in point - libxml2. Solaris ships a horrifically antiquated version (2.6.10, to be exact) with Solaris 10 (and nevada, currently). Snag is, PHP requires 2.6.11 or later and won't build.

There are a couple of bugs open: 6362386 and 6193876. Not to mention 6378879.

It's been this way for 16 months. Surely time enough for something to have happened?

I realise that, because certain core components of Solaris itself rely upon libxml2, it cannot be upgraded without due care. If that is the case, then those components should use a private, compatible, copy, and allow the publicly available version to be kept reasonably current.

Unfortunately the download site appears to be down at the moment, so I'll have to grab the source and build my own up-to-date copy tomorrow before I can make progress.
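Building a private copy is easy enough; a sketch, where the prefix is my own choice and the PHP flag is the one its configure script provides for this purpose:

```shell
# In the libxml2 source tree: build a current copy under /usr/local
./configure --prefix=/usr/local
gmake
gmake install

# In the PHP source tree: point configure at the private libxml2
./configure --with-libxml-dir=/usr/local
```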

It's not as if libxml2 is the only external component that is somewhat antiquated. In fact, quite a lot of it is getting sufficiently old as to be useless, and if Solaris is going to be used as a platform for other open-source applications then some serious updating is going to have to be done.

Great Open Source

Looked fairly innocuous to start with, but this blog on ZDnet points to a list of the top 50 open source projects. Check it out - there are some real nuggets, and starting points for further exploration.

Friday, May 05, 2006

Solaris to/from Linux?

So Matty gave some reasons why people might consider switching from Solaris to Linux.

This is all a matter of opinion, of course. Despite computers being binary systems, everything in IT is shades of grey (often very dirty shades of grey). So here's my take on Matty's points, in the same order:

1. Integrated LAMP stack. Actually, all my Solaris boxes could have a full SAMP stack installed, but I would never use it. I wouldn't (and didn't) use it on Linux either. In both cases I would much rather install the application stack separately. It's very much safer that way - not only can I be sure that all the components are at specified known levels, and are the same across platforms, but I can be sure that my OS vendor isn't going to screw me around. I always had to wipe all evidence of apache, mysql, and friends from a RedHat system in order to get operational stability; unfortunately it appears that Sun are heading down the same misguided path by bundling more services in Solaris.

2. At my previous place of employment, you could always tell when the next version of Fedora had been released - the whole group of developers would be surly and miserable for a week because their desktop had been randomly rearranged and they had to relearn it. I reckon each of them lost a week's productivity as a result. Sun aren't much better - open source software doesn't care for operational stability, and every time Sun update Gnome or JDS in Solaris nothing works. And I'm now convinced we're going backwards. (As an aside, I've had many more problems with desktop apps not working right on Fedora than Solaris, but that may have been because the Solaris versions didn't try to push the bleeding edge so hard.)

3. The JES stack deserves a good kicking. My experience of it has been woeful. So I agree with that part. The other side of the coin is regular updates. As I said in point 1, I don't want my OS provider to randomly modify the working of key applications. So I wouldn't use either of them.

4. My experience of ISVs is that they hate supporting Linux. And I can't blame them - the qualification effort is horrendous. I asked one the other week about Solaris x86 support, and their answer was "we keep getting asked about that, but it's not on our roadmap". Another one went "phew!" when we said we were going to deploy on Solaris - he was then confident it would work out of the box. But, yes, there does seem to be more visibility for Linux - I suspect vendors think that's what customers want, when most of the time any alignment between customer requirements and vendor offerings is random at best. At least with Solaris, if it works it works, rather than having to stick to one particular release of one particular Linux distribution with a specific set of patches.

5. Not so; managing applications and patches on Solaris is an absolute doddle. Provided you ignore completely Sun's perpetually failed attempts to improve the process - patchpro, smpatch, prodreg, update connection are all worthy only of the dustbin. Fortunately it's trivial to write your own tools, or use pkg-get or pca which are massively better than anything Sun have come up with so far.

6. Why not upgrade? Modulo bugs in occasional releases (and these are the sorts of bugs that mean that not every build qualifies as a regular Express release) regular upgrade or live upgrade work fine. There's no need to futz with bfu unless you really want to.

7. At least with Solaris you have the option of zones! Are zones a universal panacea? No. But they are enormously useful for a whole range of operations where you need to consolidate or isolate services. I can believe smpatch update could take days, but then I wouldn't do it like that - I wouldn't use smpatch, for starters, and I wouldn't create the zones and then apply the patches. For this sort of thing our mode of operation was simply to migrate services onto a zone on a different box, and then rebuild machines, if the patch overhead was too great. (Based on the philosophy of never patching or rebooting a live service.) There's no getting away from the fact that by creating 25 zones, you've increased the update cost 25 times. It is certainly true, though, that there's room for improvement in the patch tools. They are significantly slower than can be accounted for by the actual work they're doing.

8. It's a tricky balancing act. And I think Sun have done pretty well with OpenSolaris. There was massive concern that the open source process would destroy Solaris' core strengths, and it doesn't seem to have done so yet. My hope was that it would address some of Solaris' weaknesses, and I don't think it's actually done that yet, but the level of engagement is there, and the signs are promising. To be honest, I'm not sure I would want to use an OS that would allow outsiders to put code back to the kernel source tree - does Red Hat allow me to modify their kernel source?

One thing that worries me about end-user putbacks is that the whole thing has to be vetted and managed, and that has to be done by Sun (because their position is that if it goes into OpenSolaris then it goes into Solaris - there is only one codebase), so it costs Sun more to have a community member fix something than to have one of their own engineers do it. Given the financials of the past few years, this doesn't strike me as an optimal situation.

There are some good points here - it's not as if Solaris hasn't got any weaknesses. The real killer is the availability of commercial software. If you're only offered it under Linux then you're going to have to put up with increased costs in terms of licensing, support, hardware, and staffing (or just pretend they don't exist), or choose another product. But there are many places where one person's ideal is anathema to someone else, and we just have to accept that there is no one solution to every problem.
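As a footnote to point 7, for anyone who hasn't tried zones: creating one is only a handful of commands. A minimal sketch, with the zone name and path made up:

```shell
# Minimal Solaris 10 zone; the name and zonepath are examples only
zonecfg -z web1 "create; set zonepath=/zones/web1"
zoneadm -z web1 install
zoneadm -z web1 boot
```

It's keeping 25 of them patched, not creating them, that costs you.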

Thursday, May 04, 2006

Well, that didn't work...

I notice that Eric made a decent comment about my recent rant on the decline of the desktop.

Now, I've tried xfce a couple of times in the past, but it has similarities to CDE that have always put me off. But I've never tried fluxbox (although I did try blackbox back in the day).

So I thought it deserved a try. Downloaded and installed it (which was fine apart from the usual problems).

I started it up, and everything seemed unusually sluggish. That's odd - this is supposed to be light and fast. I then made the mistake of clicking on the background and selecting 'About'. This sent the whole thing into a tailspin. 4G of memory usage later and my machine was stuck in treacle.

Worth a try, but I think I have to write that one off.

I'm leaning back towards windowmaker. (Anyone know which black hole the windowmaker website has disappeared down at the moment?)

Wednesday, May 03, 2006

tuxpaint

One of my girls came home from school yesterday raving about Tux Paint.

It looks pretty good stuff to me. (The fact that one of my girls is mad about penguins might also make it popular.)

They're using it on Windows at school, but it's cross-platform, so I built it for Solaris too.

I had to go through the prerequisites:
In all cases I did a

(setenv CC cc ; setenv CXX CC ; setenv F77 f77 ; ./configure )
gmake
gmake install

to build, using the free Sun Compiler, as libtool couldn't cope with gcc for some reason.

Then to build tuxpaint

gmake CC=gcc


This wasn't quite enough. I needed to add an explicit

-lpng -lsocket

to the link list, and then it fails because it needs strcasestr().

OK, so I grabbed a shim copy of strcasestr from OpenSolaris (for example, this one), compiled it, and linked that in.

And, because it used a simple makefile, fixing up the problems was trivial.
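Pulled together, the fix amounted to something like this; the shim filename and the make variable are illustrative, since the exact variable to override depends on the makefile:

```shell
# Compile the strcasestr shim (filename is illustrative)
gcc -c -o strcasestr.o strcasestr.c

# Rebuild with the extra libraries and the shim on the link line;
# LIBS is a guess at the relevant makefile variable
gmake CC=gcc LIBS="-lpng -lsocket strcasestr.o"
```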

Oh, and I've filed a couple of bugs. One against tuxpaint so that it handles a missing strcasestr (or doesn't require it at all); and an RFE that Solaris include this function (as opposed to the several private copies dotted about in the source), as this isn't the first time I've seen applications want it.