Thursday, March 24, 2005

OBP updates, or not?

Sometimes, when I rebuild my Sun machines, I update the OBP (OpenBoot PROM).

Not always, and not to any systematic pattern. Part of the reason is that I don't always have a convenient window to do so, but I'm also not entirely sure whether it's really a good idea - it's not broken, so why fix it?

I have had SunService complain about non-current OBP versions, but without any particularly good justification. And I have had one case where an OBP update actually stopped a system from working.

It's easier to update systems now - most US-III machines can be updated from Solaris with a script, which is good; on servers I just have to put the keyswitch into the normal position (which involves physically going to the machine). Older machines are slightly more tricky.

I know there are some cases where I do have to upgrade OBP - for example, before a CPU upgrade, so that the firmware will recognize the new processors.
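
Either way, the starting point is knowing what revision a box is actually running. Here's a minimal sketch of that check, assuming a SPARC system where prtconf -V reports something like "OBP 4.x.y ..."; the minimum version in the script is just an illustrative placeholder, not a recommendation - the real target comes from the firmware patch README for the platform.

    #!/usr/bin/env python
    # Minimal sketch: compare the current OBP revision against a target.
    # Assumes Solaris with prtconf in /usr/sbin; the output format varies
    # a little between platforms, so the regex is deliberately loose.

    import re
    import subprocess

    def obp_version():
        """Return the OBP version as a tuple of ints, e.g. (4, 16, 4)."""
        out = subprocess.check_output(["/usr/sbin/prtconf", "-V"], text=True)
        m = re.search(r"OBP\s+(\d+)\.(\d+)\.(\d+)", out)
        if m is None:
            raise RuntimeError("could not parse firmware version from: " + out.strip())
        return tuple(int(x) for x in m.groups())

    if __name__ == "__main__":
        # MINIMUM is a made-up example; check the patch README for the
        # recommended revision before flashing anything.
        MINIMUM = (4, 16, 4)
        current = obp_version()
        print("Current OBP:", ".".join(str(x) for x in current))
        if current < MINIMUM:
            print("Below the target revision - a candidate for an update")
        else:
            print("At or above the target revision - leave well alone?")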

I wonder what other people do as a strategy. Do you religiously upgrade OBP, or avoid doing so like the plague?

Wednesday, March 23, 2005

I love the sound of breaking glass

(Not!)


Not how you want to find your computer room!

Yup, that's a glass door spontaneously shattered overnight. Certainly the first time I've seen anything like this.

(And it's obviously shattered rather than fallen out - the distribution of glass shows that it dropped vertically. That was a mess to clear up, let me tell you! Fortunately safety glass tends not to generate large sharp pieces, although there were a lot of nasty dust-like shards.)

Monday, March 21, 2005

FantasyLand

As Jim says, Conspiracy theories are fun.

Specifically, the idea that Sun might buy SCO.

Does SCO have anything of value left that Sun might want? I doubt it. Last year, Sun bought whatever IP it needed from SCO to help it open-source Solaris. Sun's got all the IP it needs from SCO, it has a much better OS in Solaris than SCO will ever have, and it's not as if SCO has any other assets that would interest Sun.

I was entertained by the considerable advances listed in the article:

OpenServer can now support 16 processors, use 16GB of general-purpose memory, let databases access 64GB of memory, support files bigger than 2GB and multithreaded applications via the native SVR5 kernel.

Oh my. We were doing that with Solaris 7. In what, 1998?

Sunday, March 20, 2005

Improving Operational Efficiency

Most computers sit pretty idle most of the time. (In fact, one could well ask if they ever do anything useful!) This is true both for desktops and servers.

Normally, this is because you have to overspecify the kit - relative to the average load - in order to handle the peaks: to give good response on a desktop machine once the user actually does something, to handle sudden unpredictable surges in demand, or simply because the load is cyclical.

(In our case, one scenario we have to cater for is an instructor standing in front of a class of 20 students and inviting them all to run some interesting application on our servers - at the same time.)

The well-known downside is that you end up with a system that's horribly inefficient. Not only do you have to spend much more up front than you really need to, you also end up burning electricity and running your air conditioning plant round the clock, so the cost is driven by the peak load, which is usually highly atypical.

There is a range of solutions that can drive up efficiency and equipment utilization - or, more to the point, deliver the same (or better) service to customers and users at lower cost.

On the desktop front, thin-client solutions can lead to considerable savings. Not so much in up-front cost any more, but in terms of power and cost of ownership the thin client makes much more sense. In many ways, though, it's the more subtle issues like lower noise and a smaller desktop footprint that ought to make this a no-brainer. We've used SunRays (and the hot desking is really useful) with some success, although we've avoided them for developer desktops in the past because developers tend to want to run things like NetBeans, Tomcat, Apache and MySQL, and you can't really run more than one of each on a shared machine. But with zones in Solaris 10 we could consolidate developers onto a SunRay server as well.

Many servers sit pretty idle simply because it's been traditional to allocate one server per service. I know we've done this in the past - simply to keep services separate and manageable - but then server sprawl becomes a serious problem. Enter Solaris zones again: it looks like you're running one service per machine, but they're all consolidated. Provided you have some means of resource management so that one service can't monopolize the box, you can consolidate a lot of services onto a single piece of hardware. Not only that, you can afford a better system - more RAS, more power, more memory - so the services run faster and more reliably.
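
As a rough sketch of what that consolidation looks like in practice, here's a little script that generates a zonecfg command file for one such zone, with a zone.cpu-shares resource control so the service can't hog the CPU (which only bites if you're using the Fair Share Scheduler). The zone name, zonepath, interface and address are all made-up placeholders.

    #!/usr/bin/env python
    # Rough sketch: write out a zonecfg(1M) command file for a consolidated
    # service zone, including a zone.cpu-shares resource control so one
    # service can't monopolise the box (assumes FSS is in use).
    # Zone name, zonepath, NIC and address are all placeholders.

    ZONE = "web01"

    COMMANDS = [
        "create",
        "set zonepath=/zones/" + ZONE,
        "set autoboot=true",
        "add rctl",
        "set name=zone.cpu-shares",
        "add value (priv=privileged,limit=20,action=none)",
        "end",
        "add net",
        "set physical=hme0",
        "set address=192.168.1.10/24",
        "end",
        "commit",
    ]

    if __name__ == "__main__":
        path = "/tmp/" + ZONE + ".cfg"
        with open(path, "w") as f:
            f.write("\n".join(COMMANDS) + "\n")
        # To actually build the zone (as root), something like:
        #   zonecfg -z web01 -f /tmp/web01.cfg
        #   zoneadm -z web01 install && zoneadm -z web01 boot
        print("Wrote", path, "- review it, then feed it to zonecfg -z", ZONE)

The nice bit is that the whole zone definition is a dozen lines of text, so stamping out one zone per service is entirely scriptable.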

Handling cyclical load is another matter. If you have to do something once a month, or once a night, you normally have a given window in which to do it, and almost by definition the systems used sit idle the rest of the time. Sure, there's some opportunity to steal CPU and other resources from other machines on your network, but if you want to consolidate then the only real option is to find someone else with the same needs at different times (it's no use if you both want to do the same analysis at the same time!) and share systems with them. (Or, for certain workloads, you have datacenters in different timezones and move the load around the planet as the earth rotates.)

I'm guessing that this is the sort of workload that Sun's recent grid offerings (and grid is one of those words I'll return to in a future blog, no doubt) are designed to address. The business model has to be that if Sun can keep the machines busy then they can make money, and that for the customer it's cheaper to buy CPU power when you need it than to pay for it and have it sitting idle.

So the grid provision model isn't going to be of any use to customers who already have high utilization - or, which is the same thing, customers with constant workloads. I worked this out for our compute systems: once you get above roughly 50% utilization, it's cheaper to do it yourself; much below that, it's cheaper to buy the capacity from someone else as you need it.
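
The back-of-the-envelope sum looks something like the sketch below. Every number in it is invented purely for illustration - the point is just that the break-even sits wherever the annual cost of owning the kit equals the cost of the CPU-hours you'd otherwise rent.

    # Back-of-the-envelope sketch of the rent-versus-own break-even point.
    # All the figures are invented for illustration; plug in real hardware,
    # power and grid prices to get a number that means anything.

    CPUS = 16                      # size of the in-house system
    OWN_COST_PER_YEAR = 70000.0    # purchase amortised over its life, plus power, cooling, admin
    GRID_PRICE_PER_CPU_HOUR = 1.0  # rented price per CPU-hour
    HOURS_PER_YEAR = 24 * 365

    def cheaper_to_own(utilisation):
        """True if running the work in-house beats renting at this utilisation."""
        cpu_hours_used = CPUS * HOURS_PER_YEAR * utilisation
        return OWN_COST_PER_YEAR < cpu_hours_used * GRID_PRICE_PER_CPU_HOUR

    for u in (0.1, 0.3, 0.5, 0.7, 0.9):
        where = "in-house" if cheaper_to_own(u) else "rented"
        print("{0:3.0f}% utilisation -> {1} is cheaper".format(u * 100, where))

With those made-up numbers the crossover lands at almost exactly 50%, which is roughly where our own figures came out too.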

I wonder who the likely customers are - or what market segment they might be in. In particular, is it going to be large companies or small? If small, there's an opportunity for resellers - brokers, if you like - to act as middlemen, buying large chunks and doing the tasks on behalf of the smaller customers. And while, as I've understood it, Sun are offering capacity in a fairly raw form, smaller end users might be interested in more focused services rather than counting individual cycles.

This could go down to individual consumers once you get into storage. Now, I don't suppose Sun are going to deal with individual consumers, but I know that I would be interested in a gigabyte at a dollar a month for my own critical data. After all, I have a PC and it isn't backed up - and never will be. So I need somewhere to keep this stuff that isn't vulnerable to hardware failure, theft, or user stupidity.

None of this is new, of course. The problem of systems operating inefficiently - at very low utilization levels - has been around for years. It remains to be seen if current initiatives are any more successful than past approaches in getting rid of the horrible inefficiencies we currently put up with.

Friday, March 18, 2005

No such thing as a free lunch

Something I've always wondered about is the enormous predictions made for the future value of the Linux market - $35bn per annum, or some such. Where's that IT spend going, if the OS is free?

My assumption has always been that it would be in the consultancy market. But, according to Tech News on ZDNet - Open source--open opportunity for consulting - the consultancy firms aren't really moving in for the kill.

While I'm at it, I liked this bit:

The order-of-magnitude cost savings more than justified the consultancy fees and the stranglehold of proprietary hardware architectures was broken forever.

From what I can see, not only do the cost savings rarely justify the consultancy fees, but you get locked into paying the consultancy fees over and over.

Let's face it, the reason Oracle and IBM want to encourage you to use cheaper solutions is not to save you money, it's so you have more cash left over to give them a bigger slice of the pie.

So why aren't consultancies falling over themselves trying to get customers using open source? They have the opportunity - following the previous argument - to save you some money so that they can increase their fees.

What consultancies thrive on is change and complexity - the latter being a barrier to the former that can only be overcome by the transfer of large fees in their direction. Many commercial packages are outstanding examples of complexity and opaqueness. The suppliers are clearly in league with the consultancies, as you can't just pick the thing up and make it work (and in cahoots with book publishers and purveyors of training courses at the same time). And the idea that you could take a new version and just install it, and everything would just work afterwards, with all your customizations carried forward correctly, while maintaining compatibility with older versions of clients and servers - well, that's just a dream, isn't it?

While commercial packages seem to embrace change and complexity as a matter of policy, open source isn't immune to this disease. Change is often seen as a desirable attribute, compatibility over time and between releases isn't always a high priority, and so there's clearly an opportunity for consultants to step in and manage the process (taking their fees along the way). Of course, there are examples of companies and products that want to step into this space and help you out.

What is the barrier, though? Is it that those organizations predisposed to open source solutions have an inbuilt aversion to consultancy? Or have consultancies worked out that, because they cannot control the change and complexity of open source, it would actually cost them more?

Over the Hill

I've been wanting to blog about more personal stuff, and this didn't seem the right place to do it, so I've set up a separate personal blog to keep those ramblings away from here, which is largely Solaris-centric.

Amongst other things, this means that Planet Solaris won't get saturated with my holiday snaps.

Thursday, March 17, 2005

Minimalist Solaris

I've been looking at minimizing a Solaris installation. This has similarities to the work Eric Boutilier has been doing, with the difference that while he's trying to find the minimum configuration that will actually boot - so that you can install third-party software on top to make a viable system - I'm interested in keeping a usable (and manageable) system that can be integrated into our network without needing to install additional utilities.

I'm down to 74 packages so far. That seems like a lot, I know, but the core install is about a third of it, the packages necessary for living on our network are another third, and system admin tools are the final third.

Solaris 10 actually needs more packages than previous releases. One reason is the increasing granularity of the package system, so there simply are more packages. Another is that the functionality of Solaris keeps increasing, and more utilities are making use of features outside the basic core. (For example, using XML for system configuration requires you to have an XML parser.)

While my test system got down to 74 packages, I'm typically installing Solaris 10 on machines and ending up with over 800 packages (for one thing, the Java Desktop System has a few hundred). This is getting plain silly.

I think it's got to the point where a flat list of packages has outlived its usefulness. Trying to do installation management with this many packages is impossible. You just have to hope for the best. (I know that Solaris does have package clusters, but these are really just shorthand ways of referring to lists of packages rather than having meaning in their own right.)

One problem with the present system is that you want to be both very specific and very generic. I want to say "Install JDS on a desktop" and "disable and delete this one daemon" with equal ease, and the current flat system doesn't really allow me to do either very well. (Although there are cases when the install granularity has reached the level of individual services.)

So the question that naturally arises is: what sort of package management system would make life easier? I tend to think it has to be hierarchical, but I haven't got much further than that. Note that the question is really one of how best to bundle files up into meaningful, easy-to-manage chunks, rather than which software to use to manage those chunks.
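
By way of illustration (and nothing more), here's a little sketch that collapses the flat list into slightly bigger chunks using the CATEGORY field that pkginfo(1) already reports. It's a stand-in for the sort of grouping I mean, not a proposal for a real hierarchy.

    #!/usr/bin/env python
    # Sketch: group the flat Solaris package list into larger chunks by the
    # CATEGORY field reported by pkginfo(1). Purely illustrative.

    import subprocess
    from collections import defaultdict

    def packages_by_category():
        """Map pkginfo category -> list of package abbreviations."""
        out = subprocess.check_output(["/usr/bin/pkginfo"], text=True)
        groups = defaultdict(list)
        for line in out.splitlines():
            parts = line.split(None, 2)   # category, PKGINST, package name
            if len(parts) >= 2:
                groups[parts[0]].append(parts[1])
        return groups

    if __name__ == "__main__":
        for category, pkgs in sorted(packages_by_category().items()):
            print("{0:<12} {1:4d} packages".format(category, len(pkgs)))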

Sunday, March 13, 2005

What a week

Man I'm tired.

Just spent the week in Palo Alto, officially for the Solaris 10 beta program, so I got to meet up with a lot of the Solaris engineers again and talk Solaris, which was fantastic.

Also managed a bit of OpenSolaris activity while I was there, meeting up with some of the Sun people on the project (thanks for lunch, Jim!), but also having a great evening with Ben Rockwood.

I learnt a lot, thought a lot, and it's going to take time to digest it all. But first I must get some sleep.

[And find some warmer clothes. It's warmer here now than when I left a week ago (so it's above freezing), but as anyone in the Bay Area knows it's been sunny and very warm there all week, which made a nice change.]

Saturday, March 05, 2005

Getting the next bus

So, according to Microsoft and Intel, The Time For 64-Bit is Now.

Strange. We got our first 64-bit Sun back in 1996. We were running 64-bit Solaris on 64-bit SPARC CPUs in 1998. I think it's 5 years since we had any 32-bit SPARC systems running (they largely got canned in the Y2K work, although we did have one or two Ultra 1s or E150s running in 32-bit mode for the first year or so of this millennium). We've been doing 64-bit for years.

Of course, most of our x86 systems are stuck in 32-bit mode, but not my Sun W2100z, which is 64-bit. And damn quick with it.
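
If you're not sure which camp a box falls into, isainfo will tell you. Here's a tiny sketch of that check, assuming Solaris with isainfo in /usr/bin.

    #!/usr/bin/env python
    # Tiny sketch: report whether a Solaris box is running a 32-bit or
    # 64-bit kernel. "isainfo -b" prints the number of bits in the
    # address space of the native instruction set.

    import subprocess

    def kernel_bits():
        out = subprocess.check_output(["/usr/bin/isainfo", "-b"], text=True)
        return int(out.strip())

    if __name__ == "__main__":
        if kernel_bits() == 64:
            print("Already 64-bit - nothing to wait for")
        else:
            print("Still 32-bit - time to catch the next bus")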

They might have missed the bus first time around, but you just have to wait awhile and another one turns up.

Friday, March 04, 2005

Snow, glorious snow

Or maybe not. Woke up this morning and it was just starting to snow. OK, so it snowed hard for a while and we ended up with about an inch of snow - which is a good fall by local standards.

Of course, this being England it caused total chaos. Schools closed, roads jammed. It's pretty clear now, an hour or so later, and it'll probably rain by lunchtime and wash it all away.