Monday, August 29, 2016

The Tribblix filesystem layout

On Solarish systems, the filesystem(5) manpage gives a good description of where in the directory tree you might find the various files associated with a piece of software.

The version in illumos is largely broken, in that many of the directories it references make no sense at all for illumos itself, and are simply wrong for the various illumos distributions. In particular, some of the directories are very specific to the old Solaris Java Desktop System, or JDS, and relate to GNOME.

Now, how does Tribblix handle all this?

For anything inherited from illumos-gate, I simply put files wherever illumos-gate put them.

For anything I build and ship, I normally build with a --prefix of /usr, and for most packages that's the only thing I set. This means that --sysconfdir ends up as /usr/etc and --localstatedir as /usr/var; I do not redirect --sysconfdir to /etc by default. In most cases I think that's the right choice, to be honest, as the files that would have been put into /etc often aren't meaningfully editable anyway.

In those cases where the application does expect user-editable configuration, I set --sysconfdir to /etc. This covers BIND, samba, cups, openssh, and the like.
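
As a concrete sketch, the two styles of invocation are simply:

./configure --prefix=/usr
./configure --prefix=/usr --sysconfdir=/etc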

Laying things out like this helps with sparse-root zones. I loopback-mount /usr read-only, and that neatly catches everything (and ensures the parts of a package stay consistent).

On the subject of zones, in a sparse-root zone /lib is inherited, which causes a problem: the SMF manifests and method scripts are now stored under /lib, and some are only relevant to the global zone. To handle this, I make a fixed copy of /lib for sparse-root zones to use, one that doesn't have the errant SMF services present.

In order to be able to add my own services to zones, I make sure the manifests live under /var, which is unique to a zone.
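So, for a hypothetical service foobar, the manifest gets delivered into the zone's own /var and imported there, something like:

svccfg import /var/svc/manifest/site/foobar.xml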

I also handle /opt specially. According to filesystem(5), this is the "Root of a subtree for add-on application packages." The idea has always been that third parties pick a directory there and have that as their own dedicated prefix. (As an aside, I've always found the use of /etc/opt/foo and /var/opt/foo to be incredibly confusing, as it basically splatters the files associated with a given application all over the filesystem, making it very hard to keep track of things. Which is one of the reasons I just specify the prefix and put everything under the one root if I can.)

And what I do with /opt is mandate that it's not inherited by zones: anything installed in /opt won't automatically appear in a zone, so if you want it in a zone you need to make sure it gets added there.

My own applications designed for zones - particularly services - go under /opt/tribblix, so that an application foobar lives in /opt/tribblix/foobar, its configuration under /opt/tribblix/foobar/etc, and the like. Again, it's easier to see everything clearly if there's only one place to look. This layout makes it easy to run services in sparse-root zones, as the OS in /usr is read-only and the application never needs to touch that.

Modulo dependencies, anyway. That's a problem I haven't really solved, as some applications depend on packages that live in /usr, so I need some way to ensure that the right packages are installed in the global zone (or the zone template).

Solaris also had the notion of subsystems. For example, CDE (the dt subsystem) lived under /usr/dt, /var/dt, /etc/dt and the like. Again, I don't follow that. (Although there is one exception: I install CDE under /usr/dt, because that's where it's always lived.) Most things are either generic (so live directly in /usr) or are services that live under /opt/tribblix for zone support.

The exception to this is packages that live under /usr/versions in Tribblix. The main idea here is for things that might come in more than one version - python 2 vs python 3, for example, or the various versions of Node.js or Java. Here the convention is that the application lives in a versioned directory under /usr/versions, allowing multiple versions of an application to coexist. (One or two things end up under /usr/versions even though there's no meaningful need to ever support multiple versions, simply because I need to put something in its own directory hierarchy rather than directly in /usr, and this avoids having to create another standard location. Sort of like subsystems, but more tightly managed.) I'll generally put convenience links in the default path, although sometimes that involves picking a default version.
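
As a purely hypothetical illustration, a given Node.js build might sit in its own versioned directory with a convenience link in the default path:

# hypothetical paths; the real directory names vary by package and version
ln -s /usr/versions/node-v6/bin/node /usr/bin/node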

This all mirrors how I used to install software on Solaris 10 with zones many years ago. It's designed with zones in mind, and has been pretty successful.

Saturday, August 27, 2016

Updating desktop caches and a tale of woe

I recently updated some of the MATE components for Tribblix. On testing, various bits of MATE didn't work. Worse, various bits of Xfce didn't work.

The first issue was fairly easy to solve: MATE was looking for its menus under /etc/xdg/menus, whereas it had installed them under /usr/etc/xdg/menus. I had to set XDG_CONFIG_DIRS=/usr/etc/xdg in the MATE session startup scripts, and the menus reappeared.
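
In the session startup script, the fix is just:

XDG_CONFIG_DIRS=/usr/etc/xdg
export XDG_CONFIG_DIRS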

Slightly trickier was that all the mime associations had stopped working. For everything - MATE, Xfce, and any other desktop that uses the shared desktop mime infrastructure.

There are various desktop caches that have to be kept up to date. Each cache is handled by an SMF service, so if you know a cache needs to be updated, then you just get a package to kick the service whenever it's installed or uninstalled. I had inherited these from OpenIndiana which got them from OpenSolaris, so they have gnome in the package name even though they're really more generic.

Each of these SMF services has a method script that follows the same pattern. First check to see if anything needs doing, then update the cache if necessary.
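
Stripped right down, each method script looks something like this (a sketch, not one of the actual scripts - the check command here is just a stand-in):

#!/sbin/sh
# generic desktop-cache method: check whether the cache is stale,
# and only regenerate it if so
. /lib/svc/share/smf_include.sh
if /usr/bin/cache-is-stale /usr/share/mime ; then
    /usr/bin/update-mime-database /usr/share/mime
fi
exit $SMF_EXIT_OK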

This logic is, in some cases, plain broken. There's a python script that does the check, and in most cases the check is much more expensive than actually updating the cache. For a couple of the services, I had already simply ignored the check and blindly done the update. In particular, for updating the icon cache, which is the most common case.

Another problem with this check is that if you add an old package or untar some old files, there's the possibility that the "new" files you've just added get ignored because they have older timestamps than the cache. This shouldn't be a problem because the check looks at ctime, but some things reset that as well.

This time, the mime caches got broken. The problem had never shown up before because there's normally just one package - shared-mime-info - providing the mime types, so nothing else ever updates the cache. Until MATE comes along.

I had a bit of a dig, and this python script turns out to have python2.4 in its shebang. Oops! I've never shipped python older than 2.7, so this never worked. The method scripts redirect errors to /dev/null so they're never seen. The fact that this stuff worked at all was a complete accident, with the cache file being created the first time and never updated since.

There's a desktop mime cache, and updating it is pretty quick, so that was easy - just do the update without checking, and it's at least twice as quick.

The main mime cache is the one where the update itself is very expensive - it's almost 3s, which would be an eternity during every boot if it's done unconditionally. So for that I had to fix the python shebang and keep the logic intact.

As an aside, the package that delivers the SMF scripts and manifests hadn't been updated in ages, and I hadn't documented how it was built. Updating is easy - just take the existing package, modify, and repackage it. But I put together a desktop-cache repo on github to hold it so I don't have to go dumpster-diving next time.

While I was at it I discovered that the package dependencies weren't quite right (the SMF methods clearly depend on the packages supplying the binaries that update the caches), and the way I had put these into the Tribblix overlays wasn't quite right, so I sorted that out too.

All in all, a lot of effort to sort out something that shouldn't have been broken in the first place. Hence the tale of woe.

Oh, and there's a third bug I haven't yet tracked down. Sometimes the MATE file manager, caja, goes into a loop trying to open a png file under ${prefix}. Yes, the actual open() call includes a literal ${prefix}, so something hasn't been substituted in the code correctly.

Sunday, July 31, 2016

Minimizing apache and PHP

Recently I was looking at migrating a simple website in which every page but one was static.

The simplest thing here would be to use nginx. It's simple, fast, modern, and should make it dead easy to get an A+ on the Qualys SSL test.

But that non-static page? A trivial contact form. Fill in a box, the back-end sends the content of the box as an email message.

The simplest thing here in days gone by would have been to put together a trivial CGI script. Only nginx doesn't do CGI, at least not directly. Not only that, but writing the CGI script and doing it well is pretty hard.

So, what about PHP? Now, PHP has gotten itself a not entirely favourable reputation on the security front. Given the frequent security updates, not entirely undeserved. But could it be used for this?

For such a task, all you need is the mail() function, plus maybe a quick regex and some trivial string manipulation. All that is in core, so you don't need very much of PHP at all. For example, you could use the following flags to build:

--disable-all
--disable-cli
--disable-phpdbg

So, no modules. Far less to go wrong. On top of that, you can disable a bunch of things in php.ini:

file_uploads = Off [change]
allow_url_fopen = Off [change]
allow_url_include = Off [default]
display_errors = Off [default]
expose_php = Off [change]

Furthermore, you could start disabling functions to your heart's content:

disable_functions = php_uname, getmyuid, getmypid, passthru, leak, listen, diskfreespace, tmpfile, link, ignore_user_abort, shell_exec, dl, set_time_limit, exec, system, highlight_file, source, show_source, fpassthru, virtual, posix_ctermid, posix_getcwd, posix_getegid, posix_geteuid, posix_getgid, posix_getgrgid, posix_getgrnam, posix_getgroups, posix_getlogin, posix_getpgid, posix_getpgrp, posix_getpid, posix_getppid, posix_getpwnam, posix_getpwuid, posix_getrlimit, posix_getsid, posix_getuid, posix_isatty, posix_kill, posix_mkfifo, posix_setegid, posix_seteuid, posix_setgid, posix_setpgid, posix_setsid, posix_setuid, posix_times, posix_ttyname, posix_uname, proc_open, proc_close, proc_get_status, proc_nice, proc_terminate, phpinfo

Once you've done that, you end up with a pretty hardened PHP install. And if all it does is take in a request and issue a redirect to a static target page, it doesn't even need to create any html output.

Then, how to talk to PHP?  The standard way to integrate PHP with nginx is using FPM. Certainly, if this was a high or even moderate traffic site, then that would be fine. But that involves leaving FPM running permanently, and is a bit of a pain and a resource hog for one page that might get used once a week.

So how about forwarding to apache? Integration using mod_php is an absolute doddle. OK, it's still running permanently, but you can dial down the process count and it's pretty lightweight. But we have a similar issue to the one we faced with PHP - the default build enables a lot of things we don't need. I normally build apache with:

--enable-mods-shared=most
--enable-ssl

but in this case you can reduce that to:

--enable-modules=few
--disable-ssl

Now, there is the option of --enable-modules=none, but I couldn't actually get apache to start at all with that - some modules appear to be essential (mod_authz_host, mod_dir, mod_mime, and mod_log_config at least), and going below the "few" setting is entering unsupported territory.

You can restrict apache even further with configuration: just enable PHP, return an error for any other page, and listen only on localhost. (I like the concept of the currently experimental mod_allowmethods, as we might only want POST for this case. Normally, disabling methods with current apache versions involves mod_rewrite, which is one of the more complex modules.)
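
A rough sketch of that sort of lockdown, in apache 2.4 terms (the port and file name here are purely illustrative):

Listen 127.0.0.1:8080
<Location "/">
    Require all denied
</Location>
<Location "/contact.php">
    Require all granted
</Location>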

In the end, we elected to solve the problem a different way, but it was still an instructive exercise.

The above would be suitable for one particular use case. For a general service, it would be completely useless. Most providers and distributions tend to build with the kitchen sink enabled, because you don't know what your users or customers might need at runtime. You might build everything as a shared module and package every module separately (although that ends up being a pain to manage), or rely on the user to explicitly enable and disable modules as necessary.

In Tribblix, I've tended to avoid breaking something like apache or PHP up into multiple packages.  There's one exception, which is that the PHP interface to postgresql is split out into a separate package. This is simply because it links against the postgresql shared library, so I ship that part separately to avoid forcing postgresql to be installed as a dependency.

Saturday, July 30, 2016

Building Tribblix packages

Software in Tribblix is delivered in packages, which come from one of three sources - an illumos build, a bootstrap distribution (OpenIndiana or OpenSXCE depending on hardware architecture), and native Tribblix packages.

The illumos packages are converted from the IPS repo created during a build of illumos-gate, using the repo2svr4 script in the tribblix-build repo. There's also a script, ips2svr4, in the same repo that's used to construct an SVR4 package from what's installed on a system using IPS packaging, such as OpenIndiana. The OpenSXCE packages are shipped as-is.

(The use of another distro to provide components was expedient during early bootstrapping. Over time, the fraction of the OS provided by that other distribution has shrunk dramatically. At the present time, it's mostly X11.)

What of the other packages, those natively built on Tribblix?

Those are described in the build repo.

In the build repo, there are a number of top-level scripts. The key one is dobuild, which is the primary software builder. Basically, it unpacks a source archive, runs configure, make, and make install. It can apply patches, run scripting before and after the configure step, and knows how to handle most things that are driven by autoconf.
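
In outline, for a hypothetical package foo-1.0, the steps it automates amount to something like this (this is the idea, not the dobuild interface itself):

gtar xf foo-1.0.tar.gz
cd foo-1.0
./configure --prefix=/usr
make
make install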

There are some other scripts of note. The genpkg and create_pkg scripts go from a build to a package. The pkg_tarball script is an easy way to do a straight conversion of an archive to a package. There are scripts to create the package catalogs.

For each package, there is a directory named after the package, containing files used in the build.

At the very minimum, you need a pkginfo file (this is a fragment; the process creates the rest of the actual pkginfo file). There's the possibility of using fixit and fixinstall scripts to fix up any errant behaviour from the make install step before actually creating packages. There are depend files listing package dependencies, and alias files listing user-friendly aliases for packages.

However, how do you know how a package was actually built? Even for packages created with the dobuild script, there are a lot of flags that could have been provided. And a lot of software doesn't fit into the configure style of build in any case.

What I actually did was have a big text file containing the commands I used to build each package. Occasionally with some very unprintable comments about some of the steps I had to take to get things to build. (So simply adding that file to the repo was never going to be a sensible way forward.)

So what I've done is split those notes up and created a file build.sh for each package, which contains the instructions used to create that package. It assumes that the THOME environment variable points to the parent of the build repo, and that there's a parallel tarballs directory containing the archives. (Many of the scripts, unfortunately, assume a certain value for that location, which is the location on my own machine. Yes, that should be fixed.)
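
The assumed layout, then, is roughly this (the value of THOME is illustrative, and I'm assuming the repo is checked out as a directory called build):

export THOME=/home/me         # parent of the build repo
ls $THOME/build               # the build repo, containing the recipes
ls $THOME/tarballs            # the source archives the recipes unpack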

There are a number of caveats here.

The first is that some packages don't have a build.sh file. Yet. Some of these are my own existing packages, which were built outside Tribblix. Some go back to the very earliest days, and notes as to how they were built have been lost in the mists of time - these will be added whenever that package is next built.

The second is that the build recipe was valid at the time it was last used. If you were to run the recipe now, it might not work, due to changes in the underlying system - packages are not rebuilt unless they need to be, so the recipes can go all the way back to the very first release. It might not generate the same output. (This is really autoconf, which gropes around the system looking for things it can use, so running it again might pull in additional dependencies. Occasionally this causes problems and I need to explicitly enable or disable certain features. In some cases, you have to uninstall packages to make the build run in a sane manner.)

The third is that, while the build recipe looks like a shell script, and in many cases will actually function as such, it's really a recipe that you cut and paste into a terminal. At least, that's what I do. Sometimes it's necessary, because there was some manual hacky workaround I needed that's just in the build script as a comment.

This has been an outstanding TODO item for a while now, so I'm glad to have got it out of the way.



Wednesday, June 22, 2016

Getting to grips with Docker

A while ago, I described how we took an existing application build script and managed to run it inside Docker.

Having played with this inside Docker a little more, it's probably worth scribbling down a few notes on the things I stumbled across along the way.

I'm looking at having 2 basic images: as a foundation, Ubuntu with all the packages we want added; then an image that inherits FROM that with our application stack built and installed (but not configured). The idea behind this layering is simply to separate the underlying OS, which is fairly standard, from the unique stuff that is all ours.

Then, you create an instance image from the application image, simply by running a configuration script that you COPY in. Once you've got a configured application instance, you create a volume container from it, and then run the application image using the volume(s) from the instance image. You keep that volume container around, just as a home for your data, essentially forever. And you can run multiple application instances from the same base image; you just need to configure and create a volume container for each instance.
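
Expressed as docker commands, the workflow looks roughly like this (the image, container, and Dockerfile names are all hypothetical):

docker build -t myapp-os -f Dockerfile.os .            # Ubuntu plus the packages we want
docker build -t myapp -f Dockerfile.app .              # FROM myapp-os, with our application stack
docker build -t myapp-conf -f Dockerfile.instance .    # FROM myapp, runs the COPYed config script
docker create --name myapp-data myapp-conf             # volume container holding the instance data
docker run -d --volumes-from myapp-data myapp          # run the application against those volumes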

That's a brief overview of the workflow, now some tweaks and pitfalls.

We're using Ubuntu, so the first step is to run apt-get with our list of packages. This originally created a 965MB image. It's not going to be small - we need both java and a full development stack to create our application.

However, some of the stuff installed we'll never need. Using the --no-install-recommends flag to apt-get saved us about 150M. The recommends list is stuff that might be useful, but not essential. But remember - our Docker container is only ever going to run a fixed set of applications, so we'll never need any of the optional stuff. The only thing to be careful of here is if you accidentally depend on something in the recommends list without realizing you're only getting it indirectly.
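
In the Dockerfile, the install step then becomes something like this (the package list here is purely illustrative):

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        openjdk-7-jdk postgresql sudo wget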

We can do slightly better in terms of saving space. We use postgresql, but get it to store the database files in our own locations, so we can remove /var/lib/postgresql/9.X and what's underneath it, saving almost another 40M.

One thing to be aware of is that the list of packages in the official Ubuntu Docker image isn't quite the same as you would get from a regular Ubuntu install. There are one or two packages we didn't bother adding because they were there in a regular install that we need to add with Docker. Things like sudo and wget are on this list, so I needed to add those to the apt-get list.

Another thing to be aware of is that because you're building images afresh each time, you aren't guaranteed that new users will always get the same uid and gid. If you change the list of packages (even by just adding --no-install-recommends), this might change which users exist, and that affects the uid assigned to later users. I got burnt when a later base build ended up giving the postgres user a different uid, so it didn't own its database files on the persistent volume any more. I think the long term fix here is to create the users you need by hand before installing any packages, forcing the uid and gid to known values.
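
Something like this, early in the Dockerfile and before any packages are installed, would pin the uid and gid (the numbers are arbitrary - the point is that they never change):

RUN groupadd -g 999 postgres && \
    useradd -u 999 -g postgres -d /var/lib/postgresql postgres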

In order to keep image sizes small, you'll often see "rm -rf /var/lib/apt/lists/*" in a Dockerfile. In general, deleting temporary files is a good idea. This includes any files created by your own software deployment stage. Cleaning that up properly saved me another 200M or so in the final image. (Remember to clean up /tmp, that's part of the image too.)

It isn't strictly related to Docker, but I hit an ongoing problem - in some environments I ended up blocking on /dev/random. Search around and you'll find a lot of problems reported, especially related to java and SecureRandom (or, in our case, jruby). Running Docker on my Mac was fine, running it on a server in the cloud gave me 15-minute startup times. The solution here is to add -Djava.security.egd=file:///dev/urandom or -Djava.security.egd=file:/dev/./urandom to your java startup (or JAVA_OPTIONS).

(And, by the way, this illustrates that while Docker can guarantee that your app is the same in all environments, it doesn't magically protect you from differences in the underlying environment that can have a massive impact on your application.)

My application listens on ports 8080 and 8443, which I map on the host to the common ports, with

docker run -p 80:8080 -p 443:8443 ...

This works fine for me in testing, when I'm only running one copy and simply point a browser at the host. Networking gets a whole lot more complicated with multiple containers, although I think something like a load-balancer in front might work.

I've been using the Docker for Mac beta for some of this - while at times it's been beta in terms of stability, generally I can say it's a very impressive piece of work.

Sunday, June 19, 2016

Data Destruction and illumos

When disposing of a computer, you would like to be sure that it has no data on its storage that could be accessed by the direct recipient (or any future recipient). It would be somewhat embarrassing for personal photos to be retrieved; it would be far worse if financial or business data were to be left accessible.

The keywords you're looking for here are data remanence and disk sanitization.

There are three methods to remove data from a disk: total physical destruction, degaussing, and overwriting the data. The effectiveness of these methods is up for debate, as is the feasibility of a sufficiently determined and well-funded attacker being able to retrieve data.

Here I'm just going to discuss overwriting the disk. For a lot of casual and home purposes that'll be enough, and it's a lot better than not bothering at all, or simply reformatting the drive (or reinstalling an OS on it), which will leave a lot of disk sectors untouched and amenable to simply being read off in software.

The standard here seems to be DBAN. However, it's not seen much activity in a while, and was sold to a company that offers a commercial product that's claimed to be much better.

Basically, all DBAN is doing is scribbling over every sector on a drive. That's not hard.

In Solarish systems, format/analyze/purge does essentially the same thing. It's the documented method for wiping hard drives on Solarish style systems.
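
If you've not done it before, the interactive session looks roughly like this - run format, pick the disk from the list it prints, and then:

Specify disk (enter its number): 1
format> analyze
analyze> purge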

However, it's a little fiddly to use and requires a modest level of expertise to get that far. You can't purge the disk you're booted from; the solution proposed there is to boot from installation media, drop to a shell, and run format from there. That has a couple of problems - it's still very manual, and the install (or live) media are rather large and can take an age to boot.

So I started to think, how hard could it be to create a minimalist illumos boot media that just contains the format command, and a simple script around it to make it easy to run?

I've already done most of the work, as part of the minimal viable illumos project. It was pretty easy to create a new variant.

The idea is to erase disk drives, so the intended target is physical hardware rather than a hypervisor. So I added a number of common storage drivers to the image. (As an aside, I really have no idea as to what storage HBAs are actually in common use, so which drivers to put in this list or on the Tribblix install iso is largely guesswork.)

There should be no need for networking. You really don't want a mechanism for any external access to the system while the disks are being wiped, so networking is simply not there.

And I added a simple wrapper script that enumerates disk drives and runs the appropriate format commands. If you want to see how this works, just look at the wrapper script. All this is in the mvi repo, see the files with "wipe" in their names.
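
The script in the mvi repo is the real reference, but the gist is something along these lines (a sketch only - the command file would hold the analyze/purge steps, and the real script is rather more careful):

# enumerate the disks, then drive format non-interactively against each in turn
for disk in `format < /dev/null | awk '/^ *[0-9]/ {print $2}'`
do
    format -d $disk -f /tmp/purge.cmds
done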

And the (14M in size) iso image I created is also available.

(Why is such a small image good, you might ask? Apart from simply being sure that it's only capable of doing the one function that it's advertised for, if you're trying to wipe a remote system mounting the image over the network, then the smaller the better.)

I tested this in VirtualBox, which exposed a few quirks. For one, the defect list switching you'll see in the docs doesn't work there (I have no idea if it's going to work on any real hardware). The other is that the disk image I was using was a file on a compressed zfs file system. The purge process writes a repeating pattern, which is very compressible, so the 1G disk image I was testing only takes up 16M of disk space.

While I don't think it's really a proper alternative to DBAN, I think it's useful as a real-world example of how to use mvi.

Thursday, June 16, 2016

Connecting to legacy Sun ILOM with modern clients

The bane of many a system administrator's existence is the remote management capability on their servers. In Sun's case, I'm talking about the ILOM.

(Of course, Sun have had RSC and ALOM and eLOM and maybe some other abominations over time.)

Now, for many purposes, you can just ssh to the ILOM and you're done. On Sun boxes anyway, where you often have serial console redirection and the OS using the serial console.

However, if you want to manage the system fully, you need a proper client. There are a couple of common cases: you need the VGA console (either for a broken OS, or to interact with the BIOS), or you want to do storage redirection (in other words, you want to remotely present a bootable image).

That's where the fun starts, and you get in a tangled relationship with Java. Often, it ends up being a tale of woe.

And that's on the best of days. With legacy hardware - such as the X4150 - it gets a whole lot more interesting.

Now, while the X4150 is legacy and well past end of life now, it turns out that there was an updated firmware release in 2015. (For POODLE, I think.) If you can, apply this, as it should fix some of the UI compatibility issues with newer browsers. (Not all, I suspect, but if you've tried using a current browser and only got half the GUI then you know what I'm talking about.)

However, that doesn't necessarily mean that the Java application is going to work. There are actually a couple of issues here.

The first is that the application is a signed jar, and the certificate used to sign it has expired. Worse, due to Java's rather chequered security history, current versions have draconian checks in place which you'll run into. To fix, go to the Java Control Panel, down to "Perform signed code verification checks" and change it to "Do not check". Generally, disabling security like this is a bad idea, but in this case it's necessary.

Next, if you start up the application, click through the remaining security dialogs, and try to connect to the console, you'll get a cipher suite mismatch failure. The ILOM is pretty old, and uses SSLv3 which is disabled by default in current Java. You'll need to edit the java.security file (in ${JAVA_HOME}/jre/lib/security/java.security[*]) and comment out two lines - the ones with jdk.certpath.disabledAlgorithms and jdk.tls.disabledAlgorithms, then run the application again.
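
For reference, the entries in question look something like this (the exact algorithm lists differ between Java versions, and the values can continue over several backslash-continued lines, all of which need commenting out):

#jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, ...
#jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768, ...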

With luck, that will at least enable you to get to the console.

If you want storage redirection, then you're in for more fun. For starters, you need to be running Solaris, Linux, or Windows. If you're on a Mac, it's not going to work. You'll need to get yourself another machine, or run a VM with something else installed.

And the other thing is that you need to be running a 32-bit Java Virtual Machine. If you're running Solaris, this rules out Java 8 - you'll have to go back to Java 7. On other platforms, you'll have to make sure you have a 32-bit JVM, which might not be the default and you might have to manually install it.

Oh, and if you're on Linux or Solaris and running OpenJDK (rather than the Oracle builds) then you'll need IcedTea to get the javaws integration. At least with IcedTea you can ignore the Java Control Panel stuff.

[*: On my Mac I discovered that I had 2 different installations of Java. The one that you get if you type "java" isn't the same one used for browser integration and javaws launching. Running /usr/libexec/java_home gave me the wrong one; I ended up looking at the ps output when running the Control Panel to find out the location of the one I really needed.]