[pca] Bitten by 141445-09

Allen Eastwood mixal at paconet.us
Thu Oct 22 20:34:01 CEST 2009


> From: Ben Taylor <bentaylor.solx86 at gmail.com>
> Agreed.  I have regularly used multiple 20G / with ABE space for
> 8, 9 and 10 (pre-zfs) for this exact purpose.  The size of the
> physical disk (and I was almost always mirroring with SDS) was the
> determining factor for sizing.  For a 300G disk, I leaned towards
> 30-40G, depending on how the system resources were to be used.

Yes, LU works fine with UFS if you have to do this.  You just have to
leave yourself enough space for the ABE.
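For the UFS case, that looks something like the following (the BE names and the device/slice are just examples; adjust for your layout):

```
# Create an ABE on a spare slice reserved for Live Upgrade
lucreate -c s10u8 -n s10u8-patched -m /:/dev/dsk/c0t0d0s3:ufs
```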

ZFS root/LU uses ZFS snapshots.  Typically, I give the entire root
disk slice 0 to rpool.  Philosophically, I do not believe in having
non-OS stuff on the boot disks; it leads to too many issues.  However,
one could theoretically split up the boot disks and use slice 0 for
rpool and other slices for other zpools.  Just give yourself enough
space in the rpool to handle the OS, zones, snapshots, etc.

The other nice thing is that ABE creation is done in just a few
minutes.  Also, I've used it to back out of other OS changes.  Create
a baseline, and luactivate it back as needed.
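A rough sketch of that workflow (BE names are illustrative):

```
# With ZFS root the ABE is a clone of a snapshot, so this takes minutes
lucreate -n baseline

# ...make your OS changes on the active BE...

# If something goes wrong, fall back to the baseline:
luactivate baseline
init 6          # luactivate tells you to use init/shutdown, not reboot
```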

Also, if doing zones with LU, the zone root pretty much needs to be
a dataset in rpool.  Not having the zone root on the boot disks can
cause havoc with LU.  I try to stick to sparse zones whenever possible
to keep the disk space and backup requirements down a bit.
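Something along these lines (zone name and path are just examples; on Solaris 10 a plain "create" in zonecfg gives you a sparse-root zone):

```
# Zone root under rpool, e.g. /zones/zone1
zonecfg -z zone1 <<EOF
create
set zonepath=/zones/zone1
commit
EOF
```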

> I'm not real crazy about not having a separate /var, but the
> recoverability far outweighs the issue for me of having /var on a /
> slice.  Real monitoring of disk space (along with notification)
> mitigates space issues that might arise from something running away
> with /var space.

There have been several patches that fail beautifully when /var is in
a separate dataset.  By dataset, I mean a separate ZFS file system in
rpool.  I use jumpstart exclusively to install, and separating /var is
really the only option available in that scenario.
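For reference, the profile fragment for a ZFS root with a separate /var dataset looks roughly like this (sizes, device and BE name are illustrative):

```
# Hypothetical jumpstart profile for ZFS root with separate /var
install_type    initial_install
pool            rpool auto auto auto c0t0d0s0
bootenv         installbe bename s10u8 dataset /var
```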

I have yet to run into issues with having /var as part of /, as I am
using the entire disk, and well, disks are pretty big these days.  I
use ncdu from sunfreeware when I need to dig out space hogs.  I do
have some quick and dirty "monitoring" scripts that I provide to
customers that can run in cron.  One function is to monitor disk space
and email admins when it crosses a threshold.
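The core of that check is nothing fancy; a minimal sketch, assuming a `df -k`-style listing and an illustrative 90% threshold (the real scripts also mail via mailx, shown here as a comment):

```shell
#!/bin/sh
# Hypothetical sketch of the disk-space check; the threshold and the
# notification hook are examples, not the actual customer script.
THRESHOLD=90

check_disk() {
    # Read `df -k` style output on stdin; print the mount point and
    # capacity of anything at or over the threshold given in $1.
    awk -v limit="$1" 'NR > 1 {
        pct = $5
        sub(/%/, "", pct)
        if (pct + 0 >= limit) print $6, $5
    }'
}

df -k | check_disk "$THRESHOLD" | while read -r fs pct; do
    # The real script pipes this line to mailx to notify the admins.
    echo "WARNING: $fs at $pct"
done
```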

Side note: limit tmpfs in /etc/vfstab!  I have had developers crater
systems by filling up /tmp with stuff and running systems out of
physical RAM.
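The vfstab entry looks something like this (the 2g cap is just an example; size it for your workload):

```
# /etc/vfstab -- cap /tmp so it cannot eat all of swap and RAM
swap    -    /tmp    tmpfs    -    yes    size=2g
```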

Also, if doing zones, consider setting quotas on the zone root file
systems.  Typically, I create /zones/<zonename> for each zone root so
I can monitor and control that.
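For example (dataset names and the 10g quota are hypothetical):

```
# One dataset per zone root, each with its own quota
zfs create -o mountpoint=/zones rpool/zones
zfs create -o quota=10g rpool/zones/zone1
zfs get quota,used rpool/zones/zone1     # monitor per-zone usage
```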

I know this is kinda tangential to the PCA list, but as an SE for a Sun
VAR, I do a lot of installs, migrations and so on.  How you set up the
OS has a lot to do with how well patching goes, especially considering
zones.  I feel that, as system admins, we really need to evaluate our
practices, which have served us well, in light of the new features of
Solaris.


