Matthew Chapman is a PhD student in CSE (or is it NICTA) and also
works part-time for the CSG. Or maybe I should say "worked".
Matt was recently offered, and accepted, an internship with Hewlett-Packard in
Colorado, U.S.A. This will be a great experience for Matt and relates
well to the work he is doing for his PhD.
The internship is for 4 months, so we expect to see Matt back in
August some time. So this isn't so much "goodbye" as "see you later".
setuid programs in HOME directories to become ineffective
Over the mid-year break, we plan to disable the `setuid' functionality
in home directories.
Unix (and related systems such as Linux) allows a program to be marked
as `setuid' (using the chmod program). This means that when
the program is run, it runs with the privileges of the user who owns
the program rather than those of the user who is running the program.
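The mechanics can be seen directly from the shell. The following sketch (using a throwaway temporary file, nothing CSE-specific) marks a file setuid and shows how the mode is reported:

```shell
# Create a scratch file and mark it setuid: the leading 4 in the
# octal mode is the setuid bit, 755 is the usual rwxr-xr-x set.
f=$(mktemp)
chmod 4755 "$f"

# In a long listing the owner's execute bit now shows as 's'.
ls -l "$f"

# The numeric mode confirms the setuid bit is set.
stat -c '%a' "$f"
```

Note that setting the bit is only half the story: whether the kernel honours it at execution time depends on how the filesystem is mounted.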
While this can be very useful in some circumstances, it can also be
very dangerous: a poorly written program can give away more privileges
than were intended. Many security holes in Unix-related systems are
due to problems with setuid programs.
A recent scan of home directories on elfman, eno and
kamen found over 8000 files with the setuid or the related
setgid flags set. The vast majority of these are not actually
programs and have the flag set inappropriately. While this does not
represent a clear danger, it does suggest that many files get this
flag set wrongly and if any of them are actually programs, then that
would open a real security hole.
So, to generally enhance security, we will be disabling the setuid
(and setgid) flag of all files on all HOME directories.
This will not involve actually clearing the flag on each file, but
rather causing Linux to ignore it. When mounting the HOME filesystems,
we will use the "nosuid" mount option, which causes these bits to be
ignored.
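For the curious, the change amounts to a single mount option (spelled "nosuid" in Linux). A hypothetical fstab entry -- the server and path names below are placeholders, not our actual configuration -- would look like:

```
# /etc/fstab sketch; server and path names are illustrative only.
# nosuid makes the kernel ignore setuid/setgid bits when executing
# files; the bits themselves remain visible on the files.
fileserver:/export/home  /home  nfs  rw,nosuid  0  0
```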
All this means that if you have a genuine need for having a setuid
program, you will need to find an alternative solution.
By far the single largest user of setuid in home directories has been
the "give" system for submitting and marking assignments. This uses a
single setuid program called "classrun". Each course had its own
version of "classrun" which was setuid to that class.
We have now provided a single setuid "classrun" in
/usr/local/bin/classrun which can be used by all classes.
As it is not in a HOME directory, the setuid flag will continue to be
honoured.
Similar strategies can be made available for any other legitimate uses
of setuid.
So, if you have a genuine need for a setuid program in a HOME
directory, please email ss with the details and we will work
something out for you.
buff enhancements -- improved selection of files for incremental backups
Every night we run an "incremental" backup, which backs up files that
have changed in the last day or so.
The program which selects the files to be backed up is called
buff (the BackUp File Finder).
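As a rough analogue of the core selection step (not buff itself, which also applies quotas and per-user configuration on top), finding recently modified files looks like this; the sketch uses a scratch directory so it is self-contained:

```shell
# List regular files modified within the last 24 hours -- roughly
# what an incremental file finder starts from.
d=$(mktemp -d)
touch "$d/fresh.txt"              # modified just now
find "$d" -type f -mtime -1       # prints $d/fresh.txt
```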
We impose some limits on how much data can be backed up each night to
make sure that no individual swamps the backups system to the
detriment of others. These limits include a limit on the size of
individual files (which can be changed by the user), and a limit on
the total amount of data that can be backed up from each home
directory (which cannot be changed).
Some people found, on having exceeded their home directory limit, that
the files that were actually backed up were not their most important
files. If some of their files were not going to fit into the home
directory limit, they would have preferred that these more important
files got priority, and their less important files be the ones to miss
out.
This functionality is now available through the first and
last commands. These commands can be placed in
.backup specification files to indicate which files should
be backed up first, or which should be left to last.
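To illustrate the idea -- the exact directive syntax below is a guess, so consult the online manual for the real form -- a .backup file might prioritise a thesis directory and leave scratch files to last:

```
# Hypothetical .backup sketch; the directive syntax is illustrative.
first thesis/
last  tmp/
```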
More information is available in the online manual.
Note that these incremental backups are just one part of our backup
strategy.
New backup arrangements for self-admin machines
While we have for some time had very comprehensive backup arrangements
for our centrally maintained computers (with daily incremental and
monthly image dumps) there has been a steadily growing hole in our
backups scheme: self maintained (self-administered) machines.
Self maintained machines are largely the responsibility of the owner
of the machine. They are responsible for installing software,
configuring the software, applying security updates and keeping backups.
However we, the CSG, still have some role in supporting this mode of
operation. We provide loan CDs for software installation, FAQ pages to describe
how software can be configured to work with our infrastructure, and
mailing lists through which security alerts are distributed. However,
we have not provided much support for backups.
We do provide normal home-directories on centrally managed and
backed-up servers, and encourage self-maintainers to keep copies of
important files in these directories. But this is a long way from
providing real support.
Over the past year or so we have been discussing possible solutions,
and are currently well into implementing a solution that should be
reasonably effective for most people.
The basic idea is to provide a large, centrally managed, file server
onto which individuals can place and maintain a replica of the files on
their self-maintained machine.
In purchasing this machine, we aimed for large storage at the price of
performance. It is likely that this machine (named feldman)
is substantially slower than our regular fileservers, and would not
cope if dozens of people tried concurrently to access it over NFS for
normal work. However, it has substantially more storage than our
regular file servers for much the same price.
Our plan is to provide owners of self-maintained machines with a
generous quota on feldman -- of about 15 Gigabytes -- and to
encourage them to regularly synchronise their machine into this
storage; that is, to copy important parts of their filesystem onto
feldman and then regularly copy across any changes.
The tool that we expect to use for this copying is either
rsync or unison. The next stage in providing this
service is to get some real experience with both of these tools being
used for this purpose from both Linux and MS-Windows machines. We are
not currently in a position to make any recommendations or guidance
for setup. However if anyone in the School has a self-admin machine
and would like to experiment with these (or other) tools for backup to
feldman, please contact ss for access.
Backup plans for feldman itself have not been finalised
yet. However, it is expected that we will perform regular incremental
backups, and very occasional full-backups to tape of individual homes.
Due to the projected usage of feldman it is not expected that
backups will be needed much. This is because it does not hold the
primary copy of any data.
The normal situations that require backups to be restored are human
error (deleting a file that you really wanted) or hardware failure.
Human error should not occur on feldman as it should not be
accessed directly. Human error may well occur on your self-maintained
machine, but then the backup is readily available on feldman.
Hardware failure on feldman is certainly possible (though we are using
a RAID6 arrangement so that we can survive any two drives failing) but
if it does happen, all the current data should still be on individual
self-admin machines. So if we completely lose everything on
feldman it should only be necessary to have every user
re-copy their data onto feldman.
One other use for backups is archival storage. We expect to support
this by writing each directory on feldman to tape once or
twice a year.
With the recent release of
OpenVPN version 2.0 we have been
exploring options for using VPNs in and around CSE, and have found a
few interesting uses. These arrangements are currently
"under-development" to some extent, but if anyone is interested in
being an early-adopter (where appropriate), we would appreciate the
interest and feedback.
Linux machines which are maintained by the CSG need to be connected to
a "trusted" network. This is primarily for NFS access. NFS access
control relies largely on validating the IP address that requests come
from. In order to ensure the security of the data that we share using
NFS (such as home directories), we need to ensure that no-one can
improperly use any IP address that has been assigned to a CSG-managed
machine.
This is achieved by having all such machines on a "trusted" subnet,
and making sure that all network ports on that subnet are locked to a
particular MAC address. If a different machine is plugged into a
socket on this subnet, the difference will be noticed and the port
will shut down.
However, we have a growing need for CSG maintained machines in
locations that cannot be secured in this way. In particular, machines
that are used by NICTA staff and students will be in NICTA offices and
connected to the NICTA network, which we do not have control over.
To handle this situation we have created a configuration of OpenVPN
which allows a physically remote machine to securely appear on our
trusted subnet. This allows it to access files over NFS much like all
other CSG maintained Linux machines.
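A minimal sketch of what the client side of such a configuration might look like follows; the server name, port, and certificate file names are all placeholders, not our actual settings:

```
# Hypothetical OpenVPN client config sketch.
client
dev tap                       # tap (bridged) so the machine appears
                              # directly on the trusted subnet
proto udp
remote vpn.cse.example 1194   # placeholder server name and port
ca ca.crt                     # certificates identify this machine
cert machine.crt
key machine.key
```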
The computer still needs to be physically secure (locked to a table,
locked closed, and with a bios-password which disables booting from
removable media), but no longer needs to be on a physical network
managed by CSG. It can instead be on a Virtual Private Network which
is managed by CSG.
This arrangement is currently being tested on the author's desktop
computer, and on a test-machine over at NICTA. It seems to work fine,
and next time someone at NICTA wants a CSG-maintained Linux machine we
will be able to provide one using a known-working solution.
NFS Access for self-maintained machines
One frustration with having a self-maintained Linux machine at CSE is
the problem of access to the CSG maintained central home directories.
On MS-Windows machines, access via MS-Windows file sharing and
Samba works reasonably well. However, this option doesn't work for
Linux.
The standard file-sharing protocol for Linux and Unix-based systems is
NFS. The common authorisation model for NFS involves giving
specific machines full access based on IP address. (Some
implementations of NFS do have the possibility of using Kerberos to
do per-user authentication, however this functionality is only slowly
becoming available and we do not have any Kerberos infrastructure at
CSE). This makes it difficult to give self-maintained machines NFS
access to our NFS server.
Again, OpenVPN can be used to provide a solution, though not a
complete one.
The approach taken is to allow a self-admin machine to create a VPN
tunnel to a particular NFS server. This tunnel will be suitably
authenticated so OpenVPN on the server will "know" who is sending
packets over the link. It can then configure the NFS service to
accept requests over the link but to treat them all as coming from one
particular user -- the owner of the remote machine. This provides a
reasonably effective authentication layer on top of NFS to allow
access from self-maintained machines.
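On the server side, the "treat everything as one user" step can be expressed with standard NFS export options. A sketch follows, in which the tunnel address and numeric ids are placeholders:

```
# /etc/exports sketch: all_squash maps every request arriving from
# this tunnel endpoint to the single uid/gid given, regardless of
# the uid claimed in the request.
/export/home  10.8.0.2(rw,all_squash,anonuid=5012,anongid=5012)
```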
There are two imperfections in this scheme which make it less than
ideal, but still quite usable.
Firstly, access is given not to a user, but to a machine. If a
machine creates a tunnel to the NFS server authenticated as a
particular user, then anyone on that machine may access files as that
user. This is fine for machines that are only used by one person,
but means that the solution cannot be used for self-maintained
machines that are used by a group.
The other problem is that due to the way the NFS server in Linux
works, a list of group IDs cannot be given access in this way. The
access is given to just one userid and just one groupid. While normal
NFS access allows you to access files for which any of your groups
have access, using the VPN tunnel to get access will only provide
access to files for which your primary group has access.
This is a solvable problem, and the NFS server will quite
possibly be enhanced later this year to allow a list of groups to be
associated with a given VPN tunnel.
This NFS solution is currently being used by Geoff Oakley (Manager of
the CSG) to access files from his Mac-Mini (being trialled for
possible lab usage). If other members of the school are interested in
trialling this, please contact `ss' or Neil Brown.
VPN for Wireless authentication
We currently require people to register the MAC address of their
Wireless Ethernet card in order to get access to our wireless subnet.
This is really unnecessary administration and it would be much better
to authenticate people rather than computers, and then people could
use whatever computer or wireless card they choose without having to
talk to ss.
The obvious choice of authentication on the wireless subnet is to use a
VPN. The less obvious choice is which VPN technology to use.
Three options seem to be the most likely: IPSec, PPTP, and OpenVPN.
IPSec is the approach used by the UniWIDE wireless network. It
requires special drivers and is not always straightforward to set up.
PPTP is a standard option in MS-Windows systems and so is in some ways
a good choice. It does work on Linux, but again requires special
drivers to be installed.
OpenVPN seems to be the simplest to install and configure. Its main
drawback is that it needs to run as Administrator (in MS-Windows) or
root (in Linux). With that restriction it is very easy to work with
and very configurable.
We already have a PPTP-based VPN available, which is documented
separately.
We now have an OpenVPN solution in-place that some of us in the CSG
are experimenting with. If anyone else wants to try it out, brief
instructions and configuration files are available; these will
probably be moved to a more permanent home in due course.
It is our intention (without a clear timeline yet) to require all
users of the wireless subnet to use a VPN to authenticate. This will
be achieved by simply not routing any unauthenticated traffic off the
VPN. Before we do that we would like to be sure that the available
VPN solutions meet the needs of our customers.
So: if anyone uses the wireless subnet regularly and is interested in
a bit of experimentation, please look into experimenting with one of the
VPN solutions mentioned above.