Computing Support
Issue 27: 30th May 2005

Computing Support Group
School of Computer Science and Engineering
The University of New South Wales

Issue Index

Front Page

Just in time to be called the May Issue, here is another thrilling (or at least, reasonably informative) issue of the Computing Support Gazette, keeping you in touch with the inner operations of the CSG.

Prompted partly by the current discussions on restructuring the school, I was reflecting recently on some changes that we have seen over the past couple of decades in the nature of the work done within the CSG (a group which was called the EECF -- Electrical Engineering Computing Facility -- when I first joined). Particularly I was pondering the sort of software that needs to be developed in order to adequately support computing within this school.

A good sample problem is found in the need to add extra controls to the login process.

We have always wanted to maintain fine control over who can log in to which machine. This is needed to enforce certain policies about lab usage (e.g. only students doing a thesis should be allowed to log in to the Thesis Lab) and to control access to special purpose servers (only computing support staff can log in directly to the file servers). It is also needed to implement our booking system, which must ensure that when a lab computer is allocated to a particular person, no other person may log in.

20 years ago, this sort of problem would be solved by making direct, ad hoc, changes to the source code for the "login" program. At the time we had a complete source tree of all of the programs we ran on our small number of Unix machines, and they contained lots of local modifications. Adding another one was a straightforward and obvious approach to use.

10 to 15 years ago, our options were quite different. We were using proprietary Unix (Domain/OS, Ultrix, Solaris, etc) and such changes were completely out of the question. Instead we could either completely replace a system (as we did with Email) or find some innovative way to plug our functionality into the existing system. For login control, we chose the latter.

If you use the command ypmatch neilb passwd and look at the last field, you will notice that my shell (and everybody else's) is something in a directory called /usr/local/etc/logins. All the programs in this directory are links to one program which performs the access checks that we want to perform. If the checks succeed, the user's real shell is run.
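The effect of those wrapper programs can be sketched in shell. This is only an illustration: the file names, the allow-list format, and the check itself are hypothetical stand-ins for what the real wrapper does.

```shell
#!/bin/sh
# Sketch of a login-wrapper in the spirit of /usr/local/etc/logins.
# The allow-list (lines of the form "user host") is a hypothetical
# stand-in for the real access checks.

# check_login USER HOST ALLOWFILE: succeed only if "USER HOST" is listed.
check_login() {
    grep -q "^$1 $2\$" "$3"
}

# The real wrapper would derive the user and host from the environment,
# and on success exec the user's real shell in its own place, e.g.:
#
#   check_login "$(id -un)" "$(hostname)" /usr/local/etc/login.allow &&
#       exec "$real_shell" "$@"
```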

Today (and for the last half decade or so) we mostly use Open Source software. One of the many positive features of such software is that it tends to be very configurable and "pluggable". As it was developed by a community that actively uses it in lots of different ways, it tends to be designed to easily fit into lots of different ways of doing things.

For our login example, were we to try to solve that problem now, we would simply write a PAM -- a Pluggable Authentication Module. This module would be loaded by any 'login' type of program (ssh server, IMAP server, XDM login session, etc) and would thus be in the perfect place to impose our local policies.
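As a sketch, such a module would be wired in through the PAM configuration of each login-style service. The module name and its option below are hypothetical; only pam_unix.so is a real, standard module.

```
# Hypothetical fragment of /etc/pam.d/sshd -- the "account" stack is
# consulted by every PAM-using login service.
account   required   pam_unix.so
account   required   pam_cse_access.so  conf=/etc/cse-access.conf
```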

Thus we have seen a gradual restructuring of the way we do support, from completely managing all our own software, to making good use of the highly configurable nature of modern packages and simply adding local customisations where appropriate. We do still need to make some intrusive changes to packages, usually to fix bugs, but these can be sent back to the package maintainer so that future releases can be used as-is.

This restructuring -- an accidental restructuring you might say -- has allowed us to support a much wider range of services, as is demanded in our ever-growing field. This is because each service can be supported more easily.

If we were to follow this trend to its logical conclusion (or at least, logical next step), we would consider taking some of our locally developed software systems and injecting them into the open-source community. We would hope to get other people using them and so benefit from their feedback and possible enhancements. While I believe that the long-term gains of doing this could be quite significant, the short term costs are significant too. We would need to package the software in a way that is usable by others, and support that use, at least initially. This is a sufficient cost that we would need a very strong reason to proceed on any particular piece of software.

Anyway enough of my front-page ramblings. Please read on into the rest of the Gazette where you will find information about

  • Changes with Backups -- incremental and self-admin
  • New VPNs
  • LCA 2005 -- a trip report
  • Fixing Linux kernel bugs
and more.

csg-info mailing list archive


Staff Movements

Matthew Chapman is a PhD student in CSE (or is it NICTA?) and also works part-time for the CSG. Or maybe I should say "worked".

Matt was recently offered, and accepted, an internship with Hewlett-Packard in Colorado, U.S.A. This will be a great experience for Matt and relates well to the work he is doing for his PhD.

The internship is for 4 months, so we expect to see Matt back in August some time. So this isn't so much "goodbye" as "see you soon".

setuid programs in HOME directories to become ineffective

Over the mid-year break, we plan to disable the `setuid' functionality in home directories.

Unix (and related systems such as Linux) allow a program to be marked as `setuid' (using the chmod program). This means that when the program is run, it runs with the privileges of the user who owns the program rather than those of the user who is running the program.
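For instance, a user can set this bit on one of their own files with chmod. The sketch below wraps that, and the corresponding check, in small functions; file names are arbitrary.

```shell
# set_setuid FILE: mark FILE setuid.  A long listing then shows an 's'
# where the owner's execute bit was, e.g. -rwsr-xr-x instead of -rwxr-xr-x.
set_setuid() {
    chmod u+s "$1"
}

# is_setuid FILE: succeed if FILE currently has the setuid bit set.
is_setuid() {
    [ -u "$1" ]
}
```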

While this can be very useful in some circumstances, it can also be very dangerous: a poorly written program can give away more privileges than were intended. Many security holes in Unix related systems are due to problems with setuid programs.

A recent scan of home directories on elfman, eno and kamen found over 8000 files with the setuid or the related setgid flags set. The vast majority of these are not actually programs and have the flag set inappropriately. While this does not represent a clear danger, it does suggest that many files get this flag set wrongly and if any of them are actually programs, then that would open a real security hole.

So, to generally enhance security, we will be disabling the setuid (and setgid) flags of all files on all HOME directories. This will not involve clearing the flags themselves, but causing Linux to ignore them: when mounting the HOME filesystems, we will use the "nosuid" mount option, which causes these bits to be ignored.
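In mount terms the change looks something like the fstab entry below; the export path and mount point are invented for illustration, though elfman is one of our real file servers.

```
# Hypothetical /etc/fstab entry: a HOME filesystem mounted with the
# "nosuid" option, so setuid/setgid bits on its files are ignored.
elfman:/export/home1   /import/elfman/1   nfs   rw,hard,intr,nosuid   0 0
```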

All this means that if you have a genuine need for having a setuid program, you will need to find an alternative solution.

By far the single largest user of setuid in home directories has been the "give" system for submitting and marking assignments. This uses a single setuid program called "classrun". Each course had its own version of "classrun" which was setuid to that class.

We have now provided a single setuid "classrun" in /usr/local/bin/classrun which can be used by all classes. As it is not in a HOME directory, the setuid flag will continue to work.

Similar arrangements can be made for any other genuine setuid requirements.

So, if you have a genuine need for a setuid program in a HOME directory, please email ss with the details and we will work something out for you.

buff enhancements -- improved selection of files for incremental backups

Every night we run an "incremental" backup, which backs up files that have changed in the last day or so (these backups can be recovered with tkrestore).

The program which selects the files to be backed up is called buff (the BackUp File Finder).

We impose some limits on how much data can be backed up each night to make sure that no individual swamps the backups system to the detriment of others. These limits include a limit on the size of individual files (which can be changed by the user), and a limit on the total amount of data that can be backed up from each home directory (which cannot be changed).

Some people found, on having exceeded their home directory limit, that the files that were actually backed up were not their most important files. If some of their files were not going to fit into the home directory limit, they would have preferred that these more important files got priority, and their less important files be the ones to miss out.

This functionality is now available through the first and last commands. These commands can be placed in .backup specification files to indicate which files should be backed up first, or which should be left to last.
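As a purely illustrative sketch (see man buff for the real syntax; the patterns below are invented):

```
# Hypothetical ~/.backup specification: back up the thesis and source
# trees first, and let the browser cache be the first thing dropped if
# the home-directory limit is reached.
first thesis
first src
last  .mozilla/cache
```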

More information is available in the online manual. See

man buff

Note that these incremental backups are just one part of our backup strategy. For more information about backups generally, see

New backup arrangements for self-admin machines

While we have for some time had very comprehensive backup arrangements for our centrally maintained computers (with daily incremental and monthly image dumps) there has been a steadily growing hole in our backups scheme: self maintained (self-administered) machines.

Self maintained machines are largely the responsibility of their owners, who must install software, configure it, apply security updates and keep backups. However we, the CSG, still have some role in supporting this self-maintenance.

We provide loan-CD's for software installation, FAQ pages to describe how software can be configured to work with our infrastructure, and mailing lists through which security alerts are distributed. However, we have not provided much support for backups.

We do provide normal home-directories on centrally managed and backed-up servers, and encourage self-maintainers to keep copies of important files in these directories. But this is a long way from providing real support.

Over the past year or so we have been discussing possible solutions, and are currently well into implementing a solution that should be reasonably effective for most people.

The basic idea is to provide a large, centrally managed, file server onto which individuals can place and maintain a replica of the files on their self-maintained machine.

In purchasing this machine, we aimed for large storage at the price of performance. It is likely that this machine (named feldman) is substantially slower than our regular fileservers, and would not cope if dozens of people tried concurrently to access it over NFS for normal work. However, it has substantially more storage than our regular file servers for much the same price.

Our plan is to provide owners of self-maintained machines with a generous quota on feldman -- of about 15 Gigabytes -- and to encourage them to regularly synchronise their machine into this storage; that is, to copy important parts of their filesystem onto feldman and then regularly copy across any changes.

The tool that we expect to use for this copying would be either rsync or unison. The next stage in providing this service is to get some real experience with both of these tools being used for this purpose from both Linux and MS-Windows machines. We are not currently in a position to offer any recommendations or guidance on setup. However, if anyone in the School has a self-admin machine and would like to experiment with these (or other) tools for backup to feldman, please contact ss for access.
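To give the flavour of what a finished setup might look like, here is a hypothetical crontab entry; the remote layout on feldman, the excluded directory, and even the choice of rsync over unison are all assumptions at this stage.

```
# Hypothetical crontab entry on a self-admin Linux machine: each night,
# mirror /home onto this user's quota area on feldman, deleting files
# that have disappeared locally so the replica tracks the source.
30 2 * * *  rsync -a --delete --exclude tmp/ /home/ feldman:backup/home/
```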

Backup plans for feldman itself have not been finalised yet. However, it is expected that we will perform regular incremental backups, and very occasional full backups of individual home directories to tape.

Given the projected usage of feldman, we do not expect backups to be needed often, because feldman does not hold working copies.

The normal situations that require backups to be restored are human error (deleting a file that you really wanted) or hardware failure. Human error should not occur on feldman as it should not be accessed directly. Human error may well occur on your self-maintained machine, but then the backup is readily available on feldman.

Hardware failure on feldman is certainly possible (though we are using a RAID6 arrangement so that we can survive any two drives failing) but if it does happen, all the current data should still be on individual self-admin machines. So if we completely lose everything on feldman it should only be necessary to have every user re-copy their data onto feldman.

One other use for backups is archival storage. We expect to support this by writing each directory on feldman to tape once or twice a year.

VPNs everywhere

With the recent release of OpenVPN version 2.0 we have been exploring options for using VPNs in and around CSE, and have found a few interesting uses. These arrangements are currently "under development" to some extent, but if anyone is interested in being an early adopter (where appropriate), we would appreciate the interest and feedback.


VPN for remote CSG-maintained machines

Linux machines which are maintained by the CSG need to be connected to a "trusted" network. This is primarily for NFS access. NFS access control relies largely on validating the IP address that requests come from. In order to ensure the security of the data that we share using NFS (such as home directories), we need to ensure that no-one can improperly use any IP address that has been assigned to a CSG-managed machine.

This is achieved by having all such machines on a "trusted" subnet, and making sure that all network ports on that subnet are locked to a particular MAC address. If a different machine is plugged into a socket on this subnet, the difference will be noticed and the port will shut down.

However, we have a growing need for CSG maintained machines in locations that cannot be secured in this way. In particular, machines that are used by NICTA staff and students will be in NICTA offices and connected to the NICTA network, which we do not have control over.

To handle this situation we have created a configuration of OpenVPN which allows a physically remote machine to securely appear on our trusted subnet. This allows it to access files over NFS much like all other CSG maintained Linux machines.

The computer still needs to be physically secure (locked to a table, locked closed, and with a bios-password which disables booting from removable media), but no longer needs to be on a physical network managed by CSG. It can instead be on a Virtual Private Network which is managed by CSG.

This arrangement is currently being tested on the author's desktop computer, and on a test-machine over at NICTA. It seems to work fine, and next time someone at NICTA wants a CSG-Maintained Linux machine we will be able to provide one using a known-working solution.

NFS Access for self-maintained machines

One frustration with having a self-maintained Linux machine at CSE is the problem of access to the CSG maintained central home directories.

On MS-Windows machines, access via MS-Windows file sharing and Samba works reasonably well. However, this option doesn't work for Linux.

The standard file-sharing protocol for Linux and Unix-based systems is NFS. The common authorisation model for NFS involves giving specific machines full access based on IP address. (Some implementations of NFS do have the possibility of using Kerberos to do per-user authentication, however this functionality is only slowly becoming available and we do not have any Kerberos infrastructure at CSE). This makes it difficult to give self-maintained machines NFS access to our NFS server.

Again, OpenVPN can be used to provide a solution, though not a full solution yet.

The approach taken is to allow a self-admin machine to create a VPN tunnel to a particular NFS server. This tunnel will be suitably authenticated so OpenVPN on the server will "know" who is sending packets over the link. It can then configure the NFS service to accept requests over the link but to treat them all as coming from one particular user -- the owner of the remote machine. This provides a reasonably effective authentication layer on top of NFS to allow access from self-maintained machines.
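On the Linux NFS server, this per-tunnel squashing can be sketched with an export entry like the following; the tunnel address and uid are invented for illustration.

```
# Hypothetical /etc/exports entry for one VPN tunnel endpoint: every
# request arriving from the tunnel address 10.0.1.5 is "squashed" to a
# single uid/gid -- the owner of the remote self-admin machine.
/export/home   10.0.1.5(rw,all_squash,anonuid=4321,anongid=4321)
```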

There are two imperfections in this scheme which make it less than ideal, but still quite usable.

Firstly, access is given not to a user, but to a machine. If a machine creates a tunnel to the NFS server authenticated as a particular user, then anyone on that machine may access files as that user. This is fine for machines that are only used by one person, but means that the solution cannot be used for self-maintained machines that are used by a group.

The other problem is that due to the way the NFS server in Linux works, a list of group IDs cannot be given access in this way. The access is given to just one userid and just one groupid. While normal NFS access allows you to access files for which any of your groups have access, using the VPN tunnel to get access will only provide access to files for which your primary group has access.

This is a solvable problem, and the NFS server will quite possibly be enhanced later this year to allow a list of groups to be associated with a given VPN tunnel.

This NFS solution is currently being used by Geoff Oakley (Manager of the CSG) to access files from his Mac-Mini (being trialled for possible lab usage). If other members of the school are interested in trialling this, please contact `ss' or Neil Brown.

VPN for Wireless authentication

We currently require people to register the MAC address of their Wireless Ethernet card in order to get access to our wireless subnet.

This is really unnecessary administration and it would be much better to authenticate people rather than computers, and then people could use whatever computer or wireless card they choose without having to talk to ss.

The obvious choice of authentication on the wireless subnet is to use a VPN. The less obvious choice is which VPN technology to use.

Three options seem to be the most likely. They are IPSec, PPTP, and OpenVPN.

IPSec is the approach used by the UniWIDE wireless network. It requires special drivers and is not always straightforward to set up.

PPTP is a standard option in MS-Windows systems and so is in some ways a good choice. It does work on Linux, but again requires special drivers to be installed.

OpenVPN seems to be the simplest to install and configure. Its main drawback is that it needs to run as Administrator (in MS-Windows) or root (in Linux). With that restriction it is very easy to work with and very configurable.
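To give a feel for why it is simple to configure, a minimal OpenVPN 2.0 client configuration looks something like this; the server name and certificate file names are invented for illustration.

```
# Hypothetical OpenVPN 2.0 client configuration for the wireless VPN.
client
dev tun
proto udp
remote vpn.cse.unsw.edu.au 1194
# certificates and keys as issued to the user
ca ca.crt
cert mycert.crt
key mykey.key
```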

We already have a PPTP based VPN available which is documented at

We now have an OpenVPN solution in-place that some of us in the CSG are experimenting with. If anyone else wants to try it out, there are brief instructions and configuration files at

These instructions will probably be moved to a more permanent home soon.

It is our intention (without a clear timeline yet) to require all users of the wireless subnet to use a VPN to authenticate. This will be achieved by simply not routing any traffic that has not been authenticated through a VPN. Before we do that we would like to be sure that the available VPN solutions meet the needs of our customers.

So: if anyone uses the wireless subnet regularly and is interested in a bit of experimentation, please look into experimenting with one of the VPN solutions mentioned above.

Trip Report


-- Neil Brown

Approaching Canberra at dusk one comes to the crest of a hill, peeps over, and there spread before you are thousands of fairy-lights. These reveal the city of Canberra, our nation's capital, nestled in a series of valleys around the Molonglo and Murrumbidgee rivers and their tributaries.

Of the many differences between Canberra and Sydney, one that has always struck me is the fact that you can so easily look down over Canberra and view the city from above. Brisbane has a good view from Mount Coot-tha. Melbourne has a reasonable view from the Dandenongs. Adelaide looks great from Mount Lofty. Canberra has a host of mountains providing a good view, of which Mount Ainslie, Black Mountain, and Red Hill are the most accessible. The best that Sydney seems to manage is Centrepoint Tower, and the view from there is regularly obscured by a brown haze.

Canberra in Autumn was chosen to host this year's LCA -- The Australian Linux technical conference. This conference migrates from city to city and is normally held in January. Last year Adelaide was the host city and next year, from the 23rd to the 28th of January, Dunedin on the South Island of New Zealand will be visited by Linux fans. But because Canberra is so much more attractive in autumn (all those European trees!) the 2005 conference was delayed 3 months and held from the 18th to the 23rd of April.

The LCA program was, as always, very varied. While it is called a "Linux conference" there is always a lot of content that is not directly related to the Linux kernel and is really about various user-space programs, systems, and languages. However "Open Source" doesn't seem as catchy as "Linux", and the name remains.

The program opened with a number of 1 and 2 day "Mini-conferences", which are highly focussed series of presentations organised separately from the main conference but with a lot of support from the conference committee and using the same facilities. This year's minis covered Debian, Linux Security, OpenOffice.Org, Embedded Linux, Linux in Education, Clustering, Audio, and Gnome.

This was followed by a day of tutorials and then two and a half days of regular conference papers.

As there were usually three presentations happening at once, and sometimes five if you include the mini-conferences, it is impossible to take it all in, and inevitable that you will miss something that was interesting. To help those who missed out, three of the talks voted as "best" were presented a second time on Saturday afternoon. Also, audio recordings of all the talks should be available for download.

When the audio does become available, I would seriously recommend listening to the Keynote given on Saturday morning by Eben Moglen. Eben is a lawyer who does a lot of work for the Free Software Foundation and for free and open software in general. He spoke about the legal issues facing free software, the problem of patents, and the power of "The Monopoly".

The main project that Eben is pursuing at the moment is the Software Freedom Law Center. This is a centre that is funded by large corporations which have a growing investment in software freedom and see that many of the developers who create code that they use are in need of legal support that they cannot afford. The centre is apparently staffed largely by recent law graduates who only entered law school because the dot-com bust convinced them that they would be unlikely to make a living with an IT degree. Thus they have an IT interest and a legal training -- a perfect combination for the Software Freedom Law Center.

This centre has already provided legal advice and assistance to a number of projects including the high-profile Samba project. Being in direct competition with (a small part of) Microsoft's business makes them a clear target and having good advice is very helpful.

Eben spent only a few minutes on the topic of GPL3 -- the possible successor to the current General Public License that much free software is released under. He says that the main issue for the moment is not what should be included (he thinks he has a good handle on that) but rather how to get all the stake-holders (developers, businesses, and the FSF) involved in conversations so that they can feel ownership of the new license and be happy to work with it.

One of the most visually impressive talks (which made it a best-of) was given by Keith Packard, about re-architecting the X window system.

The core idea in the re-architecture is that clients should not draw windows directly onto the screen, but rather into off-screen memory. The windows should then be combined or "composited" together by a separate tool which provides the actual image that appears on the screen.

This allows simple and correct implementation of things like transparent windows, which allow the windows behind them to show through to some extent. It also allows simple magnification of sections of the display by having the compositing agent scale-up some or all windows, or even increase contrast for sight-impaired people.

And then there are the "gee-whiz" uses that probably aren't useful in themselves, but serve to get people thinking about possibilities. This includes such things as the luminocity window manager/compositor. The version that Keith showed off treated windows as rubber sheets and applied physics rules to movement. So, when a window "pops up", it does so with a bit of a spring and oscillates larger and smaller for a few seconds before settling down.

When moving a window in luminocity, the point that is held by the mouse moves as you would expect, but the rest trails behind a bit just as a rubber sheet might, stretching a bit, and then compressing a bit when you stop moving, possibly overshooting the mark and coming back. This even allows a window to be picked up and thrown.

This is all achieved by having the windows drawn into separate memory buffers, and luminocity takes those buffers, transforms them geometrically, and paints them onto the screen. As I said, it isn't clear that it is useful, but it sure is fun to watch. The luminocity project is the place to start looking.

The last talk I would like to mention was "CIFS to the UNIX Desktop (or the Death of NFS :-)" by Jeremy Allison -- one of the lead developers of SAMBA.

Being the official maintainer of the NFS server in Linux, this particularly interested me. If it did mean the death of NFS, then I would be out of a job -- hurray.

The core observation here is that CIFS is the most widely used protocol for providing remote file access to MS-Windows clients, and NFS is the most widely used protocol for providing file access to Unix and Linux clients. This means that anyone who manages a diverse network (which many, many people do) will need to support two protocols, CIFS and NFS. Surely it would be easier to support just one.

Discarding CIFS and just using NFS is out of the question. While NFS clients do exist for MS-Windows, they cost money and don't perform nearly as well as the default CIFS. As Microsoft continue to change their platform, they will keep CIFS working, but any NFS client is likely to break. So CIFS really is a must-have for anyone who needs to support MS-Windows.

Given that, maybe CIFS could be made to work for Linux as well so that NFS would not be needed. This would require a number of changes to CIFS, but the CIFS protocol has room for extensions and the SAMBA team are certainly willing to support Unix extensions. You would probably not be able to use these extensions with a Linux client against an MS-Windows server, but that isn't a big problem as it is much easier to do without MS-Windows servers than without their clients.

Another observation is that version 4 of the NFS protocol, which is still under development, is starting to look a lot like CIFS. It is much more complex than version 3 of NFS (simplicity was always one of the strengths of NFS) and, as it is addressing similar problems to those CIFS addresses, it inevitably looks similar. Given this similarity, the fact that CIFS is a more mature protocol suggests that it will probably work better. It is not necessarily a better protocol, but as developers have more experience with it, we can reasonably expect there to be fewer problems in the medium term.

So, work is underway to provide a fully functional CIFS client in Linux, and to develop extensions to CIFS which will be implemented in Samba and the CIFS client. Whether this will become popular remains to be seen. Whether it will mean the end of NFS is certainly doubtful. But it is certainly an interesting development that should be watched.

On the whole, the conference was definitely a success. There were lots of interesting people and interesting presentations at a very nice facility. I'm certainly hoping to go to Dunedin next year.


Under The Mouse Mat

As has been mentioned in previous issues of the Gazette, we are beginning to roll out 2.6 series kernels on some of our Linux machines, particularly some of our servers and newer machines.

This is not without some risks. The stability of a kernel, or any large piece of software, is very hard to measure; probably the best measure is the number of different sites that are using it without problems. No matter how careful the developers are, early adopters are bound to find more problems than late adopters.

While we are not exactly early adopters of the 2.6 kernel, it does not seem to be as commonly used as the more stable (though older) 2.4 series, and consequently can be expected to have more problems.

We found one such problem fairly soon after we started running 2.6. However it took quite a while to confirm that the problem was with 2.6, and to isolate what exactly was going wrong. The error has now been identified and resolved, and this article is the story of that bug.

We first became aware that there was a problem in late February or early March. A few people were reporting that they were having problems reading their mail. In particular, their .incoming-mail file would have a string of nul characters in it. This string of nuls was normally immediately before the start of a message. This would confuse most mail reading programs, and the message after the string of nuls would effectively disappear.

It was fairly easy to relieve this symptom by editing the mail file and removing the nuls. Despite this corruption, people didn't seem to be actually losing mail (which was a relief) but it was obviously a substantial inconvenience and we started looking more deeply so as to find a cure for the cause rather than just for the symptom.

Unfortunately there were two things that had changed recently that could have been related. One was that we had installed 2.6 on a number of machines recently and all the reports seemed to involve these machines. The other was that we had recently duplicated the mail server so that we had two computers processing mail and when mail was collected into .incoming-mail it would be collected from both computers.

The possible connection with using a 2.6 machine was discounted at first. The main login servers -- wagner and weill -- were running 2.6, and as lots of people use these machines from time to time, the fact that all people with mail problems had recently used one of these machines didn't seem overly significant. This turned out to be a mistake.

So I set out to review all the handling of mail delivery that could possibly cause nuls to appear and could possibly be affected by the fact that we now had two servers. This involved a substantial number of hours of poring over code and making experimental changes. It did uncover a few minor bugs, but nothing that appeared significant. When all the bugs that could be found were fixed and the fixes installed, the problem kept occurring.

Having exhausted this avenue, and with some more evidence to work with -- such as samples of the location and size of 'nul' strings within lots of people's mail files -- I started trying to discover if it could be a problem with the 2.6 kernel.

This bore some fruit relatively quickly. A key observation was that people who experienced this problem were often logged on to two computers at once. Thus `getmail' might be run on one computer to deliver mail into .incoming-mail while a mail client such as mutt might be run on another computer.

I wrote a little test program that tried to simulate this behaviour. On one machine, I would repeatedly append a small message to a file. On another computer, I would remove a small message from the file. This was all done with appropriate locking, so it should have worked. It didn't: strings of nuls would sometimes appear in the file. It appeared to be a bug in the NFS client code in Linux, i.e. the code that takes file access requests from a program and uses the NFS protocol to send the requests to a server.
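The shape of that test can be sketched in shell. The original was a small purpose-built program; here flock(1) stands in for whatever locking the real programs use, and a "message" is just a line of text.

```shell
# Two halves of the race test: run append_msg in a loop on one NFS
# client and remove_first in a loop on another, against the same file.
# Both take the lock before touching the file, so in the absence of
# client bugs the file should never acquire stray nul bytes.

append_msg() {   # append one "message" (a line) to $1 under a lock
    ( flock -x 9
      printf '%s\n' "$2" >> "$1"
    ) 9>>"$1.lock"
}

remove_first() { # remove the first message from $1, shrinking it in place
    ( flock -x 9
      rest=$(tail -n +2 "$1")
      if [ -n "$rest" ]; then
          printf '%s\n' "$rest" > "$1"   # truncating rewrite: file shrinks
      else
          : > "$1"                       # last message removed: empty file
      fi
    ) 9>>"$1.lock"
}
```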

What was happening was that sometimes the computer that was appending to the file wouldn't notice that the file had shrunk due to the other computer removing a message. Because of this it would append the new message to the wrong place -- several bytes after the end of the file. This would leave a string of nul bytes in the file, exactly as we were experiencing.
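The underlying corruption is easy to reproduce on any filesystem, no NFS required: a write positioned past the current end-of-file leaves a gap that reads back as nul bytes. A sketch (file names arbitrary):

```shell
# append_at FILE STRING OFFSET: write STRING into FILE at byte OFFSET
# without truncating.  If OFFSET is beyond the end of FILE, the gap in
# between reads back as nul bytes -- the same corruption seen in
# .incoming-mail when a client appended at a stale offset.
append_at() {
    printf '%s' "$2" | dd of="$1" bs=1 seek="$3" conv=notrunc 2>/dev/null
}
```

Appending three bytes at offset 6 to a 3-byte file, for example, yields a 9-byte file whose middle three bytes are nuls.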

A bit of experimenting and reading the NFS client code led me to discover that if I opened a file for reading just before opening it for writing to append the new message, we would get the size of the file correct and the append wouldn't leave a string of nuls. I modified getmail to open and close the file before opening for write and this largely fixed the problem. It was not a complete fix as there was still a chance for a race between two machines. But it was a much smaller race that we were less likely to lose.

At the same time I had been corresponding with Trond Myklebust, the NFS client maintainer. Together we managed to isolate exactly what was happening and he sent me a couple of patches which fixed the problem properly. These patches are now included in the 2.6 kernels that we install and having rebooted all the machines with the faulty 2.6 kernel, the problem of corrupted mail files has disappeared.

It's always a nice feeling to get to the bottom of a problem that had been causing lots of inconvenience for our customers and then to have the stream of problem reports simply stop.


End Note

This newsletter is a publication of the Computing Support Group of the School of Computer Science and Engineering at The University Of New South Wales.

It is available:

  • at
  • by subscribing to the csg-news mailing list. Use the command mlalias csg-news -a your-user-name
  • in paper form from the Help Desk or the Student Office.

Past issues are available on the web by following the "Issue Index" link.

All correspondence about this newsletter should be sent to

Permission is granted to make and distribute, without limit, verbatim copies of any of the published forms of this newsletter.