vlab gives you a CSE lab desktop session on non-CSE computers, including UNSW library computers, your own laptop or desktop, and mobile phones and tablets.
CSE lab computers typically use gdm to provide the login screen on the console and then to start the user's screen/desktop environment (window manager, etc.). vlab uses xdm to do the same thing on the virtual screens. There are a couple of reasons for this choice: xdm is simpler and lighter weight than gdm, and that is important here because multiple instances may be running at a time; in the case of gdm, only one instance will ever be running at a time.

Location | What's there
Local.vlab | vlab product in the Conformary |
/etc/X11/xdm | xdm configuration files |
/etc/xinetd.d/vlab-vnc-server | Configuration file for xinetd with TCP ports and virtual screen sizes
~/.xsession-vlab | User's X session script which sets up their own environment and starts their own window manager |
/usr/local/vlab/wmprompt | Tcl script which pops up a window manager prompt if the user doesn't have a ~/.xsession-vlab file
/usr/local/vlab/pam_user_resource_limits_setup | Invoked by xdm via PAM (session) to set up CPU, memory and PID limits for the user |
xinetd is started from /etc/init.d and eventually reads the configuration file /etc/xinetd.d/vlab-vnc-server. This file contains the list of TCP ports on which xinetd will listen for new connections, and the commands to run (VNC servers) when a connection is received.

When a connection arrives, xinetd starts the VNC server (Xvnc) with the specified command-line options, which include the virtual screen geometry and the allowed encryption.
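For illustration only, an entry in /etc/xinetd.d/vlab-vnc-server might look something like the sketch below. The service name, port, geometry and Xvnc options shown are assumptions for the sake of the example, not vlab's actual values; -inetd tells Xvnc it is being run from a super-server, and -query asks the local xdm (via XDMCP) for a login screen.

service vlab-vnc-1024x768
{
        type            = UNLISTED
        port            = 5901
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = nobody
        server          = /usr/bin/Xvnc
        server_args     = -inetd -once -query localhost -geometry 1024x768 -depth 24
        disable         = no
}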
Xvnc then brings up xdm to display and accept the user's login. xdm uses the configuration files in /etc/X11/xdm to determine its look-and-feel and its behaviour.

After a successful login, xdm runs the script /etc/X11/Xsession-vlab. This script runs as the user and either starts the user's own screen/desktop environment if they have a .xsession-vlab file in their CSE home directory (a minimal sketch appears below), or else runs /usr/local/vlab/wmprompt, which pops up a window in the centre of the screen to prompt them to choose from a list of window managers with default configurations.
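As a concrete illustration, a user's ~/.xsession-vlab might look like the following. This is only a sketch: the xrdb and xsetroot lines are optional setup, and fvwm stands in for whatever window manager the user prefers.

#!/bin/sh
# ~/.xsession-vlab: set up the environment, then start a window manager.
[ -f "$HOME/.Xresources" ] && xrdb -merge "$HOME/.Xresources"  # load X resources, if any
xsetroot -solid grey                                           # plain background
exec fvwm                                                      # replace with your preferred window manager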
vlab uses the Control Group (cgroup) feature in the Linux kernel (4.x) to impose per-user CPU, memory and PID resource limits. These help to ensure that all users get a fair share of the resources on the host, and that no one can intentionally or unintentionally deny service to other users by using up all of a resource.
Documentation can be found in the Linux kernel source under:
Documentation/cgroup-v1/*
and
Documentation/cgroup-v2.txt
On machines with the right man page installed, you can also try:
# man 7 cgroups
In short, this works as follows: resource pools are created first; then resources, such as CPU cycles, megabytes of RAM or process ID numbers, are allocated to these pools; and finally each user's processes (and those processes' subsequent children) are added to the pools. All the processes in a pool share the resources available to that pool. A minimal sketch of the three steps appears below.
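Run as root, using the cgroup-v1 interface described in the following paragraphs (the pool name demo and the numbers are made up for the example):

# 1. Create a resource pool (a cgroup) under the cpu controller.
mkdir /sys/fs/cgroup/cpu/demo
# 2. Allocate CPU to the pool: 50ms of CPU time per 100ms period, i.e. about half a core.
echo 50000  > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
echo 100000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
# 3. Add the current shell (and any children it forks from now on) to the pool.
echo $$ > /sys/fs/cgroup/cpu/demo/tasks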
In vlab, we create three separate resource pools per user: CPU, memory and PID (or process count).
The control group management interface is abstracted out (by the kernel) into virtual directories and files which appear under /sys/fs/cgroup. There is one subdirectory for each resource type: cpu, memory and pids.
By creating subdirectories under these we can create resource pools. The convention used in vlab is that each user has their own resource pools which they don't share with any other user. The pools are named "user<uid>" where <uid> is the user's UID number. Thus, the CPU resource pool control interface for processes running as yours truly (with UID 4606) can be found at:
/sys/fs/cgroup/cpu/user4606
And the memory resource pool interface for my processes is at:
/sys/fs/cgroup/memory/user4606
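To see or change what a pool is allowed to use, read or write the control files inside it. For example, as root (the 4G and 512 figures are illustrative, not vlab's real limits):

cat /sys/fs/cgroup/memory/user4606/memory.limit_in_bytes    # current memory cap, in bytes
echo 4G  > /sys/fs/cgroup/memory/user4606/memory.limit_in_bytes
echo 512 > /sys/fs/cgroup/pids/user4606/pids.max            # at most 512 processes/threads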
Processes are added to a resource pool automatically by fork()'ing, or explicitly by writing their process IDs into the virtual tasks file (that's its name) in the virtual directory corresponding to the pool. This same virtual file can be read to see a list of all process IDs in the pool.
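For example, to move a process into my CPU pool and then list the pool's members (the PID 12345 is hypothetical):

echo 12345 > /sys/fs/cgroup/cpu/user4606/tasks
cat /sys/fs/cgroup/cpu/user4606/tasks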
The resource limits are set up for each user by the script /usr/local/vlab/pam_user_resource_limits_setup, which is run by xdm via PAM (using pam_exec.so) during session setup.
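The PAM hook is a single pam_exec.so line in xdm's PAM configuration. The control flag and service file shown here are assumptions, but the general shape (in, say, /etc/pam.d/xdm) is:

session    optional    pam_exec.so    /usr/local/vlab/pam_user_resource_limits_setup

pam_exec.so exports PAM items such as PAM_USER into the script's environment, which is how the script knows whose resource pools to set up.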