haas:system:c107a-fs

Overview

c107a-fs.lair.lan is our covert AFP server, providing file storage to students in CCC's Mac lab, C107a.

hostname            RAM    disk                           swap   OS                            Kernel
c107a-fs.lair.lan   512MB  10GB (/)                       458MB  Debian 6.0 “Squeeze” (i386)   2.6.32-5-686
                           40GB + 40GB RAID1 (/dev/md0)

News

  • Installed disks, installed Debian squeeze (20110927)
  • Installed netatalk and configured it to talk to the Mac (20110928)
  • Created the initial batch of users (20110928)
  • avahi-daemon apparently got mucked up, added cron job to restart it every so often (20111006)
  • finished writing 'manage', an assets management script for c107a-fs (20111010)

TODO

  • Contemplate LAIR LDAP integration, auto homedir creation through pam_mkhomedir (so the resource can be used by more than just the art class)
  • Samba?

Network Configuration

Machine             Interface  IP Address      MAC Address
c107a-fs.lair.lan   eth0       10.100.21.139   00:0d:56:a3:15:a0
                    tap0       10.80.1.160     n/a

Packages

The following packages have been installed:

lair-std
lair-mail
lrrd-node
lair-ldap
mdadm
netatalk (from sid)
openvpn
apg
dialog

Netatalk

The netatalk package provides AFP services, enabling Mac OS X clients to mount network file shares with authentication.

Following are some of the specific configuration changes made:

/etc/default/netatalk

# Netatalk configuration
# Change this to increase the maximum number of clients that can connect:
AFPD_MAX_CLIENTS=32

# Change this to set the machine's atalk name and zone.
# NOTE: if your zone has spaces in it, you're better off specifying
#       it in afpd.conf
#ATALK_ZONE=@zone
ATALK_NAME=`/bin/hostname --short`

# specify the Mac and unix charsets to be used
ATALK_MAC_CHARSET='MAC_ROMAN'
ATALK_UNIX_CHARSET='LOCALE'

# specify the UAMs to enable
# available options: uams_guest.so, uams_clrtxt.so, uams_randnum.so, 
#                    uams_dhx.so, uams_dhx2.so
# AFPD_UAMLIST="-U uams_dhx.so,uams_dhx2.so"

# Change this to set the id of the guest user
AFPD_GUEST=nobody

# Set which daemons to run.
# If you need legacy AppleTalk, run atalkd.
# papd, timelord and a2boot are dependent upon atalkd.
# If you use "AFP over TCP" server only, run only cnid_metad and afpd.
ATALKD_RUN=no
PAPD_RUN=no
TIMELORD_RUN=no
A2BOOT_RUN=no
CNID_METAD_RUN=yes
AFPD_RUN=yes

# Control whether the daemons are started in the background.
# If atalkd is slow to start, set this to "yes".
ATALK_BGROUND=no

# export the charsets, read from ENV by apps
export ATALK_MAC_CHARSET
export ATALK_UNIX_CHARSET

# config for cnid_metad. Default log config:
# CNID_CONFIG="-l log_note"

/etc/netatalk/AppleVolumes.default

This file defines the volumes to be made available over AFP, similar to /etc/exports for NFS.

# The line below sets some DEFAULT, starting with Netatalk 2.1.
:DEFAULT: options:upriv,usedots
#:DEFAULT: options:tm,usedots

# By default all users have access to their home directories.
#~/         "Home Directory"
~/          "$u"    options:usedots,upriv dperm:0775 fperm:0664 ea:ad
/home       "users" allow:@staff options:usedots,upriv dperm:0775 fperm:0664 ea:ad
/public     "public"    options:usedots,upriv dperm:0777 fperm:0666 ea:ad
/manage     "manage"    allow:mann options:usedots,upriv dperm:0775 fperm:0664 ea:ad
#~/ "$u" allow:username1,username2 cnidscheme:cdb
# End of File

Make sure that shared directories have appropriate permissions on the UNIX side of things, otherwise errors will occur. I made /home group-owned and writable by group staff, and added the appropriate users to that group.

The /public directory is group owned and writable by group lab46, so that everyone can mount and put information there.

Additionally, users in group staff have the ability to mount a share containing ALL the home directories and to manipulate files therein… so this can be used by instructors collecting student work, for example.
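
As a rough sketch (usernames are placeholders), the permission setup described above boils down to something like:

# group-own the shared trees and open them to their respective groups
# (these mirror the periodic chgrp/chmod jobs in /etc/crontab further below)
chgrp -R staff /export/home
chgrp -R lab46 /export/public
chmod -R u=rwX,g=rwX,o=rX /export/home /export/public
# add an instructor to group staff ("username1" is a placeholder)
adduser username1 staff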

/etc/netatalk/afpd.conf

This file defines the behavior of afpd:

# default:
# - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword
- -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword
#- -tcp -noddp -uamlist uams_guest.so,uams_dhx.so,uams_dhx2.so -nosavepassword
#- -transall -uamlist uams_randnum.so,uams_dhx.so -nosavepassword -advertise_ssh

Note the option to include guest login (uams_guest.so).
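
To quickly test a share from one of the Macs, Finder's “Connect to Server” (afp://c107a-fs.lair.lan) works, or the same can be done from Terminal. This is only a sketch: the mount point must exist first and the credentials are placeholders.

mkdir /Volumes/public
mount_afp "afp://username:password@c107a-fs.lair.lan/public" /Volumes/public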

avahi

avahi provides mDNS service advertisement, the open equivalent of Apple's Bonjour (zeroconf), so the server shows up automatically for the Mac clients. Configuration follows:

/etc/avahi/services/afpd.service

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
    <name replace-wildcards="yes">%h</name>
    <service>
        <type>_afpovertcp._tcp</type>
        <port>548</port>
    </service>
    <service>
        <type>_device-info._tcp</type>
        <port>0</port>
        <txt-record>model=MacPro</txt-record>
    </service>
</service-group>
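
After dropping this file in place, restart avahi-daemon to pick it up. If avahi-utils is installed (an assumption; it may need to be installed first), the advertisement can be verified from any host on the subnet:

/etc/init.d/avahi-daemon restart
avahi-browse -rt _afpovertcp._tcp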

MD array configuration

The purpose of the disk array is to provide RAID1 (mirrored) storage for the user data generated on the Macs in C107a.

creating /dev/md0

I opted to build the array straight on the raw disks, with no messing with partition tables.

c107a-fs:~# mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sdb /dev/sdc 
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
c107a-fs:~# 

checking disk array status

To check the status:

c107a-fs:~# cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdc[1] sdb[0]
      488385424 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  8.9% (43629696/488385424) finish=56.9min speed=25132K/sec
      
unused devices: <none>
c107a-fs:~# 

Once the array has finished building and all is in order, it will look something like:

c107a-fs:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb[0] sdc[1]
      488385424 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
c107a-fs:~# 

Setting /etc/mdadm/mdadm.conf

To avoid oddities on subsequent boots (such as /dev/md0 coming up as /dev/md127 and confusing everything), we should set up the /etc/mdadm/mdadm.conf file accordingly. Assuming the hardware is in identical places device-wise, the only data unique to each host is the hostname and the md0 UUID, as seen in the following:

mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE /dev/sdb /dev/sdc

ARRAY /dev/md0 uuid=609551a2:e06a9ddd:9b618e96:f5bc7eb4 devices=/dev/sdb,/dev/sdc

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST c107a-fs

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Wed, 28 Sep 2011 11:26:01 -0400
# by mkconf 3.1.4-1+8efb9d1

How to find the local md volume UUID

To obtain the UUID generated for the md volume (it is unique per host), simply run the following:

c107a-fs:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Sep 28 11:26:47 2011
     Raid Level : raid1
     Array Size : 39061404 (37.25 GiB 40.00 GB)
  Used Dev Size : 39061404 (37.25 GiB 40.00 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Sep 28 14:16:22 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : c107a-fs:0  (local to host c107a-fs)
           UUID : 609551a2:e06a9ddd:9b618e96:f5bc7eb4
         Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
c107a-fs:~# 

You'll see the UUID listed; copy it into /etc/mdadm/mdadm.conf in the appropriate place, as indicated by the config file above, to ensure proper identification of the MD array.
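
Alternatively, mdadm can emit a ready-made ARRAY line (it prints UUID= in uppercase, which mdadm.conf also accepts). This isn't how the file above was built, but it makes a handy cross-check:

mdadm --detail --scan
# or append it and then hand-edit /etc/mdadm/mdadm.conf as needed:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf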

After configuring mdadm.conf

According to the information in /usr/share/doc/mdadm/README.upgrading-2.5.3.gz, once we configure the /etc/mdadm/mdadm.conf file, we must let the system know and rebuild the initial ramdisk:

c107a-fs:~# rm -f /var/lib/mdadm/CONF-UNCHECKED
c107a-fs:~# update-initramfs -t -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.32-5-686
update-initramfs: Generating /boot/initrd.img-2.6.32-5-686
c107a-fs:~# 

Local Modifications

Automating the mount in /etc/fstab

To have the system automatically mount our volume at boot, append the following entry to the bottom of /etc/fstab:

# RAID1 share for data (mounted on /export)
UUID=33f9cabe-526f-4117-8ea5-bc1e7ebe9b58   /export ext3    errors=remount-ro   0   1
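
Note that this is the filesystem UUID of the ext3 filesystem on /dev/md0, not the md array UUID from mdadm. Assuming the filesystem already exists on the array, it can be looked up with blkid:

# the filesystem was presumably created earlier with something like: mkfs.ext3 /dev/md0
blkid /dev/md0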

integrating the array's storage into the system

The disk array is going to store user data generated on the Macs in C107a.

The following directories have been created (a setup sketch follows the list):

  • /export - the array's main mountpoint
  • /home - location of user data (symlink to /export/home)
  • /public - location of publicly sharable data (symlink to /export/public)
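
A minimal sketch of how that layout might be set up (assuming /export is already mounted and any pre-existing contents of /home have been migrated first):

mkdir -p /export/home /export/public /export/manage
mv /home /home.orig && ln -s /export/home /home
ln -s /export/public /public
# the /manage share referenced in AppleVolumes.default presumably follows the same pattern
ln -s /export/manage /manage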

sudo access

In preparation for deploying my “manage” script, I made the following addition to /etc/sudoers (and also removed the ###LAIRCONF### marker up top to prevent changes on package update):

# Cmnd alias specification
%staff ALL= NOPASSWD: /etc/init.d/avahi-daemon restart, /etc/init.d/netatalk restart, /etc/init.d/lrrdnode restart, /etc/init.d/nscd restart, /root/newuser

Basically, I want certain privileged users to have the ability to restart key services in the event they stop working during the day.
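
With that in place, a member of group staff can bounce one of those services without a password, for example:

sudo /etc/init.d/netatalk restart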

crontab

There's a crontab entry in /etc/crontab that instructs the machine to shut down every night at 11:00pm.

I have enabled auto wakeup every weekday at 7AM (worst case, if the clock doesn't adjust for daylight saving time, it will wake up at 6AM; otherwise it should correct itself and still wake up at 7AM).

I also put in entries to ensure sane file ownership and permissions.

Total /etc/crontab changes are as follows:

0   23  * * *   root    shutdown -h now
24  */4 * * *   root    chmod -R u=rwX,g=rwX,o=rX /export/home /export/public
32  */4 * * *   root    chmod -R u=rwX,g=rwX,o= /export/manage
36  */4 * * *   root    chgrp -R staff /export/home
42  */4 * * *   root    chown -R mann:staff /export/manage
48  */4 * * *   root    chgrp -R lab46 /export/public
56  */2 * * *   root    /etc/init.d/avahi-daemon restart

/etc/rc.local

Some changes were made to /etc/rc.local to help ensure the network routes through the VPN come up after boot:

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

sleep 20; ping -c 2 10.80.2.38
sleep 4; ping -c 4 10.80.1.6

/etc/init.d/lrrdnode restart

exit 0

/etc/pam.d/common-session

After installing lair-ldap to enable users with a Lab46 account access to the system, I had to make one local change on c107a-fs to create their home directories, since we're not auto-mounting them from the fileserver in the LAIR:

#
# /etc/pam.d/common-session - session-related modules common to all services
#
session [default=1] pam_permit.so
session requisite   pam_deny.so
session required    pam_permit.so
session required    pam_unix.so
session optional    pam_ldap.so
session optional    pam_mkhomedir.so

Specifically, the line for pam_mkhomedir.so was appended to this file, so on login, the LDAP-specified home directory will be created automatically.
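
For reference (not currently set here), pam_mkhomedir also accepts skel= and umask= arguments, should a particular skeleton directory or umask ever be wanted for the auto-created homedirs:

session optional    pam_mkhomedir.so skel=/etc/skel umask=0022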

/etc/hosts

Because c107a-fs is outside of the LAIR but still quite interconnected with it, I've made the following changes to /etc/hosts:

127.0.0.1       localhost.localdomain   localhost
10.80.2.38      lab46.corning-cc.edu    lab46.offbyone.lan  lab46.lair.lan  lab46
10.80.1.3       nfs.lair.lan        nfs mirror
10.80.1.6       web.lair.lan    web.offbyone.lan    web
10.80.2.9       auth1.lair.lan  auth1
10.80.2.10      auth2.lair.lan  auth2
10.80.1.11      auth3.lair.lan  auth3
10.80.1.1       juicebox.lair.lan   juicebox
10.80.2.1       caprisun.offbyone.lan   caprisun

OpenVPN

To facilitate status monitoring and any necessary system administration, c107a-fs will VPN into the LAIR so we have a guaranteed path of access to it (provided the network is operating normally).

The venerable OpenVPN software was installed, and keys/certs generated. Configuration follows:

configuration

The configuration file for c107a-fs is /etc/openvpn/lair-vpn-client.conf; its contents are as follows:

##############################################################################
#
#   LAIR OpenVPN Client Configuration File
#
#   This configuration is to facilitate the joining of the LAIR VPN.
#
#   Please replace all instances of USER with the actual user name (also the
#   name on the VPN certificate/key).
#
##############################################################################

##############################################################################
#   VPN Server Information
##############################################################################
#remote          184.74.34.14            # IP of remote OpenVPN server
remote          143.66.50.18            # IP of remote OpenVPN server
port            1194                    # Port on which to connect on server
proto           udp                     # Type of traffic {tcp-client|udp}

##############################################################################
#   Network Interfaces
##############################################################################
dev-type        tap                     # Type of interface to use {tap|tun}
dev             tap0                    # Interface name (usually tun0)

##############################################################################
#   Credentials
##############################################################################
cd              /etc/openvpn              # establish proper working directory
key             lair/client-c107a-fs.key  # Client key (private)
ca              lair/ca.crt               # CA certificate (public)
cert            lair/client-c107a-fs.crt  # Client certificate (public)
tls-cipher      EDH-RSA-DES-CBC3-SHA      # set tls cipher type

##############################################################################
#   Client Settings
##############################################################################
comp-lzo                                # use fast LZO compression
keepalive       10      120             # send packets to keep sessions alive
nobind                                  # don't bind to local address & port
persist-key                             # don't re-read keys across restarts
persist-tun                             # on restart, don't reset tun device
pull                                    # Follow route suggestions of server
resolv-retry    infinite                # keep trying to connect if failure
route-delay     8                       # delay setting routes for 8 seconds
tls-client                              # enable TLS and assume client role

##############################################################################
#   System Options
##############################################################################
chroot          /etc/openvpn            # run in a chroot of VPN directory
user            nobody                  # after launching, drop privs
group           nogroup                 # after launching, drop privs
daemon                                  # detach and run in background

##############################################################################
#   Verbosity/Logging Options
##############################################################################
status          log/status.log          # status log file
log-append      log/lair.log            # log file
verb            3                       # level of activity to log (0-11)
mute            20                      # log at most N consecutive messages

##############################################################################

Some important actions to take care of include the following (a command sketch follows the list):

  • ensure that /etc/openvpn and /etc/openvpn/lair exist.
  • ensure that user 'nobody' and group 'nogroup' exist. If not, either create them or modify the config to ensure a proper user and group exist for OpenVPN to run as.
  • install the key/certs in /etc/openvpn/lair that were generated on the OpenVPN server.
  • create /var/log/openvpn and have /etc/openvpn/log be a symlink to it.
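
Roughly, those preparation steps amount to the following sketch (the filenames must match those referenced in the config above; the key/cert copy step is illustrative only):

mkdir -p /etc/openvpn/lair /var/log/openvpn
ln -s /var/log/openvpn /etc/openvpn/log
getent passwd nobody; getent group nogroup    # both should already exist on Debian
# copy the key/certs generated on the OpenVPN server into /etc/openvpn/lair, then lock down the key:
chmod 600 /etc/openvpn/lair/client-c107a-fs.key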

LRRDnode configuration

To facilitate administration, c107a-fs is configured as an LRRD node client and logs data that can be retrieved from LRRD at: http://web.offbyone.lan/lrrd/

Install lrrd-node

The first step is to install the actual LAIR package:

c107a-fs:~# aptitude install lrrd-node
The following NEW packages will be installed:
  libstatgrab6{a} lrrd-node python-statgrab{a} 
0 packages upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 118 kB of archives. After unpacking 348 kB will be used.
Do you want to continue? [Y/n/?] 
Get:1 http://mirror/debian/ squeeze/main libstatgrab6 amd64 0.16-0.1 [57.6 kB]
Get:2 http://mirror/debian/ squeeze/main python-statgrab amd64 0.4-1.1+b2 [53.0 kB]
Get:3 http://mirror/lair/ squeeze/main lrrd-node all 1.0.7-1 [7,128 B]
Fetched 118 kB in 0s (9,978 kB/s)
Selecting previously deselected package libstatgrab6.
(Reading database ... 28935 files and directories currently installed.)
Unpacking libstatgrab6 (from .../libstatgrab6_0.16-0.1_amd64.deb) ...
Selecting previously deselected package python-statgrab.
Unpacking python-statgrab (from .../python-statgrab_0.4-1.1+b2_amd64.deb) ...
Setting up libstatgrab6 (0.16-0.1) ...
Setting up python-statgrab (0.4-1.1+b2) ...
Processing triggers for python-support ...
Selecting previously deselected package lrrd-node.
(Reading database ... 28961 files and directories currently installed.)
Unpacking lrrd-node (from .../lrrd-node_1.0.7-1_all.deb) ...
Setting up lrrd-node (1.0.7-1) ...
Adding lrrdNode to init.d
update-rc.d: using dependency based boot sequencing
insserv: warning: script 'lrrdnode' missing LSB tags and overrides
Running lrrdNode ...
Starting lrrdNode: stat collection thinger: Starting LRRD Node
lrrdNode
                                         
c107a-fs:~# 

Configure lrrd-node at LRRD

Once the client side is installed and running, we need to configure (or reconfigure, as the case may be) the node at LRRD.

So pop a browser over to: http://web.offbyone.lan/lrrd/

And log in (~root, punctuation-less ~root pass).

Click on the “Configure” link, and find the host in question (if it has prior history reporting to LRRD).

If found, verify that it is Enabled, and click the “reconfigure” link to the right of the entry.

There's an option to delete existing databases (do it); also check off any appropriate network interfaces.

Manual lrrd-node restart

If data reporting ceases and other components of the LRRD system still appear to be functioning, the lrrd-node client likely needs a restart. Simply do the following on the machine in question:

c107a-fs:~# /etc/init.d/lrrdnode restart
Stopping lrrdNode: stat collection thinger: lrrdNode
Starting lrrdNode: stat collection thinger: Starting LRRD Node
lrrdNode
c107a-fs:~# 

Wait at least 5 minutes for data reporting to make it into graphable form.

manage: an assets management script

To facilitate the administration of c107a-fs, I have written a script to help automate some of the important tasks of maintaining the system, such as user creation, password updating, disk usage reporting, and service restarting.

The script was part great intellectual journey and part annoyance, but it is finally done and in place, so authorized users can run it.

