dd if=/dev/zero of=/xen/domains/vm14/vicepa.disk count=900 bs=1M
mkfs -t ext3 vicepa.disk
vi /xen/conf/vm14.cfg
Add the vicepa.disk entry to the disk list so it looks like this:
root = '/dev/xvda2 ro'
disk = [
    'file:/xen/domains/vm14/swap.img,xvda1,w',
    'file:/xen/domains/vm14/disk.img,xvda2,w',
    'file:/xen/domains/vm14/vicepa.disk,xvda3,w',
]
After saving the file, shut down vm14 and start it again (a reboot will not pick up the new disk; the domain has to be shut down and recreated).
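From dom0 this can be done roughly as follows, assuming the classic xm toolstack (substitute your own management commands if you use something else):
xm shutdown vm14
xm create /xen/conf/vm14.cfg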
Next we need to log on to vm14 and mount the new disk (creating the mount point first if it does not exist):
mkdir -p /vicepa
mount -t ext3 /dev/xvda3 /vicepa
Next do a df -k to verify that the disk is mounted:
df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             2064208    685996   1273356  36% /
tmpfs                    65632         0     65632   0% /lib/init/rw
udev                     10240       476      9764   5% /dev
tmpfs                    65632         4     65628   1% /dev/shm
/dev/xvda3              907096     17560    843456   3% /vicepa
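If you want the partition mounted automatically whenever vm14 boots, an /etc/fstab entry inside the guest along these lines would do it (a sketch; the options shown are an assumption, not part of the original steps):
/dev/xvda3   /vicepa   ext3   defaults   0   2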
To build the OpenAFS kernel module we will use the module-assistant method:
apt-get install module-assistant
m-a prepare openafs
m-a a-i openafs
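Before moving on, it is worth confirming that the module built and loads; the module name openafs below is how the Debian packaging names it, so adjust if yours differs:
modprobe openafs
lsmod | grep openafs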
After the kernel module is installed, we can proceed with installing the OpenAFS client. This will be done on vm14, vm15, and vm16:
apt-get install openafs-{client,krb5}
AFS cell this workstation belongs to: student.lab # (your domain name in lowercase; the Kerberos realm is the same name in uppercase)
Size of AFS cache in kB? 50000
Run Openafs client now and at boot? No
Look up AFS cells in DNS? Yes
Encrypt authenticated traffic with AFS fileserver? No
Dynamically generate the contents of /afs? Yes
Use fakestat to avoid hangs when listing /afs? Yes
DB server host names for your home cell: afs1
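These answers are written into the client configuration under /etc/openafs; to review them later you can look at the cell name and database server list directly (standard openafs-client files):
cat /etc/openafs/ThisCell
cat /etc/openafs/CellServDB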
Now, on vm14, we will install the server software:
apt-get install openafs-{fileserver,dbserver}
Cell this server serves files for: student.lab
Next, configure the server:
kadmin.local
Authenticating as principal root/admin@STUDENT.LAB with password.
kadmin.local: addprinc -policy service -randkey -e des-cbc-crc:normal afs/student.lab
Principal "afs/student.lab@STUDENT.LAB" created.
kadmin.local: ktadd -k /tmp/afs.keytab -e des-cbc-crc:normal afs/student.lab
Entry for principal afs/student.lab with kvno 2, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/tmp/afs.keytab.
kadmin.local: quit
Once the key has been created and exported to /tmp/afs.keytab as shown, we need to load it into the AFS KeyFile. Note that the number "2" in the following command is the key version number, which has to match the kvno reported in the ktadd step above.
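If you want to double-check the kvno and encryption type recorded in the keytab before loading it, klist can read keytabs directly (standard MIT Kerberos):
klist -ke /tmp/afs.keytab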
asetkey add 2 /tmp/afs.keytab afs/student.lab
To verify the key has been loaded and that there is only one key in the AFS KeyFile, run bos listkeys:
bos listkeys afs1 -localauth
You should get something like this:
key 2 has cksum 2035850286
Keys last changed on Tue Jun 24 14:04:02 2008.
Because the AFS key we just created uses single-DES, edit the "[libdefaults]" section of /etc/krb5.conf and explicitly set "allow_weak_crypto = true".
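The edited section would end up looking something like this (the default_realm line is assumed from the earlier Kerberos setup; only the allow_weak_crypto line is new):
[libdefaults]
        default_realm = STUDENT.LAB
        allow_weak_crypto = true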
Then restart the KDC:
invoke-rc.d krb5-kdc restart
After restarting Kerberos we need to create the AFS cell:
afs-newcell
Do you meet these requirements? [y/n] y
What administrative principal should be used? root/admin
This will create the initial protection database. Some errors will be generated about an id already existing and a bad ubik magic; we can ignore them. The next thing to do is to set up our first partition; first we need to get an admin token:
kinit root/admin
Password for root/admin@STUDENT.LAB: PASSWORD
aklog
Verify that you hold a Kerberos ticket and an AFS token:
klist -5f
You will get something like this:
Ticket cache: FILE:/tmp/krb5cc_1116
Default principal: root/admin@STUDENT.LAB

Valid starting     Expires            Service principal
02/09/10 17:18:18  02/10/10 03:18:18  krbtgt/STUDENT.LAB@STUDENT.LAB
        renew until 02/10/10 17:18:16, Flags: FPRIA
02/09/10 17:18:18  02/10/10 03:18:18  afs/student.lab@STUDENT.LAB
        renew until 02/10/10 17:18:16, Flags: FPRAT
tokens
Tokens held by the Cache Manager:

User's (AFS ID 1) tokens for afs@student.lab [Expires Apr 15 03:18]
Now, with a successful kinit and aklog in place, we can run afs-rootvol:
afs-rootvol
The AFS client must be running and pointed at the new cell. Do you meet these conditions? (y/n) y
What AFS Server should volumes be placed on? afs1
What partition? [a] a
Let's enable the client: in /etc/openafs/afs.conf.client we need to change the line AFS_CLIENT=false to AFS_CLIENT=true:
perl -pi -e 's/AFS_CLIENT=false/AFS_CLIENT=true/' /etc/openafs/afs.conf.client
invoke-rc.d openafs-client restart
Next, check the mount point of our AFS cell:
fs lsm /afs/student.lab
'/afs/student.lab' is a mount point for volume '#student.lab:root.cell'
Let's check the volume stats
fs lv /afs/student.lab
File /afs/student.lab (536870919.1.1) contained in volume 536870919
Volume status for vid = 536870919 named root.cell.readonly
Current disk quota is 5000
Current blocks used are 4
The partition has 843456 blocks available out of 907096
Now to test that we can read and write to the AFS cell:
cd /afs/student.lab
ls -al
total 14
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 .
drwxrwxrwx 2 root root 8192 2008-06-25 02:05 ..
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 service
drwxrwxrwx 2 root root 2048 2008-06-25 02:05 user
echo TEST > testfile
-bash: testfile: Read-only file system
cd /afs/.student.lab
echo TEST > testfile
-bash: testfile: Permission denied
In order to write we will need to get a token. First look at the access list:
cd /afs/.student.lab
fs la .
Access list for . is
Normal rights:
  system:administrators rlidwka
  system:anyuser rl
kinit root/admin; aklog
Password for root/admin@STUDENT.LAB: PASSWORD
echo TEST > testfile
cat testfile
TEST
rm testfile
In order to add a new user to AFS you will need to first set them up in Kerberos and then in AFS.
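If the user does not already have a Kerberos principal from an earlier part of this setup, one could be created along these lines (mirko is the example user used below; you will be prompted to choose a password):
kadmin.local -q "addprinc mirko"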
Once we have the user set up in Kerberos, we can add them to AFS:
pts createuser mirko 20000
User mirko has id 20000
Next we need to create the home volume for the user:
vos create afs1 a user.mirko 20000
Volume 536997357 created on partition /vicepa of afs1
vos examine user.mirko
user.mirko                        536997357 RW          2 K  On-line
    afs1.student.lab /vicepa
    RWrite  536997357 ROnly          0 Backup          0
    MaxQuota      20000 K
    Creation    Mon Apr 19 18:30:24 2010
    Copy        Sun Apr 19 18:30:45 2010
    Backup      Never
    Last Update Never

    RWrite: 536997357
    number of sites -> 1
       server afs1.student.lab partition /vicepa RW Site
cd /afs/student.lab/user
mkdir -p m/mi
fs mkm m/mi/mirko user.mirko -rw
Next, to view volume and directory information:
fs lsm m/mi/mirko
'm/mi/mirko' is a mount point for volume '#user.mirko'
fs lv m/mi/mirko
File m/mi/mirko (536997357.1.1) contained in volume 536997357
Volume status for vid = 536997357 named user.mirko
Current disk quota is 20000
Current blocks used are 2
The partition has 843456 blocks available out of 907096
Now let's view the permissions on the new directory:
fs la m/mi/mirko
Access list for m/mi/mirko is
Normal rights:
  system:administrators rlidwka
Grant mirko full rights (all expands to rlidwka: read, lookup, insert, delete, write, lock, administer):
fs sa m/mi/mirko mirko all
fs la m/mi/mirko
Access list for m/mi/mirko is
Normal rights:
  system:administrators rlidwka
  mirko rlidwka
Next we are going to switch to mirko's credentials and make sure we still have access to the home directory. First drop the admin token and ticket:
unlog; kdestroy
kinit mirko; aklog
Password for mirko@STUDENT.LAB: PASSWORD
cd /afs/student.lab/user/m/mi/mirko
echo IT WORKS > test
cat test
Now to check the root volume's quota and increase it from the default 5 MB to 100 MB:
cd /afs/.student.lab
fs lq
Volume Name                    Quota       Used %Used   Partition
root.cell                       5000         28    1%         38%
fs sq . 100000
fs lq
Volume Name                    Quota       Used %Used   Partition
root.cell.readonly            100000         28    1%         38%
Now we need to let LDAP know where the new home directory for mirko is located. First we need to create an LDIF file; we will put it in /tmp:
vi /tmp/homechange.ldif
dn: uid=mirko,ou=people,dc=student,dc=lab
changetype: modify
replace: homeDirectory
homeDirectory: /afs/student.lab/user/m/mi/mirko
Once the file has been created and saved, we need to apply it to the LDAP database:
ldapmodify -c -x -D cn=admin,dc=student,dc=lab -W -f /tmp/homechange.ldif
Let's verify that the change took:
getent passwd mirko
mirko:x:20000:65534:mirko:/afs/student.lab/user/m/mi/mirko:/bin/bash
The following PAM changes will need to be made on the server and all client systems. First install the AFS session module:
apt-get install libpam-afs-session
To limit the chance of getting locked out of the system while doing the PAM configuration, it is best to have another root terminal open and to make a backup of all of the /etc/pam.d/common* files. To do this, perform the following in that root terminal, and make sure to leave the terminal open:
cd /etc
cp -a pam.d pam.d,orig
Note: if after you finish these steps you cannot log in, the backup above will let you revert to a functioning state by using the open root terminal and executing:
cp -a pam.d,orig/* pam.d/
Once you have edited the PAM files shown below it is best to restart the services that use them. This isn't strictly necessary, but it does ensure that the services read the new PAM configuration.
vi /etc/pam.d/common-auth
auth    sufficient   pam_unix.so nullok_secure
auth    sufficient   pam_krb5.so use_first_pass
auth    optional     pam_afs_session.so program=/usr/bin/aklog
auth    required     pam_deny.so
vi /etc/pam.d/common-session
session required     pam_limits.so
session optional     pam_krb5.so
session optional     pam_unix.so
session optional     pam_afs_session.so program=/usr/bin/aklog
At this point you can scp these two files to the AFS clients, vm15 and vm16, or you could log on to each and make the needed PAM changes by hand. I would go with scp, as it removes the chance of a typo.
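A minimal sketch of the scp approach, assuming root SSH access to the clients and that the host names vm15 and vm16 resolve:
scp /etc/pam.d/common-auth /etc/pam.d/common-session root@vm15:/etc/pam.d/
scp /etc/pam.d/common-auth /etc/pam.d/common-session root@vm16:/etc/pam.d/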
At this point you have a working AFS file system.
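As a final check, log in to one of the clients as mirko using password authentication (so the new PAM stack runs) and confirm a token was obtained automatically; a rough test, assuming sshd uses the common-* files:
ssh mirko@vm15
tokens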