
STATUS updates

TODO

  • the formular plugin is giving me errors, need to figure this out (email assignment form)
  • use include plugin to include a page containing various prior month status pages
  • can I install writer2latex on wildebeest herd without needing gcj??
  • lab46: FIX grep
  • mail: relocate /var/mail to be an nfs export
  • set up DRBD volumes between sokraits and halfadder, locate VMs there
  • back up user home directories

URLs

Other Days

September 29th, 2010

more multiseat fun

I gave multiseat another go today… and although I wasn't successful, I scrounged up some more resources that I didn't know about before.

I think I'm going to attempt a virtualized prototype again, as it is just easier that way.

New links:

I may just have to give the multi video card approach another shot, because some of this stuff looks quite interesting.

fun with GNU indent

Seeing as I've had a chance to look at a lot of student source code lately, and have spent considerable time making it readable, I figured it was high time to finally figure out GNU indent.

And I did.

Although some will have their own particular coding styles, for students in my classes I'm going to require the ANSI/Allman style of coding (which puts opening braces on their own line, with the exception of structs and do-whiles).

I went and figured out the particular options to produce just what I want, and that turns out to be:

-linux -bl -bli0 -nce -saf -sai -saw -sob -bad -bap -cdw -l86

I left -l86 out of the system-wide config, of course, since not everyone's terminal is 90 characters wide like mine. So in /etc/indent.conf, I put the following:

-linux -bl -bli0 -nce -saf -sai -saw -sob -bad -bap -cdw

And in /etc/profile, I added the following:

# Configure GNU indent
INDENT_PROFILE="/etc/indent.conf"
export INDENT_PROFILE    # indent reads this from the environment

And in /etc/skel, I added the exact contents of /etc/indent.conf to a file called .indent.pro

I also added the /etc/profile logic to .bashrc (and real.bashrc) in /etc/skel.

This way, all new users should get their own indent settings file… and all existing users should hopefully inherit INDENT_PROFILE from /etc/profile when they log in.

I'm sure there will be situations I haven't envisioned, but at least this should produce far more readable code.
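
As a quick sanity check (a sketch only; the file names here are made up), the shared profile can be exercised on a scratch file:

lab46:~$ echo $INDENT_PROFILE
/etc/indent.conf
lab46:~$ indent messy.c -o messy.formatted.c
lab46:~$ diff messy.c messy.formatted.c | head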

September 27th, 2010

gnuplot dokuwiki plugin installed

Upon a request from a student wishing to put some plots up on their wiki pages, I looked around and settled on the gnuplot dokuwiki plugin:

Installing it requires darcs and gnuplot, so I installed the following on www:

www:~$ sudo aptitude install gnuplot-nox darcs

And to test it, here is a sample graph:

<plot>
plot [-20:20] cos(x)/x
</plot>


I installed it on both the Lab46 wiki and my wiki.
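
For what it's worth, the same expression can be sanity-checked outside the wiki by driving gnuplot directly (a sketch; the output path is made up):

www:~$ gnuplot <<'EOF'
set terminal png
set output '/tmp/plot-test.png'
plot [-20:20] cos(x)/x
EOF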

additional packages on Lab46

To assist some students who are indentationally challenged with their source code, I have installed indent.

For testing, I have also installed a package called cppcheck, which will apparently scan C/C++ code looking for common problems.
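
A typical invocation looks something like this (a sketch; the file name is just a placeholder, and --enable=all turns on the optional checks):

lab46:~$ cppcheck --enable=all sample.c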

September 23rd, 2010

jb rebooted

Around 3:46pm today, jb went stupid. It was eventually discovered that the cause of the stupidity was that its drive was partially ejected from the hot swap bay it was installed in… someone may have accidentally jostled it, thereby freaking it out.

A reboot restored sanity.

September 20th, 2010

Linux 64-bit local root exploit

It would appear that last week (actually, way back on September 7th) an exploit was released which could enable a local user to escalate privileges to root. This only affects 64-bit systems, and apparently may only affect systems with a 32-bit compatibility layer.

Pretty much every distribution has issued updates. The particular vulnerability has been branded: CVE-2010-3081

A Google search for that will likely turn up all the relevant information one needs.

There are three pieces of code of interest in all this:

ABftw and diagnose-2010-3081 did not run successfully on lab46 or irc. diagnose displays a “!!! Error in setting cred shellcodes” message, which many people also seem to be getting.

robert_you_suck.c, on the other hand… I was able to root irc.

Here are some pages with varying bits of information on CVE-2010-3081:

So once new kernels are released, reboots are in order.

security reboots for CVE-2010-3081

The Lenny VMs (or VMs using the updated Lenny kernels, which includes Etch VMs) were rebooted to take advantage of the fix for CVE-2010-3081. Additionally, any updates were applied (so this sort of rounds out the list from the other day):

  • www
  • antelope
  • gnu
  • wildgoat
  • wildebai
  • auth
  • log
  • mail
  • irc (using 2.6.32-bpo.50-xen-amd64 from lenny-backports)
  • lab46 (using 2.6.32-bpo.50-xen-amd64 from lenny-backports)
  • repos

I tested the exploit on irc and lab46 running the backported Lenny kernel… looks like we're good to go; it is no longer successful in elevating to a root shell.
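
For reference, the minimal post-reboot check (a sketch, not captured output) is just to confirm the running kernel matches the patched package:

lab46:~$ uname -r
lab46:~$ dpkg -l 'linux-image-2.6.32*' | grep '^ii'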

September 19th, 2010

package updates

I did an “aptitude update && aptitude upgrade && aptitude clean” on the following machines:

  • lab46
  • irc
  • sokraits
  • halfadder
  • db (no new updates to install)
  • web
  • www (no new updates to install)
  • lab46db (no new updates to install)
  • nfs1 (some updates installed… stayed away from nfs and drbd updates for now)
  • repos
  • nfs2 (updated all, knocked drbd out, had to reboot nfs2– came back fine after reboot)
  • auth
  • log
  • mail
  • mthsvn
  • backup

Could not update nfs-common and nfs-kernel-server on nfs2, because none of the files it needs are available (all symlinked under /export, which isn't mounted on nfs2 as it is the secondary peer right now).
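
To keep a later blanket upgrade from pulling those in by accident, one option would be to put the packages on hold (a sketch; whether to also hold the drbd packages on nfs1 is a judgment call, and the drbd package name here is from memory):

nfs2:~$ sudo aptitude hold nfs-common nfs-kernel-server
nfs1:~$ sudo aptitude hold drbd8-utils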

backup not allowing ssh logins

Attempts to log in via ssh to the VM backup.lair.lan, either as root or as my wedge account, yield the following:

machine:~$ ssh backup
Permission denied (publickey).
machine:~$ 

I can't remember if this is something I rigged up, or if it is due to something else that has gone awry/needs fixing.

It has been up for 184 days… I opted to reboot it to see if that made a difference.

I can still “xm console” into it from cobras, and that works just fine. I need to finish overhauling backup at some point anyway… no significant backups other than the daily/weekly tarred archives of www's /var/www have been going to this machine… I think I want to repurpose it as a user data/content backup VM, so it would continue to back up www's content, along with perhaps my long-standing desire to back up home directories.

backup would be perfect for this task, and it is neatly isolated on cobras, reducing the potential for failure in the event of some problem.

Oh… it looks like I may have enabled public key access as the ONLY means of connecting to backup. I see the following show up in the logs:

Sep 19 16:41:31 backup sshd[1664]: User root from ahhcobras.lair.lan not allowed because not listed in AllowUsers 
Sep 19 16:41:34 backup sshd[1666]: User root from ahhcobras.lair.lan not allowed because not listed in AllowUsers 

So I definitely configured something… I seem to recall doing this, trying to lock it down so the backups are ultra secure.
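
The lockdown probably amounts to something like the following in backup's /etc/ssh/sshd_config (a sketch using standard sshd directives; I'd have to check the actual file to confirm the user list):

AllowUsers wedge
PubkeyAuthentication yes
PasswordAuthentication no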

Logging in as wedge from my LAIR home directory works… so I just need to install the appropriate public key if a desire exists to connect from a non-LAIR machine… probably best just to keep it as is.

September 16th, 2010

status page script

I updated my status page script to include previous, current, and next month links, so navigation to prior months is now possible via hyperlinks (scroll to the bottom of this page to see it in action).

I manually put in the code for all existing months, and prepped this month's status, so the script should cap it off appropriately and start off next month correctly.

Had to add some new variables to account for past months around the beginning and end of year switchovers… we'll see if it works 100%. Hooray.
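
The switchover logic itself boils down to something like this (a sketch only; the real script and its variable names aren't reproduced here):

#!/bin/bash
# wrap the previous/next month links across year boundaries
month=10
year=2010

prevmonth=$((month - 1)); prevyear=$year
if [ $prevmonth -eq 0 ]; then
    prevmonth=12; prevyear=$((year - 1))
fi

nextmonth=$((month + 1)); nextyear=$year
if [ $nextmonth -eq 13 ]; then
    nextmonth=1; nextyear=$((year + 1))
fi

printf "prev: status_%04d%02d  next: status_%04d%02d\n" $prevyear $prevmonth $nextyear $nextmonth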

September 15th, 2010

dslab content added to lab46 wiki

I finally got around to migrating the DSLAB content off of the old internal LAIR wiki, and put it in its own namespace “/dslab” on the lab46 wiki.

I rolled accounts for the current DSLAB students, and gave them edit privs on that namespace.

Hopefully this is utilized, as it would be a lot easier to maintain and eventually mirror.

Even created a specific status page for DSLAB updates, in a similar format to this one (although it doesn't roll over every month… we'll see how often it gets used, and therefore what rate of content warrants the best rollover window).

September 14th, 2010

mail: maildirs relocated to nfs

I finally got around to moving all the user Maildirs to a share on nfs, now exported via /export/lib/mail … mail mounts it via nfs read/write… lab46 currently mounts it read-only.

Users can now log into mail and get their home directory automounted.

Lab46 /var/mail mount working nicely.
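
The resulting mounts amount to something like the following fstab entries (a sketch; the exact mount options in use aren't recorded here):

# on mail (read/write):
nfs:/export/lib/mail    /var/mail    nfs4    rw,hard,intr    0 0
# on lab46 (read-only):
nfs:/export/lib/mail    /var/mail    nfs4    ro,hard,intr    0 0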

September 13th, 2010

multiseat

My latest candidate for the FUTCH0RE has arrived, and gazelle is its name.

A link of possibly great value:

Some quick notes:

  • git clone git://anongit.freedesktop.org/xkeyboard-config (as mentioned in the tutorial)
  • git clone git://anongit.freedesktop.org/xorg/driver/xf86-input-evdev (change in git path)
  • git clone git://anongit.freedesktop.org/xorg/driver/xf86-input-mouse (change in git path)
  • git clone git://anongit.freedesktop.org/xorg/driver/xf86-input-keyboard (change in git path; build sketch below)
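
The build steps for each of those driver checkouts are the standard X.org autotools dance (a sketch; the prefix is an assumption):

cd xf86-input-evdev
./autogen.sh --prefix=/usr
make
sudo make install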

Also made a symlink to facilitate compilation of gdm:

  • cd /usr/include
  • ln -s dbus-1.0/dbus dbus

Aside from these changes, the tutorial still appears to be refreshingly accurate (maverick has likely changed considerably since July when it was written, and we're using xorg-server 1.9.0 instead of 1.8.0). So there is certainly lots of potential for something to break; we'll see.

Oh, I also didn't rebuild Xorg from source, just those drivers… I am hoping that maverick's bleeding edge X packages will have taken care of the problem for me.

September 12th, 2010

joevm.lair.lan

Joe had requested a VM a couple weeks back. I finally got around to setting it up. Running it on 'cobras.

Created it as follows:

ahhcobras:~$ sudo xen-create-image --hostname=joevm --mac 00:16:3e:42:ee:44 --role=udev

Set it up in DHCP and DNS… 10.80.1.86

Going to forward in a web port (80), as he wanted to do web things.
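
The forward will likely be a DNAT rule along these lines (a sketch; the actual firewall host and interface name are assumptions):

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.80.1.86:80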

September 11th, 2010

UNIX assignment submitting

I finally got around to automating the processing of submitted student assignments for UNIX. Script follows:

#!/bin/bash
#
# asnsubmit.sh - submission validation script
#
# determine if due assignments were turned in on time, and report to data
# directories accordingly.
#
# 20100911 initial version (mth)
#
 
# obtain current semester
semester="`/usr/local/bin/semester.sh`"
 
# week is an attempt to figure out the assignment
if [ -z "$1" ]; then
    weektmp="`/usr/local/sbin/calcweek.sh`"
    if [ "${chkweek}" -eq 0 ]; then
        echo "DEFERRED: Assignment Processing not needing during BREAK WEEK."
        exit 0
    fi
 
    let weektmp=$weektmp-2
else
    weektmp="$1"
    chkweek="`/usr/local/sbin/calcweek.sh`"
    if [ "${chkweek}" -eq 0 ]; then
        echo "DEFERRED: Assignment Processing not needing during BREAK WEEK."
        exit 0
    fi
 
    let chkweek=$chkweek-2
    if [ "${weektmp}" -gt "${chkweek}" ]; then
        echo "ERROR: Request for something in the future."
        exit 1
    fi
fi
week="`echo \"obase=16;${weektmp}\" | bc -q`"   # support for hexadecimal coded labs
 
#echo "Week is: $week"
 
unixpath="/var/www/haas/data/pages/${semester}/common/unix_assignments.txt"
symchk="`cat ${unixpath} | grep "lab${week}" | grep '\^' | wc -l`"
if [ $symchk -eq 1 ]; then
    cal_date="`cat ${unixpath} | grep "lab${week}" | cut -d'^' -f4 | cut -d'|' -f1 | sed -e 's/\*//g' -e 's/ //g'`"
else
    cal_date="`cat ${unixpath} | grep "lab${week}" | cut -d'|' -f5 | sed -e 's/\*//g' -e 's/ //g'`"
fi
due_date="`date -d \"${cal_date}\" +'%Y%m%d235959'`"
 
#echo "due date is: $due_date"
 
list="/home/wedge/local/attendance/etc/list/class.${semester}.unix.list.orig"
datadir="/home/wedge/local/data"
submitpath="/var/www/haas/content/unix"
for student in `cat $list | grep '^[^ ]*$'`; do
    for asn in lab cs; do
        #echo -n "[$student:$asn:$week] "
        if [ -e ${submitpath}/current/${student}-${asn}${week}.txt ]; then
            subdate="`cat ${submitpath}/current/${student}-${asn}${week}.txt | grep -A 2 \"(${student})\" | tail -1 | sed -e 's/\.//'`"
            asndate="`date -d \"${subdate}\" +'%Y%m%d%H%M%S'`"
            if [ "${asndate}" -gt "${due_date}" ]; then
                msg="1:${asn}${week} submitted after the due date. (${asndate})"
            else
                msg="2:${asn}${week} submitted on time. (${asndate})"
            fi
        else
            msg="0:${asn}${week} was NOT submitted."
        fi
        echo $msg >> ${datadir}/${student}/results.unix.assignments
        chown wedge:${student} ${datadir}/${student}/results.unix.assignments
        chmod 640 ${datadir}/${student}/results.unix.assignments
    done
done

I'm relatively pleased with this script… it does some great date mangling to do submission time comparisons. I have it running on Sunday morning (so there's a “late” window that can take place, and it runs before the week number increments).
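
The cron entry driving that looks something like this (the time and script path here are placeholders):

# process the previous week's UNIX assignment submissions (Sunday morning)
30 6 * * 0      root    /usr/local/sbin/asnsubmit.sh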

I've found a few potential holes that I've already taken care of:

  • break week ready
  • hex assignment numbering ready
  • cannot grade assignments from a week that has not occurred yet (temporally grounded)
  • works with header-highlighted OR regular table cell syntax lines (caret vs. vertical bar)

The future gotcha I'll still need to do something about is, of course, the end of the semester… I more or less have to disable a dozen of my scripts once the semester is done so they don't try to continue processing things. But this will go under the category of “known limitation”.

At the moment I only ran it for the #0 assignments… I left the #1s for cron to pick up tomorrow morning (to get a sense of operational success).

September 9th, 2010

mail nfs issues

I discovered today that mail is exhibiting some nfs issues… it is acting as both a client and a server… basically, it serves out /var/mail to requesting clients (lab46 specifically), but it appears that when doing so, it cannot be a client.

Attempting such results in the following error message:

mail:~$ sudo mount -t nfs4 nfs:/export/home /home
mount.nfs4: Cannot allocate memory
mail:~$ 

No clear solution found for this… we used to have this problem on the old wildebeast (I may have ended up just mounting it via NFSv3, which I really don't want to resort to, as it isn't really fixing the problem).

Functionality doesn't appear to be impacted, and this has likely been going on for about a month (probably since whenever I “fixed” local mail notification on Lab46)… mail has been delivered, and there haven't been any reports from users that mail is broken.

What I'm going to do (at some point), is:

  • just copy /var/mail over to nfs
  • export it there (on nfs)
  • then mail will merely have 2 client mounts, versus being both a server and a client…

… that should fix it.
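
The export side of that (on nfs) would be something along these lines in /etc/exports (a sketch; the actual options aren't decided yet):

/export/lib/mail    mail(rw,sync,no_subtree_check) lab46(ro,sync,no_subtree_check)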

It's almost starting to seem like too much indirection, but this change should yield a net performance improvement (or at least remove some of the overhead the current arrangement imposes).

September 8th, 2010

I spent some quality time getting the attendance and data directories in order.

convert.sh

In the event I need to manually reinitialize all the attendance data, I wrote the following script (note: manual configuration required):

convert.sh
#!/bin/bash
#
# convert.sh
#
mkdir -p bak
mv -v processing/* bak/
for course in compess data unix; do
    for entry in `cat ${course}.class | grep -v '^class_dates'`; do
        student="`echo $entry | cut -d',' -f1`"
        for((class=2; class<7; class++)); do
            value="`echo $entry | cut -d',' -f${class}`"
            if [ "${value}" = "0" ]; then
                if [ $class -eq 2 ]; then
                    day="Aug24"
                elif [ $class -eq 3 ]; then
                    day="Aug26"
                elif [ $class -eq 4 ]; then
                    day="Aug31"
                elif [ $class -eq 5 ]; then
                    day="Sep02"
                else
                    day="Sep07"
                fi
                msg="0:Did NOT Attend Class on ${day}"
            elif [ "${value}" = "1" ]; then
                if [ $class -eq 2 ]; then
                    day="Aug24"
                elif [ $class -eq 3 ]; then
                    day="Aug26"
                elif [ $class -eq 4 ]; then
                    day="Aug31"
                elif [ $class -eq 5 ]; then
                    day="Sep02"
                else
                    day="Sep07"
                fi
                msg="1:Attended Class on ${day}"
            else
                if [ $class -eq 2 ]; then
                    day="Aug24"
                elif [ $class -eq 3 ]; then
                    day="Aug26"
                elif [ $class -eq 4 ]; then
                    day="Aug31"
                elif [ $class -eq 5 ]; then
                    day="Sep02"
                else
                    day="Sep07"
                fi
                msg="1:Attended Class at ${value} on ${day}"
            fi
            echo $msg >> processing/${student}.${course}.class
        done
    done
done
chown -R wedge:lair processing
exit 0

It lives in the base of the semester directory (attendance/etc/fall2010/convert.sh), and I ran it as root… although technically there should be no reason why my normal user cannot execute it, since it parses through things in the current directory tree, which I own anyway.
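
For reference, the lines it chews through in each ${course}.class file look something like this (a made-up example; the fields are the username followed by one value per class meeting, where 0 is absent, 1 is present, and anything else is an arrival time):

somestudent,1,1,0,12:35,1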

removing obsolete attendance data

After running convert.sh, I needed to go in and remove all the attendance files in the student data directories.

The following will do that (run from the base of data/):

for student in `/bin/ls -1`; do
    rm -vf ${student}/results.*.attendance
done

NOTE: These files previously had an extension of .fortitude … I did a mass rename today, changing all the pertinent scripts to call the files .attendance instead of .fortitude … basically all I had to change was /usr/local/sbin/fortitude.sh in the appropriate place.

grade not-z moved to wednesday

I figured, to reduce some confusion, I'd bump the grade not-z's journal check to Wednesday morning instead of Tuesday… this way all my classes will have at least 1 class in the new week (where reminders could theoretically be given) prior to grading taking place.

The relevant crontab entry (/etc/cron.d/dokuwiki on www):

# Grade Not-Z comes to check on journals (Wednesday early morn)
32 4 * * 3      root    /usr/local/sbin/journalcheck.sh

Of course, don't forget to restart cron:

www:~$ sudo /etc/init.d/cron restart
Restarting periodic command scheduler: crond.
www:~$ 

journal mangling

Finally, I set about reinitializing all the journal data. I started off with the following script to back all existing data up:

mkdir -p bak
for student in `/bin/ls -1 | grep -v '^bak$'`; do
    mv -v $student/results.journal bak/$student.results.journal
done

This needs to be run from data/

I also went and checked to see who didn't create their journal by the deadline. As it turns out, only 2 people… really just one… as one added the class after the deadline.

To determine this, I checked the journal creation logs on www, and created a sorted text file (from /usr/local/etc/mth on www):

www:/usr/local/etc/mth$ cat journal.fall2010.log | grep 'created their journal\.' | cut -d' ' -f3 | sort > list.create

In my data/ directory, I generated a list of all my students:

/bin/ls -1 > list

NOTE: be sure to remove the “list” entry from list.
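
Alternatively, the same thing without the manual cleanup step (just exclude the output file by name):

/bin/ls -1 | grep -v '^list$' > list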

Put both files in the same place, and run a diff on them:

www:/usr/local/etc/mth$ diff list.create list
2a3
> user1
11a13
> user2

Voila! That gives the users who didn't earn the CREATE achievement… I just need to slap those results into each user's newly created results.journal file, re-run the journal check scripts for weeks 1 and 2, and that should be all set.

So… reinitialize the results files for those who created their journals on time…

for user in `cat list.create`; do
    echo "1:CREATE achievement unlocked" >> $user/results.journal
done

And then do nothing for those who did not unlock it.

Rechecking weeks 1 and 2

To perform the rescan for weeks 1 and 2, I ran journalcheck.sh (as root) from /usr/local/sbin on www. Specifying the desired week+1 on the command line makes it look at the appropriate entry.

www:~$ sudo /usr/local/sbin/journalcheck.sh 2
www:~$ sudo /usr/local/sbin/journalcheck.sh 3

This turned up some journal anomalies… 2-3 users lacked a week2.txt but had a week3.txt… I corrected this.

At least one user still lacked a proper journal structure, so I initialized it.

And one user was using a slightly modified structure so they'd end up missing all the checks (I brought it into alignment).

There seems to be a logic hole in one of my journal scripts… it failed to properly parse the attendance data, and in the journal directory on the wiki there now exist directories named after all the students' first and last names… this may have happened at the conclusion of the automated journal initialization… not sure, but worth noting for future semesters.

To handle it right now, I did the following:

for name in `ls -1d * | grep '^[A-Z]'`; do
    rm -rvf $name
done

Which worked, so long as nobody had a last name beginning with a lowercase letter :) There was only one such case, and it was easily removed by hand. So now JUST the username directories are present.

Journals are now re-initialized… attendance is sanitized… and data directories are set to match.

I have all the attendance data queued up to fire off and update the data directories around 11pm tonight… so we'll see if it all works as it should.

September 5th, 2010

multiseat thoughts return

September 4th, 2010

lair-backup updated

I hooked up irc and lab46 to partake in the monthly backup scheme. In the process I rooted out some bugs in the files provided by the lair-backup package.

Still have to manually install an ssh key on the destination, but at least being able to install a package takes a lot of the work out of the process.

Updated to version 1.0.1-0.
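
The manual key-install step is roughly the following (the host and account names here are placeholders, not the real destination):

lab46:~$ ssh-copy-id backupuser@backup-destination.lair.lan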
