
STATUS updates

TODO

  • I need to figure out how to declare PHP variables and have other sections of the wiki recognize those variables and their set values (ie the whole “$week”, “$desig”, etc. variables of old), so I can further integrate some functionality. I tried the phpwikify plugin, but ultimately removed it and came up with another solution.
  • update books for ASM, and the rest
  • Complete journal weekly entry check script (how to incorporate keywords?)
  • Complete journal new weekly entry insertion script
  • script for status auto-rollover each month (regen TODO, URLs, and Other Days)
  • Need to figure out ASM first assignment
  • Finish ASM, HPC0, HPC2, SYSNET syllabi
  • Need to fix 'calcweek' script
  • Need to deploy the journal checking scripts (once calcweek is fixed)
  • Finish UNIX quest deployment for first week
  • Need to finish Friday UNIX quests
  • Need to deploy cron job to initiate journal create deadline
  • Wednesday: figure out the next quest to give out in Friday's UNIX class
  • Wednesday: figure out the next course of action for Thursday's ASM class
  • Attendance: I should write a script to more immediately report attendance data to students after each class
  • check user creation scripts to make sure .pinerc is set correctly
  • rig up a cron job to reboot wildebeast every early morning weekday
  • Adapt attendance deployment script and deploy it in cron (friday night)
  • How are we handling quest #0 deadline?? I guess I'm killing it.
  • UNIX quests for week #2
  • ASM prime number submission functionality
  • Need to finish writing up HPC0 projects
  • the formular plugin is giving me errors, need to figure this out (email assignment form)
  • Before LAIR classes, have a terminal attached to wildebeast's console

URLs

Other Days

January 31st, 2010

And here we are, the last day in January.

I spent the day putting the finishing touches on the prime number program submission functionality, handled by the grade not-z… it allowed me to implement a lot of logic that I can use in later things, especially UNIX quests.

With that said, I still very much need to finish UNIX quest #1… so I'm hoping I won't have any major interruptions tomorrow morning… the premise is a simple one… just need to make sure I have the time to debug.

January 30th, 2010

Journal Creation Deadline

I deployed the logic to cause the journal creation deadline to come into effect. At this point pretty much everyone has created their journal, which is good.

Attendance tending

I deployed the script to propagate attendance data to each user on a twice-weekly basis (mid-week, and end of the week)… the cron entry (/etc/cron.d/attendance) looks as follows:

##
## Process attendance data into per-user data directories
##
04 23 * * 3,5   wedge   /usr/local/sbin/fortitude.sh

Note to self: be sure to check Monday's attendance output to make sure files are named correctly and the data is as I intend.

To complement this, of course, is fortitude.sh itself:

#!/bin/bash
#
# fortitude.sh - append processed attendance results into per-user data directories
#
cd /usr/local/etc/attendance/spring2010/processing
 
for entry in `/bin/ls -1`; do
    # filenames in processing/ are of the form user.class
    class="`echo $entry | cut -d'.' -f2`"
    user="`echo $entry | cut -d'.' -f1`"
    cat $entry >> /usr/local/etc/data/$user/results.$class.fortitude
    chown wedge:$user /usr/local/etc/data/$user/results.$class.fortitude
    chmod 640 /usr/local/etc/data/$user/results.$class.fortitude
    rm -f $entry
done
exit 0

This makes use of the recent modifications I made to Derek's attendance script… so hopefully it works.

A potential extension to this functionality would include doing some checks of the attendance data output and reporting to me (via an e-mail) cases where there is a majority of 0's, indicating something is up.

Also… attendance is break-week safe… which just gave me a thought: how did I handle journals on break weeks?

break week calculations

I just went and checked– sure enough, I hadn't yet handled that… so I implemented a fix to calcweek.sh (and updated the source listing below from a few days ago). If it IS a break week, calcweek will return (or display) a value of 0… so scripts should check whether calcweek displays a 0 to determine if it is a break week (I updated the affected journal scripts to simply exit on that condition, as they have no need to run during a break week).

I also changed the logic for post-semester weeks– the week is just set to 0 (so 0 now covers break weeks and extra-semester events)… basically, if we're not in a legitimate semester week, set it to 0. If I remember to implement this in all my scripts, we shouldn't have any problems.

We'll see if my logic is actually correct once we hit a break week.
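For reference, a sketch of the caller-side check the journal scripts now perform (the script path here is an assumption):

#!/bin/bash
# honor calcweek's break-week/out-of-semester signal
week="`/usr/local/sbin/calcweek.sh`"
if [ "$week" -eq 0 ]; then
    exit 0    # break week or outside the semester... nothing to do
fi
echo "processing journals for week $week"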

ASM prime checking

As I rig up my code to do an automated checking and grading of prime number programs, I figured I'd deploy some of the logic in a chrooted environment.

To do that, I needed to figure out how to do such things.

As it turns out, in the base of your chroot, you need a bin/ and a lib/… put bash in bin/, and the libraries bash depends upon in lib/.

I just tested this on one of my C programs, and it works beautifully… obviously, it can only run stuff there is library support for, so only the simplest of C programs– hence why I asked for the prime program to be simple.
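A minimal sketch of the chroot assembly (the chroot path is hypothetical; ldd does the library-hunting):

mkdir -p /srv/questroot/bin
cp /bin/bash /srv/questroot/bin/
# copy every shared library bash links against, preserving their paths
for lib in `ldd /bin/bash | grep -o '/[^ ]*'`; do
    cp --parents "$lib" /srv/questroot/
done
chroot /srv/questroot /bin/bash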

January 29th, 2010

NOTE: Wildebeast did lock again during class today, despite my morning automated reboot. I'll need to attach to the console for each class to try and catch any telling system messages.

I started tackling the automated reporting of attendance records. I ended up modifying Derek's attendance scripts to also output information in a highly convenient format for me.

The modifications took place exclusively in bin/attendance:

131 #Verify Login is from the proper ip
132 if [ -n "$(echo $IP | egrep "$IP_Range" )" ]
133 then
134     #Record login time
135     sed -i "s/^$student.*/&,$LoginTime/g" $ClassFile
136     echo "1:Attended Class at $LoginTime on $Today" >> $ClassDir/processing/$student.$Class
137     #Remove the student from the list
138     StudentList="$(echo $StudentList | sed s/$student//g)"
139 fi
...
149 # End of class cleanup - Record 0 for all absent students
150 for student in $StudentList
151 do
152     sed -i "s/^$student.*/&,0/g" $ClassFile
153     echo "0:Did Not Attend Class on $Today" >> $ClassDir/processing/$student.$Class
154 done

I specifically added lines 136 and 153, which give me a simple 1 or 0 output to a particularly named class file that I can later use in a week-end clear-out script that pushes results to each student's data directory.

To handle this first week, I wrote the following hack script:

for class in asm hpc0 unix; do
    if [ "$class" = "unix" ]; then
        date1="Jan25"
        date2="Jan29"
    else
        date1="Jan26"
        date2="Jan28"
    fi
 
    for student in `cat $class.class | grep -v '^class_dates'`; do
        name="`echo $student | cut -d',' -f1`"
        status1="`echo $student | cut -d',' -f2`"
        status2="`echo $student | cut -d',' -f3`"
        echo "$status1:Attendance record for $date1" >> processing/results.$class.fortitude.$name
        echo "$status2:Attendance record for $date2" >> processing/results.$class.fortitude.$name
    done
done

And finally, to put the attendance files into the appropriate student data directory:

# Run this part as root
for file in `/bin/ls -1`; do
    fname="`echo $file | cut -d'.' -f1,2,3`"
    user="`echo $file | cut -d'.' -f4`"
    mv $file ../../../../data/$user/$fname
    chown wedge:$user ../../../../data/$user/$fname
    chmod 640 ../../../../data/$user/$fname
done

January 28th, 2010

Started with student installs in hpc0 today… overall, things worked… I needed to update the DHCP records to allow the student VM servers to actually do network installs.

January 27th, 2010

My first mid-week “get stuff done” day.

calcweek

I finally sat down and rewrote calcweek (this is probably the third time)… this version is not dependent upon my mthCMS infrastructure; each semester I manually enter the start-of-semester and breakweek dates, and it figures out the rest using date, because date is awesome like that.

calcweek.sh
#!/bin/bash
#
# calcweek.sh - calculate the current week of the semester
#
 
##
## Configuration - set dates to the MONDAY of the week in question
##
start="20100125"
break1="20100301"
break2="20100412"
duration="14"
boffset="0"
breakweek=0
 
##
## Obtain offsets
##
sem_start="`date +'%j' -d $start`"
sem_break1="`date +'%j' -d $break1`"
sem_break2="`date +'%j' -d $break2`"
sem_today="`date +'%j'`"
 
##
## Some debugging
##
#sem_today=95
#echo "sem_today: $sem_today"
#echo "sem_break1: $sem_break1"
#echo "sem_break2: $sem_break2"
 
##
## Determine break week offsets
##
if [ "$sem_today" -ge "$sem_break1" ]; then
    boffset=1
    let endofweek=${sem_break1}+6
    if [ "$sem_today" -le "$endofweek" ]; then
        breakweek=1
    fi
fi
 
if [ "$sem_today" -ge "$sem_break2" ]; then
    boffset=2
    let endofweek=${sem_break2}+6
    if [ "$sem_today" -le "$endofweek" ]; then
        breakweek=2
    fi
fi
 
##
## Perform THE calculation
##
week="$(((((${sem_today}-${sem_start})/7)+1)-$boffset))"
 
# If we're in a breakweek, week is set to 0.
if [ "$breakweek" -gt 0 ]; then
    week=0
fi
 
##
## If we're past the end of the semester, do nothing (set to 0, treat like breakweek)
##  
if [ "$week" -gt "$duration" ]; then
    week=0
fi      
 
##
## Produce output for other scripts to use
## 
echo "$week"
exit 0

This allowed me to slap journalweeklyprocessing and journalcheck in cron to fire off every week.

I made some adaptations to the two journal scripts so they'd run with the appropriate week's data in mind (ie decrement the week when assessing journal entry requirements).
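The cron entries amount to something like the following (the day/times here are hypothetical, in the style of the attendance entry above):

##
## Weekly journal processing and checking
##
02 23 * * 0   wedge   /usr/local/sbin/journalweeklyprocessing.sh
32 23 * * 0   wedge   /usr/local/sbin/journalcheck.sh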

wildebeast lockup prevention

I put in a crontab entry on wildebeast to reboot it every class weekday morning ~7:24AM… this means Monday, Tuesday, Thursday, Friday. We'll see if that helps reduce the lockups taking place (until we find time to rebuild wildebeast).

new user .pinerc

Sure enough, the new user scripts were not substituting =USER for =username; they were merely copying over the stock .pinerc and letting it be.

I fixed that. So there will be a few users who will definitely need to run the 'fixmail' script on lab46.

January 26th, 2010

Continued kicking off classes for the semester.

Wildebeast locked for the second time in two days (once a day so far, let's see how the pattern persists).

So far my quest and journal creation scripts seem to be working.

There might be a flaw in my new user creation scripts that prevents the user's .pinerc file from being set correctly… need to investigate this.

January 25th, 2010

First day of classes.

I kicked off SYSNET, HPC2, and UNIX.

Tomorrow will be ASM and HPC0.

Just put the finishing touches on the ASM syllabus. Everything else is hopefully in a ready state.

Minor problem encountered with wiki ACLs regarding journals. I resolved it, but it means each semester I'll have to add a new rule to accommodate the particular semester namespace.

Also, there seem to be some problems with users trying to log in– if they uncheck “Use Secure Login”, the problem goes away. It is not universal, and I've yet to discern a pattern (not exclusively IE, some firefox users… even some sitting in the LAIR at the student machines), as many others seem to be able to get in just fine.

The UNIX class is looking to be another good one– already very active, and enthusiastic.

I'm slowly amassing a list of things I need to do on my “free” day, Wednesday… to ensure smooth transition into the latter half of the week.

January 24th, 2010

ready or not

Well, go-time is now upon us… some of my scripts are not as robust as I'd like… I hope they end up working (I did test them, but you never know what extra bugs will pop up when they go into production).

I need to deploy the weekly journal checking stuff… but I've got until next week to do that :)

Need to finish filling out projects for HPC0.

Need to finish the ASM syllabus.

status maintenance

Before my new status page of ramblings grows out of control, I decided to take some action to curtail its length– resetting it each month, and backing up the old one under the status namespace.

This way, it won't grow to the epic proportions the old LAIR STATUS page has, and I'll have it broken up into units of months.

I had to go back and do some minor edits to my status page, namely making sure all wiki links are specified as absolute instead of relative (otherwise they all break once the page moves into a new namespace– it's in the root namespace now).

For posterity, here is my status migration script:

#!/bin/bash
#
# dokuwiki-status.sh - update my status page each month
#
cd /var/www/haas/data/pages
lineno="`cat -n status.txt | grep '=====Other Days=====$' | sed -e 's/^[^0-9]*\([0-9][0-9]*\).*$/\1/'`"
 
curmonth="`date +'%m'`"
curyear="`date +'%Y'`"
let lastmonth=$curmonth-1
if [ "$lastmonth" -lt 1 ]; then
    lastmonth=12
    let curyear=$curyear-1
fi
lastmonth="`printf '%.2d' $lastmonth`"
 
let nextmonth=$curmonth+1
if [ "$nextmonth" -gt 12 ]; then
    nextmonth=1
fi
nextmonth="`printf '%.2d' $nextmonth`"
 
mkdir -p status
mv status.txt status/status_${curyear}${lastmonth}.txt
head -$lineno status/status_${curyear}${lastmonth}.txt > status.txt
chown -R wedge:www-data status*
chmod -R u=rwX,g=rwX,o= status*
exit

Once again I do some neat things with the date command, and also manage to preserve the prior month's TODO and URLs sections (so I can keep referencing them– while also keeping a snapshot with the old month).

I rather like it. Deploying in cron to fire off the first of each month.

January 23rd, 2010

Completed my changes to the journalinit script— created a journalinithelper script to actually perform the user-incurred journal creation (avoiding some potential concurrency issues).

Also fleshed out the remaining journal scripts– got a script I can fire off each week to append the new week's entry (need to fix calcweek now so it has a purpose).

The journalcheck script will perform the various assessments upon each person's journal and entries for the respective week (word count, changes made, achievements, etc.). I made it a separate script so I can fire it off separately from the journalweeklyprocessing script.

There may be some caching issues with the include plugin, which drives the journals… when a change is made to one of the entries, the changes themselves may not actually show up, and a manual ?purge=true needs to be appended to the URL to kick it into gear.

The plugin author is aware of the issue and has been working on it, so hopefully at some point I won't have to continue worrying about it. Until then, I've created the journalcacheclear script to touch journal files when it fires off. I'll be putting it in cron.
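A minimal sketch of what journalcacheclear boils down to (the pages path is an assumption based on the journal namespace layout):

#!/bin/bash
#
# journalcacheclear - touch journal pages so the include plugin re-renders them
#
cd /var/www/data/pages/journal/spring2010
find . -name '*.txt' -exec touch {} \;
exit 0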

January 22nd, 2010

The last friday before the start of the semester. Aside from the bureaucratic distractions of the day, I also made my first significant appearance in the LAIR, where I completed the transition of all systems over to UPS power.

To be complete, I shut almost everything down (well, everything that wasn't on the student.lab shelves and ahhcobras– although I did power down nodes 00-04 of the cluster, as they likely got a little stupid when everything else went down)– all VMs on sokraits and halfadder, caprisun, the switches themselves… even gave jb2 a reboot to mitigate any of its too-much-traffic-and-I-hardlock issues.

So everything is running on a fresh boot. I then performed whatever updates were available on all VMs and machines… lots of libc6 updates and pushed through all the lingering lair-std, lair-nfs, and lrrd-node updates. There are some lingering apache upgrades to do on web, but I figured I'd save that fun for another day.

Also backed up ALL VMs to NFS (prior to doing the updates, though), just to have a safe backup. 63GB of VM backup goodness (I didn't back up the swap files; I figured that was somewhat pointless).

So, machines rebooted, put on battery, updated, made all shiny and fresh for the new semester. Also tested my wiki pages from the wildebeast netboot images… they load and are usable, so no surprises from that corner come Monday.

Overall it was a fairly painless refresh of current settings, the only big problem I experienced was MY irssi session refusing to start, because I had accidentally introduced a typo in my config file a few days ago. Easy enough fix, and appreciably mild compared to whatever potential problems could have unfolded.

Now all I have to do is finish my scripts, syllabi, and whatever other preparations for the start of next week. I'll have to see how I feel about actually having classes on Mondays again- haven't done that in a while.

January 21st, 2010

A day of distractions… didn't really get a chance to even touch the journal checking code.

Ended up trying to find a way to have a dokuwiki form where someone could attach a file that ended up being sent to a specified e-mail address upon form submission.

bureaucracy doesn't seem to immediately do it.

the formular plugin claims to, but I'm experiencing weird permissions errors (I suspect some sort of interaction with the server that needs to be better configured).

January 20th, 2010

Came up with an adequate solution to the journals for the semester. Going to meld some of my quest monitoring logic with the per-user web log logic to create something that will check for journal creation by students, and auto-create a stock week entry (something I can then diff against to ensure changes have been made).

Just a quick note:

chmod u=rwx,g=rwx,o=,+t $user

Is functionally equivalent to:

chmod 1770 $user

Just to let that sink in.
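A quick way to convince yourself, using stat to display the octal mode:

mkdir demo
chmod u=rwx,g=rwx,o=,+t demo
stat -c '%a %n' demo      # 1770 demo
chmod 1770 demo
stat -c '%a %n' demo      # 1770 demo ... identical
rmdir demo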

Got the journal situation figured out, as indicated above… about halfway through writing the scripts… going to have 2 scripts– journalinit, which is responsible for monitoring the creation of new journals by students, and then a script that manages the weekly journal maintenance (that in itself might turn into two).

I've gotta redo the journal achievements to better reflect the wiki environment, but we're getting closer.

I need to roll out more quests… those will be the tricky part, to make sure I stay ahead of them. And to set timeframe availabilities.

January 19th, 2010

Made some significant headway with inotifywait… I got a solution that will automatically sense a group of files, and continue monitoring as that group dwindles, exiting when there are no more to monitor.

I am looking at this as a way of having per-quest monitors, running only as long as they need to… but I also intend to work in some time limits, so they aren't always running.

January 18th, 2010

The countdown has begun! It is a race now to see how much I will get settled before officially launching this semester's classes.

Recommenced some explorations of inotify to start getting UNIX quests created.

Lab46 was giving me a little fight installing inotify-tools… one of the /etc/security/limits.conf settings I have is apparently limiting root's file size to something like 16MB… I needed to extend it to allow aptitude to work.

Something like:

lab46:~$ sudo ulimit -f 262144

did the trick… but I should go in at some point and fix that, so when I'm in a rush I don't have to get “file size exceeded”, blocking my every attempt.
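For future reference, the offending limits.conf entry presumably looks something like this (values are assumptions; fsize is expressed in KB):

# /etc/security/limits.conf
#root    hard    fsize    16384      # the ~16MB file size cap causing the grief
root     hard    fsize    262144     # raised to match the ulimit workaround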

January 17th, 2010

wikis updated to 2009-12-25c

There was a cross-site scripting vulnerability discovered, and a patch released, for dokuwiki. It was not a problem with 2009-12-25 itself, but with the implementation of the acl plugin, and it had actually been present since August of 2008, so approximately 3 dokuwiki versions were affected.

The fix was simple– download the latest acl-plugin code, and put it in lib/plugins/acl (the archive extracts into an acl/ directory). Also, update conf/msg to “25”.

Important hint: Don't make backups of the acl directory within lib/plugins… when the admin page loads, it reads ALL THOSE DIRECTORIES, and you end up with a charming “cannot redeclare class” fatal error, making you think something horrific just happened.

At any rate, we're up and running with the latest release.

juicebar pf rules updates

I finally decided to do something about the seemingly non-functional ssh brutes address collecting and blocking on juicebar (waking up to yet another dictionary attack on the router).

I went in and checked, comparing caprisun's rules with juicebar's… and discovered that the brutes eligibility was never being calculated, so I fixed it.

Should hopefully be all better now.

Assign users to class groups

One of the other bits of new functionality I've rolled out is the class groups, which students will be assigned membership to at the start of the semester, and removed at its conclusion.

This will allow for more fine-tuned control over accessibility of various resources.

The following will detail the process undertaken for determining group membership, and assigning users to the actual groups.

Step 1: Obtain user and group list

Because I have my hand-made class files under my attendance directory, this will be the starting place.

The intent here is to grab the username out of each class file (for the current semester) and, in conjunction with the class filename itself, create a file of users and their associated groups, in the following format:

dn: cn=groupname,ou=Group,dc=lair,dc=lan
changetype: modify
add: memberUid
memberUid: user1
memberUid: user2
memberUid: user3

For starters, we'll need to create those files with the lead-off lines (since they only need to occur once). Something like this will do the trick:

for class in `/bin/ls -1 class.spring2010.* | cut -d'.' -f3 | sed -e 's/^sys.*/sys/' -e 's/^hpc[12]/hpc/'`; do
    echo "dn: cn=$class,ou=Group,dc=lair,dc=lan" > $class.ldif
    echo "changetype: modify" >> $class.ldif
    echo "add: memberUid" >> $class.ldif
done

Step 2. Fill LDIF files

Now we need to populate those files with the actual users.

With this, I really outdid myself– using a for loop to generate the list for the outer for loop. It shows up ugly, but I'm rather impressed with it (the ugliness was worth it, since it let me avoid a step of intermediate output files):

for group in $(for user in `cat class.spring2010.* | grep '^[a-z]' | sort | uniq`; do
        grep $user class.spring2010.* | sed -e "s/class\.spring2010\.\(.*\)\.list\.orig:\(.*\)$/\1:\2/g" | \
        sed -e 's/^sys.*:/sys:/' -e 's/^hpc[12]/hpc/'
        done | sort); do
    grpname=`echo "$group" | cut -d':' -f1`
    usrname=`echo "$group" | cut -d':' -f2`
    echo "memberUid: $usrname" >> $grpname.ldif
done

Step 3. Add myself to the users' user group

Before we declare victory, I also need to add myself to each user's user-named group. This way, I can work fancy magic with stuff that only each individual user can gain access to.

for user in `cat *.ldif | grep '^memberUid:' | sort | uniq | sed -e 's/^memberUid: //'`; do
    echo "dn: cn=$user,ou=Group,dc=lair,dc=lan" >> usergroups.ldif
    echo "changetype: modify" >> usergroups.ldif
    echo "add: memberUid" >> usergroups.ldif
    echo "memberUid: wedge" >> usergroups.ldif
    echo >> usergroups.ldif
done

Step 4. Combining the LDIFs into one file

As it turns out, ldapmodify only likes to operate on one file per invocation. So, to allow for a one-liner in the next step:

for ldif in `/bin/ls -1 *.ldif`; do
    cat $ldif >> result.ldif
    echo >> result.ldif
done

Step 5. Perform changes in LDAP tree

Shorter and simpler, in the form of a one-liner:

$ ldapmodify -x -W -f result.ldif -D "cn=admin,dc=lair,dc=lan"

Step 6. Grant LAIRstation access

Functionally, we are now done with this process, but there's another step that I should include so that I save myself from doing it later.

And that is enabling LAIRstation access for students in the hpc, hpc0, and sys groups (basically, all the classes that meet in the LAIR and are not unix).

So, some very similar logic is applied:

for user in `cat hpc.ldif hpc0.ldif sys.ldif | grep '^memberUid' | sort | uniq | sed -e 's/^memberUid: //'`; do
    echo "dn: uid=$user,ou=People,dc=lair,dc=lan" >> hostadds.ldif
    echo "changetype: modify" >> hostadds.ldif
    echo "add: host" >> hostadds.ldif
    echo "host: lairstation1" >> hostadds.ldif
    echo "host: lairstation2" >> hostadds.ldif
    echo "host: lairstation3" >> hostadds.ldif
 
    cchk="`grep $user hpc.ldif | wc -l`"
    if [ $cchk -eq 1 ]; then
        echo "host: log" >> hostadds.ldif
    fi
 
    echo >> hostadds.ldif
done

And we apply it to the tree:

$ ldapmodify -x -W -f hostadds.ldif -D "cn=admin,dc=lair,dc=lan"

Bam! Users have their appropriate hosts entries for using the LAIRstations (and the ability to personally check log files if they are in either of the HPC Experience classes).

The obvious complement to this functionality, which I will not pursue at the moment, is of course automated removals of these attributes (host, group) at the conclusion of the semester.

Step 7. Student data directories

Part of my new scheme involves having directories, one for each student, owned by me, readable/writeable by them, so they can deposit assignments, and get access to other course metrics (thinking a little like a personal /proc).

Script to create those is as follows:

for user in `cat /path/to/class.spring2010.* | grep '^[a-z]' | sort | uniq`; do
    mkdir $user
    chown wedge:$user $user
    chmod u=rwx,g=rwx,o=,+t $user
done

Now to actually hook it all up :)

January 16th, 2010

new user creation process

As I get geared up to implement some additional functionality for the semester ahead, I realized I should ensure that the new user creation process is all set, so I don't have to do stuff by hand.

First up, I went and created my class lists, as they currently exist. Hopefully enrollment doesn't change too much.

I then went and figured out which users would need accounts created. Spur of the moment scripts follow:

First up, I needed a list of users that didn't exist:

for student in `cat class.spring2010.* | grep '^[a-z]' | sort | uniq`; do
    id $student
done > /dev/null 2>output

This would produce a file containing the users I'd need to create, along with the rest of the id command's telling output saying the user doesn't currently exist. So let's strip that superfluous data out of there:

cat output | sed -e 's/^.*: \(.*\):.*$/\1/' > user.list

Then, we need to “convert” from just username, to newuser creation script format:

for user in `cat user.list`; do
    fullname=`grep --max-count=1 -B 1 -h $user class.spring2010.* | head -1 | sed -e 's/ /*/g'`
    echo "$user:$fullname" >> new.list
done

The -h and --max-count were first-time uses for me, and worked as advertised.

This last approach was neat, because I got to go back to the original data to source the information I'd need, using grep's look-back (-B) functionality to do so. Always a fun time.

Now, we are left with 'new.list', which is the desired format my script requires.

These things aside, I also went and integrated mail into the new user creation script, so appropriate Maildirs will be created.

I also changed the default home directory permissions to be more secure– we actually had unnecessary permissions set:

  • everyone is a member of group 'lab46'
  • www-data, on the web server, is *not* a member of group 'lab46'
  • to keep prying eyes out of each person's home directory, no group permissions whatsoever
  • to allow web access to work, the “search (x)” bit is on for world

This way, the web server can still pull data without permission problems, AND other Lab46 users cannot even descend into other users' home directories, OR web directories, because they're members of group 'lab46', and that group has no permissions on the directory.
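Expressed as a single command (a sketch; the real change lives in the user creation script), the resulting mode is effectively 0701:

chmod u=rwx,g=,o=x /home/$user    # no group perms; world gets search (x) only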

statistics

Since I've been playing with providing individual users access to web activity affecting their web space, I decided to do the same on a larger scale– provide web access collection for both the Lab46 wiki and my wiki.

I installed the logstats and statdisplay plugins on the Lab46 wiki, and just the logstats plugin on my wiki. I told the logstats plugin on my wiki to use the same log file that the Lab46 logstats plugin uses, so we'd be able to see total wiki activity.

I also configured it to omit any and all accesses by me, both as my 'wedge' user and from any of the IPs I would normally come in from. This is under the assumption that I will likely be a heavy accessor of the wikis as I develop content; since the intent is to gauge actual end user utilization, omitting myself from the logs should yield a much better picture of what is going on.

I created the stats page on the Lab46 wiki to show off some of the statistics collected and analyzed. I may lock this down… probably at the very least restrict anonymous viewers.

nullmailer fun

I had cause to look further into the nullmailer configuration, to enforce sane originating FROM addresses so that automatically generated messages (such as from cron) would not unintentionally be pushed through mail.corning-cc.edu (they're meant for us, and all that really does is clog up logs).

So, I did that. The next step is to create a 'lair-mail' package, which I've been meaning to do for some time, that basically drops in nullmailer with LAIR-specific configuration settings.
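For the record, the relevant nullmailer knobs look something like this (the exact values are assumptions, using our internal names):

# /etc/nullmailer/remotes -- relay everything to the LAIR mail host
mail.offbyone.lan smtp

# /etc/nullmailer/defaultdomain -- qualify bare local usernames
offbyone.lan

# /etc/nullmailer/adminaddr -- keep root/cron mail in-house instead of
# letting it wander toward mail.corning-cc.edu
wedge@offbyone.lan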

The following site I found useful:

wiki view of user web access logs

The user web access log viewing script I wrote still worked, but I enhanced it to make better use of screen width and hopefully be a bit more intuitive (I took out some extraneous information as well):

weblogs.sh
#!/bin/bash
#
# weblogs.sh - update per-user access and error log information
#
# Fixed problem with user error log, enhanced output 20100116 (mth)
# Initial release 20100115 (mth)
#
cd /var/www/data/pages/logs
for user in `/bin/ls -1 *.txt | cut -d'.' -f1`; do
    echo "~/public_html/ web server access and error logs, the 32 most recent entries for each." > $user.txt
    echo >> $user.txt
 
    echo "=====Common Server Codes=====" >> $user.txt
    echo "^  Code  ^  Description  ^" >> $user.txt
    echo "|  200  |OK  |" >> $user.txt
    echo "|  206  |Partial content  |" >> $user.txt
    echo "|  301  |Moved permanently  |" >> $user.txt
    echo "|  302  |Redirection  |" >> $user.txt
    echo "|  304  |Not modified  |" >> $user.txt
    echo "|  400  |Bad request  |" >> $user.txt
    echo "|  403  |Forbidden  |" >> $user.txt
    echo "|  404  |Not found  |" >> $user.txt
    echo "|  408  |Request timeout  |" >> $user.txt
    echo >> $user.txt
 
    echo "=====Access Log=====" >> $user.txt
    echo "^  Who  ^  What  ^  Where  ^  When  ^  How  ^" >> $user.txt
    grep " /~$user" /var/log/apache2/access.log | tail -32 | sed -e 's/^\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\) .*\[\([0-9][0-9]*\/...\/[0-9][0-9][0-9][0-9]\):\([0-9][0-9]:[0-9][0-9]:[0-9][0-9]\) -[0-9][0-9][0-9][0-9]\] "\([A-Z][A-Z]*\) \(.*\) \(.*\)" \([0-9][0-9]*\) .*$/|\1  |  \7  |\5  |\2 at \3|\6 \4|/g' >> $user.txt
    echo >> $user.txt
 
    echo "=====Error Log=====" >> $user.txt
    echo "^  Who  ^  What  ^  Where  ^  When  ^  Why  ^" >> $user.txt
    grep "/$user" /var/log/apache2/error.log | tail -32 | sed -e 's/^\[... \(...\) \([0-9][0-9]\) \(..:..:..\) \([0-9][0-9][0-9][0-9]\)\] \[\(.*\)\] \[client \([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\)\] \(.*\): \/home\/.*\(public_html.*\)$/|\6|  \5  |\8  |\2\/\1\/\4 at \3|\7  |/' >> $user.txt
    echo >> $user.txt
    chown wedge:www-data $user.txt
    chmod 660 $user.txt
done

If I were to enhance this any more, it would likely have to be broken up into several intermediate steps, as I'd like to incorporate additional processing on some of the messages.

Particularly some error messages that have an annoying “if they're there, they're there; if not, they're not” quality, preventing me from regex-grouping them, because the grouping either breaks when nothing is there (the more prevalent state) or gets lumped in with the existing structure.

I'm happy with it as it is though, so short of tweaking any newly discovered logic holes I missed (which will be apparent if you see a raw apache log line breaking up the nice tabled output), it probably will stay as is.

Theoretically preferred approach

If I do discover a straightforward way to obtain more interactive access to the logs, I will pursue that, but only under the following scenario:

  1. user requests page (available only to user, so no web robots can hit it mercilessly)
  2. page fires off script
  3. script generates output
  4. output replaces user's requested page
  5. user sees latest information

That way, no regular processing would need to take place… it would only occur on an “as-needed” basis, and therefore, considering the behaviors of the user base, will only be called upon rarely.

I view this as a “best of both worlds” solution, as it reduces our processing time spent on this resource, but also yields to the user a greater amount of interactivity with the web server logs most pertinent to them (ie as close as you can get without a “tail -f”).

I do not want to implement upfront per-user filtering of the web server logs, as that would be an ongoing process.

As it is, logrotate on www rotates apache's logs once per week, so the files will never get massive enough to push the current implementation, in the current environment, up against any processor-intensive bounds.

January 15th, 2010

Poked around a bit more with lighttpd, tweaking settings to make it operate as apache currently does on www.

Still needs a little bit more work, but it is mostly just a matter of forming the correct regular expressions.

A page of value:

User web server logs

I also got around to implementing a long-standing feature I've wanted– the ability for individual users to monitor their own web server access and error logs, without necessarily needing access to the actual logs on the web server.

I wrote a nifty new script:

#!/bin/bash
#
# weblogs.sh - update per-user access and error log information
#
cd /var/www/data/pages/logs
for user in `/bin/ls -1 *.txt | cut -d'.' -f1`; do
    echo "~/public_html/ web server access and error logs, the 32 most recent entries for each." > $user.txt
    echo "=====Access Log=====" >> $user.txt
    grep "/~$user" /var/www/log/access.log | tail -32 | sed -e 's/^\([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\) .*\[\([0-9][0-9]*\/...\/[0-9][0-9][0-9][0-9]\):\([0-9][0-9]:[0-9][0-9]:[0-9][0-9]\) -[0-9][0-9][0-9][0-9]\] "\([A-Z][A-Z]*\) \(.*\) \(.*\)" \([0-9][0-9]*\) .*$/^Who:|\1^What:|\5  ^When:|\2 \3^How:|\6 \4^Response:|\7|/g' >> $user.txt
    echo >> $user.txt
    echo "=====Error Log=====" >> $user.txt
    grep "/~$user" /var/www/log/error.log | tail -32 | sed -e 's/^\[... \(...\) \([0-9][0-9]\) \(..:..:..\) \([0-9][0-9][0-9][0-9]\)\] \[error\] \[client \([0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\)\] \(.*\)$/^Who:|\5^What:|\6  ^When:|\2\/\1\/\4 \3|/' >> $user.txt
    echo >> $user.txt
done

Which slices and dices the web logs up to snork out user-related info, and slaps it in dokuwiki-style tables.

Users can access their own logs at: http://lab46.corning-cc.edu/logs/username (where “username” is their Lab46 username).

To enable this feature, all an interested user has to do is create that page– they need to be logged into the wiki to do this, and can do something as simple as typing a space and then saving. This will create the file; then, when the script fires off, it will fill it with log goodness.

Removing all content from this page and saving will of course disable this feature, as the script only checks existing pages in the logs namespace (tried to make it an efficient use of resources).

I would have liked it to have been instantaneous… but despite my efforts at having dokuwiki not cache the page, SOMETHING was caching it… maybe apache, maybe the web browser. Either way, it was something I couldn't immediately control, so I resorted to this method. In all likelihood it will be more than adequate (to go from no user access to logs to something? That's not nothing :)).

I locked it down so ONLY the user can view their own logs. So they must be logged in to the wiki.

I also allowed members of the faculty group to access student logs, so they can help troubleshoot.

January 14th, 2010

Today it is nice and sunny out.

Having been rather impressed with lighttpd, I set about upgrading www, and having it use lighttpd instead of apache. My efforts today have largely revolved around that.

And I think it is going pretty well. I'm documenting my efforts, currently, at system_www, so I can retrace my steps in the future.

I also rigged up a job listing page off the Lab46 front page… I occasionally get passed job opportunities, so instead of not knowing who to send them off to, I'll just post them. Done.

Oh, in my pursuits, I found what looks to be a pretty nice tutorial for setting up lighttpd and tomcat… so when that student project fires up again:

January 13th, 2010

Finally got mailing lists operational and web pages accessible from the Lab46 site.

In the end, Mailman is too intelligent for its own good.

Also patched both wikis when a potential security (or as they call it, an “information leak”) flaw was discovered and fixed upstream. I'm loving it more and more… an accessible and open development community, providing not only fixes to discovered problems, but explaining what needs to be fixed, and even how to apply the fixes manually.

And that's the big thing, I think… the attention on how to do some tasks manually… while there are technological aids to assist in a lot of this, for those just starting out, getting a feel for how these processes work yields a much deeper understanding of the overall code base. It is this unintentional focus on pedagogy AND productivity that I find uniquely valuable.

Something I want to explore, at some point, is revamping www with lighttpd… I have a better grasp of its configuration than the behemoth Apache… and from the sounds of it, might inadvertently snag some performance improvements through knowledgeable enabling of functionality.

January 12th, 2010

I spent, I think, most of the productive day banging my head against the mailman-on-mail-server problem.

It is NOT at 100% yet.

Basically:

  • I got lighttpd running nicely on mail
  • I got mailman running on mail and talking nicely to lighttpd on mail
  • I got www's apache to “proxypass” web requests for mailman stuff TO lighttpd on mail

THAT is all fine and dandy. I even wrote most of it down.
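For reference, the proxying on www amounts to something like this in the apache config (a sketch; mod_proxy enabled, exact paths may differ):

# hand mailman web requests off to lighttpd on mail
ProxyPass        /mailman/   http://mail.offbyone.lan/mailman/
ProxyPassReverse /mailman/   http://mail.offbyone.lan/mailman/
# note: ProxyPassReverse only rewrites redirect headers, NOT the URLs
# embedded in page bodies -- which is exactly the symptom described below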

Now, the problem is, when you hit the mailman web stuff from lab46 (the full out lab46.corning-cc.edu), it clearly shows up as mail.offbyone.lan as far as mail/mailman is concerned. This is problematic as links are in the form of 'mail.offbyone.lan', instead of 'lab46.corning-cc.edu', so users external to the LAIR would not have much luck using this.

And that is my entire problem. If I could figure out how to get the mailman web interface to report ALL addresses as lab46.corning-cc.edu, I'd be all set.

At this point, I might just rip mailman right back out again, and try a fresh go at it. Because, really, it IS all mailman at this point, being all particular.

Aside from that, I spent a little time fleshing out some more of the hpc0 project info.

Two sites that have semi-useful information:

I've learned a heck of a lot more about performing mailman operations on the command-line, along with the mailman directory structure.

It truly is an effective mailing list manager, which I never disputed, but I still agree that it can be very picky.

January 11th, 2010

Aside from dokuwiki single sign on, I ended up spending some quality time developing content– today, specifically, for hpc0.

I fleshed out the hpc0 projects page hpc0_projects, and eventually ended up at 20 total projects. I decided to allow people to work in limited groups, and also limited the times projects can be chosen (somewhat first come, first served– once a group chooses a project, that is one less opportunity for that project to be chosen by another group).

Basically I'm just filling out the details for the projects (coming up with requirements, guidelines, and the like)… overall it is coming together.

Still need to really start on my UNIX overhaul– ie the quests… I should just sit down and do that, as I think once I get into it, the energy will just keep flowing.

Single Sign On

I realized to enhance intuitive usage of the wikis, there would need to be some form of shared authentication between the Lab46 wiki and my wiki.

As it turns out, the process is relatively straightforward and painless, as described here:

Basically, I edited inc/init.php on both wikis, and did the following (identical change to both):

//if (!defined('DOKU_COOKIE')) define('DOKU_COOKIE', 'DW'.md5(DOKU_REL.(($conf['securecookie'])?$_SERVER['SERVER_PORT']:'')));
if (!defined('DOKU_COOKIE')) define('DOKU_COOKIE', 'DW'.md5('lab46wikispace'));

Basically, alter how the md5 hash is generated. We need to ensure the hashes are identical. This does that.

Next up, identical changes to inc/auth.php, in association with calls to setcookie():

    if (version_compare(PHP_VERSION, '5.2.0', '>')) {
        setcookie(DOKU_COOKIE,'',time()-600000,DOKU_REL,'',($conf['securecookie'] && is_ssl()),true);
    }else{
        setcookie(DOKU_COOKIE,'',time()-600000,DOKU_REL,'',($conf['securecookie'] && is_ssl()));
    }

would become…

    if (version_compare(PHP_VERSION, '5.2.0', '>')) {
        setcookie(DOKU_COOKIE,'',time()-600000,'/','',($conf['securecookie'] && is_ssl()),true);
    }else{
        setcookie(DOKU_COOKIE,'',time()-600000,'/','',($conf['securecookie'] && is_ssl()));
    }

Basically, change all occurrences of DOKU_REL to '/'. There are 4 such occurrences in that file (two of which are listed above).

The following search and replace works nicely from vim… :%s/DOKU_REL/'\/'/

Bam!

Next, we need to deal with the data/meta/_htcookiesalt files… they need to be the same.

So what I did, after renaming them, is log into one of the wikis, causing this file to be created. I then copied it to the other wiki's data/meta/ directory.

This is all that was needed to get this functional… log in on one wiki, then switch to the other, and you are already logged in with your credentials. Pretty nice.

To extend the awesomeness, since I have the 'loglog' plugin installed, is I made a symlink from one to the other, so there's only one data/cache/loglog.log file… so log into either wiki, and it is recorded appropriately in this file. Nice!
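Condensed, those two filesystem tweaks look like this (the directions of the copy and symlink are from memory, so treat as a sketch):

# share the cookie salt (after logging into the Lab46 wiki once to generate it)
cp /var/www/data/meta/_htcookiesalt /var/www/haas/data/meta/_htcookiesalt
# one loglog file recording logins on both wikis
ln -s /var/www/data/cache/loglog.log /var/www/haas/data/cache/loglog.log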

Just one more thing to make this whole setup very nice to work with.

Something that's always annoyed me is how ownership shows up on symlinks… especially if you want it to be something other than what it shows up as.

As it turns out (at least on Linux systems– not all apparently implement this), you can change JUST the symlink's ownership, with the '-h' argument to chown or chgrp. Linux, along with most other OSes, does not implement any facility for changing (or even checking, really) the permissions on the symlink itself.

chown -h wedge:www-data symlink_in_question

January 10th, 2010

more plugin fun

So I spent the remainder of the day traipsing through the syntax.php of the doodle plugin, to apply a diff that would enable additional functionality (that in retrospect I may not actually need), but it allowed me to get a bit familiar with the internals of a particular dokuwiki plugin.

I also attempted to play with the 'task' plugin… both doodle and task have potential usefulness for me, but in their current (and separate) states, I am afraid I cannot take advantage of their functionality, at least as I was envisioning it for the hpc0 class.

What I want:

  • project title list
    • additional attributes
    • put it all in a table, much as I have hpc0_projects right now
  • last column be a button/drop down/checkbox or message
    • “action” would allow a user to “sign up” for a project
    • message would indicate something like “project is taken by max allowed users”
  • in the project's page itself, the task dialog is present, with assigned user's name
    • user can change their status (started, working, done, etc.)
    • when in “done” state, rig up some notification so I am aware of it

…or something like that.

Just seems like something neat, semi-automatable (so I don't have to deal with project delegation– let the students throw themselves at it).

I think I'll have to explore the bureaucracy and pagemod plugins some more… there could be some currently unseen potential there I could apply to something else. We'll see.

I think, though, the progressbar plugin can be useful down the road… feed its parameters with some php, be a nice visualization of student progress according to certain metrics.

dokuwiki broken images

I got to explore some more of the mysterious “broken images” problem I've experienced off and on. I had determined that the problem liked to occur when a sizing attribute was provided to the image, such as: image.png?240

If I leave the ?240 off, the image renders appropriately. So clearly, there is some sort of failure in the parsing code in dokuwiki… perhaps I even caused it with some combination of plugin installations… just to be on the safe side, I pruned out unnecessary/redundant plugins.

But even that seemed to prove not enough, as when I installed the stars plugin, the issue seemed to crop up once again… and as far as I could tell, there were no sizing attributes in use this time around.

So, I went and poked around, and eventually ascertained the situation through analyzing the end product page source– the image addresses weren't being rendered properly, leaving the browser scratching its head.

In lib/plugins/stars/syntax.php, I made a simple change inside function _Stars($d), where it constructs the img tags for full, half, and empty stars– I replaced DOKU_PATH with DOKU_URL. Bam! Problem solved.

I'll have to use this reasoning back with the original problem to see if I can narrow down where problems enter the picture. But for now, it works, and I am glad.

service status

I enhanced some bits on the Lab46 pages today, creating a long-time desired functionality– service status. The page does a nice red/green light indicating whether the service is up or down. It's pretty sweet actually, and uses the 'sos' plugin, installed just for this purpose.

securelogin

Supposedly logins are “secure” now… there was a securelogin plugin that I installed which implements client-side javascript encryption using public keys… it claims a checkbox will appear for such functionality on the login page… but no such thing materializes.

Somewhat modern plugin too… so unless something in the 2009-12-25 release of dokuwiki broke it, I don't know what's going on. Maybe it is cached…

ah, I think I see— it seems to do stuff in the URL string.

January 9th, 2010

LDAP groups

To facilitate some interactions I'd like to have with users in various classes, I decided the easiest way to accomplish my intended task was to create a group for each user, as many systems by default already do (user1 would be a member of group user1)… on Lab46 we've been putting everyone in group 'lab46', so it is generally a big happy community.

At any rate, I asked the mighty LDAP tree in the sky who the users were:

auth:~$ sudo ldapsearch -h 127.0.0.1 -x -b "ou=People,dc=lair,dc=lan" | egrep '^(uid|uidNumber):' > users

And after a little post-processing, I got the data I wanted in the format of 'userID:uidNumber', one per line, for each desired user (I omitted faculty, lair, and offbyone group members). For this example I'll use the file 'group' to contain all the group definitions (gidNumber the same as uidNumber for maximum convenience).
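The post-processing was roughly the following (a sketch; it assumes uid precedes uidNumber for each entry in the ldapsearch output):

# pair up the alternating uid:/uidNumber: lines, emit one posixGroup per user
paste - - < users | awk '{ u=$2; g=$4;
    print "dn: cn=" u ",ou=Group,dc=lair,dc=lan";
    print "objectClass: posixGroup";
    print "cn: " u;
    print "gidNumber: " g;
    print "memberUid: " u;
    print "" }' > group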

Then, I asked of the mighty sky-forming LDAP tree: may it please thee to groupify these users.

And thus it was done:

auth:~$ sudo ldapadd -x -D "cn=admin,dc=lair,dc=lan" -W -f group

And much rejoicing was had, and much celebrating performed.

inotify for the greater good

As I took my unix class preparations a step further, I poked at inotify, something I discovered half by accident this morning that appeared as if it could do some interesting things.

And sure enough, in short order I had the following script:

#!/bin/bash
#
# watch for new files appearing in the quest directory, and respond
#
watchpath="/usr/local/share/expensiveforest"
inotifywait -m --format '%f' -e create $watchpath | while read file; do
	# determine who created the file, and which tty their 'quest' shell is on
	user="`/bin/ls -lo $watchpath/$file | sed -e 's/  */:/g' | cut -d':' -f 3`"
	tty="`/bin/ps --user $user | grep quest | sed -e 's/  */:/g' | cut -d':' -f 2`"
	# display the ASCII art lobster on their terminal
	cat /root/lobster > /dev/$tty
	echo "$user at /dev/$tty has fallen into our trap..."
done

If one runs 'quest' (which is just a copy of bash), and then wanders into the /usr/local/share/expensiveforest directory on www and creates a file there, an ASCII art lobster is nearly instantaneously displayed to their terminal.

This will allow me some added interactivity with students as they perform their 'quests' in the unix class… making “offerings”, “looking around”, removing files, moving files. The events I have at my disposal are:

  • access: file/directory read
  • modify: file/directory write
  • attrib: timestamps, file permissions, other attributes
  • close_write: file closed after being open in writeable mode
  • close_nowrite: file closed after being open in readonly mode
  • close: file was closed
  • open: file was opened
  • moved_to: file was moved into watched directory
  • moved_from: file was moved out of watched directory
  • move: file was moved
  • create: file was created
  • delete: file was deleted
  • delete_self: file was deleted, file/directory no longer being monitored
  • unmount: filesystem where we were watching was unmounted and therefore, no longer being watched
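Watching several of those at once is just a matter of stacking them up, e.g.:

# report the event name alongside the filename for quest-relevant activity
inotifywait -m -e create,delete,moved_to,moved_from --format '%e %f' $watchpath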

inotify page on sourceforge

I'll have to play with this, a lot more.

mailing lists, the next attempt

After encountering some web pages illustrating how to do it, I ended up deciding to migrate mailman over to mail… thereby solving the schism of mailservers I'd have to maintain (the essentially “mailing list only” postfix on www can now go away, leaving www to just do mail forwarding to mail).

So, thttpd on mail, mailman on mail. I've yet to set it up, but I'm a lot more confident about doing that than trudging into unexplored or meticulously complicated postfix configs (I just want it to work, gosh darn it, with some decent level of resiliency to changes).

I'm pretty confident this solution will work out well- web access via Lab46 web page for new subscriptions, and working mail access to mailing lists. Win-win all around.

January 8th, 2010

When cron ran my student page generation script, the output differed, causing none of the usernames to actually be picked up, thereby rendering a rather useless wiki page.

I redid the loop logic of the script to just do a '/bin/ls -1' of /home, and then run 'groups' on each iterated user, excluding them if they were in certain groups (lair, offbyone, faculty).
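The redone loop amounts to something like this (a sketch of the approach, not the script verbatim):

for user in `/bin/ls -1 /home`; do
    # skip anyone in a non-student group
    if [ -z "`groups $user 2>/dev/null | egrep '(lair|offbyone|faculty)'`" ]; then
        echo "$user"
    fi
done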

I also gave the script intelligence to determine the semester:

fall="(August|September|October|November|December)"
spring="(January|February|March|April|May)"
summer="(June|July)"
 
fchk="`echo $fall | grep $month | wc -l`"
schk="`echo $spring | grep $month | wc -l`"
if [ $fchk -eq 1 ]; then
    semester="fall$year"
elif [ $schk -eq 1 ]; then
    semester="spring$year"
else
    semester="summer$year"
fi

I was originally going to do it another way, hence the egrep-style regexes in the seasonal variables… but it works just as well the way I ended up doing it (and perhaps there's still a possibility for some as-yet-unfathomed “future use”).

I also added some additional logic to customize the individual student sections— so those without journals or web pages wouldn't have an empty table cell (it just wouldn't be included). So it looks a bit sharper.

The only thing it doesn't do, and this is exclusively in columned output, is give both left and right columns equal real estate. The right column doesn't know it has half the window to fill, and only fills as much as it needs to. I'll fix it at some point.

Installed some more plugins to synchronize the lab46 wiki and my wiki.

I also redid the structure of the student journals. They are now structured as:

  • journal namespace
    • semester namespace
      • user namespace
        • journal content (start, intro, week1, week2, …, weekN)

This way, journals are preserved from semester to semester, and if I implement it right, will still be entirely independent of any contentious 'week.php' in the same position. I hope to have PHP logic auto-detect the semester, and grab from an appropriately named semester file containing the same variables. Then I can rewrite all my scripts to do the same thing– identify the semester, and update the appropriate file.

Still experimenting. Attempting some runs with the phpinc plugin, which claims to do what I want (and is also good in that I don't have to enable PHP use on the wiki proper, so it is more secure).

Actually, the phpwikify plugin is exactly what I want… it allows for embedded PHP, and parses the result through the dokuwiki syntax parser, so I can have PHP generate section headings and stuff and they'll be rendered as such.

The only downside is that the edit button functionality seems to get rather confused when it is called upon to edit a section that doesn't actually exist in the real file (somewhat problematic). I'm looking to see if I can find a way to disable the edit buttons on just that page, but it looks to be somewhat embedded within dokuwiki.

So close. So very, very close. If I can get this working adequately, I can move student journals entirely wiki-side, and even start incorporating custom-php code in other places.

I've pretty much got it (not sure why it decided to start working), but the first section (the intro on the journal) seems to have a different header level than the others… even though there's no difference in the code. So this needs to be figured out somehow (maybe forcefully insert a “makeitwork” header or something).

Also, I might want to check out the “command” plugin to see if it can do anything useful I might want to take advantage of.

January 7th, 2010

A hodge-podge sort of day today. Mosey-ed in LDAP-land, deleting users, adding groups, sprucing up various LDAP fields (capitalizing first letters of names).

Didn't touch mail today, but I did some rather intense overhauls of the wikis:

  • My “content” wiki is now my “haas”/“wedge” main web site… (aka “my” wiki)
    • I moved all my ~/public_html files into ~/public_html/archive
    • redid the Apache rewrite rules
    • set up my wiki to authenticate with LDAP
    • set up account-needed read access (aside from my general home page)
    • enabled “You are here” breadcrumbs
  • The “Lab46Web” wiki is now in the base of /var/www (now the “Lab46” wiki)
    • Other files moved into my ~/public_html/archive directory
    • redid the Apache rewrite rules
    • it was already LDAP'ed, so I extended it with various group-level ACLs
    • still world-readable, but the ability to modify stuff is now much more prevalent
    • some theme modifications, enabled “You are here” breadcrumbs
  • The “notes” wiki was migrated in its entirety to the “Lab46” and retired
    • all data that was on notes was migrated into a “notes” namespace on the Lab46 wiki.
    • should I need it, I moved it to ~/public_html/archive

I fixed any immediate-looking brokenness caused by moving stuff around, and redid the Lab46 Student Pages with a new wiki page.

Lab46 Students Page

I wanted it to still be based on dynamic-ish content, so I did some digging and found where the file is located (as it turns out: /var/www/data/pages/user/start.txt), and wrote a nifty little shell script, running every 4 hours out of cron, that generates it. Things it looks for:

  • home directory has 'lab46' group ownership (using this to differentiate students from everyone else)
  • if the user is not a student, their home directory has other group ownership
  • I made a 'faculty' group, and put faculty in there
  • I made an 'offbyone' group, and put outliers in there
  • If the student is in one of my class groups (unix, asm, etc.), they get a journal link
  • all users get a personal wiki page link
  • if the student has a readable ~user/public_html/index.(php|html), they get a web page link
  • users with journals get listed first
  • users with web pages get listed next
  • the remaining users get listed last (assumed less active)

Script is at: /usr/local/sbin/studentlistcreate.sh
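For future reference, the overall shape of the logic looks something like this (a minimal sketch, not the deployed script; the class group names, link formats, and the lab46 web URL here are assumptions):

#!/bin/sh
##
## studentlistcreate.sh -- illustrative sketch of the student page
## generator (the real one lives at the path above)
##
page="/var/www/data/pages/user/start.txt"
tmp="`mktemp`"

for user in `/bin/ls -1 /home`; do
    # 'lab46' group ownership on the home directory marks a student
    group="`stat -c '%G' /home/$user`"
    [ "$group" = "lab46" ] || continue

    entry="  * [[user:$user|$user]]"

    # membership in one of my class groups earns a journal link
    for class in unix asm hpc0 hpc2 sysnet; do
        if id -Gn $user | grep -qw "$class"; then
            entry="$entry ([[journal:$user|journal]])"
            break
        fi
    done

    # a readable index page in public_html earns a web page link
    if [ -r /home/$user/public_html/index.php ] || \
       [ -r /home/$user/public_html/index.html ]; then
        entry="$entry ([[http://lab46/~$user|web page]])"
    fi

    echo "$entry" >> $tmp
done

# (ordering users by journal/web page/neither omitted for brevity)
mv $tmp $page

The every-4-hours bit would just be a cron entry in /etc/cron.d along the lines of “0 */4 * * * root /usr/local/sbin/studentlistcreate.sh” (times assumed).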

Some things I still need to do in this area:

  • finalize journal location (yes in its own journal namespace)
    • need to separate by semester
    • therefore need to update studentlistcreate.sh and have it be aware of the semester
  • other web-accessible resources I should highlight on the students page?

I should also update the faculty page to provide wiki links, in case anyone wishes to take advantage of that.

LDAP deletion of users

Worthy of note, since I can't say I've ever done this.

To complete the user deletion process I began a couple days ago, I ended up having to do the following:

I took the list of users and contorted it to be ldapmodify compatible. Basically, if I had a file with a list of users:

user1
user2
user3

That were slated for deletion, I would need the following in a text file:

dn: uid=user1,ou=People,dc=lair,dc=lan
changetype: delete


dn: uid=user2,ou=People,dc=lair,dc=lan
changetype: delete


dn: uid=user3,ou=People,dc=lair,dc=lan
changetype: delete

I read somewhere that you might need two (2) blank lines after each entry… this seems excessive (LDIF normally just wants a single blank line separating records), but I didn't stop to verify it.
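The contortion itself is easily scripted, for what it's worth… a quick sketch, assuming the user list is sitting in a file called users.list:

while read user; do
    echo "dn: uid=$user,ou=People,dc=lair,dc=lan"
    echo "changetype: delete"
    echo ""
done < users.list > userdel.list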

So anyway, with a file in that format in hand, I did the following:

auth:~$ sudo ldapmodify -x -W -f userdel.list -D "cn=admin,dc=lair,dc=lan"
Enter LDAP Password: 
deleting entry "uid=user1,ou=People,dc=lair,dc=lan"

deleting entry "uid=user2,ou=People,dc=lair,dc=lan"

deleting entry "uid=user3,ou=People,dc=lair,dc=lan"

auth:~$ 

Today has certainly been an LDAP sort of day.

dokuwiki namespaces

I had a bright idea of segmenting a wiki into different categorical areas, each by namespace. So that meant I had to figure out how to do namespaces in dokuwiki.

As it turns out, something like: [ [newnamespace:newpageinnamespace]] (without the intentional space… it's only there so this page doesn't try to render it as a link)

Is all you need. Once you make *a* page, the namespace exists. And then you can make many other pages in that namespace, simply by prefixing the 'newnamespace:' before it.

And how this all translates on the backend? Namespaces are directories. Pages are files.

They're all located under data/pages/ on the dokuwiki directory tree. Directories by the name of the namespace, and files appended with .txt for the name of the page. This makes a lot of things very possible.
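For example, the page made above would show up on the filesystem as follows (using the notes wiki's install location):

www:~$ ls /var/www/notes/data/pages/newnamespace/
newpageinnamespace.txt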

LDAP groups

To test some access control functionality in dokuwiki, I needed to roll out some groups (namely things like 'unix', 'asm', etc.) so I could theoretically put students in them and see if I could control how they could access certain pages.

So, I created the 'unix' group in LDAP:

dn: cn=unix,ou=Group,dc=lair,dc=lan
objectClass: posixGroup
gidNumber: 1730
cn: unix
memberUid: wedge
memberUid: kinney
memberUid: squirrel

Turns out, we had an incorrect notion of group membership stored in our LDAP tree for years (ie a single “memberUid: wedge,kinney,squirrel” attribute), which is a no-no– each member gets its own memberUid attribute.
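Loading the corrected group in is just an ldapadd away (assuming the LDIF above is saved as, say, unixgroup.ldif):

auth:~$ sudo ldapadd -x -W -D "cn=admin,dc=lair,dc=lan" -f unixgroup.ldif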

January 6th, 2010

I continued to explore the general brokenness of mailman more-or-less having to reside on www, while the primary mailserver is of course hanging out on mail.offbyone.lan; people have gotten this to work… and I do somewhat understand the process that needs to be undertaken to get it working, but I also don't feel like doing things exactly in that way. So I've been exploring other avenues of possibility.

I also uncovered what looks to be a rather detailed tutorial on setting up mail, using Debian Lenny no less! It covers mailman configuration as well, along with setting up authenticated services, using postfix and dovecot (how perfect is that?). The only drawback is that it relies on a MySQL backend for storing pretty much all its data. Not that I couldn't do that, I just don't want to… so I've been following along, and adapting to flat file configuration.

The tutorial is here: http://workaround.org/ispmail/lenny/

As a testbed for following through the tutorial, I went and set up mail.lair.lan, a new VM residing on ahhcobras. This will serve the lair.lan side of the network, and branch out to provide specific e-mail services for the g7n.org domain that I happen to possess. It will also cooperate with mail.offbyone.lan in providing general mail functionality in the LAIR (maybe not universal IMAP access, but could hopefully share in SMTP services).

I must say, I impressed myself as I was combing through the config files, deleting whole swaths of commented and unnecessary lines… I was doing offset math in my head and being quite accurate about it (36 through 109? That's 9 less than 36– 27, from 100… 80 - 7 = 73). I should start reading the fast math book again.

I also learned how to crop pages of PDF files in OS X's Preview.app, which is quite awesome, and gives me one less reason to ever need Adobe Acrobat (which I'd have to put up a small fight with to even get working, thanks to Adobe's charming licensing restrictions– although it's not as bad as Photoshop CS4; I actually had Acrobat working with much less of a hassle, I've just forgotten how exactly I did that :) ).

Basically, to crop:

  1. Open PDF in Preview.app
  2. Set sidebar in Thumbnail view (if it isn't already)
  3. Choose the 'Select' Tool (the outline box)
  4. Drag a box on a selected page, you can adjust the borders afterwards
  5. Clicking a thumbnail of another page will superimpose the select box on that page
  6. Check all the pages you wish to crop with this particular crop box, to make sure the box lands correctly on each one
  7. When satisfied with the crop box, select all the pages you wish to crop
  8. Command-K will crop
  9. Bam! Adjusted PDF. Save it and you're good to go.

I also discovered how to merge different PDF documents together, also using Preview.app …

  1. Open PDF #1
  2. Open PDF #2
  3. Have thumbnail view going in sidebar on both
  4. Drag and Drop the page(s) from one over a page on the other. A rectangle should form. Release!
  5. Page will join the other document, and can be dragged and dropped to appropriate page order.
  6. Save and we're good to go.

Inadvertently, I experienced a bug in Calibre– when putting one of these Preview.app cropped PDFs in its book library and subsequently sending it to an eReader, the PDF will render with all blank pages. Apparently Preview.app adds some metadata that Calibre gets confused with.

The quick and easy solution? Manually put the un-Calibre'd cropped Preview.app PDF on the eReader yourself. Works great.

I would hope so, because that's the whole reason I set about figuring out how to crop a PDF in the first place: so it would be more readable on my Sony Reader.

I should explore setting up a Blog under Dokuwiki… I could turn this status page into one… giving an overview of my activities (something I've been meaning to get back into for a while now).

January 5th, 2010

As I found myself preparing to debug some of the lingering mail issues, I decided to undertake the great stale user purging of 2010!

In all, I was able to eliminate ~166 accounts (still need to remove them from LDAP).

My efforts are documented here: user_deletion

Some logic was also created to assist in the creation of new users (all of which still needs to be effectively deployed).

January 4th, 2010

On a whim, I set up a new virtual machine called mail.offbyone.lan

It is now the mail server for the LAIR. It runs postfix.

It also runs dovecot, and provides functional IMAP services.

User mail has been converted over to the maildir format, and hosted exclusively on mail. Users must configure their mail user agents to be IMAP clients in order to get/send mail.

All existing Lab46 users have been converted to maildir, and pine configs updated to be imap clients.
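For pine, that should just amount to pointing a couple of .pinerc settings at the new server… something along these lines (option values assumed; adjust to taste):

inbox-path={mail.offbyone.lan}inbox
smtp-server=mail.offbyone.lan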

My efforts are largely documented here: system_mail.offbyone.lan

January 3rd, 2010

I explored the wonderful capabilities of the dokuwiki “include” plugin today, re-establishing a lot of the fine modularization I enjoyed with my custom PHP setup.

Created the ASM syllabus, and modularized the UNIX one.

Put in course info, descriptions, objectives, and topics for all my Spring 2010 courses. Since the wiki maintains revisions, I no longer need to keep separate copies of old semesters– I can simply jump back in time, so I abandoned any “_s2010” naming and am just sticking with the root designations.

Fixed the color errors in the wrap plugin's CSS. I should document that.

on backups

Whipped up a more regular archiving scheme: www's /var/www directory gets archived weekly, and my content wiki nightly (unless it is time to do the weekly archival, in which case it just gets folded into the big one), with the archives stored on backup.
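In cron terms, that amounts to a couple of entries on the scheduling side, roughly like the following (script names and times here are placeholders, not the actual deployment):

##
## weekly /var/www archive (Fridays), nightly content wiki archive
##
30 2 * * 5      root    /usr/local/sbin/archive-www.sh
15 2 * * 0-4,6  root    /usr/local/sbin/archive-contentwiki.sh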

Mostly as a thinking exercise (and to work out material for possible student projects), I'm going to roll out a simple storage-freeing system for the backups, which will work something like the following:

  1. Friday hits, time to do big backup
  2. Perform big backup
  3. After it is complete, generate a list of previous weekly backups
  4. ensure the list is at least 4 entries in size
  5. do some quick math to determine how many archives exist beyond the 4 (existing - 4)
  6. head that number from the “history” file that is created on each backup
  7. for each value that pops up (the oldest entries, since we're appending new ones to the end), delete
  8. also generate a list of previous daily backups
  9. delete all but the most recent (each Friday)

This logic could also be deployed for the general lair-dump clients as well… I think, though, I may end up implementing it on backup itself, as a weekly (monthly, actually) cron job, so it would scour through ALL the /dump/* directories, analyzing each “history” file, and performing this logic, thereby automatically pruning out older archives.
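A sketch of what that pruning might look like (pure illustration; the paths, and the assumption that each backup appends its archive filename to “history” oldest-first, are mine):

#!/bin/sh
##
## prune-archives.sh -- sketch of the pruning logic described above
##
keep=4
for dir in /dump/*; do
    cd "$dir" || continue
    total="`wc -l < history`"
    if [ "$total" -gt "$keep" ]; then
        excess="`expr $total - $keep`"
        # oldest entries sit at the top of the history file
        for archive in `head -n $excess history`; do
            rm -f "$archive"
        done
        tail -n $keep history > history.new && mv history.new history
    fi
done
exit 0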

my lab46 homepage

On a whim, I also wiki-ized my lab46 faculty page… it now lives in my “content” wiki as well.

The conversion went quite quickly, and I even kicked over outlying index.php files, so anyone trying to reach my Lab46 homepage will get the new wiki version.

January 2nd, 2010

notes wiki

I added a third wiki, 'notes', to join my 'content' wiki and the 'lab46web'. All three are dokuwikis.

content and lab46web are locked down– authorized edits only.

notes is hooked up to LDAP to allow any Lab46 user to log in and make changes.

I should write down my dokuwiki installation method for future reference.

Configuring dokuwiki for LDAP user authentication

The 'notes' wiki allows for Lab46 users to log on and edit. Therefore, it would need to be hooked up to do LDAP authentication against the LAIR LDAP server.

The following configuration is added to /var/www/notes/conf/local.php to enable this to happen:

$conf['authtype']                       = 'ldap';
$conf['passcrypt']                      = 'ssha';
$conf['auth']['ldap']['server']         = 'ldap://auth:389';                              
$conf['auth']['ldap']['usertree']       = 'ou=People,dc=lair,dc=lan';
$conf['auth']['ldap']['grouptree']      = 'ou=Group,dc=lair,dc=lan';
$conf['auth']['ldap']['userfilter']     = '(&(uid=%{user})(objectClass=posixAccount))';
$conf['auth']['ldap']['groupfilter']    = '(&(objectClass=posixGroup)(|(gidNumber=%{gid})(memberUID=%{user})))';
$conf['auth']['ldap']['version']        = 3;

After this is saved, refresh the page, and you should be good to go.

I referenced http://www.dokuwiki.org/auth:ldap_openldap in setting this functionality up.

Forcing a user to be the superuser

After enabling LDAP authentication, the 'admin' user no longer worked. So I needed to force-enable an existing user to be the admin… a quick Google search dug up the needed local.php setting:

$conf['superuser'] = 'wedge';

January 1st, 2010

dokuwiki calendar plugin enhancements

I adapted the dokuwiki 'wikicalendar' plugin to output months in a more sane format (ie Sunday being the leftmost day listed, with Saturday being the rightmost).

Default behavior is to have Monday be the left-most. So, to make it align its output with that of the 'cal' command, I made the following changes.

First up:

  // (CURRENT) wikicalendar/lang/en/lang.php:
 
  $lang['days'][7]    = 'Sunday';
  $lang['days'][1]    = 'Monday';
  $lang['days'][2]    = 'Tuesday';
  $lang['days'][3]    = 'Wednesday';
  $lang['days'][4]    = 'Thursday';
  $lang['days'][5]    = 'Friday';
  $lang['days'][6]    = 'Saturday';
  // (ORIGINAL) wikicalendar/lang/en/lang.php:
 
  $lang['days'][1]    = 'Monday';
  $lang['days'][2]    = 'Tuesday';
  $lang['days'][3]    = 'Wednesday';
  $lang['days'][4]    = 'Thursday';
  $lang['days'][5]    = 'Friday';
  $lang['days'][6]    = 'Saturday';
  $lang['days'][7]    = 'Sunday';

Basically, I physically put Sunday first in the array order. Notice the array element is still 7… this is so I would not have to go through and redo the logic… it was used to Sunday having a value of 7, so I kept it. Since all we're interested in is output, that's a very minor thing indeed.

The next change:

(CURRENT) wikicalendar/syntax.php:

219: if($wd == 7) $out .= '<tr>';
...
223:     while($wd < ($this->MonthStart)) {
(ORIGINAL) wikicalendar/syntax.php:

219: if($wd == 0) $out .= '<tr>';
...
223:     while($wd < ($this->MonthStart-1)) {

And here, changing where the calendar row begins (now on '7' – Sunday), and making sure the initial blank days are filled in appropriately ($this->MonthStart instead of $this->MonthStart-1).

All in all, a relatively minor change, and only affecting the output… internal logic remains the same.

lab46web wiki installed

To handle the main lab46 web presence and related documentation & tutorials, I have set up an additional dokuwiki serving out of the /var/www/lab46web/ directory on www.

Closed to designated users only (ie no LDAP authentication).

enabling apache mod_rewrite on/for dokuwiki

To make things look extra pretty in the address bar, I looked up how to enable URL rewriting… the best way, it would seem, is to enable it on the web server proper– this means apache.

As a result of exploring this, I ended up switching www over to using apache2 instead of apache1, as getting things like the rewrite module working was just so much quicker.
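Under apache2 on Debian, enabling the module really is about as painless as it gets:

www:~$ sudo a2enmod rewrite
www:~$ sudo /etc/init.d/apache2 force-reload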

specific apache2 config in /etc/apache2

On the web server side, in the case of notes and content, I had to add directory stanzas to /etc/apache2/httpd.conf:

<Directory /var/www/haas/content>
    AllowOverride AuthConfig FileInfo Limit
</Directory>
 
<Directory /var/www/notes>
    AllowOverride AuthConfig FileInfo Limit
</Directory>

which enable the use of .htaccess files in those directories.

In the case of the lab46web wiki, because it is the primary serving point for www now, I had to perform the configuration change under the /etc/apache2/sites-available/default config file, under the primary VirtualHost entry. The specific directory stanza looks something like this:

    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride AuthConfig FileInfo Limit
        Order allow,deny
        allow from all
        # This directive allows us to have apache2's default start page
        # in /apache2-default/, but still have / go to the right place
        RedirectMatch ^/$ /lab46web/
    </Directory>

The big trick here is the “AllowOverride AuthConfig FileInfo Limit” in all cases.

dokuwiki-specific .htaccess config

Next, back in the base directory of each dokuwiki install, I copied the .htaccess.dist file to .htaccess, edited it, and enabled the following options:

RewriteEngine on
# RewriteBase is the wiki's URL path: "/notes", "/lab46web", or "/haas/content"
# (apache doesn't allow trailing comments on directive lines, so the note goes here)
RewriteBase /notes
RewriteRule ^_media/(.*)              lib/exe/fetch.php?media=$1  [QSA,L]
RewriteRule ^_detail/(.*)             lib/exe/detail.php?media=$1  [QSA,L]
RewriteRule ^_export/([^/]+)/(.*)     doku.php?do=export_$1&id=$2  [QSA,L]
RewriteRule ^$                        doku.php  [L]
RewriteCond %{REQUEST_FILENAME}       !-f
RewriteCond %{REQUEST_FILENAME}       !-d
RewriteRule (.*)                      doku.php?id=$1  [QSA,L]
RewriteRule ^index.php$               doku.php

When all is said and done, restart apache2 and all should be working.
