
October 31st, 2010

dokuwiki mth template

I modified the look of one of my UNIX labs, and noticed an inconsistency in its display in the web browser.

I set about fixing it, and here is what I changed…

In /var/www/haas/lib/tpl/mth/design.css on www, look for the div.page definition, which was originally:

div.page {
  margin: 4px 2em 0 1em;                                                                  
  text-align: justify;
}

and I changed the right margin from 2em to 1em (in the margin shorthand, the values run top, right, bottom, left, so the second one is the right margin):

div.page {
  margin: 4px 1em 0 1em;                                                                  
  text-align: justify;
}

Saved, refreshed, and voila! Just what I wanted.

October 30th, 2010

Makefile fun

On the Data Structures backgammon project, I rolled out some more Makefile tricks… this time making the output appear more streamlined, while using ifneq conditionals to restore the default verbose output when debugging.

Pretty darn cool.

Here's an example of the Makefile for the node class:

CXX = g++ $(CXXFLAGS) $(INC) $(LIBS)
AR = ar
CXXFLAGS = -Wall                                                                          
INC = -I ../include/
LIBS =
SRC = create.cc destroy.cc accessor.cc
OBJ = $(SRC:.cc=.o)
BIN = ../lib/libnode.a
all: $(SRC) $(BIN)

debug: CXX += -DDEBUG -g
debug: DEBUG = debug
debug: $(SRC) $(BIN)

$(BIN): $(OBJ)
ifneq ($(MAKECMDGOALS),debug)
    @printf "[AR]  %-20s ... " "$(BIN)"
    @$(AR) rcs $(BIN) $(OBJ) && echo "SUCCESS" || echo "FAIL"
else
    $(AR) rcs $(BIN) $(OBJ)
endif

.cc.o:
ifneq ($(MAKECMDGOALS),debug)
    @printf "[B]   %-20s ... " "$<"
    @$(CXX) -c $< && echo "OK" || echo "FAIL"
else
    $(CXX) -c $<
endif

clean:
    rm -f *.o $(BIN) core

default: $(BIN)

Getting the conditionals to work at first proved a little troublesome, but after some variations I finally got it by keying off $(MAKECMDGOALS).

The ifneq conditionals are evaluated when the Makefile is parsed, so a target-specific variable (like my DEBUG one above) is never visible to them… $(MAKECMDGOALS), which make itself sets from the command line, is. I may have been getting tripped up by that, along with the variables make defines by default.
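Based on the printf formats above, the streamlined output of a plain make looks something like this (prompt, path, and exact spacing are illustrative):

lab46:~/src/backgammon/src/node$ make
[B]   create.cc            ... OK
[B]   destroy.cc           ... OK
[B]   accessor.cc          ... OK
[AR]  ../lib/libnode.a     ... SUCCESS

whereas make debug falls through to the else branches, echoing each g++ and ar invocation in full, as make normally would.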

plan9

More Plan9 playing… I extended some of my documentation pertaining to updating the system and installing new software.

It turns out that I need to give MORE memory to the fileserver… the venti process has consumed all the memory:

% out of physical memory; no swap configured
9: venti killed: out of memory
% 

This has been causing me grief throughout the afternoon as I attempt to build libssh2 (in order to have a Plan9 ssh2 client).

Things are gradually making more sense… I succeeded in getting a netbooting CPU server.

October 29th, 2010

new VMs

The DSLAB folk have been playing around with getting a monitoring resource up, and they have: Zabbix. I offered them VM space in the LAIR to deploy it, as it is a distributed resource… and we now have: monitor.offbyone.lan

Still a work in progress.

Because Zabbix uses MySQL, I also offered db.offbyone.lan as a client, in exchange for getting it rebuilt (we're finally going to shed our “legacy” 192.168.10.x stuff!). So I've taken the old db down (after backing up all the old mediawiki content).

October 26th, 2010

netstat fun

In the logs, I had been noticing recently a semi-regular error taking place:

Oct 26 21:32:10 irc ngircd[846]: lab46.corning-cc.edu:59686 (10.80.2.38) is closing the connection ... 
Oct 26 21:32:10 irc ngircd[846]: Shutting down connection 18 (Socket closed!) with lab46.corning-cc.edu:59686 ... 
Oct 26 21:32:10 irc ngircd[846]: Client unregistered (connection 18): Socket closed! 
Oct 26 21:32:10 irc ngircd[846]: Connection 18 with lab46.corning-cc.edu:59686 closed (in: 0.1k, out: 0.1k). 

With the UNIX students working on their IRC bots, I suspected there was likely something awry in a config… the only problem: a check of the running processes did not immediately reveal any definite culprits.

So, with the help of a Google search, I ended up discovering the nifty “-p” argument to netstat. What I did:

lab46:~$ netstat -nap | grep 59686
tcp     0    0 10.80.2.38:59686    10.80.2.12:6667     ESTABLISHED 20048/python
lab46:~$ 

To summarize the flags: -n keeps addresses and ports numeric (no DNS lookups), -a shows all sockets, and -p shows the PID and name of the program owning each socket.

And sure enough, now I had a PID, and the python process confirmed it was an irc bot (they're playing with Phenny, a Python IRC bot)… a quick ps aux with a grep later, and I had the situation figured out.

Shortly thereafter, the problem ceased appearing in the logs :)

commitchk.sh

There was a loophole in my wiki chk logic: if someone performed an inordinate number of wiki edits, older revisions would get pushed off the page (it only presents them in units of 24, by default)… after looking into it, I implemented some logic that also grabs the subsequent 24 (and the next 24… etc.) in a loop until the $start_date variable is greater than the revision date… pretty nifty.

Script follows:

#!/bin/bash
#
# commitchk - script to ensure that the appropriate number of commits took place.                                                                                                                                                                                                 
#
# 20101026 - logic loophole in wiki chk... now scans older revisions (mth)
# 20101024 - logic error in score calc elif... none and some got lumped. Fixed (mth)
#            also added wiki edit check logic (scoring more flexible- cli args)
# 20101023 - initial version (mth)
 
##
## Grab operating parameters from command-line
##
if [ "$#" -lt 4 ]; then
    echo "ERROR. Must provide at least 4 arguments."
    exit 1
fi
 
start_date="$1"
end_date="$2"
num_commits="$3"
num_wiki_edits="$4"
debug="$5"
 
##
## Change to subversioned directory tracking repository in question
##
cd /home/wedge/src/backgammon
 
##
## Get the latest information
##
svn update
 
#################################################################
## Obtain data to process
#################################################################
 
##
## Check for wiki update
##
rm -f /tmp/wikichk.out /tmp/wikichk.tmp
touch /tmp/wikichk.out /tmp/wikichk.tmp
chmod 600 /tmp/wikichk.out /tmp/wikichk.tmp
 
loop=1
item=0
 
while [ "$loop" -ne 0 ]; do
    wget -q -O - "http://www/notes/data?do=revisions&first=${item}" | egrep '(^2010|^[a-z][a-z0-9]*</span>)' | sed "s/^\(`date +%Y`\)\/\([0-9][0-9]\)\/\([0-9][0-9]\) \([0-9][0-9]\):\([0-9][0-9]\).*$/\1\2\3\4\5:/g" | sed 'N;s/\n//; s/<\/span>//g' | grep -v 'wedge' >> /tmp/wikichk.tmp
    echo "--> http://www/notes/data?do=revisions&first=${item}"
    let item=$item+24
    otime="`cat /tmp/wikichk.tmp | tail -1 | cut -d':' -f1`"
    stime="${start_date}1012"
    if [ "$stime" -gt "$otime" ]; then
        loop=0
    fi  
done
 
 
 
##
## Check for repository commits
##
rm -f /tmp/commitchk.out /tmp/commitchk.tmp
touch /tmp/commitchk.out /tmp/commitchk.tmp
chmod 600 /tmp/commitchk.out /tmp/commitchk.tmp
 
svn log | grep '^r[1-9][0-9]*' | grep -v wedge | sed "s/^r[1-9][0-9]* | \([a-z][a-z0-9]*\) | \(`date +%Y`\)-\([0-9][0-9]\)-\([0-9][0-9]\).*$/\1:\2\3\4/g" >> /tmp/commitchk.tmp
 
##
## Filter for appropriate data
##
for((i=$start_date; i<=$end_date; i++)); do
    cat /tmp/commitchk.tmp | grep $i >> /tmp/commitchk.out
    cat /tmp/wikichk.tmp | grep $i >> /tmp/wikichk.out
done
 
wsum=0
wavg=0
sum=0
avg=0
 
LST="/home/wedge/local/attendance/etc/list/class.fall2010.data.list.orig"
LSTCNT="`cat $LST | grep '^[a-z][a-z0-9]*$' | wc -l`"
 
DTA="/home/wedge/local/data"
for student in `cat $LST | grep '^[a-z][a-z0-9]*$'`; do
    cscore=0
    wscore=0
    cnt="`cat /tmp/commitchk.out | grep $student | wc -l`"
    wcnt="`cat /tmp/wikichk.out | grep $student | wc -l`"
    if [ "$wcnt" -gt "${num_wiki_edits}" ]; then
        let wsum=$wsum+$wcnt
        wscore=`echo "$wscore+${num_wiki_edits}+0.5" | bc -q`
        msg="Active wiki contributor ($wscore);"
    elif [ "$wcnt" -eq "${num_wiki_edits}" ]; then
        let wsum=$wsum+$wcnt
        wscore=`echo "$wscore+${num_wiki_edits}" | bc -q`
        msg="Contributed to wiki ($wscore);"
    elif [ "$wcnt" -eq 0 ]; then
        msg="No wiki contributions ($wscore);"
    else
        let wsum=$wsum+$wcnt
        if [ "${num_wiki_edits}" -eq 1 ]; then
            wscore=`echo "$wscore+0.5" | bc -q`
        else
            wscore=`echo "$wscore+${num_wiki_edits}-1" | bc -q`
        fi  
        msg="Missed wiki edit count ($wscore);"
    fi  
 
    if [ "$cnt" -gt ${num_commits} ]; then
        let sum=$sum+$cnt
        cscore=`echo "$cscore+${num_commits}+0.5" | bc -q`
        msg="$msg Active contributor. ($cscore) by $end_date"
    elif [ "$cnt" -eq ${num_commits} ]; then
        let sum=$sum+$cnt
        cscore=`echo "$cscore+${num_commits}" | bc -q`
        msg="$msg Met commit requirement. ($cscore) by $end_date"
    elif [ "$cnt" -lt ${num_commits} ] && [ "$cnt" -gt 0 ]; then
        let sum=$sum+$cnt
        if [ "${num_commits}" -eq 1 ]; then
            cscore=`echo "$cscore+0.5" | bc -q`
        else
            cscore=`echo "$cscore+${num_commits}-1" | bc -q`
        fi  
        msg="$msg Missed commit requirements. ($cscore) by $end_date"
    else
        msg="$msg Did not commit at all. ($cscore) by $end_date"
    fi  
    score=`echo "$wscore+$cscore" | bc -q`
    msg="$score:$msg"
 
    if [ -z "$debug" ]; then
        echo "WRITE TO FILES: $student/results.data.assignments"
        cat $DTA/$student/results.data.assignments | grep -v "${end_date}" > $DTA/$student/results.data.assignments.tmp
        cp -f $DTA/$student/results.data.assignments.tmp $DTA/$student/results.data.assignments
        rm -f $DTA/$student/results.data.assignments.tmp
 
        echo "$msg" >> $DTA/$student/results.data.assignments
    else
        echo "[$student] $msg"
    fi  
done
avg=`echo "$sum/$LSTCNT" | bc -q`
wavg=`echo "$wsum/$LSTCNT" | bc -q`
 
echo "Average Repository Commits: $avg"
echo "Average Wiki Edits: $wavg"
 
rm -f /tmp/commitchk.out /tmp/commitchk.tmp /tmp/wikichk.out /tmp/wikichk.tmp
exit 0
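Since the fifth argument toggles debug mode, a dry run just displays the results instead of writing to the data files (the dates, counts, and username here are made up for illustration):

lab46:~$ ./commitchk.sh 20101026 20101028 2 1 debug
...
[someuser] 3.5:Contributed to wiki (1); Met commit requirement. (2.5) by 20101028
...
Average Repository Commits: 2
Average Wiki Edits: 1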

October 24th, 2010

9grid

Short and sweet: http://www.9gridchan.org/

Go, the language

I endeavored to compile and install Go, following the instructions here:

Looks like I was successful.

sed removal of endlines

I was enhancing commitchk.sh today, and had a situation where I needed to remove the endline off of one line in order to merge two lines together:

sed 'N;s/\n//; s/<\/span>//g'

Seems to have done the trick.
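For instance, given a revision timestamp line followed by a username span (sample input fabricated to resemble the wiki output), the N pulls in the next line and the s/\n// joins the pair:

lab46:~$ printf '201010241530:\nsomeuser</span>\n' | sed 'N;s/\n//; s/<\/span>//g'
201010241530:someuser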

commitchk.sh

I enhanced commitchk.sh today to be more flexible with regards to scoring, enabling bonus points, handling incomplete submissions, and also including logic to check the project wiki for contributions (along with a separate variable for the required number of wiki edits). I also added a debug option.

Script is now as follows:

#!/bin/bash
#
# commitchk - script to ensure that the appropriate number of commits took place.
#
# 20101024 - logic error in score calc elif... none and some got lumped. Fixed (mth)
#            also added wiki edit check logic (scoring more flexible- cli args)
# 20101023 - initial version (mth)
 
##
## Grab operating parameters from command-line
##
if [ "$#" -lt 4 ]; then
        echo "ERROR. Must provide at least 4 arguments."
        exit 1
fi
 
start_date="$1"
end_date="$2"
num_commits="$3"
num_wiki_edits="$4"
debug="$5"
 
##
## Change to subversioned directory tracking repository in question
##
cd /home/wedge/src/backgammon
 
##
## Get the latest information
##
svn update
 
#################################################################
## Obtain data to process
#################################################################
 
##
## Check for wiki update
##
rm -f /tmp/wikichk.out /tmp/wikichk.tmp
touch /tmp/wikichk.out /tmp/wikichk.tmp
chmod 600 /tmp/wikichk.out /tmp/wikichk.tmp
 
wget -q -O - 'http://www/notes/data?do=revisions' | egrep '(^2010|^[a-z][a-z0-9]*</span>)' | sed "s/^\(`date +%Y`\)\/\([0-9][0-9]\)\/\([0-9][0-9]\) \([0-9][0-9]\):\([0-9][0-9]\).*$/\1\2\3\4\5:/g" | sed 'N;s/\n//; s/<\/span>//g' | grep -v 'wedge' >> /tmp/wikichk.tmp
 
##
## Check for repository commits
##
rm -f /tmp/commitchk.out /tmp/commitchk.tmp
touch /tmp/commitchk.out /tmp/commitchk.tmp
chmod 600 /tmp/commitchk.out /tmp/commitchk.tmp
 
svn log | grep '^r[1-9][0-9]*' | grep -v wedge | sed "s/^r[1-9][0-9]* | \([a-z][a-z0-9]*\) | \(`date +%Y`\)-\([0-9][0-9]\)-\([0-9][0-9]\).*$/\1:\2\3\4/g" >> /tmp/commitchk.tmp
 
##
## Filter for appropriate data
##
for((i=$start_date; i<=$end_date; i++)); do
        cat /tmp/commitchk.tmp | grep $i >> /tmp/commitchk.out
        cat /tmp/wikichk.tmp | grep $i >> /tmp/wikichk.out
done
 
wsum=0
wavg=0
sum=0
avg=0
 
LST="/home/wedge/local/attendance/etc/list/class.fall2010.data.list.orig"
LSTCNT="`cat $LST | grep '^[a-z][a-z0-9]*$' | wc -l`"
 
DTA="/home/wedge/local/data"
for student in `cat $LST | grep '^[a-z][a-z0-9]*$'`; do
        cscore=0
        wscore=0
        cnt="`cat /tmp/commitchk.out | grep $student | wc -l`"
        wcnt="`cat /tmp/wikichk.out | grep $student | wc -l`"
        if [ "$wcnt" -gt "${num_wiki_edits}" ]; then
                let wsum=$wsum+$wcnt
                wscore=`echo "$wscore+${num_wiki_edits}+0.5" | bc -q`
                msg="Active wiki contributor ($wscore);"
        elif [ "$wcnt" -eq "${num_wiki_edits}" ]; then
                let wsum=$wsum+$wcnt
                wscore=`echo "$wscore+${num_wiki_edits}" | bc -q`
                msg="Contributed to wiki ($wscore);"
        elif [ "$wcnt" -eq 0 ]; then
                msg="No wiki contributions ($wscore);"
        else
                let wsum=$wsum+$wcnt
                if [ "${num_wiki_edits}" -eq 1 ]; then
                        wscore=`echo "$wscore+0.5" | bc -q`
                else
                        wscore=`echo "$wscore+${num_wiki_edits}-1" | bc -q`
                fi
                msg="Missed wiki edit count ($wscore);"
        fi
 
        if [ "$cnt" -gt ${num_commits} ]; then
                let sum=$sum+$cnt
                cscore=`echo "$cscore+${num_commits}+0.5" | bc -q`
                msg="$msg Active contributor. ($cscore) by $end_date"
        elif [ "$cnt" -eq ${num_commits} ]; then
                let sum=$sum+$cnt
                cscore=`echo "$cscore+${num_commits}" | bc -q`
                msg="$msg Met commit requirement. ($cscore) by $end_date"
        elif [ "$cnt" -lt ${num_commits} ] && [ "$cnt" -gt 0 ]; then
                let sum=$sum+$cnt
                if [ "${num_commits}" -eq 1 ]; then
                        cscore=`echo "$cscore+0.5" | bc -q`
                else
                        cscore=`echo "$cscore+${num_commits}-1" | bc -q`
                fi
                msg="$msg Missed commit requirements. ($cscore) by $end_date"
        else
                msg="$msg Did not commit at all. ($cscore) by $end_date"
        fi
        score=`echo "$wscore+$cscore" | bc -q`
        msg="$score:$msg"
 
        if [ -z "$debug" ]; then
                echo "WRITE TO FILES: $student/results.data.assignments"
                echo "$msg" >> $DTA/$student/results.data.assignments
        else
                echo "[$student] $msg"
        fi
done
avg=`echo "$sum/$LSTCNT" | bc -q`
wavg=`echo "$wsum/$LSTCNT" | bc -q`
 
echo "Average Repository Commits: $avg"
echo "Average Wiki Edits: $wavg"
 
rm -f /tmp/commitchk.out /tmp/commitchk.tmp /tmp/wikichk.out /tmp/wikichk.tmp
exit 0

I also tested its deployment as an at job:

lab46:~$ at 10:12am on Oct 26
warning: commands will be executed using /bin/sh
at> /path/to/commitchk.sh 20101021 20101026 2 1
at> <EOT>  # hit CTRL-D here
job 47 at Tue Oct 26 10:12:00 2010
lab46:~$ 

I get e-mailed the result of the job.

October 23rd, 2010

commitchk.sh

Since the current project in Data Structures really requires a steady stream of contributions from all class members, I came up with an assignment to perform 2 relevant commits to the project repository by next class.

The more I've thought on it, the more I realize this should be a fairly regular practice for the duration of the project, to help ensure steady participation and fight off the natural tendency of students to procrastinate until the deadline is looming… it also helps to generate questions and hammer out uncertainty.

I ended up writing a script that will become part of the Grade Not-Z family of services to accomplish a big portion of this task, and it is called commitchk.sh:

#!/bin/bash                                                                                                                                                                            
#
# commitchk - script to ensure that the appropriate number of commits took place.
#
 
##
## Grab operating parameters from command-line
##
if [ "$#" -ne 3 ]; then
    echo "ERROR. Must provide 3 arguments."
    exit 1
fi
 
start_date="$1"
end_date="$2"
num_commits="$3"
 
##
## Change to subversioned directory tracking repository in question
##
cd /home/wedge/src/backgammon
 
##
## Get the latest information
##
svn update
 
##
## Obtain data to process
##
rm -f /tmp/commitchk.out /tmp/commitchk.tmp
touch /tmp/commitchk.out /tmp/commitchk.tmp
chmod 600 /tmp/commitchk.out /tmp/commitchk.tmp
 
svn log | grep '^r[1-9][0-9]*' | grep -v wedge | sed "s/^r[1-9][0-9]* | \([a-z][a-z0-9]*\) | \(`date +%Y`\)-\([0-9][0-9]\)-\([0-9][0-9]\).*$/\1:\2\3\4/g" >> /tmp/commitchk.tmp
 
for((i=$start_date; i<=$end_date; i++)); do
    cat /tmp/commitchk.tmp | grep $i >> /tmp/commitchk.out
done
 
sum=0
avg=0
 
LST="/home/wedge/local/attendance/etc/list/class.fall2010.data.list.orig"
LSTCNT="`cat $LST | grep '^[a-z][a-z0-9]*$' | wc -l`"
 
DTA="/home/wedge/local/data"
for student in `cat $LST | grep '^[a-z][a-z0-9]*$'`; do
    cnt="`cat /tmp/commitchk.out | grep $student | wc -l`"
    if [ "$cnt" -gt ${num_commits} ]; then
        let sum=$sum+$cnt
        msg="2:Exceeded required number of commits in assignment timeframe. ($cnt/${num_commits}) by $end_date"
    elif [ "$cnt" -eq ${num_commits} ]; then
        let sum=$sum+$cnt
        msg="2:Performed required number of commits in assignment timeframe. ($cnt/${num_commits}) by $end_date"
    elif [ "$cnt" -lt ${num_commits} ]; then
        let sum=$sum+$cnt
        msg="1:Performed some, but not all the required number of commits. ($cnt/${num_commits}) by $end_date"
    else
        let sum=$sum+$cnt
        msg="0:Did not commit to repository during assignment timeframe. ($cnt/${num_commits}) by $end_date"
    fi
    #echo "$msg" >> $DTA/$student/results.data.assignments
    echo "$student -- $msg"
done
avg=`echo "$sum/$LSTCNT" | bc -q`
 
echo "Average Submissions: $avg"
 
rm -f /tmp/commitchk.out /tmp/commitchk.tmp
exit 0

In deployment mode, it'll update the per-student data/ directories. It will also offer me some metrics on average commit rate… which I may use to marginally increase the required number of commits as activity heats up (within reason, of course).

I intend to deploy the script as a cron job (or maybe an at job, due to its somewhat irregular schedule)… I could tie it in with the attendance scripts, however, as they know to run every class… that might be the most effective point of entry… I think I'll want to add another check to have it consult the assignments page or something, as although I intend this to be a somewhat regular activity, it is temporary in the grand scheme of the semester for this class.

libraries

Played around some more with statically linked libraries. Got it to work, thanks to this page:

Figured out the problem I've experienced when trying to link against my own libraries (really, ANY libraries I'd want to link manually). You absolutely need to list the libraries AFTER your objects and output files. The linker processes the command line left to right, pulling from a library only the objects that satisfy currently-undefined symbols… so if no dependencies have been established by the time it reads the library, it discards the whole thing. Here I was including all my libraries earlier in the command-line, and the linker was tossing them out.

For example, what doesn't work:

lab46:~/src/backgammon/testing$ g++  -Wall -static -I../include/ -L../lib/ -lnode -o nodetest nodetest.o
nodetest.o: In function `main':
nodetest.cc:(.text+0x46): undefined reference to `Node::Node()'
nodetest.cc:(.text+0x79): undefined reference to `Node::setValue(int)'
nodetest.cc:(.text+0xa2): undefined reference to `Node::Node()'
nodetest.cc:(.text+0xef): undefined reference to `Node::setValue(int)'
nodetest.cc:(.text+0x131): undefined reference to `Node::getValue()'
nodetest.cc:(.text+0x175): undefined reference to `Node::~Node()'
nodetest.cc:(.text+0x1a3): undefined reference to `Node::~Node()'
collect2: ld returned 1 exit status
lab46:~/src/backgammon/testing$ 

and what DOES work:

lab46:~/src/backgammon/testing$ g++  -Wall -static -I../include/ -o nodetest nodetest.o -L../lib/ -lnode
lab46:~/src/backgammon/testing$ 

So, I just wasn't thinking deep enough… I knew that order mattered (with respect to local object files on the compiler command line), but didn't stop to think that libraries are essentially in the same scope… they are also objects…

October 21st, 2010

rds kernel exploit

According to Slashdot, there's a newly discovered local root exploit for systems running the 2.6.30+ kernel.

It involves the RDS functionality of the kernel (Reliable Datagram Sockets), and if it isn't enabled, the system isn't vulnerable.

Lab46, running a 2.6.32 kernel and being publicly accessible, has RDS enabled as a module… I've added the modules to the blacklist and relocated them out of the module tree… so they can't be inserted.
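Roughly, that amounts to the following (blacklist file name and quarantine destination are illustrative, and only the main rds module is shown):

lab46:~$ echo "blacklist rds" | sudo tee -a /etc/modprobe.d/blacklist.conf
lab46:~$ sudo mv /lib/modules/`uname -r`/kernel/net/rds /root/rds.quarantine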

This should effectively mitigate that problem, at least until a kernel update is released.

3 modules in question:

It looks like it is predominantly used with services like infiniband, which on lab46, we don't have to worry about.

October 20th, 2010

plan9

Had some luck booting/installing/booting a plan9 paravirtualized VM in Xen today.

Some documentation on that can be found here.

Xen VNC console

What I think will really make things more awesome is to utilize the VNC console capabilities for paravirtualized VMs… this seems to be what is holding me back from doing graphical stuff in Plan9 via the VM (that, or getting networking and network authentication working so I can connect to it via drawterm).

Some links:

In the end, it would almost seem as if I need a xen-vfb binary to run, in addition to xen-console… this does not appear to have been built with the official Debian packages, so building it from source is necessary.

I quickly attempted to do this on yourmambas (my testbed for HVM and Plan9 VM activity today)… unfortunately I was not met with success.

Ultimately I should just stick with getting the Plan9 VM properly configured and on the network, so I can connect to it with drawterm and subsequently proceed to take over the world.

October 14th, 2010

postfix playings

To test to see if my changes made any difference, I looked into forcing a flush of the postfix queue, which would instigate an attempt to deliver all held messages.

As it turns out, the postqueue command is responsible for this.

View all messages held in queue

mail:~$ sudo postqueue -p
-Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient-------
04E8D182DD   508240 Thu Oct 14 15:30:15  user1@lab46.corning-cc.edu
(conversation with mail.corning-cc.edu[143.66.1.19] timed out while sending message body)
                                         user1@aol.com
                                         user1@corning-cc.edu
                                         user2@lab46.corning.cc.edu

Flush all messages held in queue

mail:~$ sudo postqueue -f

Resulted in at least a little indication in the log that something was happening. Not exactly instant gratification, but that may be down to the bogged-down content scanning sitting in-line before the CCC mail server.

Good knowledge to have, though.

postfix message body timeout

I noticed a student getting the message body timeout error this evening, and thought to look into the problem again.

A google search for “postfix timed out while sending message body” turned up:

which indicated that bogged down mail servers performing content scanning (hmm, sound like any server we might know?) may not have adequately finished this processing before message transfer timeouts are hit.

A poster on that forum recommends upping the defaults for (in this order of priority):

  1. smtp_data_done_timeout (default 600s)
  2. smtp_data_xfer_timeout (default 180s)
  3. smtp_data_init_timeout (default 120s)

I upped smtp_data_xfer_timeout to 600s, and smtp_data_init_timeout to 180s, and restarted postfix on mail.offbyone.lan … we'll see if that makes any difference.

This bit was added to the end of /etc/postfix/main.cf on mail.offbyone.lan:

##
## SMTP data timeouts
##
smtp_data_xfer_timeout = 600
smtp_data_init_timeout = 180
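A quick way to confirm the active values (postconf prints any parameters named on its command line):

mail:~$ postconf smtp_data_xfer_timeout smtp_data_init_timeout
smtp_data_xfer_timeout = 600
smtp_data_init_timeout = 180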

repos access

While deploying the Backgammon project today in Data Structures, I encountered some problems getting some of the students to access the repository on repos.

As it turns out, since we're doing svn+ssh access, they need to actually be able to log into repos… so I disabled the host attribute check in PAM.

This (along with having students run genkey) seemed to fix the problem.

This should also resolve any issues accessing per-user repositories.

caprisun cron

After deploying nullmailer yesterday, every 30 minutes I'd get a failure message as cron tried to invoke the (now replaced) OpenBSD sendmail with an option nullmailer's stand-in doesn't understand, in order to perform some routine OpenBSD sendmail maintenance.

Yesterday I tried fixing this by disabling it in cron and sending a SIGHUP to the cron process, but that didn't seem to satisfy its crankiness. Today, I ran crontab -e as root and saved the file, which did finally make a difference. The annoying message every 30 minutes stopped.

Problem solved.

October 13th, 2010

nfs cron job

To fully deploy the user home dir backup script, I did the following on nfs:

nfs:~$ sudo ln -s /export/lib/homedirbackup.sh /usr/local/sbin/homedirbackup.sh
nfs:~$ sudo chmod 700 /export/lib/homedirbackup.sh

And I added a new cron job entry to /etc/cron.d/backup, so the file now looks like this:

#
#  Perform routine level 0 and level 1 dumps to the LAIR backup server
#
0 9 1-7 * *   root  /usr/local/sbin/lairdump.sh
12 22 1-26 * *  root    /export/lib/homedirbackup.sh

and then of course, I restarted cron:

nfs:~$ sudo /etc/init.d/cron restart
Restarting periodic command scheduler: crond.
nfs:~$ 

I will need to set up the symlink and cron job on nfs2 for times when it is the master peer.

homedirbackup.sh

Continuing the home directory backup process I manually started Monday, I finally finished rolling out my solution to regularly get LAIR user home directory data backed up, and stored on a disk that isn't on NFS.

I have it backing up to the disks on sokraits and halfadder… each month every home directory gets backed up twice (once to sokraits, once to halfadder).

The script even manages retention, deleting the oldest backups so each user keeps a maximum of 4 (estimated to take up ~50GB of space for all users combined once we get to that point).

Top 3 disk consuming users:

  1. 2.6G squirrel
  2. 2.2G wedge
  3. 1.1G mgough

I was surprised it wasn't me. I think I may still take up the most when uncompressed, but I guess enough of my stuff is compressible.

At any rate, here's the script:

homedirbackup.sh
#!/bin/bash
#
# homedirbackup.sh - script responsible for performing home directory back ups
# to a location that is NOT the fileserver.
#
# 20101014 - directory typo on destination fixed (mth)
# 20101013 - initial version (mth)
#
ismaster="`df | grep export | wc -l`"
if [ "$ismaster" -eq 1 ]; then
 
    alphabet="abcdefghijklmnopqrstuvwxyz"
    date=`date +"%Y%m%d"`
    day=`date +"%d"`
    day=$((10#$day))   # force base 10 so the 08th and 09th don't parse as invalid octal
    if [ "$day" -lt 14 ]; then
        bhost="sokraits"
    else
        bhost="halfadder"
        let day=$day-13
    fi
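
    # each day covers two letters 13 positions apart in the alphabet (day 1 -> [an]*,
    # day 2 -> [bo]*, and so on), so each host works through all 26 letters in its half of the month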
 
    range="`echo $alphabet | cut -c $day,$(($day+13))`"
    cd /export/home
    for user in `/bin/ls -1d [$range]*`; do
        #echo -n "[$user] "
 
        # Check the load average, and delay as long as we're above 200% CPU
        loadavg=`uptime | sed 's/^.*average: \([0-9][0-9]*\)\.\([0-9][0-9]\).*$/\1/'`
        while [ "$loadavg" -ge 2 ]; do
            sleep "$((RANDOM % 64 + 64))"
            loadavg=`uptime | sed 's/^.*average: \([0-9][0-9]*\)\.\([0-9][0-9]\).*$/\1/'`
        done
 
        ssh $bhost "mkdir -p /backup/${user}; /backup/prune.sh ${user}"
        tar cpf - $user | gzip -9 | ssh $bhost "dd of=/backup/${user}/${user}-${date}.tar.gz"
    done
fi
exit 0

and it has a companion script that runs on sokraits/halfadder:

prune.sh
#!/bin/bash
#
# prune.sh - prune backups, weed out any entries older than the most recent 3
#
# 20101013 - initial version (mth)
#
user="$1"
 
if [ -z "$1" ]; then
    echo "ERROR! Must be called with a user argument."
    exit 1
fi
 
if [ -e "/backup/$user" ]; then
    cd /backup/$user
    bckcnt="`/bin/ls -1A | wc -l`"
 
    if [ "$bckcnt" -gt 3 ]; then
        let removal=$bckcnt-3
        files="`/bin/ls -1A | head -$removal`"
        rm -f $files
    fi
fi
 
exit 0

What I really enjoy about these two scripts is how they utilize a level of flexibility I haven't deployed much in other scripts. Different users can and will obtain the maximum number of backups before others, but the scripts can handle that without choking (or should, time will tell).

I also put in some other checks for things (load avg) and a variable delay if the script tries to run in really busy conditions. Certainly could put in more checks, but these should hopefully serve us well for some time to come.

nifty gcc book

Here are some nifty links I've encountered today:

A fairly readable book (available in print and PDF) on using gcc.

convert .a to .so

Important bit of knowledge I always find myself googling, but never writing down.

When you have a static library: .a

And you want to convert it into a shared library: .so

lab46:~$ ar -x mylib.a
lab46:~$ gcc -shared *.o -o mylib.so 
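
One caveat worth noting (general linker behavior, not part of the original recipe): the objects in the archive need to have been compiled as position-independent code (-fPIC) for the shared link to succeed on most 64-bit platforms. Extracting into a scratch directory also keeps the .o files contained:

lab46:~$ mkdir tmp && cd tmp
lab46:~/tmp$ ar -x ../mylib.a
lab46:~/tmp$ gcc -shared *.o -o ../mylib.so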

data structures backgammon project svn

I decided I'm going to have the ENTIRE class be involved in the backgammon project, as one group.

To accommodate this, I established an SVN repository for the backgammon project:

repos:~$ sudo mkdir -p /var/svn/backgammon
repos:~$ sudo svnadmin create /var/svn/backgammon
repos:~$ sudo chown -R wedge:data /var/svn/backgammon
repos:~$ sudo chmod -R ug+rwX,o= /var/svn/backgammon

As a client, I checked out the repository as follows:

lab46:~/src$ svn checkout svn+ssh://repos/svn/backgammon
Checked out revision 0.
lab46:~/src$ 

I went ahead and added a README.txt and a main.cc. We'll be doing the project in C++.

nullmailer on lab46

Perhaps the source of some occasionally experienced mail problems in the LAIR: /etc/nullmailer/remotes on lab46 was set to: mail.corning-cc.edu

I changed this to: mail.offbyone.lan

All mail in the LAIR should go to mail.offbyone.lan … any externally-bound messages will then be appropriately routed out to the world from mail.offbyone.lan
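(For reference, entries in the remotes file are simply one destination per line, a hostname followed by a transport, so the change amounts to making that line read: mail.offbyone.lan smtp)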

pre-emptive preparation for fire code inspection

I noticed a fire truck pull into the BDC lot today. Main campus had their fire inspection last week, so I thought to run down to the LAIR real quick and check for/defuse any past known “violations”. I did so.

nullmailer on caprisun

There have been a couple rogue mail configuration errors floating around… after receiving one this morning, I thought to finally do something about at least one of them.

I ended up manually installing nullmailer on caprisun, configuring it, and deploying it (replacing and disabling the original sendmail daemon from running).

I followed the instructions here:

The process went something like this:

caprisun:~$ sudo mv /usr/sbin/sendmail{,.orig}
caprisun:~$ sudo mv /usr/bin/mailq{,.orig}
caprisun:~$ wget http://untroubled.org/nullmailer/nullmailer-1.05.tar.gz
caprisun:~$ tar -zxvf nullmailer-1.05.tar.gz
caprisun:~$ cd nullmailer-1.05
caprisun:~/nullmailer-1.05$ ./configure
caprisun:~/nullmailer-1.05$ gmake
caprisun:~/nullmailer-1.05$ sudo gmake install-root
...
caprisun:~$ sudo adduser # add user/group 'nullmail'
caprisun:~$ sudo pkill sendmail

I also ran (as root) vipw and went in to change the UID/GID of user nullmail to a free system value (I ended up with 40 for both). I made the home directory /var/empty, and the shell /sbin/nologin.

I changed /etc/rc.conf.local to have the following line:

sendmail_flags=""

And added the following to /etc/rc.local:

/usr/local/sbin/nullmailer-send 2>&1 && echo -n "nullmailer "

I configured nullmailer by doing the following:

caprisun:~$ sudo /bin/hostname > /usr/local/etc/nullmailer/me
caprisun:~$ sudo echo "mail.offbyone.lan smtp" > /usr/local/etc/nullmailer/remotes

(Note that in these two commands the redirection is performed by the calling shell, not by sudo… they work from a root shell, but via plain sudo you'd want sudo sh -c '…' instead.)

and then started nullmailer manually by running the logic I placed in /etc/rc.local, and backgrounded it.

I tested mail, and the test appeared successful (did a simple mail wedge@offbyone.lan)

October 11th, 2010

home dir backups

The string of thunderstorms that came through this evening reminded me of my need to get the user home directories backed up.

So instead of putting it off for lack of an established solution, I wrote a quick script to get the job done for tonight:

cd /export/home
for user in `/bin/ls -1`; do
    echo -n "[$user] "
    tar cpf - $user | gzip -9 | ssh sokraits "dd of=/backup/${user}-20101011.tar.gz"
done

Quick and simple home directory tar+gzip, with permission preservation. Dropped on a machine with storage independent of NFS.
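To spot-check one of the resulting archives on the far end (username made up):

sokraits:~$ tar tzf /backup/auser-20101011.tar.gz | head -3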

Started at 9pm. 2 minutes to do all the “a” users, 4 minutes for the “b” users, 3 minutes for the “c” users… pretty much at the speed of disk retrieval + network + remote disk storage.

Process doesn't have NFS pegged too badly… 1.66 cpu load as I write this (seems to swing between 1.17 and 1.8).

wiki caching

In the last month I've noticed some more significant wiki content caching issues, including some corrupted cache on various frequently visited pages.

Although I've been manually fixing them as I encounter them (the good ol' ?purge=true trick), I realize I need to do something more automatic and regular.

So, I finally got around to doing something about it.

I added the following stanza to /etc/cron.d/dokuwiki on www:

# Every month (2nd day, at 3:37am), force recaching of all wiki pages
37  3 2 * *     root    touch /var/www/conf/local.php
38  3 2 * *     root    touch /var/www/haas/conf/local.php

This should hopefully help to mitigate future caching issues.

October 10th, 2010

One-Wire sensor fun

Happy International Raw Food Day!

I picked up the following hardware from iButtonLink:

And obtained the following software:

I plugged the LinkUSB into a free USB port, and used a cat-5 cable to connect the MultiSensor Temp/Light sensor to it. I did this in a VirtualBox VM, so I passed the USB device through to it. It was picked up as follows:

[ 4663.999661] usb 1-1: new full speed USB device using ohci_hcd and address 2
[ 4664.184365] usb 1-1: configuration #1 chosen from 1 choice
[ 4664.263414] usb 1-1: New USB device found, idVendor=0403, idProduct=6001
[ 4664.263429] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 4664.263429] usb 1-1: Product: FT232R USB UART
[ 4664.263430] usb 1-1: Manufacturer: FTDI
[ 4664.263441] usb 1-1: SerialNumber: A800bXS8
[ 4664.312360] usbcore: registered new interface driver usbserial
[ 4664.312360] usbserial: USB Serial support registered for generic
[ 4664.312360] usbcore: registered new interface driver usbserial_generic
[ 4664.312360] usbserial: USB Serial Driver core
[ 4664.316018] usbserial: USB Serial support registered for FTDI USB Serial Device
[ 4664.316018] ftdi_sio 1-1:1.0: FTDI USB Serial Device converter detected
[ 4664.316018] ftdi_sio: Detected FT232RL
[ 4664.316018] usb 1-1: FTDI USB Serial Device converter now attached to ttyUSB0
[ 4664.316018] usbcore: registered new interface driver ftdi_sio
[ 4664.316018] ftdi_sio: v1.4.3:USB FTDI Serial Converters Driver

digitemp

Tried to use the precompiled digitemp_DS2490 to get a reading, but it segfaulted. So on to compilation!

I installed the following additional packages (on top of an already installed build-essential):

Within the base of the digitemp-3.6.0 directory:

debian32:~/digitemp-3.6.0$ make ds2490

To initialize, the README recommends running the following:

debian32:~$ sudo ./digitemp_DS2490 -s /dev/ttyUSB0 -i

but when I did, I got the following:

debian32:~$ sudo ./digitemp_DS2490 -s /dev/ttyUSB0 -i
DigiTemp v3.5.0 Copyright 1996-2007 by Brian C. Lane
GNU Public License v2.0 - http://www.digitemp.com
USB ERROR: owAcquire called with invalid port string

A quick google search for that error turned up the following:

Where a poster recommended several variations on the digitemp command-line, but ultimately there seemed to be a consensus (including a response relayed from the digitemp author) that digitemp's ability to read from USB interfaces is unreliable, and that serial should be used instead.

OWFS was recommended, and seemed to garner greater success. So I shall try that instead.

OWFS

To get started, I installed the following packages:

After which, I ran configure enabling usb support, one-wire traffic support, and debian system support, among some others:

debian32:~/owfs-2.8p2$ ./configure --enable-usb --enable-debian --enable-owtraffic --with-python --with-perl5 --prefix=/usr/local
...
Current configuration:

    Deployment location: /usr/local

Compile-time options:
                  Caching is enabled
                      USB is enabled
                      I2C is enabled
                   HA7Net is enabled
                       W1 is enabled
           Multithreading is enabled
    Parallel port DS1410E is enabled
        TAI8570 barometer is enabled
             Thermocouple is enabled
         Zeroconf/Bonjour is enabled
             Debug-output is enabled
                Profiling is DISABLED
Tracing memory allocation is DISABLED
1wire bus traffic reports is enabled

Module configuration:
                    owlib is enabled
                  owshell is enabled
                     owfs is enabled
                  owhttpd is enabled
                   owftpd is enabled
                 owserver is enabled
                    ownet is enabled
                 ownetlib is enabled
                    owtap is enabled
                    owmon is enabled
                   owcapi is enabled
                     swig is DISABLED
                   owperl is DISABLED
                    owphp is DISABLED
                 owpython is DISABLED
                    owtcl is DISABLED

debian32:~/owfs-2.8p2$ 

As you can see, I didn't manage to get owpython or owperl support working, but oh well… I got USB and OWFS, and those are the two I care about for actually testing this.

So we proceed with a 'make', followed by a 'make install'. Compilation went without a hitch, and the resulting binaries installed into /usr/local.

So we should be good now, right? Giving owfs a shot:

debian32:~$ sudo owfs -u /opt
DEFAULT: owlib.c:SetupSingleInboundConnection(196) Cannot open USB bus master
DEFAULT: owlib.c:LibStart(54) No valid 1-wire buses found
debian32:~$ 

ok… searched for that, turned up the following page:

A fix:

debian32:~$ sudo mkdir /var/1-Wire
debian32:~$ sudo owfs /dev/ttyUSB0 /var/1-Wire
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: C1
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 71
   <q>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 0F
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 00
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: C5
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: DD
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 71
   <q>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 0F
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 00
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: C5
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: DD
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 45
   <E>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 44
   <D>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 5B
   <[>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 5A
   <Z>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 3F
   <?>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 3E
   <>>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 29
   <)>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 28
   <(>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 95
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: 97
   <.>
TRAFFIC OUT <write> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: C5
   <.>
TRAFFIC IN  <read> bus=0 (/dev/ttyUSB0)
Byte buffer anonymous, length=1
--000: DD
   <.>
debian32:~$ mount
/dev/hda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
OWFS on /var/1-Wire type fuse.OWFS (rw,nosuid,nodev)
debian32:~$ 

I had to modprobe fuse, as the first time I ran that I got an error indicating I should do so… makes sense, it is trying to perform a FUSE mount.

I think all the noise I saw was the traffic report functionality I enabled. If that ends up being too much verbosity, I'll recompile with it disabled. But at least it's good to know that something seems to be working, as when I view /var/1-Wire:

debian32:~$ cd /var/1-Wire
debian32:/var/1-Wire$ ls
.   26.54538C000000  settings      statistics  system
..  bus.0            simultaneous  structure   uncached
debian32:/var/1-Wire$ 

Stuff! Woohoo!!!

OWFS settings

After some poking around, I found some settings… by default, the temperature sensor was set to Celsius. I changed this to Fahrenheit:

debian32:/var/1-Wire$ cd settings/units
debian32:/var/1-Wire$ ls
.  ..  pressure_scale  temperature_scale
debian32:/var/1-Wire$ cat temperature_scale
Cdebian32:/var/1-Wire$ sudo echo -n "F" > temperature_scale

When I did this, it seemed to hang… CTRL-C wouldn't interrupt it, so I issued an SSH escape sequence and logged back in. Upon checking temperature_scale, however, it was now set to F. I also did not issue the -n argument to echo the first time (though I have it listed in the example above), so perhaps with -n it would not hang.

Either way, still working!

Reading the temperature

With temperature set to Fahrenheit, I set about getting the sensor to report the temperature to me:

debian32:/var/1-Wire$ cd 26.54538C000000
debian32:/var/1-Wire/26.54538C000000$ cat temperature
     68.7313debian32:/var/1-Wire/26.54538C000000$ 

68.7313 degrees, eh? I'd say that's pretty much accurate.

Light sensor?

I then tried to get a reading off the light sensor, which is also supposedly a part of this package, but all I found were references to humidity… so I wonder if there was some mix-up, and I got the more expensive Temp/Humidity sensor device instead (certainly not complaining if so). I'll have to do some digging, but I got a reading of 117.65 off of it (pressure units).

Possible chip name reportings:

I'll have to look these up to see if they're related to humidity, or maybe light is being reported via the humidity interface?

Either way, it would seem we have successful operation of at least the linkUSB and MS-TL sensor… I'll have to try out the other two at some point.

October 6th, 2010

recursive/iterative btrees

After getting the binary tree functionality working and perfecting my visualization routines (semi-adjusting the algorithm for a better fit on a normal-sized screen), I decided to embark on implementing the identical functionality iteratively, so as to have a nice comparison between the iterative and recursive approaches.

Might lead to some class activities checking out performance, memory utilization, etc.

Although it was easy to implement add() iteratively, fill_level() is currently another matter… I'm having some conceptual troubles getting it off the ground, and what's amusing is that when one plays with trees enough like this, recursion starts to become regular and ordinary to think about :)

I might go back and do some recursive stack, queue, and list operations too, where applicable (probably just list, as there isn't much iteration going on in stacks and queues).

update

I found some sample code that has recursive and iterative implementations of tree functions… what's amazing is how they handle it iteratively: THEY USE STACKS.

This is perfect… I can show the progression of topics. Beautiful.

Apparently this can be done without stacks, if a parent pointer is present… which I have… but I guess at this point I'll just go the stack route, as it is far more pedagogically valuable.
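To sketch the idea (my own minimal illustration, not the sample code I found, and using a bare left/right node rather than listlib's types):

#include <stdio.h>

struct tnode {
    int value;
    struct tnode *left, *right;
};

/* iterative in-order traversal: the explicit stack stands in for the
 * call stack that the recursive version gets for free */
void inorder(struct tnode *root)
{
    struct tnode *stack[64];               /* assumes a shallow tree */
    int top = 0;
    struct tnode *cur = root;

    while (cur != NULL || top > 0)
    {
        while (cur != NULL)                /* dive left, saving the path */
        {
            stack[top++] = cur;
            cur = cur -> left;
        }
        cur = stack[--top];                /* visit the deferred node */
        printf("%d ", cur -> value);
        cur = cur -> right;                /* then handle its right subtree */
    }
    printf("\n");
}

int main(void)
{
    struct tnode a = { 1, NULL, NULL };
    struct tnode c = { 3, NULL, NULL };
    struct tnode b = { 2, &a, &c };        /* in-order: 1 2 3 */
    inorder(&b);
    return 0;
}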

quiz plugin

I discovered a recently developed plugin for dokuwiki called quiz.

Quiz plugin homepage: http://www.dokuwiki.org/plugin:quiz

I installed it on the /haas wiki. I had to make one minor change to get it to work out of the box; otherwise, a call to sprintf generates a warning.

Edit the lib/plugins/quiz/lang/en/lang.php file, and change:

"You answered %d out of %d questions correctly (%d %)";

into…

"You answered %d out of %d questions correctly (%d %%)";

Note the %% instead of the %… the fix makes sense: a bare % begins a conversion specifier in sprintf(), so the trailing percent sign has to be escaped as %% to be printed literally.

Ultimately, although I like a lot of the functionality that quiz provides, it also has some limitations preventing me from using it for my intended purposes (namely as a means to assess students)… it will need some enhancing in order to make that happen.

My quiz testbed is at:

With the quiz data located here:

October 5th, 2010

listlib queues working

I fell prey to one of those “duh” moments… of course qsee() was not showing me each value I most recently put on the queue… it is a queue, so it was correctly showing me the first value I had put in the queue.

duh. duh. duh.

I was happy when I realized this, though… after rewriting all the queue code. Twice. And was really starting to get frustrated when I could find nothing wrong.

Finally it was a super-exploratory gdb run that sparked the realization, when I was displaying contents of the queue within a nested function… the first value was there, but where were the others? Along the prev pointer. Took a few seconds, but that was a wonderful realization.

As a bonus, the queue code is now written to utilize the list code… so at some point I actually need to go back and rewrite the stack code to utilize the list code as well (to keep with the whole “building block” nature of what I'm doing).

Started on binary trees. I think I can just go ahead and use regular nodes in the tree… ignoring “left” and “right”, and using “prev” and “next”… same difference, just a slight naming variation.

update

It turns out there was still a problem with the queue… but moreover, the problem was in listops.c:obtain(): I had a situation where one end of the list could become NULL, and nothing was done to correct the other end of the list (which was still left pointing at the old value).

Specifically:

            if (current -> end != NULL)
                current -> end -> next = NULL;
            else
                current -> start = current -> end;
        }
        else if (location == current -> start) // grabbing from start of list
        {
            tmp = current -> start;
            current -> start = current -> start -> next;
            tmp -> prev = NULL;
            tmp -> next = NULL;

            if (current -> start != NULL)
                current -> start -> prev = NULL;
            else
                current -> end = current -> start;

Adding in the two else statements, specifically setting the opposite end of the list to its peer (NULL would beget NULL), resolves this problem.

Very tricksy. But at least that bug is squashed.

asnsubmit

Although I fixed the assignment submission problem the other day, I wanted to go back and implement correction logic into the script, so I could just re-run the script for the specified week and have it update all the pertinent data files, instead of me having to do it manually.

I did this, and the result was successful. I ended up really streamlining the script, too (with a conscious performance degradation… moving the due date logic into the inner loop, which is largely unnecessary, as this information is the same for ALL students).

Possible future improvements:

October 3rd, 2010

listlib

Looking ahead toward the next Data Structures class project, I decided to write a library that will unify all our knowledge with a consistent structure.

The result is currently something I call listlib. I've basically re-implemented all the node, list, and stack operations we've done in class (with a few extra functions too), and intend for it to be used in building future programs.

It also will serve to address some problems we ran into during the freecell implementation… namely, passing pointers to structs around, modifying the structs in a function, and having those changes persist back in the calling function.

This has always caused me grief, and has caused students grief. And it has caused other brave souls grief (as witnessed by many attempts to find answers on the internet). Finally, I figured it out. Gosh. Darn. It.

Not really as bad as it seems, once you wrap your head around it. It's not necessarily 100% pure, but there are no compiler warnings (the people on alt.lang.c would seem to disagree with some of the things I've done, claiming it isn't universally portable due to compiler-specific functionality implementations, but I'll take working with gcc).

IMPORTANT NEW TRUTHS ABOUT C

  1. C, contrary to popular belief, does NOT have pass by reference. …WHAT? Yes, you heard me right. It does NOT have pass by reference. Let me say that one more time: IT DOES NOT HAVE PASS BY REFERENCE.
    1. instead, it has pass by address. What we think of as passing by reference is actually passing by address.
    2. this is an important distinction, as my ultimate solution would seem to rip apart the reality of those who think C has pass by reference.
  2. The & operator, which we all learned of as “address of”, should be looked at as adding a layer of indirection (adding a *).
    1. Let's say that again: & is like adding a *.
    2. And * removes a level of indirection (i.e. it dereferences, as we commonly view it).
  3. When you pass a pointer to a structure as a parameter, you're essentially passing it by address. Just pass it (in what might appear to be passing by “value”, but you're not, because it is a pointer).
    1. If you were to prepend a &, thinking you're passing by reference, you're actually passing a **, when you think you're still passing a *… and you can likely see the can of worms we just opened.
    2. So when you pass a pointer to a function, you are automatically passing it by address (which you might think of as “by reference”); see the sketch below.
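A minimal C sketch of the distinction (function and struct names are my own, not from listlib):

#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* pass by address: we receive a COPY of the pointer, but it still points
 * at the caller's node, so changes to the pointed-to struct persist */
void setvalue(struct node *n, int v)
{
    n -> value = v;
}

/* to change where the caller's pointer itself points, one more level of
 * indirection (a **) is needed */
void mknode(struct node **n)
{
    *n = (struct node *) malloc(sizeof(struct node));
    (*n) -> next = NULL;
}

int main(void)
{
    struct node *tmp = NULL;

    mknode(&tmp);        /* & adds a layer of indirection: node** */
    setvalue(tmp, 7);    /* tmp is already a pointer... just pass it */
    printf("%d\n", tmp -> value);

    free(tmp);
    return 0;
}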

Some informative links that helped me obtain this newfound enlightenment:

The world is now a brighter place.

listlib queue

I'm experiencing an odd problem in my listlib queue implementation… I can enqueue a bunch of stuff and then dequeue it all, and it works great… but my qsee() function doesn't seem to work… I've been looking and looking and looking… nothing out of the ordinary.

I then decided to emulate qsee() by dequeueing then enqueueing the value… and segfault! Some sort of small thing I'm glossing over. Might just have to rewrite enqueue() and dequeue() from scratch to get it all perfect.

Idea is, once it is working, to try and have an in-struct function pointer to peek() so I can have one for both of types Stack and Queue. We'll see how well that works out. I still have thoughts of rewriting the whole thing to use C++.

asnsubmit reporting incorrectly

It would appear that all my UNIX students received late marks on lab #4/cs #4, when this was clearly not the case.

A bug was obviously suspected; it took a little bit to figure out, but it was eventually found and corrected.

The offending code:

symchk="`cat ${unixpath} | grep "lab${week}" | grep '\^' | wc -l`"
if [ $symchk -eq 1 ]; then
    cal_date="`cat ${unixpath} | grep "lab${week}" | cut -d'^' -f4 | cut -d'|' -f1 | sed -e 's/\*//g' -e 's/ //g'`"
else
    cal_date="`cat ${unixpath} | grep "lab${week}" | cut -d'|' -f5 | sed -e 's/\*//g' -e 's/ //g'`"
fi

The corrected code:

symchk="`cat ${unixpath} | grep "lab${week}\>" | grep '\^' | wc -l`"
if [ $symchk -eq 1 ]; then
    cal_date="`cat ${unixpath} | grep "lab${week}\>" | cut -d'^' -f4 | cut -d'|' -f1 | sed -e 's/\*//g' -e 's/ //g'`"
else
    cal_date="`cat ${unixpath} | grep "lab${week}\>" | cut -d'|' -f5 | sed -e 's/\*//g' -e 's/ //g'`"
fi

Basically, I was just searching for the substring lab${week}. This worked great until this week, when another matching substring appeared in the assignments file: the hyperlink for journal creation at the bottom (it contained the word lab46, which a search for lab4 matched). This was easily fixed (again, once discovered) by forcing the substring to end at a word boundary.

This is an issue that could easily rear its head again in the context of specially crafted strings (if I put in the string “slab5”, for example, we'd have a similar situation as the one I just fixed).

I guess I could just put in regex matches for the beginning and ending of the desired word in question, but I don't feel like thinking about it at this point. The current problem is fixed… I still have to manually go in and change all the incorrect assignment recordings.
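If I do get back to it, anchoring both ends of the word is probably the way (an untested sketch; GNU grep's \< and \> match the start and end of a word):

symchk="`cat ${unixpath} | grep "\<lab${week}\>" | grep '\^' | wc -l`"

With that, a string like slab5 would no longer match a search for lab5.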

TODO: update asnsubmit.sh to override existing recordings, so in situations such as these, I could just re-run the script once the problem is fixed, and have the script fix it all for me.

This would involve (at a minimum):

A sed delete should work nicely for removing it. I'll just have to do some tests. It'd be nice to incorporate similar logic into the journal tabulation scripts.

more C stuff

Function Pointers

I think this is the next area I should learn more about. Here are some potentially useful links:

and okay! Looks like just one at this point.

<html></center></html>