STATUS updates
=====TODO=====
* the formular plugin is giving me errors, need to figure this out (email assignment form)
* can I install writer2latex on wildebeest herd without needing gcj??
* lab46: FIX grep: set LC_COLLATE to C in /etc/default/locale, problem solved.
* set up an OCFS2/DRBD volume between sokraits and halfadder, locate VMs there
* look into how to adequately set up subversion svn+ssh:// authentication
* set up symlink and cron job for userhomedirbackup.sh on nfs2; update wiki page for nfs
=====URLs=====
Some links of interest:
* http://www.freelists.org/post/dokuwiki/invoke-mediamanager-in-a-plugin,2
* unrelated: http://infoworld.com/d/adventures-in-it/run-it-business-why-thats-train-wreck-waiting-happen-477
* [[http://www.youtube.com/watch?v=ggB33d0BLcY&feature=player_embedded#|laddergoat]]
* [[http://www.llvm.org/|LLVM]]
* [[http://fluxbox-wiki.org/index.php?title=Howto_set_the_background|Fluxbox config]]
* [[http://www.reocities.com/harpin_floh/glglobe_page.html|GLglobe]]
* [[http://www.heavens-above.com/|Heavens Above]]
* http://wiki.debian.org/kristian_jerpetjoen
* http://www.webupd8.org/2010/11/alternative-to-200-lines-kernel-patch.html
* http://myproxylists.com/nix-brute-force
=====Other Days=====
=====November 28th, 2010=====
====asnsubmit====
My assignment submission script wasn't able to find any assignment submissions in the a-f range… the script was capitalizing the letters, while the actual submissions use lowercase.
I fixed the script logic (using a **tr** translation) to convert any uppercase values to their lowercase counterparts so that detection takes place as intended.
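The relevant transformation is a one-liner; a minimal sketch of the idea (the variable name here is illustrative, not the script's actual one):
<code bash>
# fold any uppercase range letters down to lowercase, so they match
# the actual (lowercase) submission filenames on disk
range=$(echo "${range}" | tr '[:upper:]' '[:lower:]')
</code>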
=====November 22nd, 2010=====
====virtualbox tweaks====
I've been experiencing some odd problems with virtualbox when run on my 2nd display. I finally went and googled this and found that the symptoms are likely due to running 64-bit Snow Leopard. The solution? System Preferences -> Displays -> Color… make sure BOTH displays use the same color profile.
I did, and it seemed to fix the problem nicely.
====boot ubuntu text installer====
I actually wasn't successful in getting the text-based installer going, but I was able to finally get it booted to install.
The trick… get to the grub menu (hold down shift), and EDIT the line with **quiet splash --** at the end. Change it to: **quiet splash nomodeset --**
CTRL-X to boot. It did the trick for me.
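For reference, the edited kernel line ends up looking something along these lines (the kernel version and root device here are illustrative):
<code>
linux /boot/vmlinuz-2.6.35-22-generic root=/dev/sda1 ro quiet splash nomodeset --
</code>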
====LAIRwall: evolutions====
I stumbled upon some interesting pieces of software that would make for great capability improvements to the LAIRwall:
* http://bino.nongnu.org/multi-display.html - bino is a 3D/multi display video player
* http://www.equalizergraphics.com/index.html - equalizer is a framework for distributing OpenGL, sort of like Chromium
* also: http://www.sagecommons.org/
and a document some school put together for constructing their large tiled display wall:
* http://www.comp.leeds.ac.uk/viznet/reports/powerwall.pdf
=====November 20th, 2010=====
====updated commitchk====
I combined two tasks into one. First, commitchk needed some additional updating to work with nested wiki pages.
Secondly, I wanted to set up a bigger presence for some of these scripts on the wiki.
Thusly, [[scripts/commitchk]] was created.
And instead of maintaining two copies of the script (by copying in any changes made), I will display the current version of the script itself. I did this via the [[http://www.dokuwiki.org/plugin:code2|code2]] plugin.
What I want to do on the back end, for each script I am showcasing on the wiki, is to collect it automatically (perhaps daily?) via scp or csync2 to a common location, so that the wiki page will display the current version once the update has taken place.
At this point, no such automation is in place. That's another TODO.
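The eventual collection job could end up looking something like this (purely a sketch, since no such automation exists yet; the source and destination paths are hypothetical):
<code bash>
#!/bin/bash
#
# collectscripts.sh - copy showcased scripts to a spot the wiki can read
# (paths illustrative; run daily from cron, e.g.:
#   0 4 * * * /usr/local/sbin/collectscripts.sh)
#
scripts="/home/wedge/bin/commitchk.sh /home/wedge/bin/dokuwiki-status.sh"
dest="www.lair.lan:/var/www/scripts"

for script in ${scripts}; do
    scp -q ${script} ${dest}/
done
</code>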
====installed code2 plugin====
I needed some additional capabilities that the default **%%<code>%%** tag couldn't handle… as it turns out, there's a [[http://www.dokuwiki.org/plugin:code2|code2]] plugin which solves my dilemma (and more!).
Looks like it can even do some command-line displaying… I'll have to try it out to see if it adequately replaces the cli plugin in functionality.
====plan9ings====
Today the main plan9 sources repository appears to be down… so I set about trying to find an alternate mirror.
In the process, I found one, although it was neither as complete nor as up-to-date as the main mirror. It did have some things I had been looking for previously (namely **python**), so I may actually be able to get **python** and **hg** installed, which would allow me to try out the plan9 port of go.
====halfadder physdev bridging====
Turns out a reboot was needed on halfadder; I noticed this when I tried launching a plan9 cpu server (which seemed to work, mostly, minus some networking issue).
Rebooted, and the physdev log messages went away.
In the process I relocated /tmp to /export/staging/tmp, and have the intention of moving /var off of / as well.
I discovered that squeeze uses dependency-based, concurrent init script startup, which explains why it appears so darn fast when booting.
====xen4 networking====
If we end up having sporadic networking problems with xen4 domUs, here is some information to be aware of:
* http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1666
* http://www.gossamer-threads.com/lists/xen/users/183736
* http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg826818.html
=====November 19th, 2010=====
====DRBD====
Now that halfadder is running squeeze and a recent kernel, there is a newer version of DRBD I can play with… what's more, as of 2.6.33 (looks like it was backported into Debian's 2.6.32), DRBD is part of the mainline kernel tree! Sometimes we do get nice things!
Anyway… squeeze comes with DRBD 8.3.7… on NFS we're currently running 8.0.14… so certainly a significant upgrade.
Some links to revisit:
* http://fghaas.wordpress.com/2007/09/03/drbd-806-brings-full-live-migration-for-xen-on-drbd/
* http://www.drbd.org/download/mainline/
* http://www.drbd.org/users-guide/ch-configure.html
* http://www.peakscale.com/archives/gridvm/drbd-lvm-gnbd-and-xen-for-free-and-reliable-san/
====halfadder xen tweak====
So far so good… halfadder appears to be buzzing along working quite nicely.
In the output of **dmesg** I spied what appeared to be a harmless but overly verbose message:
<code>
[  719.092411] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
[  719.092416] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
</code>
After some googling, I discovered (mostly as expected) that this was due to evolving functionality in iptables (somewhat also tied to kernel version… new iptables / new kernel), and that an option was created to handle this situation: **--physdev-is-bridged**
The Xen lists had been talking about it, and even had a fix, which I applied to **/etc/xen/scripts/vif-common.sh**:
<code bash>
#       iptables "$c" FORWARD -m physdev --physdev-in "$vif" "$@" -j ACCEPT \
        iptables "$c" FORWARD -m physdev --physdev-is-bridged --physdev-in "$vif" "$@" -j ACCEPT \
            2>/dev/null &&
        iptables "$c" FORWARD -m state --state RELATED,ESTABLISHED -m physdev \
            --physdev-is-bridged --physdev-out "$vif" -j ACCEPT 2>/dev/null
#           --physdev-out "$vif" -j ACCEPT 2>/dev/null

    if [ "$command" == "online" -a $? -ne 0 ]
    then
        log err "iptables setup failed. This may affect guest networking."
    fi
}
</code>
I basically commented out the original line, and added in a mostly duplicate line, save for the addition of the **--physdev-is-bridged** option, so that should take care of the problem.
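To sanity-check that the modified rules actually land, something like this should show the physdev matches on the running system (a quick verification, not part of the original fix):
<code bash>
# list FORWARD chain rules; the ACCEPT entries for each vif should
# now carry the --physdev-is-bridged match
iptables -L FORWARD -v -n | grep physdev
</code>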
References for this:
* http://xen.1045712.n5.nabble.com/PATCH-vif-common-sh-prevent-physdev-match-using-physdev-out-in-the-OUTPUT-FORWARD-and-POSTROUTING-che-td3255945.html
* http://www.gossamer-threads.com/lists/xen/devel/189692
* http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=571634#10
This would apparently only have been a problem if any of the domUs were in the business of forwarding packets. None of the ones currently running on halfadder do, so we likely would not have experienced the problem until some later point, at which time it likely would have caused us all sorts of unintended grief.
=====November 17th, 2010=====
====halfadder rebuilt====
I was able to commence a project I've been intending to embark upon for some time… sort of a continuation of my ongoing effort to get the most out of virtualization and distributed filesystems. Last spring (or whenever it was), when I moved all the VMs from sokraits and halfadder onto NFS, I realized shortly thereafter that they stressed NFS more than I deemed acceptable (any serious disk activity will drive up load on NFS… and it isn't hard to get load over 4.0).
So, after some thinking I thought up a solution, which is more or less as follows:
* put a small SSD drive in for the boot drive
* have 2 magnetic disks acting in a RAID 1 (mirror) config
* using DRBD+OCFS2, have that same data volume be available on both sokraits and halfadder
This would move the VM-related data off of NFS, relieving its CPU stress, and give us another reliable data store for other purposes (such as backing up unique data that otherwise exists only on NFS, like user home directories).
Then it appears the Debian people have finally ported Xen to squeeze, so I was able to slap on a squeeze install and run a full 64-bit kernel and userspace (just like on Lab46). With a solid state drive and all this, it feels rather snappy. Even though the VMs are still running off NFS, halfadder is back up and running half the LAIR VMs.
It boots FAST! VMs seem to boot fast and all appears happy.
I need to get sokraits switched over in order to commence playing with DRBD and OCFS2.
====lab46 at jobs====
Either I've been in an alternate universe, or the **at**(**1**) command on Lab46 has changed.
My commitchk.sh script job didn't appear to run on Tuesday, and trying to launch another **at** job today resulted in an error where I swear I did not get one before.
At any rate, after reading the manual, I found perhaps a more specific way to do it, which is a bit more scriptable:
<code>
lab46:~$ at -t 201011181012.00
warning: commands will be executed using /bin/sh
at> /home/wedge/bin/commitchk.sh 20101116 20101118 2 1
at> <EOT>
job 87 at Thu Nov 18 10:12:00 2010
lab46:~$
</code>
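Since **-t** takes a full timestamp, this is also easy to drive entirely non-interactively (a sketch of the same job as above, minus the at> prompt):
<code bash>
# queue the commitchk run without an interactive at session
echo "/home/wedge/bin/commitchk.sh 20101116 20101118 2 1" | at -t 201011181012.00
</code>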
====LAIR loses climate control====
At some point after 4AM this morning, we lost climate control in the LAIR once again. It is presently a balmy 81 degrees.
=====November 16th, 2010=====
====DSLAB CoRAID outage====
Around 8pm Monday night, power was lost to the CoRAID, which data.dslab.lan uses as its data storage backend. It wasn't discovered until around 2pm today, and not fully resolved until closer to 4pm.
End result: power was restored, data was rebooted, connectivity and functionality were restored, the cluster was rebooted, and everything is back on-line.
Specific education as to what the CoRAID is and the importance of NOT TOUCHING IT was relayed to the DSLAB folk.
====submit.php updated====
After my SPAM-hardening updates to submit.php the other day, a completely unrelated condition arose (ie a lab actually going above 9 in number) that my existing logic wasn't prepared to deal with.
I ended up making the following modification:
<code php>
// Format assignment name
$tmp0 = ucfirst($assignmentshort);
$tmp1 = strrev($tmp0);
$tmp1{0} = " ";
$tmp0 = strrev($tmp1);
$tmp1 = ereg_replace("^[a-z]*", "", $tmp0);
if ($tmp1 == "Cs ")
    $tmp1 = "CS";
//$tmp2 = preg_replace("^[a-z]*", "", $assignmentshort);
$tmp2 = strrev($assignmentshort);
$tmp3 = strtoupper($tmp2{0});
$tmp2 = $tmp3;
$assignment = "$tmp1 0x$tmp2";
</code>
Basically, I took out the (now commented) second preg_replace() and did a string reversal, grabbing the first element of the (reversed) string, and blammo! Problem solved.
If I ever go 2 digit assignments, we'll have a problem once again.
=====November 14th, 2010=====
====templates and most C++ compilers====
I had an urge to explore templates today, to potentially convert the Stack and List classes in the Backgammon project to be usable as templates (use with LNode, use with TNode, it don't mattah').
I went and did the conversion, which was actually pretty straightforward:
===stack.h===
To convert a class to use templates, changes need to take place in the declaration and definitions. Here we see the converted **stack.h**:
<code cpp>
#ifndef __STACK_H
#define __STACK_H

#include "list.h"

//template class Stack<LNode>;
//template class Stack<TNode>;

#pragma interface "stack.h"

template <class NodeType>
class Stack
{
    public:
        Stack();                    // Default constructor
        Stack(int);                 // Accepts an integer value to dictate the maximum size of the stack
        ~Stack();                   // Destructor
        bool push(const NodeType&); // Adds a node onto stack
        bool push(int value);       // Adds an integer value onto stack
        NodeType* pop();            // Removes the top node while also making the stack 1 element smaller
        NodeType* peek();           // Returns the value of the top node
        int  getLength();           // Returns the value of the stack's current length
        bool setLength(int);        // Sets the length of the current stack

    private:
        List<NodeType> *info;
        NodeType       *top;
        int             length;
};

#endif
</code>
The #pragma line may or may not be entirely necessary… it came in when I was attempting to fix the problems that sprouted up.
===stackops.cc===
The actual class definitions also need some attention:
#include "stack.h"
using namespace std;
template
bool Stack::push(const NodeType& tmp)
{
bool status = false;
if (this -> length == 0 || this -> info -> getQty() < this -> length)
{
this -> info -> append(tmp);
this -> top = this -> info -> end;
}
if (this -> top == tmp)
{
status = true;
}
return (status);
}
template
bool Stack::push(int value)
{
bool status = false;
if (this -> length == 0 || this -> info -> getQty() < this -> length)
{
this -> info -> append(value);
this -> top = this -> info -> end;
}
if (this -> top -> getValue() == value)
{
status = true;
}
return (status);
}
template
NodeType* Stack::pop()
{
NodeType* tmp = this -> top;
if (this -> top != NULL)
{
if (this -> top -> getPrev() == NULL)
{
//tmp = this -> top;
this -> top = NULL;
}
else
{
//tmp = this -> top;
this -> top = this -> top -> getPrev();
this -> top -> setNext(NULL);
tmp -> setNext(NULL);
tmp -> setPrev(NULL);
}
}
this -> info -> setQty((this -> info -> getQty()) - 1);
return (tmp);
}
template
NodeType* Stack::peek()
{
return (this -> top);
}
Sort of neat in concept… we just channel through the particular type we want at actual instantiation (in the main program); here in the class definitions, it is just some available conduit (provided the fundamental NodeTypes (LNode and TNode) can interoperate in the same code base).
===stacktest.cc===
Speaking of the instantiation… a snippet from testing/stacktest.cc:
<code cpp>
#pragma implementation "stack.h"

#include "stack.h"
#include <cstdio>
#include <cstdlib>
#include <iostream>

using namespace std;

// main() function
int main()
{
    bool status = false;
    Stack<LNode> *mystack = NULL;
    Stack<LNode> *mylimitedstack = NULL;
    LNode *tmp = NULL;
    int input = 0;

    printf("[stacktest] stacktest.cc (Stack Class) test application.\n");
    printf("[stacktest] Instantiating object (parameterless constructor) .... ");

    // Create an instance of our Stack object
    mystack = new Stack<LNode>();
</code>
===Pulling it all together===
At the end of all this, the Stack class compiles (library builds), stacktest.cc even compiles, but we have a problem during linking:
<code>
lab46:~/src/backgammon/testing$ make stacktest
***
*** Building stacktest
***
g++ -Wall -static -fexternal-templates -I../include/ -o stacktest stacktest.o -L../lib/ -lstack -llist -llnode -lnode
stacktest.o: In function `main':
stacktest.cc:(.text+0x5d): undefined reference to `Stack<LNode>::Stack()'
stacktest.cc:(.text+0xe9): undefined reference to `Stack<LNode>::push(int)'
stacktest.cc:(.text+0x141): undefined reference to `Stack<LNode>::Stack(int)'
stacktest.cc:(.text+0x1cd): undefined reference to `Stack<LNode>::push(int)'
stacktest.cc:(.text+0x211): undefined reference to `Stack<LNode>::pop()'
stacktest.cc:(.text+0x245): undefined reference to `Stack<LNode>::peek()'
stacktest.cc:(.text+0x267): undefined reference to `Stack<LNode>::pop()'
stacktest.cc:(.text+0x29b): undefined reference to `Stack<LNode>::peek()'
collect2: ld returned 1 exit status
make: *** [stacktest] Error 1
lab46:~/src/backgammon/testing$
</code>
[[http://stackoverflow.com/questions/614233/undefined-reference-to-function-template-when-used-with-string-gcc|According]] [[http://www.cs.duke.edu/~ola/courses/cps108/templates.html|to]] [[http://www.cs.umbc.edu/courses/undergraduate/341/spring01/rabi/gcc_templates_notes.shtml|various]] [[http://gcc.gnu.org/ml/gcc-help/2004-10/msg00208.html|sources]], [[http://www.cs.utah.edu/dept/old/texinfo/gcc/gppFAQ.html|template]] [[http://www.velocityreviews.com/forums/t285808-why-doesnt-this-template-work.html|definitions]] must also be included (ie one can't rely on just the header file declarations). The standard behavior for this appears to be including the C++ file as well, which I think is rather kludgy… but it **does** work.
A nice concise explanation of the problem, as specified [[http://www.cs.utah.edu/dept/old/texinfo/gcc/gppFAQ.html|here]]:
> g++ does not implement a separate pass to instantiate template
> functions and classes at this point; for this reason, it will
> not work, for the most part, to declare your template functions
> in one file and define them in another. The compiler will need
> to see the entire definition of the function, and will generate
> a static copy of the function in each file in which it is used.
There is some interesting stuff talked about at:
* http://www.network-theory.co.uk/docs/gccintro/gccintro_60.html
But ultimately it does not appear to solve this problem.
Apparently the C++ deities recognized this problem, and the specification provides for an **export** keyword that can be used to separate template declaration from definition. Unfortunately, it would seem that barely any C++ compilers support it (including g++). There are many mentions of the processing overhead this would incur, memory utilization, complexity, and even potential code incompatibilities.
Some more information on that can be found here:
* http://www.network-theory.co.uk/docs/gccintro/gccintro_61.html
BTW, that's a really useful site.
Another useful page:
* http://www.network-theory.co.uk/docs/gccintro/gccintro_59.html
I actually tried to do that, but it still did not work. I'm sure I could get it working, but it horribly breaks the distributed nature of the code, so I think I may avoid it altogether (maybe develop it as an independent example to demonstrate templates).
===More information===
* http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html
* http://www.cs.colostate.edu/helpdocs/c++.html (an interesting approach that is somewhat more palatable)
=====November 13th, 2010=====
====gom/print====
After my zone rearrangements the other day, I realized that our print server, print.lair.lan, was not in the zone, so its IP was going to change (since I shifted the dynamic range up by at least 100).
Once its lease expired, sure enough, it was no longer reachable.
So I set about locating it so as to properly set it up in DHCP and set a more permanent IP address.
As it turns out, on juicebox, if I cat **/var/db/dhcpd.leases**, I see a list of all the IPs semi-recently allocated from the dynamic range. Luckily, there weren't that many, and a ping of all the allocated IPs revealed two machines... upon logging into both, I discovered where **print** had run off to, and discovered that **gom** was also in need of a more permanent setting.
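The sweep amounted to something like this (a reconstructed sketch; the actual hunt was done by hand):
<code bash>
# pull addresses from the lease database and see which ones answer
grep '^lease' /var/db/dhcpd.leases | awk '{ print $2 }' | sort -u | \
while read ip; do
    ping -c 1 ${ip} > /dev/null 2>&1 && echo "${ip} is alive"
done
</code>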
So I placed them both in DHCP and DNS, restarted the services, and restarted both of those machines... on restart, they both appeared at their designated locations.
For now, I set **print.lair.lan** at 10.80.1.18, and I set **gom.lair.lan** at 10.80.1.114 (squirrel's happy fun lucky prototype range).
I adjusted the **pf.conf** rule so connections to **gom** were routed to the proper machine.
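That rule is along these lines (shape only; the interface macro, protocol, and ports here are illustrative, not the actual rule):
<code>
# pf.conf: redirect inbound gom connections to its new home
rdr on $ext_if proto tcp from any to any port 2222 -> 10.80.1.114 port 22
</code>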
All should be good.
====ISO image extraction fun====
When downloading the current **9atom.iso.bz2** image, there was a transfer problem which resulted in the full file being appended to the incomplete file.
So instead of having an 87MB file, we had a 136MB file.
The output from the **wget** download indicated the correct file size was: **90391530**
The resultant appended file was: **142446319**
Subtracting the correct size from the incorrect size:
**142446319 - 90391530** resulted in **52054789**
So, we essentially need to skip the first **52054789** bytes of the file in order to get to the **90391530** bytes we are interested in. **dd**(**1**) to the rescue!
<code>
lab46:~$ dd if=9atombad.iso.bz2 of=9atomgood.iso.bz2 bs=1 skip=52054789
90391530+0 records in
90391530+0 records out
90391530 bytes (90 MB) copied, 178.848 s, 505 kB/s
lab46:~$ file 9atomgood.iso.bz2
9atomgood.iso.bz2: bzip2 compressed data, block size = 900k
lab46:~$
</code>
An interesting tidbit I learned:
* **seek** option to dd skips obs-sized blocks at start of **//output//**
* **skip** option to dd skips ibs-sized blocks at start of **//input//**
That realization suddenly allows me to understand why some of my previous attempts at this appeared to fail miserably… I was trying to contort the output file while still starting at offset 0 in the input file.
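As an aside, since all we want is the trailing **90391530** bytes, **tail**(**1**) can pull off the same rescue without dd's byte-at-a-time plodding (an equivalent approach, not what I ran at the time):
<code bash>
# keep only the last 90391530 bytes (the complete second copy)
tail -c 90391530 9atombad.iso.bz2 > 9atomgood.iso.bz2
</code>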
====plan9port on lab46====
I installed Plan 9 from User Space on Lab46 in /usr/local/plan9.
Users that wish to make use of it can add the following to their login files:
<code bash>
export PLAN9=/usr/local/plan9
export PATH=${PATH}:${PLAN9}/bin
</code>
====netsync plan9 updates====
With our recent increase in Plan9 activity, I thought to add support to netsync so we periodically pick up updates.
At the moment, there are 3 files of interest:
* plan9.iso.bz2
* 9atom.iso.bz2
* plan9port.tgz
The following section was added to /export/lib/netsync.sh (below the CEPH section):
<code bash>
##############################################################################
# Plan9 goodies
##############################################################################
repopath="/export/apt-mirror/repository/pub/Plan9"

##
## Plan9 from Bell Labs
##
repoaddr="http://plan9.bell-labs.com/plan9/download/plan9.iso.bz2"
mkdir -p ${repopath}
cd ${repopath}
/bin/mv -f plan9.iso.bz2 plan9.iso.bz2.old 2>/dev/null
echo "Processing Plan9 build ..." | tee -a ${LOG} >> ${DETLOG}
wget -q ${repoaddr} | tee -a ${LOG} >> ${DETLOG}

##
## Plan9 from User Space
##
repoaddr="http://swtch.com/plan9port/plan9port.tgz"
addr="http://swtch.com/cgi-bin/info.cgi?file=/plan9port/plan9port.tgz"
rver=`wget -q ${addr} -O - | grep -A 1 '^\$ md5sum p.*tgz' | tail -1 | sed 's/  */:/g'`
lver=`md5sum plan9port.tgz | sed 's/  */:/g'`
if [ ! "${rver}" = "${lver}" ]; then
    /bin/mv -f plan9port.tgz plan9port.tgz.old
    echo "Processing Plan9 from User Space ..." | tee -a ${LOG} >> ${DETLOG}
    wget -q ${repoaddr} -O plan9port.tgz
fi

##
## 9atom - Erik Quanstro's augmented Plan9 distribution
##
repoaddr="ftp://ftp.quanstro.net/other/9atom.iso.bz2"
/bin/mv -f 9atom.iso.bz2 9atom.iso.bz2.old
echo "Processing 9atom ..." | tee -a ${LOG} >> ${DETLOG}
wget -q ${repoaddr} -O 9atom.iso.bz2
</code>
Future updates to this logic will likely include:
* Plan9 Xen kernels
I am interested in actually mirroring the entire Plan9 repository, but I'd want to do that in a more Plan9-esque way (ie in a Venti archive or something).
====UNIX submission form spam enhancement====
I woke up this morning to find 2 new messages queued up in www's nullmailer queue. For far too long, I had been complacent and manually tended to the deletion of these messages.
Recently I've noticed a marginal increase in form spamming (actually all stemming from the Fall 2009 regex lab, for whatever reason).
Having been meaning to do something about this for some time, I finally did... submit.php has been enhanced thusly:
<code php>
echo "<br />";
echo "Verification: ";
$validuser = system("id $userid 2> /dev/null | wc -l");
echo "<br />";
if ($validuser == "0")
{
    echo "<br />";
    echo "<div class=\"error\">";
    echo "Error! Invalid user. Typo or evil form spammer?<br />";
    echo "If you are real, hit BACK and make the correction.<br />";
    echo "</div>";
    die;
}
</code>
Basically, in the section where it verifies that a name and username have been entered into the appropriate fields, I now **also** check whether that provided username is valid.
If it is valid (and it does appear that the **id** command is somewhat case-insensitive, so it will still allow improperly capitalized usernames through), the script proceeds as normal… if invalid, it dies and displays the error in shiny CSS divs.
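The check keys off the line count of **id**'s standard output; this is the behavior it relies on (usernames illustrative):
<code bash>
lab46:~$ id wedge 2> /dev/null | wc -l
1
lab46:~$ id notauser 2> /dev/null | wc -l
0
lab46:~$
</code>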
Future enhancements to **submit.php** should include:
* updating other error messages to CSS divs
* updating success screen to use CSS divs
* force-to-lowercase the userid, so it won't trip up the other grade not-z scripts
* more graceful failure when the question source files are not found (right now it times out).
=====November 12th, 2010=====
====LAIR DNS zone reorganization, part 1====
I had an urge to tackle some lair.lan and offbyone.lan DNS zone rearrangements this morning, consolidating little-used ranges, better utilizing our available IP space.
Some notable improvements:
* I moved the DHCP dynamic range for lair.lan to 10.80.1.180 - 10.80.1.199 (doubling its size)
* I moved the DHCP dynamic range for offbyone.lan to 10.80.2.170 - 10.80.2.199 (adding another 10)
* I removed some long obsolete entries in both domains
* moved the LAIRwall entries in both domains to 10.80.1.201 - 10.80.1.206, following the DHCP range
* consolidated some project VM ranges
At the moment, on offbyone.lan, 10.80.2.60 - 10.80.2.159 is open.
I may do some further optimizations in lair.lan at a later point.
=====November 10th, 2010=====
====Xen on squeeze====
I stumbled across some new developments indicating modern Xen dom0 activity for the Debian squeeze distribution. Looks like it took place just after I checked previously (I think back in the spring), so it would appear that squeeze will get Xen after all (Xen 4.0.1 at that!).
The links:
* http://wiki.debian.org/Xen#A.22networkbridgingforxen4.0withmultipleinterfaces.3A.22
* http://etbe.coker.com.au/2010/03/21/xen-debian-squeeze/
* http://www.agileweboperations.com/xen-debian-lenny-dom0-with-ubuntu-lucid-guest/
* http://womble.decadent.org.uk/blog/debian-linux-packages-the-big-bang-release
====kerrighed====
Found an interesting link that I'd like to remember:
* http://www.debianadmin.com/how-to-set-up-a-high-performance-cluster-hpc-using-debian-lenny-and-kerrighed-updated.html
=====November 8th, 2010=====
====dokuwiki wordblock.conf====
As I was updating Lab #A for my UNIX class, I got a "SPAM attempt blocked" message... I had apparently stumbled upon dokuwiki's word block functionality with some content in my Regular Expressions lab.
Upon a closer inspection, it turned out the problem was a link, and the link in question was:
* http://www.cs.colorado.edu/~schenkc/UNIX_Regular_Expressions.pdf
Why? Apparently someone had the following rule in conf/wordblock.conf:
<code>
https?:\/\/([^\/]*\.)?colorado\.edu
</code>
No idea. Couldn't find any controversies surrounding colorado.edu, so I just removed the line from the file and went on with my business. Works.
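For the curious, confirming that this rule is what matched is straightforward (an approximation with grep -E; dokuwiki actually applies these patterns as PCRE):
<code bash>
lab46:~$ echo "http://www.cs.colorado.edu/~schenkc/UNIX_Regular_Expressions.pdf" | grep -E 'https?://([^/]*\.)?colorado\.edu'
http://www.cs.colorado.edu/~schenkc/UNIX_Regular_Expressions.pdf
lab46:~$
</code>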
=====November 5th, 2010=====
====lairdump on juicebox====
Every so often (as it turns out, during the first 7 days of each month or so), I'd get an error e-mail from juicebox claiming it could not find /usr/local/sbin/lairdump.sh.
I finally investigated, and sure enough, I had never installed lairdump.sh on juicebox (this is old jb, remember)... but I had migrated some of the updated cron jobs, etc. from a jb2 backup when old jb came out of retirement.
I copied over capri's lairdump.sh, modified it accordingly, and installed ssh keys in the appropriate place so juicebox can (tomorrow) participate in the LAIR-wide backup festivities.
====rxvt options on wildebeest herd====
A couple weeks ago I added an option to the fluxbox menu on antelope for the lab46 terminal: "-sl 2048", to allow for a longer local scrollback buffer, so some obscenely long output we were debugging could be referenced more effectively.
This morning I set about adding this scrollback buffer option to all terminals in all the wildebeest herd fluxbox-menu and fluxbox-menu-good files in /etc/X11/fluxbox.
Should be a rather transparent change to all, except for the benefit of being able to access information farther in the past than has been typical.
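Mechanically, the change is simple enough to script across the herd; something in this vein (a sketch only; I made the actual edits by hand, and this assumes each menu entry invokes rxvt with trailing arguments):
<code bash>
# tack a 2048-line scrollback buffer onto every rxvt invocation
for f in /etc/X11/fluxbox/fluxbox-menu /etc/X11/fluxbox/fluxbox-menu-good; do
    sed -i 's/rxvt /rxvt -sl 2048 /g' ${f}
done
</code>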
====pruning wildebeest herd packages====
While installing CUPS for client-side printing access, I also pruned out some unnecessary packages that came in during the CUPS installation:
<code bash>
antelope:~# aptitude purge samba-common smbclient avahi-daemon avahi-utils libnss-mdns
</code>
Aptitude will claim that certain packages (including cups) //recommend// some of these, but //recommended// is not //required//, and since we're not using samba for our printer stuff, they can be removed.
====creating a CUPS client====
Setting up a machine as a CUPS client apparently is rather straightforward:
<code>
client:~# aptitude install cups cups-client
...
client:~#
</code>
Create /etc/cups/client.conf and have it contain the following:
<code>
##
## /etc/cups/client.conf - CUPS client configuration file
##

##
## Server configuration
##
ServerName print.lair.lan

##
## Security configuration
##
Encryption IfRequested
</code>
And restart the CUPS system:
<code>
client:~# /etc/init.d/cups restart
Restarting Common Unix Printing System: cupsd.
</code>
Next, configuring our system to use a printer...
<code>
client:~# lpstat -d -p
no system default destination
printer lair_cp1518ni is idle. enabled since Fri 05 Nov 2010 10:32:49 AM EDT
client:~#
</code>
This will verify that CUPS is working, and can see the printers exported by the print server.
Now, we will make a printer (currently, the only printer) the system default:
<code>
client:~# lpoptions -d lair_cp1518ni
media=Letter sides=one-sided finishings=3 copies=1 job-hold-until=no-hold job-priority=50 number-up=1 auth-info-required=none job-sheets=none,none printer-info='HP Color LaserJet CP1518ni' printer-is-accepting-jobs=1 printer-is-shared=1 printer-location=Lair printer-make-and-model='HP Color LaserJet Series PCL 6 CUPS' printer-state=3 printer-state-change-time=1288967569 printer-state-reasons=none printer-type=12380
</code>
To verify the success of this operation, run **lpstat** again and notice the output:
<code>
client:~# lpstat -d -p
system default destination: lair_cp1518ni
printer lair_cp1518ni is idle. enabled since Fri 05 Nov 2010 10:32:49 AM EDT
</code>
To print, we can use the **lp** command, along with a filename:
<code>
client:~# lp /etc/motd
request id is lair_cp1518ni-6 (1 file(s))
client:~#
</code>
Some useful links referenced:
* http://www.cups.org/documentation.php/doc-1.4/options.html (very useful)
* http://www.debianadmin.com/setup-cups-common-unix-printing-system-server-and-client-in-debian.html#more-316
* http://wiki.debian.org/SystemPrinting
=====November 1st, 2010=====
====grep problem FIXED====
As a perfect example of how looking at a problem from the same perspective over and over yields nothing new, I have finally broken free and discovered a solution!
Previously I had indicated a discovered problem with **grep** with some regular expressions, namely ranges.
For example:
<code>
lab46:~$ echo j | grep '[A-Z]'
j
lab46:~$
</code>
Indicates a problem. 'j' should not be output, as it does not match the RegEx for a member of the uppercase alphabet. Yet, sure enough, it was printing.
Attempts at searching for this problem (typically with search terms related to debian and squeeze) found no immediate solutions, or even acknowledgements of the problem, save for a potential reference to unicode character sets and the possibility that the problem was **some** sort of difference between the recognition of ASCII vs. unicode (even though they're supposed to be aligned for the characters that matter).
What ended up being the problem:
<code>
lab46:~$ locale
LANG=en_US
LC_CTYPE="en_US"
LC_NUMERIC="en_US"
LC_TIME="en_US"
LC_COLLATE="en_US"
LC_MONETARY="en_US"
LC_MESSAGES="en_US"
LC_PAPER="en_US"
LC_NAME="en_US"
LC_ADDRESS="en_US"
LC_TELEPHONE="en_US"
LC_MEASUREMENT="en_US"
LC_IDENTIFICATION="en_US"
LC_ALL=
lab46:~$
</code>
The problem? The fact that **LC_COLLATE** was equal to **"en_US"**...
As it turns out, different locales support different collating orders, and the problem was that, in the **en_US** locale, collating a range like "[A-D]" would actually check for:
<code>
AaBbCcDd
</code>
instead of the traditional:
<code>
ABCD
</code>
Therein lies the problem! We need to set **LC_COLLATE** to the appropriate value in order to restore traditional collation.
The correct locale? **C**
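A quick way to see the difference is a per-command override (this relies on grep consulting LC_COLLATE for ranges, which is exactly what bit us here):
<code bash>
lab46:~$ echo j | LC_COLLATE=en_US grep '[A-Z]'
j
lab46:~$ echo j | LC_COLLATE=C grep '[A-Z]'
lab46:~$
</code>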
So, in **/etc/default/locale**, I appended the line:
<code>
LC_COLLATE=C
</code>
Resulting in a file that looks like:
<code>
###LAIRCONF###
LANG=en_US
LC_COLLATE=C
</code>
Because **/etc/default/locale** is managed by LAIRCONF, I updated the archive used in the **lair-std** package, so any future updates will maintain this setting (and potentially fix it on systems where it is not yet knowingly causing problems).
Useful links referenced in solving this problem:
* https://bbs.archlinux.org/viewtopic.php?pid=303544
====dokuwiki-status.sh incorrectly updating====
A new month, a new status page rollover. I noticed, upon checking my status page this morning, that the pages for October and September (the first two months to be managed by my dokuwiki-status.sh script) were not including a link to the "Next" month... Previous and Current, yes, but nothing for Next.
I went and checked the script... and it appeared as though Next was being included... and upon checking the actual wiki source of September/October, Next was indeed there.
But the problem was that my script was not properly terminating the table cell, so the Next link just wasn't being rendered.
A quick fix to the script, and two quick manual fixes for September and October, and all is as it should be.
Original (incorrect) code:
<code bash>
echo " ^ [[:status|Current Month]] | ^ [[status/status_${curyear}${cmonoffset}${curmonth}|Next Month]]" >> status.txt
</code>
Note the lack of the terminating **|** after the "Next Month" link.
Updated (correct) code:
<code bash>
echo " ^ [[:status|Current Month]] | ^ [[status/status_${curyear}${cmonoffset}${curmonth}|Next Month]] |" >> status.txt
</code>
And all should be good.
^ [[status/status_201010|Previous Month]] | ^ [[:status|Current Month]] | ^ [[status/status_201012|Next Month]] |