
<html><center></html>STATUS updates<html></center></html>


TODO

  • the formular plugin is giving me errors; need to figure this out (email assignment form)
  • set up symlink and cron job for userhomedirbackup.sh on nfs2; update wiki page for nfs
  • update grade not-z scripts to handle/be aware of winter terms
  • update system page for (new)www
  • update system page for db
  • migrate nfs1/nfs2 system page to current wiki
  • flake* multiseat page

URLs

Other Days

March 27th, 2011

wildebeest herd updates

I ran an aptitude update; aptitude upgrade; aptitude clean on all members of the wildebeest herd.

Additionally, I changed the default browser to chromium… let's see how that does performance/memory-wise.

Also gave them all a reboot for good measure.
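For reference, that pass can be scripted across the herd; a sketch with hypothetical hostnames (requires root ssh access to each member):

```
# run the update/upgrade/clean pass on each herd member (hostnames hypothetical)
for host in wildebeest01 wildebeest02 wildebeest03; do
    ssh root@"$host" 'aptitude update && aptitude -y upgrade && aptitude clean'
done
```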

iOS 4.3.1 / iTunes

From my expansive exploration of Google and the resulting message board posts yesterday, prompted by my apparent inability to get iOS installed on my iPad and iPod… I arrived at the following theory:

  • Filesystem problem on my MacBook Pro… I should run Disk Utility to repair any odd-ball permissions and/or filesystem problems.
  • Try it on a separate install of iTunes (different user even)

So first thing, I fired up my Windows XP VM and installed iTunes. I copied over the iOS files and attempted the update there (if there's one good thing that has come from all this, it's that I now see the iOS sync process as much less fragile… these devices can take a bit of a beating and still come back to life). SUCCESS! Got both my iPad and iPod back in working order, both running iOS 4.3.1…

And got to experience a restore from backup (good to know how that works)… and further customized what I'm syncing to various devices.

March 26th, 2011

iOS 4.3.1 update

So Apple has released iOS 4.3.1. I decided to indulge myself in the update because:

  • it has some security / bug fixes
  • there are claims of improved performance
  • the Jailbreak community has been holding back their payloads until 4.3.1 drops

I plugged into iTunes and basically let it do its thing, as has been the case dozens of times.

However, this time it didn't work: it stalled on the progress bar close to the end.

Multiple attempts revealed the same result: it would appear to “work”, but wouldn't complete.

Again and again I'd try: redownloading firmware, even attempting to regress to the previous iOS release (the 4.3 release that was working). Nothing. Different cables, different devices (I tried both my iPad 2 and my 3rd generation iPod Touch… and ended up essentially bricking them both).

What an absolute pain… and there went my Saturday: long hours of failed attempts to get iTunes to update iOS devices.

March 23rd, 2011

locating random image data in a directory tree

A long-standing task I've been meaning to accomplish is that of parsing through a directory tree of files, locating those which contain image data, extracting that data, and naming it properly (pretty much slap on the appropriate extension).

I've been putting this off for a variety of reasons… the task of getting access to this data (off an almost-failed hard drive, which needed a second attempt at data extraction), the likely hundreds of failed attempts at accessing the information from the first extraction… and now, with access to the data finally in hand, the question of how to locate the specific files I'm after.

For some reason I thought to finally resolve the matter today. I came up with the following, for starters:

# Locate all regular files (exclude directories)
find . -type f > result.txt
 
# Prune out just the files that seem to contain image data;
# read line-by-line so filenames with spaces survive intact
while IFS= read -r line; do
    file "$line"
done < result.txt | grep -i image | grep -v '/Plug-ins' | grep -v '/Sample Documents'

This gives me a handful of files… I guess I would have expected a bit more (more an expectation on my part than anything), but after all this, I find a grand total of 13 files.

Upon looking closer, I do find many more files… but they are in a somewhat more exotic “image?” format, which, while I can extract it, may not end up being viewable without reproducing some of the functionality of the system the data originally came from.
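For the eventual naming step, something along these lines might work; a minimal sketch, assuming file(1) supports --mime-type (the helper name and the handled types are my own):

```shell
# append an extension to image files, based on detected MIME type
add_image_ext() {
    find "$1" -type f | while IFS= read -r f; do
        case "$f" in
            *.jpg|*.png|*.gif) continue ;;   # already named, skip
        esac
        case "$(file -b --mime-type "$f")" in
            image/jpeg) mv -n "$f" "$f.jpg" ;;
            image/png)  mv -n "$f" "$f.png" ;;
            image/gif)  mv -n "$f" "$f.gif" ;;
        esac
    done
}
```

The skip-if-already-named guard keeps a file from being renamed twice if the directory listing revisits it mid-loop.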

reverse ssh tunnels

http://toic.org/2009/01/18/reverse-ssh-port-forwarding/

Logic seems to be:

ssh -R [bind_address:]port:host:hostport machineyouareconnectingto

The optional “bind_address” will allow other machines on the network to connect to the forwarded port (which would seem to be the neat feature that makes this worth doing in the first place)… but this is disabled by default; you have to set “GatewayPorts clientspecified” in /etc/ssh/sshd_config and restart sshd on the machine you are connecting to (the one where the port gets bound).

Something like that.
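A concrete sketch of that setup (hostnames and ports here are hypothetical):

```
# On the machine you're connecting to, in /etc/ssh/sshd_config:
#   GatewayPorts clientspecified
# …then restart sshd. From the machine hosting the service:
ssh -R 0.0.0.0:8080:localhost:80 user@relayhost
# Now anything that can reach relayhost:8080 gets forwarded
# to port 80 on the machine that initiated the ssh connection.
```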

March 19th, 2011

lab46 running out of free memory

It turns out that, with the on-going use of lab46 by various classes, the memory allocated (512MB + 256MB swap) is being largely utilized. I had a student unable to log in this morning due to too much memory being in use (granted, they were hitting a ulimit limit, but upon further checks, there was much more swap used than free).

The following actions were taken:

  • lab46's VM config file was given a total memory of 1GB (so on next boot, we'll have twice the memory)
  • I looked through the process list and pruned unnecessary processes (duplicate screen sessions, duplicate irssi, no-longer-used irc bots).
  • I requested wezlbot be launched from irc.offbyone.lan

I also want to (at least) double lab46's swap space… this will have to wait until the next reboot, however.
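When the time comes, doubling the swap could look something like this; a sketch, assuming a 256MB swap file at /swapfile2 (path is hypothetical; requires root):

```
# create a 256MB swap file (bringing swap from 256MB to 512MB total)
dd if=/dev/zero of=/swapfile2 bs=1M count=256
chmod 600 /swapfile2
mkswap /swapfile2
swapon /swapfile2
# persist across reboots
echo '/swapfile2 none swap sw 0 0' >> /etc/fstab
```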

ahhcobras added to backup

squirrel discovered that there is a '-e' argument to dump which excludes files based on inodes.

I modified the lairdump.sh on ahhcobras to exclude the /xen directory (since it is on the same partition as the root):

    XEN="`stat /xen | grep Inode | sed 's/^.*Inode: \([0-9][0-9]*\).*$/\1/'`"
 
    # Record in log
    ssh dump@${BACKUP} "echo \"$MYNAME beginning backup at `date`\" >> log"
 
    # Perform any necessary maintenance
    ssh dump@${BACKUP} "mkdir -p ${MYNAME}; chmod -R u=rwX,g=rX,o= ${MYNAME}"
 
    # Perform dump
    ${DUMP} -0uan -e $XEN -f - / | gzip -9 | ssh dump@${BACKUP} "dd of=${MYNAME}/${DUMPNAME}"
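The inode extraction at the top of that snippet can be sanity-checked on its own. A minimal sketch, assuming GNU stat(1) (whose verbose output contains an “Inode:” field):

```shell
# pull the inode number out of stat(1)'s verbose output,
# the same way lairdump.sh derives $XEN
inode_of() {
    stat "$1" | grep Inode | sed 's/^.*Inode: \([0-9][0-9]*\).*$/\1/'
}
```

Comparing the result against `stat -c %i` is a quick way to confirm the sed expression behaves.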

Basically: a quick check for the /xen inode, and inclusion of that on the dump command-line. So we should now have regular cobras backups available.

etch returns to apt-mirror

Apparently, when I removed the apt-mirror config for etch (or, more likely, when mirror.rit.edu removed etch before I even did that), all the etch files went away.

This was leaving currently configured etch machines in the LAIR unable to get any packages (aside from the LAIR packages)… so I've re-added etch via archive.debian.org.

March 18th, 2011

cobras psu fan

It was discovered today that the fan in cobras' PSU was seized up (and likely had been for some time).

It was taken down for maintenance, and the fan restored to operation.

An update prior to this had removed the “mem=3G” kernel parameter, causing the onboard NIC to stop working properly. This was remembered, the parameter restored, and the machine brought back to fully operational status.

Aside from discovering the seized-up fan and the later NIC issue, the event went without a hiccup. What's more, it drove home that cobras, as a LAIR machine, has been serving us quite well for a while.

During the initial boot, there was a claim that /dev/sda1 (aka “/”) had gone 1172 days without being checked.

We suspect that it had over a year of uptime.

March 11th, 2011

DSLAB node00 /usr/global not mounted

The /usr/global NFS mount on node00 (head node of the DSLAB cluster) apparently was not mounted once again… all the other nodes were fine. A quick “mount -a” as root resolved the problem.
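A cron-able guard along these lines might catch a recurrence (a sketch; requires root, and assumes util-linux's mountpoint(1)):

```
# if /usr/global isn't mounted, mount everything listed in fstab
mountpoint -q /usr/global || mount -a
```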

diskreport.sh

I dusted off my diskreport.sh script I wrote some time ago, and added support for some key DSLAB machines (cluster specifically).

I also added a run of this script to my cron, so I should be receiving a daily e-mail with disk information.
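The crontab entry is of this shape (script path and run time are assumptions):

```
# daily disk report at 06:30
30 6 * * * /usr/local/bin/diskreport.sh
```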

weblogs.sh

Upon a request for web log information, I took a look at my weblogs.sh script, which has been in service since January 2010… it is still running, but it appears that some server-side scripting can generate errors that aren't captured by my elaborate sed line.

I fixed this by instead having all error reports go inside a dokuwiki <code> tag, so it reads almost like looking at the file directly (the top access.log is still in the friendlier format).
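The wrapping itself is trivial; a sketch of the idea (the function name is hypothetical):

```shell
# wrap a file's contents in a dokuwiki <code> block, so the report
# renders verbatim instead of being mangled by wiki markup
wrap_code() {
    echo "<code>"
    cat "$1"
    echo "</code>"
}
```

Usage would be along the lines of `wrap_code /var/log/apache2/error.log >> report`.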

March 9th, 2011

DNS empty zone on koolaid

Ian discovered that the source of koolaid's lrrd reporting troubles was due to DNS timeouts trying to resolve 10.10.10.x addresses. This was remedied by slapping in an empty zone for 10.10.10.x reverse lookups.

www error.log ate up disk space

I happened to catch www just as it ran out of disk space (it broke subversion, which is what clued me in to the problem)… apparently apache's error.log had grown unexpectedly to over 500MB, consuming all the remaining disk space.

I “fixed the glitch”, compressed error.log, and updated logrotate to roll logs over every 32MB… hopefully that will mitigate similar problems in the future.
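The logrotate stanza is along these lines; a sketch (the path and retention count are assumptions, the 32MB threshold is from above):

```
/var/log/apache2/error.log {
    size 32M
    rotate 8
    compress
    missingok
    notifempty
}
```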

March 2nd, 2011

svn propset

Users with HTML files in their repositories would prefer that, when viewed from a web browser, the HTML be rendered rather than splashed up as raw source.

As it turns out, subversion serves all data with a mime-type of text/plain; we'd need to change this to text/html:

svn propset svn:mime-type text/html props.html
svn commit -m "added property to HTML file"
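To avoid having to remember the propset for every new HTML file, subversion's auto-props feature can attach the mime-type automatically; a client-side config sketch (in ~/.subversion/config):

```
[miscellany]
enable-auto-props = yes

[auto-props]
*.html = svn:mime-type=text/html
```

Note this only applies to files added after the setting is enabled; existing files still need the manual propset.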


haas/status/status_201103.txt · Last modified: 2011/04/01 01:12 by 127.0.0.1