
September 3rd, 2014

apt-cacher-ng

I've been meaning to resume my apt-mirror activities for some time… but just haven't gotten around to it. Along the way, I found myself recollecting my old apt-proxy experiences, and how that setup generally worked “well enough” (except when it didn't). So I thought I would explore that space again.

Turns out apt-proxy is no longer in the package archive, but two newer alternatives are: apt-cacher and apt-cacher-ng.

apt-cacher-ng claims to be a from-scratch rewrite optimized for lower-bandwidth connections, so it seemed the ideal candidate. I installed it on both data1 and data2, and configured it to serve out of /export/cache/apt-cacher-ng.
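
For reference, the cache location is controlled by the CacheDir setting in apt-cacher-ng's config; the relevant excerpt would look something like this (path per the above):

# /etc/apt-cacher-ng/acng.conf (excerpt)
CacheDir: /export/cache/apt-cacher-ng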

I also moved the configuration directory out of /etc and into /export/lib, and made a symlink back to /etc on both machines, so no matter who the DRBD master is, they'll be working with the appropriate configuration.
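
The shuffle itself was roughly the following (done on the machine holding the DRBD-backed /export at the time; the other side just needs the symlink):

data1:~# mv /etc/apt-cacher-ng /export/lib/apt-cacher-ng
data1:~# ln -s /export/lib/apt-cacher-ng /etc/apt-cacher-ng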

As for client configurations, I put the following in a new file called /etc/apt/apt.conf.d/00apt-cacher-ng on all my jessie machines (this would likely work on older distros as well):

Acquire::http { Proxy "http://10.80.1.3:3142"; };

Seems to be working just fine. We'll see how it holds up in the long term.
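
As a quick sanity check beyond a successful apt-get update, apt-cacher-ng also serves a built-in report/stats page, which in this setup would live at http://10.80.1.3:3142/acng-report.html.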


September 9th, 2014

groups

A quick script implementation, meant to allow students to view data stored in their data directories, resulted in a rash of “permission denied” errors.

It turns out that “group membership” is not the same as “primary group membership”: to make things work, the correct group must be the user's primary group at the time of execution.

newgrp wasn't working for me. Eventually (with some googling), the scenario started to seem familiar; sure enough, a quick grep revealed I had dealt with this issue before, in gimmeh. The solution: sg.
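
For the record, sg runs a single command with the named group as the primary group; a tiny illustration (group name hypothetical):

sg students -c 'id -gn'

which prints students instead of the caller's usual primary group.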

I ended up hacking together a quick wrapper script to encapsulate the primary-group altering (and also to mitigate a huge amount of resultant log messages).

/usr/local/bin/status is as follows:

#!/bin/sh
# run status.logic with ${USER}'s own group as the primary group
sg ${USER} -c "/usr/local/bin/status.logic ${*}"

Finally works.

September 10th, 2014

data2 primary / resultant upgrades and config

In preparation for the drive replacement in data1 (due to arrive today, likely will get to replacing on Friday; hopefully it holds out), I have kicked primary fileserver control and duties over to data2.

The transition, although still entirely manual, seemed to go off without a hitch, and I was able to tend to some administrative tasks outstanding on data2 (like an nfs package upgrade).

I also found some massaging was in order for apt-cacher-ng, and I had to install the tftp-hpa and mini-httpd packages. All in all, a nice non-emergency opportunity to test things out to make sure they work.

rebooted halfadder (and VMs)

halfadder and its resident VMs (auth2, mail, db, and www) saw some reboots today. All looks to be in order.

September 12th, 2014

replacing sdc (failed drive) on data1 md0 array

On Sunday, mdadm e-mailed me a notification that one of the drives in the md0 array was failing.

I was able to get a replacement ordered the next day; now, finally, here on Friday, I have my hands on it and am ready to go about installing it.

I figured I should finally document the proper way to remove and add drives. /dev/sdc was the failed drive; md had already discovered the failure and marked the drive as faulty, but to do that manually, one would run:

data1:~# mdadm --manage /dev/md0 --fail /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
data1:~# 

Next up is to remove the faulty drive from the RAID array:

data1:~# mdadm --manage /dev/md0 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
data1:~# 

I then powered down the machine, physically removed the faulty drive, and replaced it with the new one. Aside from some difficulty physically identifying which drive was which, the replacement seems to have gone smoothly.

Booted back up, we then add the new drive (which enumerated as /dev/sdb on this boot) to the array as follows:

data1:~# mdadm --manage /dev/md0 --add /dev/sdb
mdadm: added /dev/sdb
data1:~# 

It then proceeds to rebuild (looking to take about 4 hours).
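
Rebuild progress can be watched via /proc/mdstat (output omitted here):

data1:~# watch cat /proc/mdstat

with mdadm --detail /dev/md0 available for a more verbose view.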

September 17th, 2014

clearing/disabling userlist on gnome login screen

A growing problem, the longer the pods have remained in service, has been the untenable length of the user list on the login screen. This semester, the most popular table (back by the door) has a ridiculously long list, requiring blind scrolling to reach the desired account and actually log in.

I've been meaning to do something about this for some time; finally, I had an urge to act on it.

Solution

Create /etc/dconf/db/gdm.d/00-login-screen, and put in the following contents:

[org/gnome/login-screen]
disable-user-list=true
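
One prerequisite worth noting: gdm only consults this database if a dconf profile references it; on these pods that was evidently already in place. If absent, /etc/dconf/profile/gdm needs to contain:

user-db:user
system-db:gdm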

Then, regenerate the dconf databases:

pod00:~# dconf update
pod00:~# 

No reboot required; it seemed to live-update to a login screen with no users listed. Huzzah!

Useful URLs:

The Red Hat one ultimately proved useful, and not necessarily because the pods are running Fedora Core 17.

zeotools back in action

After almost 2 years, I had the urge to resume development… which, for starters, meant a bunch of debugging.

Awesomely, after much wrangling, a functional and modern version of zeograph graces the universe once again!

September 19th, 2014

tune2fs

The magic line for new Xen VM disk images:

sokraits:/xen/domains/newvm# tune2fs -o journal_data_writeback ./disk.img
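
For context: that flag sets the filesystem's default data-journaling mode to writeback, journaling metadata but not file data, which cuts journal overhead; a reasonable trade for rebuildable VM disk images. It can be verified afterwards with:

sokraits:/xen/domains/newvm# tune2fs -l ./disk.img | grep 'Default mount options'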

September 24th, 2014

accidental fork bomb

A user inadvertently launched a fork bomb on lab46 (at approximately 10pm); during my lab46 rebuild over the summer, I apparently forgot to restore my locked-down /etc/security/limits.conf (whoops).
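
For the record, the kind of per-user process cap that file provides; a minimal sketch (values illustrative, not my actual settings):

# /etc/security/limits.conf (excerpt; illustrative values)
*        soft    nproc    256
*        hard    nproc    512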

Luckily, the LAIR night watch team was on the case, and had functionality restored within a few hours. By 1:30am, all was back to normal.

Later that day, the other side of the story was heard, much to everyone's relief.

September 28th, 2014

sokraits boot drive failing

I discovered that the boot drive on sokraits was approaching its demise; the failure seemed to come on all of a sudden.

I migrated all production VMs over to halfadder for the time being (working live migration is a wonderful thing), and am contemplating an NFS root for the VM servers (still copied to a RAMdisk on boot)… then I can remove yet one more drive from the equipment calculus.

Ubuntu 14.04.1 LTS VM

You know, despite all the grumbling, and even my initial gripes about how different Unity is from GNOME (how many years has it been?), on a whim I installed this version of Ubuntu in a VM so that I could have a reliable means of testing various SDL functionality for the Data Communications pong-fall2014 project.

And I must say: I no longer mind it. In fact, you might say its simplicity has grown on me.

I was still able to configure my desired features.

The build environment and SDL libraries installed easily, and everything is working, with the VM currently using only 4.2GB of space (and feeling QUITE comfortable).
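
Getting the toolchain going on stock 14.04 is likely just one package line (the exact SDL flavor depends on what pong-fall2014 targets; this list is a guess, not the project's documented requirements):

sudo apt-get install build-essential libsdl2-dev libsdl2-image-dev libsdl2-mixer-dev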

September 29th, 2014

neat SoX stuff

Found myself tweaking some WAV files; a page of SoX examples proved quite useful.
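
A couple of representative invocations of the sort of tweaking involved (file names hypothetical, not the exact commands from that page):

sox input.wav output.wav trim 0 30
sox input.wav -r 22050 -c 1 output.wav

The first keeps only the first 30 seconds; the second resamples to 22.05kHz mono.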

Speaking of bash / ShellShock

It looks like the fun isn't over yet, as new (currently undisclosed, thankfully) bugs have been discovered, some at the same level of severity as the original ones that had us scrambling last week.

To keep score: beyond the original CVE-2014-6271 and the incomplete-fix follow-up CVE-2014-7169, there are now additional issues in the queue.

There's also an unofficial patch floating around.
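
For reference, the canonical quick test for the original bug; a patched bash stays quiet, while an unpatched one prints “vulnerable”:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"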

DSLAB cluster updated

I've been forgetting it for days, but I finally got around to updating the DSLAB cluster.

Updates were applied, including the current bash with the ShellShock fixes. I'll need to remember to do another update in a few days, once the next round of bash updates trickles through.
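
Per node, the refresh itself is nothing exotic; assuming Debian-based nodes like everything else around here, it amounts to:

apt-get update && apt-get install --only-upgrade bash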

USB gamepad access in VirtualBox

During pong-fall2014 development, I discovered my USB gamepads were not being detected in my newly installed Ubuntu 14.04.1 VM (not Ubuntu's fault, but VirtualBox USB device filtering).

To fix it, I had to add a filter to grab the specific device, then edit the filter and remove the “Vendor ID” field (leaving everything else as detected), and it lit right up.
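
The same filter can also be set up from the command line via VBoxManage; a sketch (VM name and product string are placeholders for whatever the GUI detects):

VBoxManage usbfilter add 0 --target "ubuntu1404" --name "gamepad" --product "USB Gamepad"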

