
WOPR Laptop Cluster

The laptop cluster project aims to turn 25 Dell Latitude D820 laptops into a compute cluster, and possibly a self-contained video wall. The project is in an early alpha/testing stage.

The test WOPR nodes are currently on DHCP with the following DNS names:

  • WOPR00
  • WOPR01 (DOWN FOR REBUILDING)
  • WOPR02
  • WOPR03

Hardware

The hardware consists of 25 Dell Latitude D820 laptops with the following specs (all subject to change as Wyatt finds out more information):

  • CPU: Intel Core 2 Duo 1.66 GHz
  • RAM: 2-3 GB DDR2 (Mostly 3 GB)
  • HDD: 60/80 GB (Mostly 80 GB)
  • CD: CD-RW/DVD-ROM
  • Screen: 15.4″ flatscreen LCD (1920×1200)
  • Video: nVidia Quadro NVS 110M TurboCache 256 MB
  • Bluetooth enabled
  • Wireless enabled
  • 4 USB ports
  • 1 VGA Out
  • Stereo Laptop Speakers
  • Line out & Line in
  • 1 ethernet port
  • 1 PC Card slot
  • NO BATTERIES
  • They will all come with an AC power adapter

We would also likely need a 25+ port switch, as well as quite a bit of power (multiple power strips) for the AC bricks and thus the laptops.
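As a rough power budget, assuming the typical 90 W adapter that shipped with the D820 (not yet confirmed for our units):

```shell
# Rough power estimate -- 90 W per adapter is an assumption, not measured.
NODES=25
WATTS_PER_NODE=90
TOTAL_WATTS=$((NODES * WATTS_PER_NODE))
echo "Total draw: ${TOTAL_WATTS} W"
# At ~1800 W per standard 15 A / 120 V circuit, that means spreading the
# power strips across at least two separate circuits.
```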

The machines do have hard disks; my apologies. I added them to the list above. While they may once have held sensitive data, rest assured that the data has been wiped to high heaven, and we will be receiving (for the most part) blank disks. Some have a Windows install that I created for less technically inclined charities.

Storage was not originally listed; since the machines come from a government agency, the assumption was that (unless we heard otherwise) they would lack disks, making a netboot approach the logical course of action.
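If we do end up netbooting, a minimal PXE setup could use dnsmasq as a combined DHCP/TFTP server on the head node. A sketch only — the address range and paths below are assumptions, not our actual configuration:

```
# /etc/dnsmasq.conf on the head node (addresses/paths are placeholders)
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
```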

(Proposed) Video Wall

Wyatt thinks we can remove the bezels from all the screens, turn them around, and mount them facing outward on the keyboard side of each laptop (leaving the bottom/back of the laptop free for air intake). That would give us a 5×5 video wall of 15.4-inch monitors (9600×6000 total resolution), which could be as little as 2.5 inches thick. Keeping the screens much farther than a couple of inches from the motherboards would require extra cabling to extend the display connections, which would be difficult and expensive.

(Herb's comment was moved to the discussion box below)

We could also strip each motherboard from its plastic casing, which would alleviate many airflow problems, free up quite a bit of blank space, and allow us to compact the entire project substantially.

Logistics

In order to implement the wifi idea for the master node (theoretically reducing the cabling requirements to a single power cord), we would need to keep the head node's wifi antenna connected (not difficult).

We will need room for the laptops as well as the 25 power supplies, power strips and network switch.

The mounting method for all this would likely have to be custom, and the method of mounting the monitors side by side would be an undertaking in and of itself. There are some tabs that would simplify the process, but it would still be a challenge.

Rack

I suspect it will be somewhat hard to find an off-the-shelf unit that fits our exact requirements. Here are instructions for building a rolling shelving rack out of wood: http://ana-white.com/2011/01/rolling-industrial-shelves.html This would keep costs down; the problem is that we don't have immediate access to the tools needed for assembly. Maybe the physics people have something we could use? Looking at the laptop screens, they have tabs on top that we could use to attach them to the rack. Each laptop could then sit on the shelf either directly above or below its screen.

September 13 2011

Today I worked on getting NFS working. The plan is to have an NFS server running on the head node that serves the user home directories to the slaves. We were also thinking about setting things up to sync the head's folders with data. I'm working on fixing an access-denied error that is preventing me from mounting the shares on the nodes; I think this may be because NIS isn't configured, so I'm setting that up now.
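A minimal sketch of the intended NFS layout, assuming the cluster sits on a private /24 (the subnet and the exported path are assumptions, not our recorded config):

```
# /etc/exports on the head node (wopr00)
/home  192.168.1.0/24(rw,sync,no_subtree_check)

# After editing, re-export on the head:
#   exportfs -ra
# Then on each slave:
#   mount -t nfs wopr00:/home /home
```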

September 28 2011

Today I got Xdmx working as per the instructions on this wiki. The touchpad is not working due to a driver issue, so I disabled it by commenting out all lines in /usr/share/X11/xorg.conf.d/50-synaptics.conf. The video wall is working! We were able to play some screensavers on it. Command to start Xdmx:

  ./Xdmx :1 +xinerama -ac -ignorebadfontpaths -noglxproxy -configfile ./wall-config
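For reference, a wall-config for the four current test nodes might look like the following, per the configuration grammar in the Xdmx man page (the display names and 2×2 layout are assumptions; our actual file is not reproduced here):

```
# Xdmx wall-config sketch: a 2x2 wall over the four test nodes
virtual wopr-wall {
    wall 2x2 wopr00:0 wopr01:0 wopr02:0 wopr03:0;
}
```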

September 29 2011

NIS could not be configured; both Herb and Wyatt tried but could not get a client to bind to the server on wopr00. We are going to look into replicating the LDAP server (auth) on the WOPR cluster. Some useful links:

  http://edin.no-ip.com/blog/hswong3i/configurate-openldap-mirror-mode-replication
  http://www.openldap.org/doc/admin24/replication.html#MirrorMode%20replication

Wyatt is also working on configuring MPI on the cluster.
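For the MirrorMode option, the slapd.conf additions on each of the two providers would look roughly like this (a sketch based on the OpenLDAP admin guide linked above; the suffix, bind DN, and credentials are placeholders, and serverID must differ between the two mirrors):

```
# slapd.conf fragment for one MirrorMode provider (use serverID 2 on the other)
serverID   1
syncrepl   rid=001
           provider=ldap://wopr01
           type=refreshAndPersist
           retry="5 5 300 +"
           searchbase="dc=example,dc=org"
           bindmethod=simple
           binddn="cn=admin,dc=example,dc=org"
           credentials=secret
mirrormode on
```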

September 30 2011

Wyatt had to create user accounts for himself on all the wopr machines, but due to apparent conflicts the accounts are not being created properly, or with the proper permissions. I suggest setting up the LDAP mirror before doing MPI.
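Once accounts are consistent across nodes, MPI jobs can be launched with a simple hostfile. A sketch assuming Open MPI, two slots per node (the D820s are dual-core), and passwordless SSH between nodes; wopr01 is omitted since it is down for rebuilding:

```
# hostfile
wopr00 slots=2
wopr02 slots=2
wopr03 slots=2

# smoke test -- each rank just prints its hostname:
#   mpirun --hostfile hostfile -np 6 hostname
```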

Wopr01 also seems to be having trouble and likely needs to be rebuilt (strangely slow SSH, significant issues with user account creation).

Wyatt is also looking into imaging solutions so we can image a template machine and use that image to deploy future nodes. FOG is the likely recommendation. (http://sourceforge.net/projects/freeghost/)

October 6 2011

Herb worked on setting up an LDAP server; it is now running. He also installed phpLDAPadmin, a web interface for the server, which can be accessed at http://wopr00/phpldapadmin. He imported hps1.ldif from auth1 into the server.
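For the record, an LDIF file like hps1.ldif can be imported into a running server with ldapadd. A sketch only — the bind DN below is an assumption, since our actual base DN isn't noted on this page:

```
# -x simple auth, -D bind DN, -W prompt for password, -f file to import
ldapadd -x -H ldap://wopr00 -D "cn=admin,dc=example,dc=org" -W -f hps1.ldif
```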

dslab/wopr.txt · Last modified: 2011/10/06 14:44 by hps1