Project 1 - Linux Install / Linux VM Server Install
Steps for a fresh Linux install.
Insert Disk.
Select Install.
Follow the seven walk-through steps that are prompted to you.
When the install is complete, remove the disk and restart the machine.
Follow the steps to set up the computer name, etc…
Enable networking.
Install updates.
?????
Profit!
I did not have to install any extra packages outside of updates; no troubles in need of shooting arose, and no files were in need of editing.
To set up remote desktop, do the following…
(On your Ubuntu machine) System > Remote Desktop > General tab; enable “allow other users to view your desktop”, enable “allow other users to control your desktop”. Set a password if desired.
Install vinagre on the second machine (update the intltool file!).
Input the IP of the machine acting as the server and hit connect.
You should be remotely viewing the desktop at this point!
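As a rough sketch of the client side (the address below is a placeholder for the IP of the machine sharing its desktop; substitute your own):
aptitude install vinagre    # on the second (viewing) machine
vinagre 10.80.2.190    # placeholder IP of the machine acting as the server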
Project 3 - Video Wall!
Steps to set up a video wall
Install the same operating system on both machines. Set up the installation, setting names, passwords, etc.
The following lines were run on both systems to update:
' aptitude update '
' aptitude upgrade '
-
chmod 777 nvidia173.run ; to make the driver installer executable
./nvidia173.run ; to build and install the driver. We ran into some problems here. To get around them we used ' aptitude install build-essential ' and ' aptitude install linux-source-2.6.31 ' to give the installer the build tools and kernel source it needed.
Using ' dpkg -l | grep nvidia ' we found all the nvidia packages we needed, and installed everything that showed up with ' aptitude install '.
The following line was run on vidwall to copy the file over to vidwall2:
' scp /etc/X11/xorg.conf 10.80.2.194:/etc/X11/xorg.conf '
- Next we needed to install Fluxbox to have a window manager; ' **aptitude install fluxbox** ' on both systems
* To set the screens to act either vertically or horizontally, we needed to edit the xorg.conf file under the /etc/X11 directory on both systems. Under the "Device" section we need to add **Option "MetaModes" "1280x1024 +0+0, 1802x1024 +0+0"** (see the sketch at the end of this project).
- Use **xhost +** on both machines to allow X connections over the network.
- Use **export DISPLAY=10.80.2.194:0** so that anything launched from the first machine shows up on the second machine's display.
- Install Xdmx by using **aptitude install xdmx** on both machines. (Make sure both machines are set to the same resolution.)
- At this point we were able to launch the fluxbox desktop on both systems.
- Then we created a configuration file that looked like this:
startx -- /usr/bin/X11/Xdmx :1 -display 10.80.2.199:0 -display 10.80.2.194:0 -ignorebadfontpaths +xinerama
- Patch Xdmx. We retrieved the new patches from [[https://launchpad.net/ubuntu/karmic/+source/xorg-server/2:1.6.4-2ubuntu4.2]] (xorg-server_1.6.4.orig.tar.gz and xorg-server_1.6.4-2ubuntu4.2.diff)
- To patch the files and send any errors to an error file: patch -p1 < xorg-server_1.6.4-2ubuntu4.2.diff 2>err
- All of those patched files should be in the /home/vidwall/xorg-server-1.6.4 directory
* config, glx, debian, hw, doc, xkb (all of these should be in the xorg-server-1.6.4 directory)
- To compile these, run dpkg-buildpackage -rfakeroot -us -uc
* Remove the old packages with aptitude remove xdmx (this should remove both xdmx and xdmx-tools)
* To install the new packages, dpkg -i packagename
At this point we should have had a working system, but we did not. This is what else we tried.
NOTE: Because this did not satisfy me, I rebuilt the systems using Debian Lenny and this process. Once I got this done and tried to bring up the wall, I kept having a problem with the mouse. I then googled the problem. The resolution ended up being to load Xdmx from Debian Etch. This worked and I was able to bring up the video wall.
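For reference, a sketch of what the edited "Device" section in xorg.conf might look like with that option added (the MetaModes line is the one from the notes above):
Section "Device"
    # Identifier and Driver below are assumed; keep whatever your file already has
    Identifier "Device0"
    Driver "nvidia"
    Option "MetaModes" "1280x1024 +0+0, 1802x1024 +0+0"
EndSection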
Project 3 - VM Server Install
First step of this project is to do a fresh install of the physical VM server.
F12 → System Setup → Integrated Devices → Integrated NIC. Restart.
32-bit (i386) → Debian/i386 Netboot install → Lenny Stable [text]
Setup → Enter mirror manually → mirror → /debian/
Setup install → install nothing → install grub boot loader
Reboot and change the boot priority back to normal.
* While running as root run the following….
userdel -r vm02
cd /etc/apt
rm sources.list
-
aptitude update
aptitude upgrade
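Since sources.list was just removed, a new one pointing at a Lenny mirror presumably needs to be written before the update will work; a minimal sketch (the mirror hostname is an assumption):
deb http://ftp.us.debian.org/debian/ lenny main
deb http://security.debian.org/ lenny/updates main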
* Next…
aptitude install openssh-client openssh-server
aptitude install xen-linux-system-2.6.26-2-xen-686
aptitude install xen-tools
* Next…
cd /etc/xen
vi xend-config.sxp
Find the line (network-script network-bridge) and remove the # to uncomment this line.
Find the line (network-script network-dummy) and add a # to comment out this line.
Make sure that the line (vif-script vif-bridge) is uncommented.
To quickly find these lines rather than scrolling, make sure you are not in insert mode, then type a forward slash (/) followed by the text you want to match and hit Enter.
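After those edits, the relevant lines in xend-config.sxp should look roughly like this:
(network-script network-bridge)
#(network-script network-dummy)
(vif-script vif-bridge)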
* Next…
vi /etc/modules
Add the line loop max_loop=255
* Next…
cd /boot/grub
vi menu.lst
Scroll down until you find 4 lines pertaining to Xen that do not have a .gz attached to them; remove those lines.
Do this for the 3rd and 4th groups of four lines as well.
After you've removed those 3 groups of 4 lines, there should be one remaining; on the line specified as kernel, change it to match…
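The exact lines to match are not recorded in these notes, but for reference a Xen boot stanza in menu.lst on Lenny looks roughly like this (the hypervisor version and root device below are assumptions; keep whatever your remaining stanza already shows):
# hypervisor version and root device are assumptions
title Debian GNU/Linux, kernel 2.6.26-2-xen-686
kernel /boot/xen-3.2-1-i386.gz
module /boot/vmlinuz-2.6.26-2-xen-686 root=/dev/sda1 ro console=tty0
module /boot/initrd.img-2.6.26-2-xen-686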
* Next…
cd /etc/xen-tools
vi xen-tools.conf
Search for the line #dir = /home/xen
Remove the # and /home from the line. It should look like dir = /xen
Uncomment install-method = debootstrap
Down a little further find the “Disk and Sizing options” section and change the following…
Down further find the line that says dhcp = 1, and uncomment.
Going further, the line passwd = 1, uncomment.
serial_device = hvc0 #default, uncomment at the beginning of the line, leave the #default alone.
A couple lines down, disk_device = xvda #default, uncomment, leave #default the same.
At the bottom of the file, uncomment and change the output line to match (see the sketch below).
And the same for the extension line; it should read as shown below.
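Pulling those edits together, the changed lines in xen-tools.conf end up looking roughly like this (the output and extension values are inferred from the /xen/conf directory created below and the .cfg files used later, so treat them as assumptions):
dir = /xen
install-method = debootstrap
dhcp = 1
passwd = 1
serial_device = hvc0 #default
disk_device = xvda #default
output = /xen/conf
extension = .cfg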
* Next
cd /
mkdir /xen
mkdir /xen/images
mkdir /xen/conf
mkdir /xen/save
At this point you should reboot, and see your final product up and running!
Project 4 - Setting up a Virtual Machine
To do this, you must first have set up the Virtual Machine Server, a how-to on this can be found here…
You must also claim a VM number for yourself, here (Under the VM server users list.)…
After you have claimed your territory follow this walkthrough and enjoy your new VM!
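That walkthrough is not reproduced here, but with the xen-tools setup from the VM server project, creating and starting a guest looks roughly like this (the hostname, size, and memory values are assumptions; use your claimed VM number):
xen-create-image --hostname=vm11 --dhcp --size=4Gb --memory=256Mb --dist=lenny
xm create /xen/conf/vm11.cfg    # boot the new guest from its generated config
xm console vm11    # attach to the guest's console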
Project 5 - RAID Arrays
RAID stands for Redundant Array of Inexpensive Disks. It can be incredibly useful for backing up your data, increasing your disk speed and size, or both!
There are different types of RAID arrays, for more information consult this wiki
There are more types of raid than these, but these are the different types I used.
RAID0 - Two or more disks with striped data.
RAID1 - Two or more disks with mirrored data.
RAID5 - Three or more disks with parity.
RAID6 - Four or more disks with parity.
RAID10 - Four or more disks, mirrored then striped.
RAID01 - Four or more disks, striped then mirrored.
To setup your virtual disks on your VM, you'll need to be on your VM server (the VM server I used was named vmserver02).
CD into /xen/conf and open up your virtual machine's .cfg file (my VM was named vm11)
In your .cfg file you'll want to add disks. To do this; Under the “disk devices” section add..
'file:/xen/domains/vm11/disk_1.img,xvda3,w',
'file:/xen/domains/vm11/disk_2.img,xvda4,w',
'file:/xen/domains/vm11/disk_3.img,xvda5,w',
'file:/xen/domains/vm11/disk_4.img,xvda6,w',
'file:/xen/domains/vm11/disk_5.img,xvda7,w',
'file:/xen/domains/vm11/disk_6.img,xvda8,w',
'file:/xen/domains/vm11/disk_7.img,xvda9,w',
'file:/xen/domains/vm11/disk_8.img,xvda10,w',
While still on your server..
cd /xen/domains/vm11
dd if=/dev/zero of=disk_1.img bs=1M count=1024
Do this for all disks, 1 through 8.
dd copies raw data from the input file (if=/dev/zero) to the output file (of=disk_x.img). /dev/zero is a device that produces an endless stream of zeros.
bs=1M sets the block size to 1 megabyte, and count=1024 copies 1024 blocks, so each image ends up 1 GB in size.
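A quick way to make all eight images in one go (a sketch using the same size as above):
cd /xen/domains/vm11
for i in 1 2 3 4 5 6 7 8; do dd if=/dev/zero of=disk_${i}.img bs=1M count=1024; done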
At this point you'll need to install mdadm; mdadm will configure the RAID arrays and allow them to function.
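On Debian that is just (assuming aptitude, as used elsewhere in these notes):
aptitude install mdadm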
Next, run the following
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid01
modprobe raid10
Modprobing all the RAID types will let the system recognize the RAID levels you are going to run.
- To create the virtual RAID arrays, do the following…
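The raid1 example below refers back to a raid0 array built the same way; presumably something like this, using the first two virtual disks (written with mdadm's --raid-devices spelling):
Raid0
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvda3 /dev/xvda4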
Raid1
mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/xvda3 /dev/xvda4
This does the same thing as the raid0 command, only setting up a raid1 instead of a raid0, obviously.
Raid5
mdadm --create /dev/md0 --level=5 --raid-disks=3 /dev/xvda3 /dev/xvda4 /dev/xvda5
Like raid0 and raid1, this sets up a raid5, only it uses three disks instead of two to account for parity.
Raid6
mdadm --create /dev/md0 --level=6 --raid-disks=4 /dev/xvda3 /dev/xvda4 /dev/xvda5 /dev/xvda6
Again, like the previous RAIDs, this sets up raid6 using 4 disks.
Raid01
For raid01, we have to do a little bit more, but really we're only reusing what we've already done.
mdadm --create /dev/md0 --level=0 --raid-disks=2 /dev/xvda3 /dev/xvda4
mdadm --create /dev/md1 --level=0 --raid-disks=2 /dev/xvda5 /dev/xvda6
Here we are creating two raid0 arrays that we will then use together, as one, in our raid01.
mdadm --create /dev/md2 --level=1 --raid-disks=2 /dev/md0 /dev/md1
You can see that rather than using the virtual disks, we use the two lower-level RAIDs we set up to act as the disks in the higher-level RAID.
Finally, make a filesystem on the array and mount it (for the raid01 case above, the top-level array is /dev/md2 rather than /dev/md0):
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
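To check that an array came up healthy after creating and mounting it, something like this works:
cat /proc/mdstat    # active arrays and resync progress
mdadm --detail /dev/md0    # level, state, and member disks
df -h /mnt    # size of the mounted filesystem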