
SUPER TYLER'S JOURNAL IV: ARCADE EDITION

An Opus for Tyler Galpin's Spring 2012 Semester

Introduction

Hello! I'm Tyler. I'm 20 years old, and this is my fourth and final semester here at Corning. I'm a Computer Science major. I plan to transfer to Binghamton or Buffalo in the fall. I like to play my guitar and sing, and as such, I am an avid music fan. As you may have guessed, I like computers. I have a nice gaming desktop (talk specs with me sometime, if you want!) and a netbook running Linux Mint Debian Edition for school and work. I'll be "Tyler" on the class IRC channels.

Part 1

Entries

Entry 1: January 25, 2012

Today was the first day of Computer Organization. In it, we very, very vaguely talked about what the course would be about. We've all been here in computer-based programs for a while now, so there wasn't much need for the usual introduction. We started by talking about how a computer actually sees things– not as ones and zeros per se, but as electrical signals. It is as though one is given only two things, and from those two things alone, he must build whatever he wants. Afterwards, we were tasked with writing 0 through 63 in binary. A large group in the class worked up on the board. They split the work up into sections, each person having a set amount to complete. Derek and I worked separately from that group, and from each other, but we seemed to get it all done faster (not to sound like a snot, or anything.) I mean, it is only counting. I don't think the group is bad at it, I just think they thought about it in a way that made it harder than it was. Anyhow, I hope we get to some coding soon.

As for the rolling class schedule that is HPC II, I talked to Matt and I've settled on two projects on which to start. I'm going to be making my one gaming PC into two with the help of new parts and old, so one of my projects will be the assembly and setup of each. It will be well documented so as to be a how-to guide (let's face it, building computers is easy once you've done it, but it can be rather daunting if you haven't). My other project will involve a guide for setting up wireless on my model of netbook when Linux Mint Debian Edition is installed. This seems to be a problem for many people who use the OS, myself included, and there are many different fixes that may or may not work in general. So, even though I have already figured out how to get mine to work, I will try to document the process as best I can. More on both of those projects later.

Entry 2: January 27, 2012

Today, we talked about sneeches and flabulators. Some sneeches have stars, and others do not. A tunnel flabulator does not change a sneech, but a vortex flabulator does. …We made a truth table?

Okay, basically, the lesson represented how a computer handles bits (sneeches, in this case) to send instructions. We made a truth table to represent how two sets of four bits could be manipulated based on various conditions in order to send certain instructions. That was about it for Computer Organization.

As for HPC, I asked Matt if I could do another project based on my computer-building project, which involves benchmarking Nvidia PhysX-enabled games on my soon-to-be-created system, comparing the system using only one card (a GTX 560 Ti) against that same card paired with a lesser card (a 9600 GT) as a dedicated PhysX card. That will be an interesting project, I think. I just need to check whether there are certain requirements I need to meet for these HPC II projects.

Entry 3: February 7, 2012

Things have been going relatively slowly in ASM so far. We've just been talking about logic gates and how the processor functions in general. Don't get me wrong, this is very interesting stuff, but we usually had these sorts of discussions along with some code analysis. As in, Matt would write some code for us to compile and study up on the board, and discussion with model drawing and all that would complement the code to drive the point home. We're just getting started, so it's too early to tell what we'll be up to for the whole semester.

As for HPC II, Matt told me that my own little personal second excursion into SDL could count as a project. I figured it would be worth looking into on its own, but I didn't even consider the possibility of it being a project. This works for me, though. If I can keep it up, I have some grandiose plans for it all. I am using the Lazy Foo' guide yet again, and I am up to Lesson 6. I'm learning quite a bit, as it is an excellent guide. I wish I had more time to work on it all, though.

Entry 4: February 15, 2012

Today was a rather interesting class. Unfortunately, I did have to leave early for a dentist appointment (which went well, for what it is worth!). In class, however, we had a nice discussion on registers, including what they are and how they function in a basic sense. Derek was able to draw a very nice diagram of a register, simply from his experiences in our high school engineering program, which is honestly impressive. We talked about how a register is essentially storage in the processor for commonly executed tasks. This makes it easier for the computer to be efficient, as less work is needed to execute said tasks.

As for HPC II, my progress has been partially slowed by other work. I have some nice projects lined up, and they will be started and completed soon. My SDL adventure is also taking a bit of a break so that other more pressing tasks can be completed. However, I left off on lesson 8, so progress has been made. I can't wait until I learn more, as making games with what I've learned is a very exciting prospect.

Entry 5: February 24, 2012

Today was a particularly interesting class in Comp. Org., as we discussed some ~*particularly interesting*~ things. We went over Karl's meticulous and painstakingly printed on-board explanation of the Fetch-Execute cycle (recently catch-phrased into the trendy "fetchecute" cycle), which could be described simply as the method by which the computer executes an instruction and proceeds to acquire (or, fetch) a new one. This led into our discussion about Turing machines, which are wonderful devices of which there are no physical manifestations. Simply, it is a method of thinking developed by Alan Turing which led to the development of modern-day computing. A nice little visual representation of the Turing machine's hypothetical infinite storage tape was drawn on the board, and we spent the remainder of class trying to figure out how many symbols we would need to have the Turing machine print a certain sequence.

Keywords

asm Keywords

Logical Operators

Definition

A Logical Operator is a “connector” of sorts between two items with value, which then in turn yields a truth value based on the truth value of those two items. In relation to the course, this would of course reference the electrical signals sent through the processor (on and off, 1 and 0, etc.).

The logical operator AND takes two items and will only yield a truth value of true if both of those items also have a truth value of true. In the scope of the course, this means a couple things– of course, we use AND as “&&” in C/C++ code for a conditional statement. In terms of a processor, AND will only yield an on signal if both input signals are on, so to speak.

The logical operator OR will yield a true value so long as at least one of the inputted truth values is true. In relation to the course, we also use this operation in our code, as shown by the "||" operator in C/C++. In terms of the hardware, an on signal is sent out when one or both of the input signals are on.

The logical operator XOR is the exclusive or operation. This means that it is a lot like OR, except that it yields a true value if and only if exactly one of the two truth values is true. That is to say, having both values be true will not yield a true value under XOR.

Demonstration

As we know, some examples of logical operators include AND, OR and XOR.

If we were to make a small bitwise truth table to demonstrate these concepts, it would look like this–

 A  B | AND  OR  XOR
--------------------
 1  1 |  1    1    0
 1  0 |  0    1    1
 0  1 |  0    1    1
 0  0 |  0    0    0

Here are code snippets of AND, OR, and XOR being used in C:

//AND in C
if( x == 1 && y == 1)
{
   exampleFunction();
}
 
//OR in C
if( x == 1 || y == 1)
{
    exampleFunction();
}
 
//XOR in C
if( (x == 1 && y == 0) || (x == 0 && y == 1) )
{
   exampleFunction();
}

Negated Logic Operator

Definition

A negated logic operator is precisely what it sounds like– it takes our standard logic operators and essentially negates them, in the sense that they yield true values based on whether or not the given truth values are false. They could be called opposites of the regular logic operators. Examples include NOR, NAND, XNOR, and NOT.

NOR yields a true value if and only if no given conditions are true. It is the inverse of OR.

NAND yields a true value if at least one given condition is false. The yielded value is false only if both given conditions are true. It is the inverse of AND.

XNOR is exclusive NOR. It yields true only if both given conditions are true, or both are false. It is the inverse of XOR.

NOT simply inverts the given truth values (usually in the form of bits). True becomes false, false becomes true.

As such, every negated logic operator is essentially a logic operator put through NOT.

Demonstration
 A  B | NOR  NAND  XNOR
-----------------------
 1  1 |  0     0     1
 1  0 |  0     1     0
 0  1 |  0     1     0
 0  0 |  1     1     1

NOT (single input): 1 -> 0,  0 -> 1
//NAND in C
if(!(x == 1 && y == 1))
{
   exampleFunction();
}
 
//NOR in C
if(!(x == 1 || y == 1))
{
    exampleFunction();
}
 
//XNOR in C
if(!((x == 1 && y == 0) || (x == 0 && y == 1)))
{
   exampleFunction();
}

Storage

Definition

Storage is simply the various parts of the computer that are used to store the digital data that is used and created. This can refer to many things, including hard disks, RAM, registers, solid state drives and more.

It is worth noting that there are varying degrees, so to speak, of storage. That is, there is primary, secondary, tertiary and off-line storage.

Primary storage can be accessed directly by the CPU. An example of this would be RAM.

Secondary storage is non-volatile and can't be accessed by the CPU directly. An example would be a hard disk drive.

Tertiary storage is usually used for archiving, as it takes much longer to access than secondary storage. An example would be a tape cartridge.

Off-line storage is storage independent of a processing unit. An example would be an optical disc.

Demonstration
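On a Linux system, a couple of standard commands give a quick look at two of these tiers (this is just a suggested illustration; the exact output will vary from machine to machine):

tyler@aleron ~ $ free -m           # primary storage: RAM (and swap) usage, in megabytes
tyler@aleron ~ $ sudo fdisk -l     # secondary storage: hard disks and their partitions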

Registers (General Purpose/Integer, Floating Point, Accumulator, Data)

Definition

Simply, a register is a small amount of very fast storage on the processor that holds the values the processor is actively working with when carrying out simple or commonly executed tasks. With this information stored directly on the processor, it is much easier and faster to access, which speeds up those tasks.

Data registers can hold data in the form of integers and floating point values. Older CPUs, like the one we seek to emulate, have a special register called an accumulator that deals with that data specifically.

Demonstration

register_7.jpg

A 4-bit register. [Source: http://cpuville.com/register.htm]
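As a loose tie-in from the programming side, C even has a (purely advisory) register storage-class specifier that asks the compiler to keep a variable in a CPU register when possible. A minimal sketch, nothing more:

//the "register" keyword is only a hint; the compiler may ignore it
#include <stdio.h>

int main()
{
	register int i;       //hint: keep this loop counter in a register
	int total = 0;

	for (i = 0; i < 10; i++)
	{
		total = total + i;    //the frequently used value stays close to the CPU
	}

	printf("total = %d\n", total);
	return(0);
}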

Address Bus

Definition

Before defining this word, it is important to understand what a bus is. Simply, it is a subsystem that forms a connection between components for the transfer of various forms of data.

Now, an Address Bus is the bus that is used to specifically address a memory location. That is to say, when one component (say the CPU) needs to access data, the memory address of this data is transmitted through the address bus.

Demonstration
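There is no picture for this one, but a rough analogy from C may help (the variable names below are made up purely for illustration): a pointer holds a memory address, which is the kind of information that travels on the address bus, while the value stored at that address is what would travel on the data bus (covered next).

#include <stdio.h>

int main()
{
	int value = 42;
	int *address = &value;                       //the address of the data

	printf("address: %p\n", (void *)address);    //address-bus side
	printf("value:   %d\n", *address);           //data-bus side
	return(0);
}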

Data Bus

Definition

Before defining this word, it is important to understand what a bus is. Simply, it is a subsystem that forms a connection between components for the transfer of various forms of data.

The Data Bus is the bus on which a certain value is transmitted. To put it in perspective, if the address bus transmits a memory location, the data bus transmits the value stored in that memory location.

Demonstration

Below is a system bus, highlighting how each specific bus interfaces with the other components of the computer.

<html> <img src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Computer_system_bus.svg/400px-Computer_system_bus.svg.png"> </html>

[Source: Wikipedia: Bus (computing)]

Control and Data Flow

Definition

Control Flow refers to the order or method by which instructions are carried out by the computer. This encompasses the types of conditional statements and functions (subroutines) that we would see in our programming code.

Data Flow, on the other hand, refers to the stream of information that is passed around the computer's components. It is not concerned with how and when like control flow, but rather where and what.

Demonstration

<html> <img src="http://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/Performance_seeking_control_flow_diagram.jpg/454px-Performance_seeking_control_flow_diagram.jpg" height=500> </html>

Above is an example of control flow, in diagram form. Obviously, the diagram has subject matter not pertaining to computer science, but the logic of control flow is there, which is the focus. [Source: Wikipedia: Control Flow diagram]

<html> <img src="http://upload.wikimedia.org/wikipedia/commons/thumb/c/c8/DataFlowDiagram_Example.png/360px-DataFlowDiagram_Example.png"> </html>

Above is an example of data flow, in diagram form. [Source: Wikipedia: Data Flow diagram]
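To put the two ideas in code terms, here is a small sketch (the function and variable names are made up for illustration): the if and the loop are control flow, deciding which statements run and when, while the values moving into and out of doubleIt() are data flow.

#include <stdio.h>

int doubleIt(int x)
{
	return x * 2;                      //data flows in as x, out as the return value
}

int main()
{
	int i, total = 0;

	for (i = 1; i <= 3; i++)           //control flow: a loop
	{
		if (i != 2)                //control flow: a conditional
		{
			total = total + doubleIt(i);   //data flow: i -> doubleIt() -> total
		}
	}

	printf("total = %d\n", total);     //2 + 6 = 8
	return(0);
}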

Boolean Arithmetic Operations

Definition

Boolean Arithmetic Operations are algebraic operations that are used on boolean or binary values. The typical operations of addition, subtraction, multiplication and division are either fundamentally different or non-existent in the scope of boolean algebra. To explain the latter: boolean algebra has no subtraction, as that would require negative numbers, and therefore no division either, since division is repeated subtraction just as multiplication is repeated addition.

Luckily, addition and multiplication still exist. Simply, boolean addition will yield a 1/true value so long as at least one 1/true value is being added (it behaves like OR). Multiplication will yield a 1/true value if and only if there are no zeros involved (it behaves like AND).
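A quick way to check these rules from the software side (just a sketch, using C's bitwise operators on single bits): | behaves like boolean addition and & behaves like boolean multiplication.

#include <stdio.h>

int main()
{
	printf("0 + 0 = %d\n", 0 | 0);   //0
	printf("1 + 0 = %d\n", 1 | 0);   //1
	printf("1 + 1 = %d\n", 1 | 1);   //1 -- no carrying in boolean addition

	printf("0 x 0 = %d\n", 0 & 0);   //0
	printf("1 x 0 = %d\n", 1 & 0);   //0
	printf("1 x 1 = %d\n", 1 & 1);   //1
	return(0);
}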

Demonstration

Below are some examples of boolean addition and multiplication. As you can see, they use standard mathematical notation, logic gate representation and circuitry representation.

Addition

<html>
<p> <img src="http://sub.allaboutcircuits.com/images/14009.png" height=100> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14010.png" height=100> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14011.png" height=100> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14012.png" height=100> </p>
</html>

Multiplication

<html>
<p> <img src="http://sub.allaboutcircuits.com/images/14013.png" height=75> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14014.png" height=75> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14015.png" height=75> </p>
<p> <img src="http://sub.allaboutcircuits.com/images/14016.png" height=75> </p>
</html>

[Source: Boolean arithmetic -- allaboutcircuits.com]

asm Objective: Understanding the Impact of Number Systems

Definition

Personally, I think that meeting this objective means having a thorough understanding of how the various number systems we use in computer science (binary, octal, decimal, hexadecimal, etc.) work, along with an appreciation of how they let us solve problems and think of situations in different ways.

Method

I'm not sure if there is any one test that would prove that I have met this objective, but I do think that a small discussion about the topic in the below space should suffice.

Measurement

Suffice it to say, number systems have a profound impact on our field, computer science, and an understanding of such is integral to our success. As previously stated, using different number systems lets us look at problems in different ways. Thinking in terms of a different number system may help one understand how a certain component or program works.

The main number systems covered in our course thus far (decimal, binary, octal and hexadecimal) all have their place in computer science and in the very computers we work with. Chief among these number systems would be binary, as it is the representation of how the hardware works at the most basic level, which is to say the electrical signals being sent in the circuitry. One would not get very far in this class, let alone as a CS major, without understanding the implications of the binary number system.

Another important system would be hexadecimal, which is often used in addressing memory locations and, less pertinently, in the colors displayed in our programs and web pages.

Generally speaking, understanding that numbers can be looked at in a different way than we grew up with in decimal speaks to a larger message that all computer science majors should abide by. That message is of course that with computing, there is always more than one way to get the job done.
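As a tiny illustration of the same value viewed through different number systems (just a sketch, nothing course-specific), C's printf can display a number in decimal, octal, and hexadecimal directly:

#include <stdio.h>

int main()
{
	int value = 255;

	printf("decimal:     %d\n", value);   //255
	printf("octal:       %o\n", value);   //377
	printf("hexadecimal: %x\n", value);   //ff
	return(0);
}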

Analysis

Upon further analysis, I do believe I get it.

bool numberSystems;
bool important = true;
numberSystems = important;
while(numberSystems == important)
{
  tylerGetsIt(numberSystems, important);
}

hpc2 Keywords

Partition

Definition

A partition is a logical division of storage space on a hard drive. In essence, partitioning your hard drive is like turning your one drive into multiple drives, as far as the computer is concerned. Of course, this is only logical– the physical drive is treated as multiple logical drives.

Demonstration

Below is a graphical representation of the partitioning of a hard drive, shown through the program GParted.

There is an ext4 filesystem partition, on which the shown OS is installed, a linux-swap partition, and unallocated space.
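If a screenshot isn't handy, the partition layout can also be inspected from the terminal (the output will differ from system to system):

tyler@aleron ~ $ sudo fdisk -l     # lists each disk and its partition table
tyler@aleron ~ $ df -h             # shows mounted filesystems and their sizes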

Kernel

Definition

The kernel is generally the main component of the operating system. It acts as the middleman between the hardware (CPU, memory, etc.) and the software applications being run on the OS by managing the system's resources and letting the running software use those resources.

Demonstration
Simply:
 
[Hardware] <---> [Kernel] <---> [Software]

On a Unix-like OS, you can check your current kernel version like so:

tyler@aleron ~ $ uname -a
Linux aleron 3.0.0-1-amd64 #1 SMP Sun Jul 24 02:24:44 UTC 2011 x86_64 GNU/Linux

Kernel Module

Definition

A kernel module is code that can be loaded into the kernel at any time. In doing so, the functionality of the kernel is expanded accordingly. Without kernel modules, the kernel would have to be rebuilt every single time new functionality is added, which obviously is not efficient or convenient.

Demonstration

Behold, the list of the kernel modules on my system. Use the "lsmod" command to list all of your modules.

tyler@aleron ~ $ lsmod
Module                  Size  Used by
arc4                   12458  2 
brcmsmac              528689  0 
brcmutil               13419  1 brcmsmac
mac80211              182631  1 brcmsmac
cfg80211              132564  2 brcmsmac,mac80211
crc_ccitt              12347  1 brcmsmac
pci_stub               12429  1 
vboxpci                19059  0 
vboxnetadp             13202  0 
vboxnetflt             23595  0 
vboxdrv               194054  3 vboxpci,vboxnetadp,vboxnetflt
powernow_k8            17688  1 
mperf                  12453  1 powernow_k8
cpufreq_conservative    13147  0 
cpufreq_userspace      12576  0 
cpufreq_stats          12862  0 

...

modprobe

Definition

modprobe is a program in Linux that allows the user to manage the kernel modules that are loaded. As such, using modprobe on the command line allows one to add or remove a module from the kernel. It can be useful when certain functionality on the system is not working like it should, as sometimes reloading a module will fix a problem.

Demonstration
tyler@aleron ~ $ man modprobe
tyler@aleron ~ $ modprobe [module name to be added]
tyler@aleron ~ $ modprobe -r [module to be removed]
tyler@aleron ~ $ modprobe -a [module to be added] [module to be added] [module to be added] 

Synaptic Package Manager

Definition

Synaptic Package Manager is a graphical Linux program that lets one manage the software packages installed on the system. It is a graphical front-end to the package management tool APT. As such, it is an important program that almost every Linux user should be familiar with, as it is key to maintaining one's system in a desirable manner.

Demonstration

Behold, Synaptic running on my system.

You can see the list of packages here. They can be searched through and categorized based on type.

Dedicated vs. Integrated Graphics Processing Units

Definition

Every modern computer has some sort of graphical display processing unit as one of its many components. As one might imagine, there are different types to suit the needs of different users. This keyword serves as a general definition of the two general types one will find.

Dedicated GPUs work independently of the computer's CPU and/or motherboard. They most often plug into PCI Express buses on the motherboard (these days, that is; in the not-too-distant past, AGP and PCI interfaces were the norm), and feature their own board and chipset. Dedicated cards are used for more resource-intensive processes, such as gaming and computer-aided design (or, CAD). As such, they are typically much more powerful than integrated chipsets and require greater cooling solutions, such as a large fan and heatsink specifically for the card.

Integrated GPUs, on the other hand, are either found on the motherboard, or, in recent times, within the main processor (seen in Intel's Core iX series and AMD's Fusion line). This usually means a weaker graphics processing experience, but it comes with the advantage of less space and power used. These solutions are suitable for general use, but not for more demanding processes.

Demonstration
Dedicated Card

Nvidia's stock model Geforce GTX 560 Ti

Integrated Chipset

An Asus motherboard with Nvidia's ION Graphics chipset

asus_mobo_new.jpg

System Cooling Solutions

Definition

Simply put, since a computer runs on electricity, it is bound to get warmer and warmer as the system does more work (and so draws more power). Performance will suffer in proportion to how hot the system's components become. That is why it is necessary to utilize proper system cooling solutions, generally listed below.

One cooling solution would be air cooling. We know this simply as the fans that run within our cases. They blow air onto and/or away from system components in order to maintain lower temperatures.

Another cooling solution that is commonly used would be heat dissipation. This takes the form of heatsinks, which are pieces of various metallic materials attached to components and designed in such a way as to absorb heat and spread it across their surface, which keeps heat from building up in a concentrated area.

Finally, a less common, yet very effective and expensive solution is water cooling, which uses tubes to circulate cooled water through blocks mounted on components' surfaces to keep them cool. It is highly effective, but not very practical in the sense that it is comparatively quite expensive.

Demonstration
Heatsink + Fan for a CPU

replace-computer-fan-heatsink-800x800.jpg

[Source: eHow]

A System with Watercooling

69f1aa68cb826818eee68e36048badc7.jpg

[Source: Gizmodo]

Advanced Packaging Tool

Definition

Advanced Packaging Tool, or APT, is a program for Linux that lets you manage the packages on a system. It is a command-line-driven program. As with Synaptic, this is a program that Linux users should be familiar with, as packages can be installed simply with the use of a command. This is, of course, very useful if you already know what packages you want to install. Another terminal-based program, aptitude, provides a higher-level interface to the package manager, but operates much like apt in many cases.

Demonstration
Updating packages
tyler@aleron ~ $ sudo apt-get update
[sudo] password for tyler: 
Ign http://ftp.us.debian.org squeeze InRelease
Hit http://ftp.us.debian.org squeeze Release.gpg                               
Hit http://ftp.us.debian.org squeeze Release                                   
Get:1 http://security.debian.org testing/updates InRelease [87.8 kB]           
Hit http://debian.linuxmint.com testing InRelease                              
Hit http://ftp.us.debian.org squeeze/main amd64 Packages                       
Hit http://ftp.us.debian.org squeeze/contrib amd64 Packages                    
Hit http://ftp.us.debian.org squeeze/non-free amd64 Packages                   
Ign http://ftp.us.debian.org squeeze/contrib TranslationIndex                  
Hit http://ftp.us.debian.org squeeze/main TranslationIndex                     
Hit http://debian.linuxmint.com testing/main amd64 Packages/DiffIndex          
Ign http://ftp.us.debian.org squeeze/non-free TranslationIndex                 
Ign http://www.debian-multimedia.org testing InRelease                         
Hit http://debian.linuxmint.com testing/contrib amd64 Packages/DiffIndex       
Hit http://debian.linuxmint.com testing/non-free amd64 Packages/DiffIndex      
Ign http://debian.linuxmint.com testing/contrib TranslationIndex               
Hit http://debian.linuxmint.com testing/main TranslationIndex                  
Ign http://debian.linuxmint.com testing/non-free TranslationIndex              
Get:2 http://security.debian.org testing/updates/main amd64 Packages [14 B]    
Get:3 http://www.debian-multimedia.org testing Release.gpg [198 B]             
Get:4 http://security.debian.org testing/updates/contrib amd64 Packages [14 B] 
Get:5 http://security.debian.org testing/updates/non-free amd64 Packages [14 B]
Ign http://security.debian.org testing/updates/contrib TranslationIndex        
Ign http://security.debian.org testing/updates/main TranslationIndex           
Get:6 http://www.debian-multimedia.org testing Release [32.1 kB]               
Ign http://security.debian.org testing/updates/non-free TranslationIndex       
Ign http://ftp.us.debian.org squeeze/contrib Translation-en_US                 
Ign http://ftp.us.debian.org squeeze/contrib Translation-en                    
Ign http://ftp.us.debian.org squeeze/non-free Translation-en_US                
Ign http://ftp.us.debian.org squeeze/non-free Translation-en                   
Get:7 http://www.debian-multimedia.org testing/main amd64 Packages/DiffIndex [2,023 B]
Ign http://debian.linuxmint.com testing/contrib Translation-en_US              
Ign http://debian.linuxmint.com testing/contrib Translation-en                 
Ign http://debian.linuxmint.com testing/non-free Translation-en_US             
Ign http://debian.linuxmint.com testing/non-free Translation-en                
Get:8 http://www.debian-multimedia.org testing/non-free amd64 Packages/DiffIndex [2,023 B]
Ign http://www.debian-multimedia.org testing/main TranslationIndex             
Ign http://www.debian-multimedia.org testing/non-free TranslationIndex         
Get:9 http://www.debian-multimedia.org testing/main amd64 Packages [72.7 kB]   
Ign http://security.debian.org testing/updates/contrib Translation-en_US       
Ign http://security.debian.org testing/updates/contrib Translation-en          
Ign http://security.debian.org testing/updates/main Translation-en_US          
Get:10 http://www.debian-multimedia.org testing/non-free amd64 2012-03-03-1139.41.pdiff [361 B]
Get:11 http://www.debian-multimedia.org testing/non-free amd64 2012-03-03-1139.41.pdiff [361 B]
Ign http://security.debian.org testing/updates/main Translation-en             
Ign http://security.debian.org testing/updates/non-free Translation-en_US      
Ign http://security.debian.org testing/updates/non-free Translation-en         
Ign http://www.debian-multimedia.org testing/main Translation-en_US            
Ign http://www.debian-multimedia.org testing/main Translation-en               
Ign http://www.debian-multimedia.org testing/non-free Translation-en_US
Ign http://www.debian-multimedia.org testing/non-free Translation-en
Ign http://packages.linuxmint.com debian InRelease        
Get:12 http://packages.linuxmint.com debian Release.gpg [197 B]
Get:13 http://packages.linuxmint.com debian Release [12.2 kB]
Get:14 http://packages.linuxmint.com debian/main amd64 Packages [12.6 kB]
Get:15 http://packages.linuxmint.com debian/upstream amd64 Packages [5,192 B]
Get:16 http://packages.linuxmint.com debian/import amd64 Packages [20.2 kB]
Ign http://packages.linuxmint.com debian/import TranslationIndex
Ign http://packages.linuxmint.com debian/main TranslationIndex
Ign http://packages.linuxmint.com debian/upstream TranslationIndex
Ign http://packages.linuxmint.com debian/import Translation-en_US
Ign http://packages.linuxmint.com debian/import Translation-en                 
Ign http://packages.linuxmint.com debian/main Translation-en_US                
Ign http://packages.linuxmint.com debian/main Translation-en                   
Ign http://packages.linuxmint.com debian/upstream Translation-en_US            
Ign http://packages.linuxmint.com debian/upstream Translation-en               
Fetched 248 kB in 6s (39.4 kB/s)                                               
Reading package lists... Done
tyler@aleron ~ $ 
Installing packages
tyler@aleron ~ $ sudo apt-get install [package to be installed]

hpc2 Objective: Apply Improved Troubleshooting Skills

Definition

While I think the objective's meaning is self-evident, I will try to elaborate. Simply put, one should become a more efficient and effective troubleshooter. Issues should be identified sooner, with possible solutions researched and attempted. Ultimately, the problem should be solved much sooner than it would have been had the course not been taken.

Method

Well, the only way to find out if I achieved this objective is to solve problems. Solving a major problem with any system on my own should prove that I have met this objective.

Measurement

I can list a couple different examples of why I have met (or will meet) this objective–

  • Wireless Troubleshooting: If you've read my opus, you know I have had issues recently with my LMDE installation's wireless connection. After considerable research and trial and error (which will be detailed in my portfolio soon), I was able to come to a solution.
  • Basic troubleshooting at work: I work as an intern at Hilliard Corporation. They have me do basic things such as setting up and repairing systems. This is an outlet where improved troubleshooting skills come in handy, as I can be more productive.
Analysis

There's always room to improve when it comes to troubleshooting, so that will come in time. I think I am a decent troubleshooter as is. The measurement process could stand to be a little more objective, but the actual objective is a little subjective. I'm not entirely sure how to improve upon the objective, though.

Experiments

Using a Startup script to finish solving Wireless setup issue

Question

Though this problem has already been solved, this will serve as a sort of retroactive experiment. In the beginning of the semester, I was having issues with my netbook's Linux Mint Debian installation and the wireless connection. After much troubleshooting, I was able to solve it, but only if I removed and reloaded a module from the kernel. Naturally, this is kind of annoying to do at each start up, so the question was asked– "Could I use a start up script to solve this wireless issue?"

The answer may (or may not) surprise you.

Resources

  • Matthew Haas
  • Various Linux distributions' support forums (for finding help with the module in the first place)

Hypothesis

Being that it takes only a couple terminal commands to fix the situation via modprobe, adding these commands to a script that runs at start up should fix my wireless issues. Matt suggested that I add them to “/etc/rc.local”.

Experiment

The experiment is simple– add the commands to the start up script (/etc/rc.local), restart the computer and check to see if the wireless issue is resolved.

Data

The commands
modprobe -r brcmsmac
modprobe brcmsmac
/etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

mkdir -p /dev/cgroup/cpu
mount -t cgroup cgroup /dev/cgroup/cpu -o cpu
mkdir -m 0777 /dev/cgroup/cpu/user
echo "/usr/local/sbin/cgroup_clean" > /dev/cgroup/cpu/release_agent
modprobe -r brcmsmac
modprobe brcmsmac

exit 0

Analysis

Ultimately, the script works. My wireless starts up perfectly each time the system boots.

Conclusions

Scripts can do many a great thing, even if it is something as simple as this. This resulted in my system acting as it should, after a great deal of stress and Googling. There was SO much Googling. Too much, even. Even so, I learned a great deal about the workings of a Linux system through the troubleshooting process. Really, that is how I've learned most things about computer maintenance and set up– break something, stress a lot, fix it somehow, lesson(s) learned.

Simple Logic Gate Program

Question

Simply put, I wrote some code snippets for some of my keywords regarding logic operators. I wanted to make sure that the code was valid, so this is the experiment that puts these code snippets to the test. The question being, “Is my syntax correct in such a way that the concepts of the basic logic gates are accurately displayed?”

Resources

Hypothesis

Naturally, my hypothesis is that my code is correct, otherwise I wouldn't have written it as such. This experiment is solely to make sure. For science!

Experiment

Simply, I will run the program (code is below) with every possible input for two items ([1,1], [1,0], [0,1], [0,0]), and check to see if the logic gates are working as they should, according to the definitions provided above, in the keywords section. If they are working as they should, then my hypothesis is correct.

Data

Source code
#include<stdio.h>
 
int main()
{
	int x,y;
	do
	{
		printf("Enter an X value (0/1): ");
		scanf("%d", &x);
		if (x < 0 || x > 1)
		{
			printf("ERROR: Please enter either a 0 or 1. \n");
		}
	} while (x < 0 || x > 1);
	do
	{
		printf("Enter a Y value (0/1): ");
		scanf("%d", &y);
		if (y < 0 || y > 1)
		{
			printf("ERROR: Please enter either a 0 or 1. \n");
		}
	} while (y < 0 || y > 1);
 
	//AND in C
	if( x == 1 && y == 1)
	{
 		printf("AND: True\n");
	}
	else
	{
		printf("AND: False\n");
 	}
 
	//OR in C
	if( x == 1 || y == 1)
	{
 		printf("OR: True\n");
	}
	else
	{
		printf("OR: False\n");
 	}
 
 
	//XOR in C
	if( (x == 1 && y == 0) || (x == 0 && y == 1) )
	{
 		printf("XOR: True\n");
	}
	else
	{
		printf("XOR: False\n");
 	}
 
	//NAND in C
	if(!(x == 1 && y == 1))
	{
 		printf("NAND: True\n");
	}
	else
	{
		printf("NAND: False\n");
 	}
 
 
	//NOR in C
	if(!(x == 1 || y == 1))
	{
 		printf("NOR: True\n");
	}
	else
	{
		printf("NOR: False\n");
 	}
 
	//XNOR in C
	if(!((x == 1 && y == 0) || (x == 0 && y == 1)))
	{
 		printf("XNOR: True\n");
	}
	else
	{
		printf("XNOR: False\n");
 	}
 
	return(0);
}
CLI Output
tyler@aleron ~/src/asm $ ./logicgates
Enter an X value (0/1): 1
Enter a Y value (0/1): 1
AND: True
OR: True
XOR: False
NAND: False
NOR: False
XNOR: True
tyler@aleron ~/src/asm $ ./logicgates
Enter an X value (0/1): 1
Enter a Y value (0/1): 0
AND: False
OR: True
XOR: True
NAND: True
NOR: False
XNOR: False
tyler@aleron ~/src/asm $ ./logicgates
Enter an X value (0/1): 0
Enter a Y value (0/1): 1
AND: False
OR: True
XOR: True
NAND: True
NOR: False
XNOR: False
tyler@aleron ~/src/asm $ ./logicgates
Enter an X value (0/1): 0
Enter a Y value (0/1): 0
AND: False
OR: False
XOR: False
NAND: True
NOR: True
XNOR: True
tyler@aleron ~/src/asm $ 

Analysis

My hypothesis was correct– the code compiles, and the logic gates work as intended, as demonstrated above.

Conclusions

It is worth noting that I did not use the bitwise operators here, however. In C, there actually is a bitwise operator for XOR (^). Obviously, I did not do that, opting only for the logical operators the language provides. Perhaps a bitwise version of this will come in handy later. I did notice that I could have made the XOR and XNOR gates more efficient, code-wise. XOR is essentially checking whether the two values differ, so the conditions could have been (x != y) for XOR and (x == y) for XNOR instead. Oh well! The experiment wasn't so much to make super efficient code, it was to test the code I whipped up in a minute.
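For reference, a bitwise version might look something like the sketch below (just an illustration, not part of the experiment); these operators work on whole integers a bit at a time rather than on single truth values.

#include <stdio.h>

int main()
{
	int x = 1, y = 0;

	printf("x & y = %d\n", x & y);   //bitwise AND -> 0
	printf("x | y = %d\n", x | y);   //bitwise OR  -> 1
	printf("x ^ y = %d\n", x ^ y);   //bitwise XOR -> 1
	printf("~x    = %d\n", ~x);      //bitwise NOT -> -2 (every bit is flipped)
	return(0);
}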

Retest: Karl's "Not" Experiment

Question

A discussion in class led to the question being posed regarding the nature of the NOT operator in C/C++. We wondered what happened if an integer is defined as a given number and then has the NOT operator applied to it. We know that if the number is 1, we will get a zero. But what if the number is larger than 1? What if it is an arbitrarily large number? This is what the experiment seeks to answer.

Resources

Hypothesis

One might think that the number returned would be negative (for whatever reason?), but given that using NOT on 1 yields 0, it is likely that using it on a different number will yield 0 as well.

Experiment

I will be testing this by writing code that accepts an inputted number, runs NOT on it, then prints the result.

Data

Source Code
#include<stdio.h>
 
int main()
{
    int a, b;
    printf("Please enter a value to use NOT on: ");
    scanf("%d", &a);
    b=!a;
    printf("a = %d\nb = %d\n!b = %d\n", a, b, !b);
    return(0);
}
CLI Output
tyler@aleron ~/src/asm $ ./notexp
Please enter a value to use NOT on: 1
a = 1
b = 0
!b = 1
tyler@aleron ~/src/asm $ ./notexp
Please enter a value to use NOT on: 3
a = 3
b = 0
!b = 1
tyler@aleron ~/src/asm $ ./notexp
Please enter a value to use NOT on: 9001
a = 9001
b = 0
!b = 1

Analysis

My hypothesis was correct, and Karl's original test checks out. The NOT operation, when applied to a variable, will yield a 0 if the variable holds any nonzero value, and a 1 if it holds zero.

Conclusions

My conclusion is largely in the analysis portion of this experiment. But, through this experiment, it has become clear that the NOT operator in C/C++ will serve as a valuable tool in our CPU simulation project.

Part 2

Entries

Entry 5: March 7, 2012

Today's Computer Organization class was actually a little weird. We were missing a good chunk of our class inexplicably! We still continued on with class, naturally. We started to talk about the different types of registers that different processors will implement, and how many they use. We took a look at the instruction sets for our given research processors. After a few class-wide side-tracking discussions, I was able to find out that my research processor (PowerPC) has 32 general purpose registers and 32 floating-point registers. Unfortunately, we did not get much farther than this for the class.

As for HPC II, I started a new project. I'm writing a guide on updating the kernel of a Linux system. It covers the basic, easy way, and (more importantly) the difficult, manual way of doing it, which involves compiling the kernel yourself and manipulating some boot directories. Fun stuff, after you get past the “learning how to do it” part!

Entry 6: March 14, 2012

Today's class was a sort of recap and reinvestigation of topics touched on in the class before. Basically, we were discussing the idea of instructions and the bits and bytes used to call them. In the specific example we were working from, our instructions used 4 bytes. We determined that in a so-called Greatly Reduced Instruction Set Computer (or, GRISC), we needed only a handful of commands to ultimately execute all of the actions that we need in an emulator. These included the logic operators (AND, OR, NOT) and a few others (I need to check on the board and do a bit more research, evidently!). Joe also mentioned that there should be separate versions of some of these instructions that deal with either memory or a register.

Entry 7: March 27, 2012

This will serve as a back entry for recent classes, considering a slight lack of activity. Recently, we've been discussing the instructions contained within the instruction set, and how we'd be able to represent and use them. This led to some nice diagrams on the board that highlighted how we would use 4 bytes for each instruction, and how each bit of each byte would be used. For clarity, our chosen instructions included AND, OR, NOT, BRANCH, etc. For an example of how we would use some of the bits within the given bytes, we decided that the first three bits of the first byte would represent the instruction for identification. Afterwards, for example, in the AND instruction, the next two pairs of bits would represent the register being drawn upon, with a misc. bit at the end. The first two bits of the next byte would be for the output register of the AND instruction. That would be a brief explanation of the subject which we've recently started to discuss.

Entry 8: March 30, 2012

Classes lately have been more about independent study than anything else, which is okay. There are projects, programs and opuses to be worked on. As far as relevant in class discussion goes, more time has been dedicated to discussing the properties of the instructions included with our computer simulator. Understanding these of course is important when it comes to creating a solid base to start from. The instructions seem pretty simple in a very general sense, but implementing them isn't as easy. Not necessarily difficult, but not as easy as understanding what AND does in a general sense.

Keywords

asm Keywords

Interrupt

Definition

An interrupt is a signal that can either be asynchronous or synchronous and indicates either a need for specific attention or a change in the computer's current execution. Interrupts are dealt with by the interrupt handler. Interrupts are an important part of the multitasking aspect of computers. Without interrupts, multiple programs would not be able to be run at the same time.

A hardware interrupt has the processor execute its interrupt handler after saving the execution state it left off at. A software interrupt usually takes the form of instructions within the instruction set, as is the case for many of the processors we've looked at.

Demonstration

A diagram of how interrupts are routed by the Linux Kernel.

[Source: tldp.org: Interrupts and Interrupt Handling]

I/O

Definition

As we all know, I/O refers to Input and Output. This refers to the interaction between the forces of the outside world (say, a person, for example) and all of the components that make up the computer system. These interactions can take the form of either data or signals. Input is obviously either of those things going into the system, and output is either of those things leaving the system. There are various interfaces for input and output. Some examples of input interfaces include a keyboard or a mouse, while output interfaces can include a screen or a speaker.

Demonstration

Block diagram of I/O in a 6502 styled CPU.

[Source: http://www.cast-inc.com/ip-cores/processors/c6502/index.html]

Machine Word

Definition

A machine word is the basic, natural unit of information used in a computer processor, the details of which are, of course, dependent on the type of processor. A word is ultimately a fixed-size group of bits (or of bytes, digits, or characters) which is handled collectively by the hardware and the instruction set of the processor. The length of the words used by the processor is an important, defining characteristic, and is how we determine what "X-bit" a processor is (as in 32-bit or 64-bit, for example).

Instruction Sets

Definition

An Instruction Set is the collected set of machine words and native commands that can be carried out by a given processor. Instruction sets vary in size from processor to processor.

There are different types of instruction sets, notably CISC and RISC instruction sets.

CISC stands for Complex Instruction Set Computer, whereas RISC stands for Reduced Instruction Set Computer. CISC means that single instructions can perform multiple lesser tasks, and RISC means that smaller, simpler instructions will be used in the hopes that these smaller instructions will execute quicker, leading to better performance.

Demonstration

Here is the 6502's instruction set, as an example–

http://www.masswerk.at/6502/6502_instruction_set.html

Registers (Stack Pointer, Program Counter, Flag/Status)

Definition

These are some of the different types of special purpose registers that can be used:

  • Stack Pointer: A type of hardware register which points to the most recently used location of a stack (which is usually a group of memory locations).
  • Program Counter: A register within the processor which tells where the computer currently is within the program sequence. It is incremented after completing an instruction, and holds the address of the next instruction.
  • Flag/Status Register: Hardware register that details information on the current state of the process being executed.

Registers (Index/Pointer)

Definition

An index register is used to modify operand addresses while an operation within a program is taking place. The contents of the register are added to the address given in the instruction to form the location of the actual data.

Pointer registers include either stack pointer registers or base pointer registers. The stack pointer was obviously mentioned in the above keyword. A base pointer is a general-purpose register that points to the base of the stack of instructions/memory locations.

Demonstration
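A rough software-side analogy (the names below are made up for illustration): indexed addressing works much like C array indexing, where the effective address is a base address plus an offset held in an index.

#include <stdio.h>

int main()
{
	int data[4] = {10, 20, 30, 40};
	int *base = data;     //plays the role of a base pointer register
	int i = 2;            //plays the role of an index register

	//base + i forms the effective address; *(base + i) is the data stored there
	printf("data[%d] = %d\n", i, *(base + i));   //prints 30
	return(0);
}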

von Neumann vs. Harvard architecture

Definition

These are two different styles of computer architecture; an architecture describes the design and function of the computers that use it.

von Neumann architecture was developed by John von Neumann around 1945, and suggests that a processing unit has several different divisions within it, each assigned specific processing duties. These divisions include the ALU, registers, memory, I/O, etc. It is also referred to as the stored-program design; because instructions and data share the same memory and pathways, an instruction cannot be fetched while data is being accessed at the same time.

Harvard architecture is different in that it can fetch and execute instructions at the same time. This is because it has physically separated storage and signal pathways dedicated to instructions and data, respectively. This naturally can lead to a faster computer.

Demonstration
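Roughly, in the same spirit as the kernel diagram above:

von Neumann (one shared memory for instructions and data):

[CPU] <---> [Memory: instructions + data]

Harvard (separate storage and pathways):

[Instruction memory] <---> [CPU] <---> [Data memory]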

Binary and Hexadecimal Number Representation

Definition

Binary and Hexadecimal are two different types of number systems. To new CS students, number systems may be confusing, as the only number system we grow up with would be decimal (i.e., the digits 0-9: 0, 1, 2, 3, …). However, we do not use decimal when it comes to the finer, inner workings of a computer. We use binary and hexadecimal instead.

Most people have heard of binary. Binary is a two-digit number system, with the digits most commonly represented as 0 and 1. A binary digit is known as a bit. As we know, eight bits make up a unit known as a byte. As such, binary has an inherent importance to computers.

Hexadecimal, on the other hand, is a little more confusing. It is a sixteen-digit number system, which means that a digit can be incremented sixteen times before a new digit place is added. Most commonly, hexadecimal digits are represented as 0 through F (shown below). It is commonly used in memory addressing and color codes.

Demonstration
Binary

Digits: 0, 1
Counting: 0, 1, 10, 11, 100, 101, 110, 111, 1000, etc.

Hexadecimal

Digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
Counting (by 5): 0, 5, A, F, 14, 19, 1E, 23, etc.

asm Objective

asm Objective: Familiarity with the role of the C library

Definition

In order to effectively code generally, let alone for a computer simulator, one must know how to use the numerous functions that the C library has to offer.

Method

Discussion on the topic along with some code examples.

Measurement

We make use of many different functions within the C library, and most often these functions come from two specific headers: stdio.h and stdlib.h. stdio.h obviously is our standard I/O header, containing the basic functions for input and output, such as our trusty printf and scanf and their many variations. stdlib.h opens things up for us as far as what we can do with code, as it provides us with dynamic memory allocation functions such as malloc, realloc and free, lets us generate "random" numbers, convert strings to other data types, and exit the program from wherever we need to.

These two headers account for much of what we do in our classes, and knowing them is integral to being able to efficiently transfer ideas and concepts into working code.
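As a small, non-definitive sketch of several of these functions working together (the program itself is just an example made up for illustration):

#include <stdio.h>
#include <stdlib.h>

int main()
{
	int count, i;
	int *numbers;

	printf("How many \"random\" numbers? ");
	scanf("%d", &count);
	if (count < 1)
	{
		printf("Please enter a positive number.\n");
		exit(1);                              //stdlib.h: leave the program
	}

	numbers = malloc(count * sizeof(int));    //stdlib.h: dynamic memory allocation
	if (numbers == NULL)
	{
		printf("malloc failed!\n");
		exit(1);
	}

	for (i = 0; i < count; i++)
	{
		numbers[i] = rand() % 100;        //stdlib.h: "random" numbers 0-99
		printf("%d ", numbers[i]);
	}
	printf("\n");

	free(numbers);                            //stdlib.h: give the memory back
	return(0);
}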

Analysis

I believe I have met the goal; beyond the discussion and small sketch above, there isn't a whole lot more I can do here to demonstrate a familiarity with how the C library works.

hpc2 Keywords

make/Makefile

Definition

make is a Linux program that updates parts of a large program when it is determined that those parts need to be recompiled. Once this happens, make will issue the commands needed to carry out the needed processes.

In order to use make, a makefile is needed. A makefile contains information that details how the parts of a program are related, and how they should be updated. make will search your directory for “Makefile” or “makefile.”

Demonstration
tyler@aleron ~/example $ ls
Makefile README examplesrc.c othersrc.c examplelib.h
tyler@aleron ~/example $ make
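For the placeholder files listed above, the Makefile itself might look something like this minimal sketch (recipe lines must start with a tab):

# Minimal example Makefile for the hypothetical files shown above
CC     = gcc
CFLAGS = -Wall

example: examplesrc.o othersrc.o
	$(CC) $(CFLAGS) -o example examplesrc.o othersrc.o

examplesrc.o: examplesrc.c examplelib.h
	$(CC) $(CFLAGS) -c examplesrc.c

othersrc.o: othersrc.c examplelib.h
	$(CC) $(CFLAGS) -c othersrc.c

clean:
	rm -f example *.o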

GRUB (Linux Bootloader)

Definition

GRUB, whose name is derived from Grand Unified Bootloader, is a bootloader with multiboot functionality, which means that it can load multiple different systems at start up. A bootloader is a program that runs at startup and loads the kernel of the operating system, which then loads the rest of the operating system.

GRUB is highly configurable, with many different options available. Basically, one can choose a specific operating system out of however many they have loaded onto their system. Even further, if one of these operating systems has multiple kernel versions installed, you can choose which version you'd like to boot up with. Also, GRUB provides a simple command-line-like interface during boot up.

Demonstration

Below, GRUB running during boot up.

[Source: Wikipedia: GNU GRUB]

kernel.org

Definition

kernel.org is the Linux Kernel Archives, the website where one can find Linux kernels for download. The newest stable version, the newest unstable version, and past stable versions are available here. You can download installable patches, view changelogs, and download the full source code for each kernel, should you want to compile it yourself. It goes without saying that if one wished to manually update their Linux kernel, here would be a good place to start.

Demonstration

Added Repositories

Definition

The added repositories on your system are what your system draws on as software sources. They are used in the updating, repairing and installing of packages on your system. Mainline Linux distributions have their own repositories that installations of the distribution use by default, and oftentimes they also include repositories from the distributions their OS is based on (for example, Ubuntu draws on Debian repositories, and Mint draws on Ubuntu's and consequently Debian's). You can add (or remove) whatever repositories suit the needs of your system.

Demonstration

One can add or remove repositories through Synaptic Package Manager, as shown above. Go to Settings → Repositories.
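On the command-line side, repositories live in plain text files under /etc/apt/ (sources.list and the sources.list.d/ directory). A hypothetical excerpt matching the sources visible in the apt-get update output above might look like this (the exact lines will vary per system):

deb http://ftp.us.debian.org/debian squeeze main contrib non-free
deb http://security.debian.org/ testing/updates main contrib non-free
deb http://packages.linuxmint.com/ debian main upstream import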

Linux swap partition

Definition

During the initial partitioning of the hard drive during the installation of a system, a Linux swap partition (linux-swap) is needed. This swap partition is an extra amount of space on the hard drive that is set aside for the system to use when RAM is not readily available. Without such a thing, a system may become unstable, depending on its use. Use of this swap space frees up RAM for the most prominent tasks within the system, making for faster processing, while tasks being used less at the time are moved out to the swap partition if they are taking up space in RAM. Conventional wisdom dictates that the amount of storage dedicated to swap should be twice as large as the amount of RAM you have, though this is by no means an objective standard.

Demonstration

This is reusing a picture, but you can plainly see the amount of memory allocated to swap in Gparted.

Now, this is how tasks are swapped:

Task 1 is being run, using memory in RAM
Task 1 is now in standby, not priority task in system, still taking up RAM
Task 2 is being run as a priority, needs more RAM than is currently easily available
Task 1 is swapped to the swap partition on the disk
Task 2 uses RAM to execute faster
etc. etc.
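A couple of commands for seeing swap on a running system (output varies per machine):

tyler@aleron ~ $ cat /proc/swaps     # lists the swap areas in use and their sizes
tyler@aleron ~ $ free -m             # shows RAM and swap usage, in megabytes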

Software Package

Definition

A Software Package is a specific piece of software that a system can install and uninstall using a package management program such as Synaptic or Aptitude, as opposed to manipulation through a file manager. Packages are useful because they contain metadata that can be read by your package management system, such as a description, the version name, and the other packages needed in order for the given package to work. Ultimately, the use of packages makes updating and maintenance relatively simple and efficient.

Demonstration

A picture of GDebi, a package manager that deals with .deb packages, installing a package on Ubuntu.

[Source: Wikipedia: GDebi ]

Dependencies

Definition

Dependencies refer to the software packages a system needs in order to install and use a given package. Dependencies are often made known when the user prompts the system to install a given package, thanks to the metadata provided with it. Oftentimes, a package management system can go and fetch the dependencies for a package when the installation is requested. Naturally, all of these situations use the repositories added to your system.

Demonstration

A window showing the dependencies needed for a selected package in Synaptic.
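From the terminal, APT can also list a package's dependencies directly (using the same placeholder style as the other examples here):

tyler@aleron ~ $ apt-cache depends [package name]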

dpkg

Definition

dpkg is a low-level packaging tool found on Debian-based systems. It is used from the command line and manages .deb packages on the system. Being a lower-level tool, it is not used as often as tools like apt, which are much more user-friendly, but dpkg comes in handy when you need (and are able) to get your hands dirty within the system, so to speak.

Demonstration

Use of dpkg:

Install a package:

tyler@aleron ~ $ sudo dpkg -i examplepackage.deb

Install several packages at once:

tyler@aleron ~ $ sudo dpkg -i examplepackage.deb otherpackage.deb ohlookanotherdeb.deb

Remove an installed package (by package name, not by .deb file):

tyler@aleron ~ $ sudo dpkg -r examplepackage

hpc2 Objective

hpc2 Objective: Demonstrate knowledge of Linux & Open Source

Definition

The only way to have true knowledge of the world of Linux and Open Source software and philosophy is to have hands-on experience with it. One must understand the importance of the availability of code that is free to use and modify.

Method

Well, shucks. Why don't I just talk about my experience with Linux and Open Source software for a bit, yeah?

Measurement

I've been using Linux since I began my time here at CCC, and I've been using open source software for longer, whether or not I realized the implications behind open source. First and foremost, from this, I've learned the utility of open source software, and the supreme convenience of having powerful, effective software that met my needs, and could be modified further to meet specific needs, provided I had the knowledge and the inclination. Many of these open source solutions provided a superior end product to their proprietary counterparts.

Linux has provided me with greater insight into the inner workings of a computer's operating system, along with an alternative that is superior in various ways to the OS that has been forced on me for most of my life. It also allowed me to have a system that suited me best and, more importantly, did not require a significant amount of technical knowledge to achieve. Through manipulating the system through a terminal, altering the packages on the system, and stylizing and experimenting with various desktop environments, a greater understanding of Linux is gained, along with a better general understanding of computers. This is demonstrated in myself through my various Linux-related troubleshooting projects and general discussions.

Analysis

I think my goal is met. At the very least, my laptop should be sufficient evidence of my knowledge on the subject.

Experiments

Experiment 4

Question

What is the question you'd like to pose for experimentation? State it here.

Resources

Collect information and resources (such as URLs of web resources), and comment on knowledge obtained that you think will provide useful background information to aid in performing the experiment.

Hypothesis

Based on what you've read with respect to your original posed question, what do you think will be the result of your experiment (ie an educated guess based on the facts known). This is done before actually performing the experiment.

State your rationale.

Experiment

How are you going to test your hypothesis? What is the structure of your experiment?

Data

Perform your experiment, and collect/document the results here.

Analysis

Based on the data collected:

  • Was your hypothesis correct?
  • Was your hypothesis not applicable?
  • Is there more going on than you originally thought? (shortcomings in hypothesis)
  • What shortcomings might there be in your experiment?
  • What shortcomings might there be in your data?

Conclusions

What can you ascertain based on the experiment performed and data collected? Document your findings here; make a statement as to any discoveries you've made.

Experiment 5

Question

What is the question you'd like to pose for experimentation? State it here.

Resources

Collect information and resources (such as URLs of web resources), and comment on knowledge obtained that you think will provide useful background information to aid in performing the experiment.

Hypothesis

Based on what you've read with respect to your original posed question, what do you think will be the result of your experiment (ie an educated guess based on the facts known). This is done before actually performing the experiment.

State your rationale.

Experiment

How are you going to test your hypothesis? What is the structure of your experiment?

Data

Perform your experiment, and collect/document the results here.

Analysis

Based on the data collected:

  • Was your hypothesis correct?
  • Was your hypothesis not applicable?
  • Is there more going on than you originally thought? (shortcomings in hypothesis)
  • What shortcomings might there be in your experiment?
  • What shortcomings might there be in your data?

Conclusions

What can you ascertain based on the experiment performed and data collected? Document your findings here; make a statement as to any discoveries you've made.

Retest 2

Perform the following steps:

State Experiment

Whose existing experiment are you going to retest? Provide the URL, note the author, and restate their question.

Resources

Evaluate their resources and commentary. Answer the following questions:

  • Do you feel the given resources are adequate in providing sufficient background information?
  • Are there additional resources you've found that you can add to the resources list?
  • Does the original experimenter appear to have obtained a necessary fundamental understanding of the concepts leading up to their stated experiment?
  • If you find a deviation in opinion, state why you think this might exist.

Hypothesis

State their experiment's hypothesis. Answer the following questions:

  • Do you feel their hypothesis is adequate in capturing the essence of what they're trying to discover?
  • What improvements could you make to their hypothesis, if any?

Experiment

Follow the steps given to recreate the original experiment. Answer the following questions:

  • Are the instructions correct in successfully achieving the results?
  • Is there room for improvement in the experiment instructions/description? What suggestions would you make?
  • Would you make any alterations to the structure of the experiment to yield better results? What, and why?

Data

Publish the data you have gained from your performing of the experiment here.

Analysis

Answer the following:

  • Does the data seem in-line with the published data from the original author?
  • Can you explain any deviations?
  • How about any sources of error?
  • Is the stated hypothesis adequate?

Conclusions

Answer the following:

  • What conclusions can you make based on performing the experiment?
  • Do you feel the experiment was adequate in obtaining a further understanding of a concept?
  • Does the original author appear to have gotten some value out of performing the experiment?
  • Any suggestions or observations that could improve this particular process (in general, or specifically you, or specifically for the original author).

Part 3

Entries

Entry 9: Week of April 8, 2012

This is actually the week of Spring Break, so not a whole lot is going on. I do have 3 full days of work, though, so that will give me time to do some computer related work on the side.

Unfortunately for me, though, a lot of time was spent trying to fix my Debian Mint system, as the latest Update Pack properly borked it. Essentially, it would only let me boot into a bash shell, with seemingly no internet connection, because many packages were broken and some dependencies were missing. Luckily, I was able to fix it, an adventure I will detail through an HPC2 project explaining how to repair broken packages and dependencies in a situation like that.
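For reference, the usual shape of that kind of repair looks roughly like the following (a sketch of the general approach, not necessarily the exact commands I ran): finish any interrupted package configuration, then let apt pull in whatever dependencies are missing.

tyler@aleron ~ $ sudo dpkg --configure -a
tyler@aleron ~ $ sudo apt-get -f install
tyler@aleron ~ $ sudo apt-get update && sudo apt-get upgrade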

Entry 10: April 20, 2012

Not much is going on, as you might imagine. It seems to be independent work time for everyone. Some are working on computer simulation codes, others on the opus (like me!). I managed to finish up all of my keywords for the second part of my opus (a little late on it, but hey, I blame spring break. Yes, that thing I said I was going to do a lot of work during. That's always how it is planned out to be, isn't it?). I do believe my keywords are of a notable quality, and I am quite pleased with them. Now, I just have to get some experiments for part 2 and 3, and also start the keywords for part 3. Comp. Org. keywords are just a matter of doing a bit of reading and writing, but HPC2 requires making up some keywords, so that might take a little longer this time around.

Entry 11: April 27-May 2, 2012

Things are starting to wind down at this point. Our EOCEs have been announced and posted and whatnot. The end of the semester is, necessarily, a bit stressful, it seems. Looking at the EOCEs, they seem to be just challenging enough. Well, except for the EOCE for HPC2, but it's understandable why it is extremely simple given the nature of the class. ASM's EOCE is actually pretty exciting, since the codes for it look to be pretty fun to write, barring any serious problems. At any rate, I'm going to try to keep to a very detailed schedule for the week leading up to my last final. I'll probably end up slacking, but with the detailed schedule I've written, plenty will still get done.

Entry 12: End of Semester, May 2012

Things are wrapping up for real, now. It's the last day of the semester as far as our CS classes are concerned, as everything is due tonight. I'm writing about the end of the semester here due to the general lack of notable activity throughout the end of April. My EOCEs are more or less done, and all that remains is typing for my Opus and HPC2 projects. Those projects have been done already, they just haven't been archived in the halls of Lab46 forever via text format. Either way, the ASM EOCE was actually pretty fun, as I expected. I had some problems, but they were predominantly from my…adventurousness, shall I say? I wanted my code to be fancy, which was okay for all but the last bit of code. Wasn't as nice as I'd hoped, so I had to settle for a little above the bare minimum. A strong finish to my last semester here at Corning. Off to Bing after the summer!

asm Keywords

Fetch-Execute Cycle
Definition

The Fetch-Execute Cycle, also known as the Instruction Cycle, refers to the sequence of actions the processor repeats in order to function. Simply put, it describes how the processor fetches an instruction from memory, decodes it, and carries out the actions the fetched instruction requires.

Demonstration

Below, a diagram detailing the Fetch-Execute cycle:

To clarify acronyms used within the image:

MAR: Memory Address Register
MDR: Memory Data Register
CIR: Current Instruction Register

[Source: Wikipedia: Instruction Cycle]
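
To make the cycle concrete, here is a minimal sketch in C of a toy processor stepping through fetch, decode, and execute (the instruction set, registers, and program here are made up purely for illustration):

#include <stdio.h>

#define HALT 0
#define LOAD 1  /* load the operand into the accumulator */
#define ADD  2  /* add the operand to the accumulator */

int main()
{
   int memory[] = { LOAD, 5, ADD, 7, HALT }; /* a tiny program sitting in "memory" */
   int pc = 0;       /* program counter: address of the next instruction */
   int acc = 0;      /* accumulator register */
   int running = 1;

   while (running)
   {
      int instruction = memory[pc++];  /* FETCH: grab the instruction, advance the PC */

      switch (instruction)             /* DECODE and EXECUTE */
      {
         case LOAD: acc = memory[pc++];  break;
         case ADD:  acc += memory[pc++]; break;
         case HALT: running = 0;         break;
      }
   }

   printf("Accumulator after running: %d\n", acc); /* prints 12 */
   return(0);
}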

Processor & Memory Organization
Definition

Processor Organization refers to the different parts of the processor and their relationship with one another. Some parts of a processor include–

  • Arithmetic logic unit- Performs logic and arithmetic operations
  • Clock- Generates the timing signal whose frequency sets the rate at which the processor operates
  • Control Unit- Coordinates the operations of the other parts of the processor and the flow of data
  • Registers- Small, very fast storage within the CPU that instructions operate on directly
  • Cache- Memory within the CPU that holds recently used data and instructions for quicker access than main memory

Memory Organization refers to the hierarchy into which a system's memory is organized. From the highest level, which is closest to the processor, to the lowest–

  • Processor registers
  • CPU cache (all of the various levels)
  • Main memory (RAM)
  • External memory
  • Hard disk storage
  • Removable media
Demonstration

Die map for the Intel Core i7 2600k processor, detailing the different parts of a processor. Note: the Core iX series of processors features an integrated graphics processor, which was not mentioned above. Integrating a GPU onto the CPU's die is a practice that, while becoming more common, is still fairly recent.


[Source: PCMag.com]

Data Instructions (Data Movement, Address Movement, Data Conversion, Bit Manipulation)
Definition
  • Data Movement: Instruction that moves given data from one location, be it a location in memory or in a register, to another location.
  • Address Movement: Instruction that moves a given address from one location to another. The locations here can also be memory locations or registers.
  • Data Conversion: Instruction that changes data from one type or size to another (for example, extending a byte to a word).
  • Bit Manipulation: Instructions that change individual bits. Setting a bit sets its value to 1, while clearing a bit sets it to 0 (see the short sketch below).
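
As a quick illustration of bit manipulation in C (the values here are arbitrary), setting a bit uses bitwise OR against a mask, and clearing a bit uses bitwise AND with the inverted mask:

#include <stdio.h>

int main()
{
   unsigned char value = 0x08;  /* binary 00001000 */

   value = value | 0x01;        /* set bit 0:   00001001 */
   value = value & ~0x08;       /* clear bit 3: 00000001 */

   printf("Result: 0x%02X\n", value); /* prints 0x01 */
   return(0);
}
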
Subroutines (Calling, Return Address)
Definition

A Subroutine is a break in the main flow of instructions that carries out a set of instructions mostly independent of the main set, much like the functions we already know. Calling a subroutine is when the main set of instructions branches off into said subroutine to perform the instructions defined within it. The return address is saved when the subroutine is called (typically on the stack), and it lets the processor know where operation left off so that execution can resume there once the subroutine finishes.

Demonstration

Here's a simple example of code in C to demonstrate how a subroutine works:

#include <stdio.h>
 
int subroutineInstruction(); //Declare the subroutine
 
int main()
{
   subroutineInstruction(); //Calling the subroutine
   return(0);
}
 
int subroutineInstruction()
{
   printf("Operating subroutine...\n");
   return(0); //Returning to where main left off
}
Stack Operations
Definition

Stack Operations refers to how the FILO (first in, last out) data structure known as a stack, in this case the processor's stack of operations and instructions, can be manipulated. Most importantly, we know about the push and pop operations.

  • Push– Add an element to a stack. In this case, an instruction. Adds an instruction to the top of the instruction stack.
  • Pop– Removes an element, or in this case instruction, from the stack. It is removed from the top of the stack as well.
Demonstration
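
Since there is no picture here, a minimal sketch in C of push and pop on a small array-backed stack will have to do (the stack size and values are arbitrary):

#include <stdio.h>

#define STACK_SIZE 8

int stack[STACK_SIZE];
int top = -1;                  /* index of the top element; -1 means empty */

void push(int value)
{
   if (top < STACK_SIZE - 1)
      stack[++top] = value;    /* add to the top of the stack */
}

int pop()
{
   if (top >= 0)
      return stack[top--];     /* remove from the top of the stack */
   return -1;                  /* underflow indicator */
}

int main()
{
   push(10);
   push(20);
   printf("%d\n", pop());      /* prints 20: last in, first out */
   printf("%d\n", pop());      /* prints 10 */
   return(0);
}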

Data Representation (Big Endian, Little Endian, Size, Integer, Floating Point, ASCII)
Definition

Data can be represented in a number of ways. The smallest unit of data, as we know, is a bit– the binary digit. We also have the byte, which is 8 bits. Integer (int) and Floating Point (float) are types of data. Integer is self-explanatory (ex. 1, 2, 3, etc.), and floating point covers numbers with fractional parts, like 1.25 or 3.14. ASCII is the American Standard Code for Information Interchange, which is a character-encoding scheme that maps a numeric value to a character. Big Endian and Little Endian refer to the order in which the bytes of a multi-byte value are stored in memory. Big Endian stores the most significant byte first (so 0x1234 is stored as the bytes 12 34), where Little Endian stores the least significant byte first (stored as 34 12).

Demonstration
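
A quick way to check your machine's byte order in C (a small sketch; most desktop x86 systems will print "little endian"):

#include <stdio.h>

int main()
{
   unsigned int value = 0x12345678;
   unsigned char *firstByte = (unsigned char *)&value; /* look at the lowest-addressed byte */

   if (*firstByte == 0x78)
      printf("little endian\n"); /* least significant byte stored first */
   else
      printf("big endian\n");    /* most significant byte stored first */

   return(0);
}
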
Data Representation (Sign Representation, One's Complement, Two's Complement)
Definition

Sign representation deals with encoding negative numbers in a binary system. One's Complement is one way to change a binary value to a negative complement value. All it does is flip every bit in the value, so 0's become 1's and 1's become 0's. Two's Complement, however, takes it a step further and adds one after flipping every bit. This is so that negative values created through Two's Complement can easily coexist with positive numbers (there is only one representation of zero, and ordinary addition still works).

Demonstration
Example:

Decimal Value: 9
Binary Value: 01001
One's Complement: 10110
Two's Complement: 10111
Decimal Value After: -9  
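
The same example can be checked quickly in C, since essentially all modern machines store signed integers in two's complement:

#include <stdio.h>

int main()
{
   int x = 9;
   int onesComplement = ~x;       /* flip every bit */
   int twosComplement = ~x + 1;   /* flip every bit, then add one */

   printf("%d\n", onesComplement);    /* prints -10 (one's complement of 9 in a 32-bit int) */
   printf("%d\n", twosComplement);    /* prints -9 */
   printf("%d\n", x + twosComplement); /* prints 0: -9 is the additive inverse of 9 */
   return(0);
}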
Linking, Object and Machine Code
Definition

Linking refers to putting together the different object files produced by the compiler to form one program during the build process. Object code, as we know, is what a compiler produces when it is given source code. It is a level above Machine Code, which is comprised of our familiar ones and zeroes. Machine Code is read by the processor, and as such, is the lowest level of code.

Demonstration

Below is a diagram of the process of linking.

[Source: Wikipedia: Linker]
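
In practice, you can watch this happen with gcc (the file names here are made up): each source file is compiled into an object file, and the linker then combines the object files into a single executable.

tyler@aleron ~ $ gcc -c main.c -o main.o
tyler@aleron ~ $ gcc -c helper.c -o helper.o
tyler@aleron ~ $ gcc main.o helper.o -o program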

asm Objective

asm Objective: Familiarity With The Organization Of A Computer System
Definition

Basically, to meet this objective, one must understand how all of the components of a computer work together to achieve what we see on our screens. This includes, specifically, how the processor works, as it is truly the heart of the computer.

Method

This can be measured through my progress with the opus and the EOCE. Any other possible method would simply require that I repeat myself.

Measurement
Analysis

Ultimately, as I said, there wasn't a whole lot else I could do other than point to my previous work or to repeat myself. I think my work on my opus speaks for itself, as far as this objective goes.

hpc2 Keywords

wicd
Definition

wicd is a network manager for Linux, which provides an alternative to the traditionally used NetworkManager program. It has a simple graphical interface with no desktop-environment-specific dependencies, which allows it to run on many different systems. It is often suggested as an alternative to NetworkManager when typical problems arise from generally simple situations.

Demonstration

Above, a screen cap of Wicd in action. The interface includes wired and wireless connections, and also provides the opportunity to adjust various options, making it much more flexible than NetworkManager.
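
wicd also comes with a curses-based client, which is handy when no graphical desktop is available (assuming the wicd-curses package is installed):

tyler@aleron ~ $ wicd-curses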

Intel vs. AMD
Definition

When it comes to processors, the two main manufacturers are Intel and AMD. Each has a niche market, so to speak, and various advantages over the other.

  • Intel– Has a strong grip on the higher-end, enthusiast market. Produces very expensive, but very capable hardware. Produces chips such as the Xeon series for servers and workstations, and the Core iX series, which includes Core i7 (highest price and performance), Core i5 (good value for powerful chips that overclock well; middle of the road price-wise) and Core i3 (more entry-level, but still capable chips). The price-to-performance ratio is not always the greatest if one is after high performance at a low price.
  • AMD– Strong presence in the low- to mid-end market. Known for an excellent price-to-performance ratio. Maker of very capable chips that meet the needs of most users, even those with higher-end needs like gaming, hence the competition. Producer of general-purpose chips such as the Athlon II series, higher-end desktop chips like the Phenom II, APUs like the A-series, whose integrated graphics can be paired with certain AMD GPUs via hybrid CrossFireX, enthusiast chips like the FX series, and server chips like the Opteron. More often than not, AMD chips are what you want to spring for on a budget, as they provide excellent punch for a good price.
Nvidia vs. AMD/ATI
Definition

Much like the processor market, there are two big names in graphics processing– Nvidia and AMD (formerly ATI). Their niche markets are not as clearly defined, but the loyalties to each manufacturer are the cause of many internet debates.

  • Nvidia– Producer of the GeForce desktop series and Quadro workstation series. Nvidia cards come with PhysX, an advanced physics processing technology that provides more realism in games. Recently, Nvidia has managed to take hold of both the very middle and the very top of the market. That is to say, their recent GeForce iterations have included cards that provide excellent performance for a mid-range price (see GTX 460, GTX 560 Ti) and cards that have reigned as the most powerful consumer desktop cards for a notable amount of time (see GTX 480, GTX 580). Typically, Nvidia is known for solid driver support, though one recent driver issue caused many problems among users and went unfixed for an unusually long time. Regardless, Nvidia cards are generally solid on all systems.
  • AMD (formerly ATI)– Maker of the Radeon HD desktop series and the FirePro (formerly FireGL) workstation series. Produces solid cards for every price range, and even tends to cover some in-between spots that Nvidia leaves behind. Recently, AMD has released high-end cards that carry two GPUs on a single board in order to compete with Nvidia's higher end. Unfortunately, AMD/ATI cards have a notorious history of bad drivers and poor driver support. If you can work around this, however, there is value to be found.
Case/Motherboard Form Factors
Definition

There are different form factors to consider with cases and motherboards. Here are some common form factors to take note of when building desktops:

  • ATX– (305 × 244 mm) The most common motherboard/case form factor for full-size desktops. Cases typically come in Mid-ATX and Full-ATX sizes. Boards typically have multiple PCI/PCI-E slots, memory slots, and SATA ports.
  • Micro ATX– (244 × 244 mm) Smaller than ATX (and Full/Mid-ATX cases). Fewer PCI/PCI-E slots, memory slots, and SATA ports.
  • Mini-ITX– (170 × 170 mm) Even smaller than Micro ATX. Made for very small systems, so integrated components are typical.
Demonstration
ATX


An example of an ATX Motherboard. [Source: Nomenclaturo]


The inside of a Full ATX case. [Source: geeky-gadgets.com]


A Mid-ATX case for comparison. [Source: desinformado.com]

Micro ATX

An example of a Micro ATX motherboard (MSI 890GXM-G65).

Mini-ITX

An example of a Mini-ITX motherboard (ASUS P8H67 series).

Desktop Environments (Gnome, KDE, Xfce)
Definition

When it comes to Linux, there are many different desktop environments (or, DEs) to choose from. Here are some popular ones to consider when building a Linux system.

  • Gnome– [http://www.gnome.org/] One of the most, if not the most, popular DEs, due to distros like Ubuntu shipping with it by default. Gnome 2.x was a sort of de facto standard Linux DE for a long time, but the new Gnome 3 has caused a bit of disenchantment among Gnome 2 fans. Gnome 3, currently, is not as easily customizable as other DEs (Gnome 2 in particular). It does, however, feature simplified search and organization features that users of Windows 7 may come to appreciate.
  • KDE– [http://www.kde.org/] Another immensely popular DE, often considered the first place to look for an alternative to Gnome, also due to its wide inclusion in various Linux distros.
  • Xfce– [www.xfce.org/] A highly modular and lightweight (uses fewer resources) DE. Nearly every aspect of the desktop is customizable, and it offers cross-compatibility with Gnome and KDE applications, given the installation of the correct libraries. Strongly suggested for fans of desktops that actually work AND look nice (in other words, fans of Gnome 2).
Demonstration

objectively best DE here whoop whoop i'm sorry do you see this beautiful desktop look at xfce in all of its glory

*ahem* An example of a Xfce desktop.

Operating Systems (Windows, OS X, Linux)
Definition

Here is a brief discussion of the three major types of operating systems commonly used on personal computers nowadays.

  • Windows (7/Vista/XP) – Proprietary OS by Microsoft. Licenses are available for any build that meets the system requirements, and as such, it is featured predominantly in prebuilt PCs from third-party manufacturers. Windows 7 has a desktop environment with a taskbar that can group windows and pin frequently used icons, much like a dock. Most PC video games are developed for play on Windows, so gaming is really the OS's strong suit from a system builder's perspective.
  • Mac OS X– Proprietary OS by Apple for use on computers manufactured by Apple. As such, OS X is not distributed on third-party prebuilt computers. OS X is Unix-based and can be manipulated from a terminal. It is known for its strength in graphical and musical applications. Games developed for Windows are being ported to OS X more frequently, but not nearly often enough to seriously consider OS X for a gaming machine. It is also well known for being “virus free” and “idiot-proof.”
  • Linux (Debian, Ubuntu, Mint, Fedora, etc.) – The one, the only, the open source alternative. There are many different distributions of Linux to suit your needs. In fact, if you wanted, you could make your own distro and maintain it. Linux OSes make use of open source software, and as such, are often developed by the community to suit needs not previously met. Of course, this is limited by the skill set and time available to work on such things. At any rate, Linux OSes are very capable of handling your typical computing needs and then some. They are highly customizable and configurable. The main drawback is that proprietary support is typically lacking; developers tend to keep code for an OS up to date when it is how they make a living and the market demands it.

At any rate, here would be my suggestions, if you forced me to pin a couple specific tasks for each OS to specialize in–

  • Windows ⇒ Gaming, Engineering drawing
  • OS X ⇒ Music, Drawing
  • Linux ⇒ General use, Programming
BIOS
Definition
Demonstration
Open Source vs. Proprietary/Closed Source
Definition
Demonstration

hpc2 Objective

hpc2 Objective

State the course objective

Definition

In your own words, define what that objective entails.

Method

State the method you will use for measuring successful academic/intellectual achievement of this objective.

Measurement

Follow your method and obtain a measurement. Document the results here.

Analysis

Reflect upon your results of the measurement to ascertain your achievement of the particular course objective.

  • How did you do?
  • Is there room for improvement?
  • Could the measurement process be enhanced to be more effective?
  • Do you think this enhancement would be efficient to employ?
  • Could the course objective be altered to be more applicable? How would you alter it?

Experiments

Experiment 7

Question

What is the question you'd like to pose for experimentation? State it here.

Resources

Collect information and resources (such as URLs of web resources), and comment on knowledge obtained that you think will provide useful background information to aid in performing the experiment.

Hypothesis

Based on what you've read with respect to your original posed question, what do you think will be the result of your experiment (ie an educated guess based on the facts known). This is done before actually performing the experiment.

State your rationale.

Experiment

How are you going to test your hypothesis? What is the structure of your experiment?

Data

Perform your experiment, and collect/document the results here.

Analysis

Based on the data collected:

  • Was your hypothesis correct?
  • Was your hypothesis not applicable?
  • Is there more going on than you originally thought? (shortcomings in hypothesis)
  • What shortcomings might there be in your experiment?
  • What shortcomings might there be in your data?

Conclusions

What can you ascertain based on the experiment performed and data collected? Document your findings here; make a statement as to any discoveries you've made.

Experiment 8

Question

What is the question you'd like to pose for experimentation? State it here.

Resources

Collect information and resources (such as URLs of web resources), and comment on knowledge obtained that you think will provide useful background information to aid in performing the experiment.

Hypothesis

Based on what you've read with respect to your original posed question, what do you think will be the result of your experiment (ie an educated guess based on the facts known). This is done before actually performing the experiment.

State your rationale.

Experiment

How are you going to test your hypothesis? What is the structure of your experiment?

Data

Perform your experiment, and collect/document the results here.

Analysis

Based on the data collected:

  • Was your hypothesis correct?
  • Was your hypothesis not applicable?
  • Is there more going on than you originally thought? (shortcomings in hypothesis)
  • What shortcomings might there be in your experiment?
  • What shortcomings might there be in your data?

Conclusions

What can you ascertain based on the experiment performed and data collected? Document your findings here; make a statement as to any discoveries you've made.

Retest 3

Perform the following steps:

State Experiment

Whose existing experiment are you going to retest? Provide the URL, note the author, and restate their question.

Resources

Evaluate their resources and commentary. Answer the following questions:

  • Do you feel the given resources are adequate in providing sufficient background information?
  • Are there additional resources you've found that you can add to the resources list?
  • Does the original experimenter appear to have obtained a necessary fundamental understanding of the concepts leading up to their stated experiment?
  • If you find a deviation in opinion, state why you think this might exist.

Hypothesis

State their experiment's hypothesis. Answer the following questions:

  • Do you feel their hypothesis is adequate in capturing the essence of what they're trying to discover?
  • What improvements could you make to their hypothesis, if any?

Experiment

Follow the steps given to recreate the original experiment. Answer the following questions:

  • Are the instructions correct in successfully achieving the results?
  • Is there room for improvement in the experiment instructions/description? What suggestions would you make?
  • Would you make any alterations to the structure of the experiment to yield better results? What, and why?

Data

Publish the data you have gained from your performing of the experiment here.

Analysis

Answer the following:

  • Does the data seem in-line with the published data from the original author?
  • Can you explain any deviations?
  • How about any sources of error?
  • Is the stated hypothesis adequate?

Conclusions

Answer the following:

  • What conclusions can you make based on performing the experiment?
  • Do you feel the experiment was adequate in obtaining a further understanding of a concept?
  • Does the original author appear to have gotten some value out of performing the experiment?
  • Any suggestions or observations that could improve this particular process (in general, or specifically you, or specifically for the original author).