Corning Community College
CSCS1320 C/C++ Programming
~~TOC~~
To be clear: there is nothing magical or special about hex values. Hexadecimal is base 16. And that's it.
We use bases like base 8 (octal) and base 16 (hexadecimal) as a convenient (and lossless) shorthand to represent binary values, due to this mathematical property:
2^n
If we have 1 binary bit, how many unique combinations can we represent?
2^1 = ???
The answer is '2'. 1 bit gives us 2 distinct states (which is why binary is used on the computer: on/off, true/false, 1/0). This is akin to a single manual light switch on the wall… it can be set to give us light, or to turn on the dark.
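If you'd like to see this on the machine, here is a minimal C sketch (purely illustrative) that computes 2^n with a left shift and prints the number of combinations for the first few bit counts:

#include <stdio.h>

int main()
{
    unsigned int n;

    // n bits give 2^n unique combinations; a left shift computes the power of 2
    for (n = 1; n <= 8; n++)
        printf("%u bit(s) -> %u combinations\n", n, 1u << n);

    return 0;
}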
However, having many bits together representing a value can be problematic for us: if we make a typo, the whole value is thrown off (and with computers being such precise instruments, that will cause them problems).
So, we use base 8 or 16 as an effective shorthand to easily represent such values. And easy is the key word.
We don't use decimal (we could; numbers ARE numbers, but we don't), because there is no clean transition between binary and decimal… many stepped processes are involved in the “conversion” of a binary number to a decimal one (as is true when converting from any base to any other, especially when there is no direct mathematical relationship between them).
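To get a feel for those “stepped processes”, here is a small, purely illustrative C sketch that converts a decimal value to binary the long way, via repeated division by 2 (202217 is the decimal form of the binary value we'll be playing with shortly):

#include <stdio.h>

int main()
{
    unsigned int value = 202217;   // decimal form of the binary value used below
    char digits[33];
    int i = 0;

    // repeated division by 2: each remainder is one binary digit,
    // produced least-significant first
    do
    {
        digits[i++] = '0' + (value % 2);
        value = value / 2;
    } while (value > 0);

    // print the digits back out most-significant first
    while (i > 0)
        putchar(digits[--i]);
    putchar('\n');

    return 0;
}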
What? Take octal for example.
Octal (base 8) has 8 counting digits (octets?)… 0, 1, 2, 3, 4, 5, 6, 7. The range is 0-7.
Using our above formula… we know 1 bit gives us 2 values.
How about 2 bits? 2^2 = 4
And 3 bits? 2^3 = 8. Ah, 8. It takes 3 bits to represent 1 octal value (or, more applicably for us, it takes 1 octal value to represent 3 binary bits).
Observe the following table:
binary  octal
000     0
001     1
010     2
011     3
100     4
101     5
110     6
111     7
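Here is a tiny C sketch, just to confirm the table, using printf()'s %o conversion for the octal column (the three shifted-and-masked bits give us the binary column):

#include <stdio.h>

int main()
{
    unsigned int i;

    // for each value 0-7, print its 3 bits and its octal digit (%o)
    for (i = 0; i < 8; i++)
        printf("%u%u%u  %o\n", (i >> 2) & 1, (i >> 1) & 1, i & 1, i);

    return 0;
}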
So, if we had the binary value: 0110001010111101001
And wanted to manipulate it in some form (by hand), but didn't want to get bogged down with carefully keeping track of each bit, we use an intermediary base (i.e. represent it in shorthand: the value is preserved, nothing is lost).
That value can be represented in octal quite simply… we just cordon off that binary value in groups of 3 bits (remember- 3 binary bits to represent 1 octal value)… starting on the right and moving left.
0110001010111101001

->  0 110 001 010 111 101 001
       6   1   2   7   5   1
So, 0110001010111101001(base 2) = 612751(base 8).
And what about that stray 0 in front? Ignore it. WHO CARES. 0 in front doesn't modify the value.
If you were getting $4, would it be any more advantageous to get $0004?
See how much shorter it is? MUCH easier to write. LESS opportunity for error.
Same with hexadecimal (each hex digit merely represents more binary bits: four instead of three).
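We can also let C check our work. The sketch below assumes a compiler that accepts the 0b binary literal prefix (a common extension, standardized in C23); if yours does not, substitute the octal form:

#include <stdio.h>

int main()
{
    // 0b... binary literals are a compiler extension (standard as of C23);
    // if your compiler rejects this line, use the octal form instead
    unsigned int value = 0b0110001010111101001;

    printf("octal:   %o\n", value);   // 612751
    printf("hex:     %x\n", value);   // 315e9
    printf("decimal: %u\n", value);   // 202217

    return 0;
}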
So now we can start to address the original question: how to come up with the values we're ANDing/ORing to get the desired end result?
For the sake of simplicity, let us use 3 bits (only 2^3 = 8 total combinations, vs. 8 bits giving 2^8 = 256). Plus, because we're dealing with binary values ultimately, this all seamlessly scales.
First up, a table of binary, octal, unsigned decimal, signed decimal:
binary  octal  unsigned  signed
000     0      0          0
001     1      1          1
010     2      2          2
011     3      3          3
100     4      4         -4
101     5      5         -3
110     6      6         -2
111     7      7         -1
Although our examples predominantly deal with signed DECIMAL numbers, there really is no reason to limit ourselves to just decimal. We can represent negative values in any base, and we'd essentially follow the same logic to do so. But since a lot of this is new, I'm trying to save your brain from exploding, so we're sticking with negative numbers in decimal (for now).
As we can see, when we choose to interpret a number as signed, the overall quantity of representable values remains the same; we are merely “shifting” the perceived bounds of our number range.
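To see that “shift” in action, here is a small illustrative sketch; signed3() is a made-up helper that re-interprets the low 3 bits of a value as a signed (two's complement) quantity, reproducing the table above:

#include <stdio.h>

// signed3() is a made-up helper: it treats the low 3 bits of 'bits'
// as a signed (two's complement) quantity
int signed3(unsigned int bits)
{
    bits = bits & 07;            // keep only 3 bits
    if (bits & 04)               // MSb set? the value is negative
        return (int)bits - 8;    // shift it down by 2^3
    return (int)bits;
}

int main()
{
    unsigned int i;

    for (i = 0; i < 8; i++)
        printf("%o  unsigned: %u  signed: %d\n", i, i, signed3(i));

    return 0;
}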
So, to start, let's look at how to find the bounds of an unsigned range:
If we had a binary value of 010, how could we conveniently (and universally) get it to become all 0s using logic?
If you recall, an AND operation is only true (let's say 1 is true, 0 is false) if BOTH/ALL inputs are true. Anything else results in a false result.
An OR, on the other hand, is true if ANY piece of input is true (one, the other, both).
An XOR (exclusive OR), is true if ONLY one piece of input is true, but NOT both.
A NAND is the opposite of the AND: it is false if both inputs are true, but true for every other combination (NOT the same as an OR, although the patterns look similar).
A NOR is the opposite of an OR, it is therefore only true if both inputs are false.
An XNOR is the opposite of an XOR, it is true only if both inputs are the SAME (both false, both true).
These are 6 of the 7 basic logical operations (the 7th is NOT, which does a straight inversion (1→0, 0→1)).
C/C++ have actual bitwise operators for AND (&), OR (|), NOT (~), and XOR (^). We can derive the others (and many more!) through combinations of these.
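As a quick illustration of deriving the others, the following sketch (values and names are just for demonstration) builds NAND, NOR, and XNOR out of &, |, ^, and ~, masking back down to 3 bits to match our example:

#include <stdio.h>

int main()
{
    unsigned int a = 06, b = 03, mask = 07;   // two 3-bit values (110 and 011)

    printf("a AND  b = %o\n",  (a & b) & mask);   // 2 (010)
    printf("a OR   b = %o\n",  (a | b) & mask);   // 7 (111)
    printf("a XOR  b = %o\n",  (a ^ b) & mask);   // 5 (101)
    printf("a NAND b = %o\n", ~(a & b) & mask);   // 5 (101)
    printf("a NOR  b = %o\n", ~(a | b) & mask);   // 0 (000)
    printf("a XNOR b = %o\n", ~(a ^ b) & mask);   // 2 (010)
    printf("NOT a    = %o\n",  ~a      & mask);   // 1 (001)

    return 0;
}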
It turns out, upon looking at these logical operations, that to make all the bits in a bitstring 0, regardless of its initial value, we could use AND, ANDing it against a bitstring containing all zeros (ANDing with 0 always results in 0).
So, here is our 010 being ANDed against 000:
010  <-- our initial value
000  <-- bitstring (perhaps also called a bit mask)
---
000  <-- result
However, we cannot easily represent binary numbers directly in C, so we have to express them in some other base. C assumes decimal by default, but literals can also be written in octal (prefix a leading 0) or hexadecimal (prefix a leading “0x”).
If 010 was in a variable called number1 and we stored the result in a variable called result, the C code would resemble this (these are all equivalent):
Using octal:       result = number1 & 00;   // first 0 says "Octal", second 0 is the octal value
unsigned decimal:  result = number1 & 0;
signed decimal:    result = number1 & 0;    // does this make sense?
hexadecimal:       result = number1 & 0x0;  // only one zero is needed, and in this case it is
                                            // a little in excess (why?)
For the unsigned HIGH, we want what? ALL bits to be one, regardless of their initial state.
It looks like an OR can be useful in that context, and we'd OR against a bitstring of all 1's (111 to continue our 3-bit example).
So, we'd now have the following solutions (these are all equivalent):
Using octal:       result = number1 | 07;   // 7 in octal is 111
unsigned decimal:  result = number1 | 7;
signed decimal:    result = number1 | -1;   // does this make sense?
hexadecimal:       result = number1 | 0x7;
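Pulling those fragments into a complete (purely illustrative) test program, using our 3-bit view of the value:

#include <stdio.h>

int main()
{
    unsigned int number1 = 02;      // 010 in binary -- any 3-bit value works
    unsigned int result;

    result = number1 & 00;          // every bit forced off: unsigned LOW
    printf("unsigned LOW:  %o\n", result);   // 0

    result = number1 | 07;          // every bit forced on: unsigned HIGH
    printf("unsigned HIGH: %o\n", result);   // 7

    return 0;
}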
For the signed quantities, our position of LOW and HIGH changes.
We see that LOW is now -4 (100 in binary) and HIGH is now 3 (011 in binary).
The solution lies in the binary:
We see, then, that the bit pattern to describe the LOW value in a signed quantity is: MSb 1, all the rest 0.
By comparison, the HIGH signed pattern is: MSb 0, all the rest are 1.
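As an aside (not required for what follows), those two patterns can be generated for any bit count with shifts; a small sketch, assuming our 3-bit example:

#include <stdio.h>

int main()
{
    unsigned int n = 3;                        // bits in our example

    unsigned int low  = 1u << (n - 1);         // MSb 1, all the rest 0 -> 100
    unsigned int high = (1u << (n - 1)) - 1;   // MSb 0, all the rest 1 -> 011

    printf("signed LOW  bit pattern: %o\n", low);    // 4 (100)
    printf("signed HIGH bit pattern: %o\n", high);   // 3 (011)

    return 0;
}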
Using one (or more) of the above logical operations, how could we convert 010 to 100?
An AND will not do (1 and 0 is 0). An OR will not do (any 1 will make it true).
An XOR will not do (we'd merely set our intended bit while preserving the existing bits: 010 XOR 100 gives 110, at least in THIS example).
What if we combined AND and OR?
Tell me, what is the result of this:
number1 = 02;
result = number1 | 04;  // what is stored in result following this operation?
result = result & 04;   // how about now?
Do we now have the desired binary value (100) that represents the lowest signed value?
Does it work if we change the initial value stored in number1? Try it and see.
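If you want a quick way to try it, here is a small harness (sticking with the 3-bit example; the variable names are just the ones used above) that runs the OR-then-AND combination against every possible starting value:

#include <stdio.h>

int main()
{
    unsigned int number1, result;

    // try every possible 3-bit starting value
    for (number1 = 00; number1 <= 07; number1++)
    {
        result = number1 | 04;   // turn the MSb on
        result = result  & 04;   // turn all the other bits off
        printf("number1 = %o -> result = %o\n", number1, result);
    }

    return 0;
}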