Corning Community College
ENGR1050 C for Engineers
To begin our exploration of programming, we start with an investigation into the various data types available in C, along with their properties, while collaboratively authoring and documenting the project and its specifications.
To assist with consistency across all implementations, data files for use with this project are available on lab46 via the grabit tool. Be sure to obtain them and ensure your implementation works properly with the provided data.
lab46:~/src/SEMESTER/DESIG$ grabit DESIG PROJECT
You will want to go here to edit and fill in the various sections of the document:
Number systems are a way in which one can represent quantitative values. Certain number systems are used for certain applications.
For example, the decimal number system, also known as base-10, consists of the counting numbers we typically use in daily math. It uses the digits 0 through 9 to represent a given value: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16.
Yet there are still more number systems in use. Besides decimal (which is the most common), there is hexadecimal, which is base-16; octal, which is base-8; and binary, which is base-2.
The prefix “bi” stands for two. This means that binary counts using only two digits, zero and one. Computers use binary to store and manipulate data. A zero represents no flow of electricity, whereas a one represents electricity being allowed to flow.
The binary number system, also known as base-2, is the number system used by computers. It uses the digits 0 and 1 to represent a given value: 000000, 000001, 000010, 000011, 000100, 000101, 000110, 000111, 001000, 001001, 001010, 001011, 001100, 001101, 001110, 001111, 010000 (these are equivalent to the values 0 through 16 in the decimal number system).
Computers will always convert the numbers from any number system into binary for the purposes of consistent computation, and convert them back into their original number system once finished with these computations. For example, say someone wanted to perform the computation 3 + 2 in the decimal number system. A computer would convert these values into the binary values of 000011 and 000010, respectively, compute the binary value 000101 from these values, and convert this value back into the decimal value of 5.
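As a quick illustrative sketch (not part of the project specification; the print_bits6() helper below is made up just for this example), the following C program adds 3 and 2 and displays the operands and result both as 6-digit bit patterns and as the familiar decimal value:

#include <stdio.h>

/* Illustrative helper: prints the low 6 bits of a value, most significant
 * bit first, matching the 6-digit binary strings shown above. */
void print_bits6(unsigned char value)
{
    for (int bit = 5; bit >= 0; bit--)
        putchar(((value >> bit) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    unsigned char a = 3, b = 2;
    unsigned char sum = a + b;  /* the addition itself happens on binary values */

    print_bits6(a);             /* prints 000011 */
    print_bits6(b);             /* prints 000010 */
    print_bits6(sum);           /* prints 000101 */
    printf("%hhu\n", sum);      /* prints 5, converted back to decimal for display */

    return 0;
}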
When you say a binary number, pronounce each digit (for example, the binary number “101” is spoken as “one zero one”, or sometimes “one-oh-one”). This way people don't get confused with the decimal number. A single binary digit (like “0” or “1”) is called a “bit”. For example, 11010 is five bits long. The word “bit” is made up from the words “binary digit”.
The prefix “hexa” stands for six and the prefix “deci” stands for ten; as such, the combined prefix “hexadeci” stands for sixteen. This means that hexadecimal uses 16 digits to count, drawing on both numbers and letters of the English alphabet. Hexadecimal is primarily used to convert the large strings of binary digits used by computers into a smaller number of digits that is easier for the human eye to read.
The hexadecimal number system, also known as base-16, uses the numbers 0 through 9 and the English letters A through F to represent a given value: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10 (These are equivalent to the values 0 through 16 in the decimal number system)
Because the hexadecimal base of 16 can also be written as 2^4, converting between binary and hexadecimal is much more intuitive than converting between binary and decimal. Each digit of the hexadecimal number system corresponds to a specific set of four digits in the binary number system. One can represent larger hexadecimal values in binary by combining strings of binary digits in these sets of four. For example, take the binary number 01011010. It can be split into two smaller binary numbers, 0101 and 1010, each with four digits. Individually, these equal 0x05 and 0x0A, respectively, in the hexadecimal number system. Combining the two thus yields 0x5A in hexadecimal.
In hex, four digits of a binary number can be represented by a single hex digit. Dividing a binary number into 4-bit sets means that each set can have a possible value of between 0000 and 1111, allowing 16 number combinations from 0 to 15. With the base value as 16, the maximum value of a digit is 15.
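To make that four-bit grouping concrete, here is a minimal C sketch (assuming an 8-bit unsigned char, as on lab46; this is illustration only, not project code) that splits the value 0x5A back into its two four-bit halves:

#include <stdio.h>

int main(void)
{
    unsigned char value = 0x5A;               /* same bit pattern as binary 01011010 */

    unsigned char high = (value >> 4) & 0xF;  /* upper four bits: 0101 -> 0x5 */
    unsigned char low  = value & 0xF;         /* lower four bits: 1010 -> 0xA */

    printf("high nibble: %hhX\n", high);      /* prints 5 */
    printf("low nibble:  %hhX\n", low);       /* prints A */
    printf("combined:    0x%02hhX\n", value); /* prints 0x5A, which is 90 in decimal */

    return 0;
}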
Signed char- This type of data occupies 1 byte of memory (8 bits) and allows expressing a maximum of 256 values. Signed char can contain both positive and negative values along with zero. The range of values is from -128 to 127.
Unsigned char- This type of data occupies 1 byte of memory (8 bits) and allows expressing a maximum of 256 values as well. Unlike signed char, unsigned char can only contain positive values and zero. The range of values is from 0 to 255.
Signed short int- This type of data occupies 2 bytes of memory (16 bits) and allows expressing a maximum of 65,536 values. Signed short int can contain both positive and negative values along with zero. The range of values is from -32,768 to 32,767.
Unsigned short int- This type of data occupies 2 bytes of memory (16 bits) and allows expressing a maximum of 65,536 values as well. Unlike signed short int, unsigned short int can only contain positive values and zero. The range of values is from 0 to 65,535.
Signed int- This type of data occupies 2 or 4 bytes of memory (16 or 32 bits) depending on the compiler and allows expressing a maximum of 65,536 values at 2 bytes or 4,294,967,296 values at 4 bytes. Signed int can contain both positive and negative values along with zero. The range of values is from -32,768 to 32,767 at 2 bytes or -2,147,483,648 to 2,147,483,647 at 4 bytes.
Unsigned int- This type of data occupies 2 or 4 bytes of memory (16 or 32 bits) depending on the compiler and allows expressing a maximum of 65,536 values at 2 bytes or 4,294,967,296 values at 4 bytes. Unlike signed int, unsigned int can only contain positive values and zero. The range of values is from 0 to 65,535 at 2 bytes or 0 to 4,294,967,295 at 4 bytes.
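As a hedged sketch of how one could double-check these ranges on a particular system (the exact values for int depend on the compiler, as noted above; this is not part of the project specification), the macros in <limits.h> can simply be printed:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* <limits.h> reports the actual ranges for the machine and compiler in use */
    printf("signed char:        %d to %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("unsigned char:      0 to %u\n", (unsigned)UCHAR_MAX);
    printf("signed short int:   %d to %d\n", SHRT_MIN, SHRT_MAX);
    printf("unsigned short int: 0 to %u\n", (unsigned)USHRT_MAX);
    printf("signed int:         %d to %d\n", INT_MIN, INT_MAX);
    printf("unsigned int:       0 to %u\n", UINT_MAX);

    printf("sizeof(int) on this system: %zu bytes\n", sizeof(int));

    return 0;
}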
The various printf functions take a format string and optional arguments and produce a formatted sequence of characters for output. In this project there are 2 specifiers that we must include.
First there is the type specifier. The type specifier character tells printf how to interpret the corresponding argument: should it be interpreted as a character, a string, a pointer, an integer, or a float?
Important type specifiers used within this project are:
%hhd - Specifies the output type as half of a half of a signed int (4/2/2 = 1 byte), i.e. a signed char
%hhu - Specifies the output type as half of a half of an unsigned int (4/2/2 = 1 byte), i.e. an unsigned char
%hd - Specifies the output type as half of a signed int (4/2 = 2 bytes), i.e. a signed short int
%hu - Specifies the output type as half of an unsigned int (4/2 = 2 bytes), i.e. an unsigned short int
%lld - Specifies the output type as a signed long long int
%llu - Specifies the output type as an unsigned long long int
%p - Specifies the output type as a memory address (pointer), displayed as hexadecimal digits
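As an illustrative sketch of these specifiers in action (the variable names and values below are made up for this example and are not taken from the project data):

#include <stdio.h>

int main(void)
{
    signed char        sc  = -128;
    unsigned char      uc  = 255;
    signed short int   ss  = -32768;
    unsigned short int us  = 65535;
    long long          sll = -1234567890123LL;
    unsigned long long ull = 18446744073709551615ULL;

    printf("signed char:        %hhd\n", sc);
    printf("unsigned char:      %hhu\n", uc);
    printf("signed short int:   %hd\n",  ss);
    printf("unsigned short int: %hu\n",  us);
    printf("signed long long:   %lld\n", sll);
    printf("unsigned long long: %llu\n", ull);
    printf("address of sc:      %p\n",   (void *)&sc);

    return 0;
}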
To be successful in this project, the following criteria (or their equivalent) must be met:
Let's say you have completed work on the project and are ready to submit; you would do the following (assuming you have a program called dtr0.c):
lab46:~/src/SEMESTER/DESIG/PROJECT$ make submit
You should get some sort of confirmation indicating successful submission if all went according to plan. If not, check for typos and/or locational mismatches.
I'll be evaluating the project based on the following criteria:
39:dtr0:final tally of results (39/39)
*:dtr0:used grabit to obtain project by the Sunday prior to duedate [6/6]
*:dtr0:clean compile, no compiler messages [7/7]
*:dtr0:program conforms to project specifications [20/20]
*:dtr0:code tracked in lab46 semester repo [6/6]