Mohsin Malik's Fall 2015 Opus

Injections

Introduction

My name is Mohsin Malik. I am currently in my 3rd semester at CCC out of (hopefully) 4 semesters. I am pursuing Computer Science as my major.

I mainly program in Java but have also ventured into C/C++ and some web development. I also play around with Linux in any free time I have.

Computers are my number one hobby. My number two hobby is following soccer. I enjoy playing the sport as well as following the major leagues around the world, as well as international fixtures.

That's basically all there is.

HPC Experience

September 1, 2015

On the first day of HPC, all we did was organize the tables in the room into a more “lair”-like arrangement. We moved the tables into their own pods. That's basically all we did.

On the second day of HPC, we still didn't have the computers due to an error when ordering them. So, all we did was set up the monitors on all the pods.

On the third day of HPC, the computers finally came! We set everything up. The goal for that day was to have a fully functional lair. I believe we were able to successfully set every pod up except for one (which was set up the next day).

That's essentially all we have done so far.

September 7, 2015

We didn't really do much in HPC. I believe Dan and Brian set some stuff up (I can't remember what they did). Currently we are just looking for any possible bugs that are in the current pod system. For example, there was a problem with Vim where you could not backspace. Andrew Hoover found the solution to this problem: setting backspace=2 solved it.

vim ~/.vimrc

Add in the line “set backspace=2” (this lets backspace delete over indentation, line breaks, and the start of insert).

I believe for next class we will be working on cable management. We will try our best to organize the cables, as in their current state, it looks very messy and unorganized.

September 16, 2015

In HPC, we worked on organizing the cables for the pods. That's essentially all we did, although we didn't really finish. I'm hoping to finish it up soon.

September 28, 2015

I haven't really picked out what exactly I will be doing for HPC yet. I'm trying to think of something, but nothing comes to mind. I have been looking into Android development, so I could do something with that. Who knows; I will keep looking for something.

In the meantime, I will describe what I have been doing in my free time in relation to computers. I recently purchased a solid state drive for my desktop. Originally I purchased a 120GB Samsung 850 Evo for $70, but I realized that it definitely wouldn't be enough space. So, I returned it and instead of going Samsung, I went with the Crucial MX200: 250GB for $90. I spent almost two hours researching different SSDs, and it came down to two of them: the Crucial MX200 and the Crucial BX100. I found that the main difference between the two drives is that the BX100 is aimed at consumers who have never purchased an SSD before; it's more standard, or basic. The MX200 has more advanced features, such as higher endurance, encryption, and much more. I researched some benchmarking tests, and both drives seemed very close to one another in almost every respect. In the end, I went with the MX200 because the price difference was only $5, so it didn't really matter to be honest.

My original plan was to purchase a 500GB SSD to replace my current internal 1TB hard drive. I was then going to move the 1TB hard drive to my home server, which is currently very low on space. I decided against this because it would be a lot more efficient to simply purchase a smaller-capacity SSD (250GB) now and another hard drive for my home server (maybe 2 or 3TB; we'll see) later down the road.

Another project I thought about doing for HPC is playing around with virtualization. I have a home server, and currently the only thing it does is run an Apache web server. I want to look into more cool things I can do with it. One of those things would be virtualization through the command line. Sure, you could go the GUI route, but learning it through the command line would probably be more effective. I could look into ways of creating virtual machines with ease and using them with ease. Stuff like that.

I have also been looking into Python. I see a lot of people writing Python scripts for their UNIX/Linux machines and I thought it would be nice if I could learn that, maybe do some neat stuff with that. We'll see.

November 16, 2015

The Ford Motor Company has many strategies to help keep the environment stable. They have mainly focused on reducing the amount of carbon dioxide that their cars produce. The company is attempting to reduce the carbon dioxide output of its cars by thirty percent by 2025. They have also been creating more electric cars in the hope of reducing carbon dioxide output. The Ford Motor Company is using a multitude of methods to attempt to sustain the environmental resources that are left on the earth.

Ford has produced sustainable technologies and alternative fuel plans over the last few years in an attempt to reduce the amount of carbon dioxide that their cars produce. Most of their near-term strategies are in place while they work on implementing their mid-term and long-term plans. They also have plans already in place to increase sustainability at the current date. Some of the technologies in place include EcoBoost engines, diesel use as market demands, aerodynamics improvements, and the introduction of smaller vehicles. The small vehicles that have been introduced include the C-Max Energi plug-in hybrid and the Ford Focus Electric. There are also hybrid cars out on the market, such as the Ford Fusion Hybrid, that can run off of a gas engine and a battery-driven electric motor. These vehicles decrease the amount of gas used by the cars, therefore reducing the amount of carbon dioxide released into the air. The cars with EcoBoost engines in them, like the Ford F-150 truck, increase how long the fuel lasts without sacrificing the power that a Ford consumer has come to enjoy. EcoBoost works by using turbocharging and direct injection along with reduced displacement to deliver significant fuel-efficiency gains and carbon dioxide reductions. Better aerodynamics decreases the resistance a car experiences from the wind; the less wind resistance, the easier the car can run, which in turn increases gas mileage and decreases the amount of carbon dioxide produced by the car. Already, the Ford Motor Company is increasing its sustainability by implementing plans in the present.
In addition to the plans already introduced, the Ford Motor Company plans to introduce more that will increase environmental sustainability in the near future and beyond. Some of the plans already introduced to help in the near future are increased use of hybrid technologies, vehicle and powertrain capability to leverage available renewable fuels, and additional aerodynamics improvements. With the increased use of hybrid technologies in their vehicles, Ford hopes to bring the amount of carbon dioxide produced by their cars to an all-time low. Also, if Ford is able to leverage renewable fuels in their vehicles then, hopefully, they can significantly lower the amount of carbon dioxide produced by their various car and truck models. Within the next few years you will be able to see major changes being made by Ford with the intent of reducing their impact on the environment.
Ford Motor Company's long-term plans aim to reduce carbon dioxide production far into the future. The speculated time for these long-term changes to take effect is around 2050-2100, so quite a ways away. By this time Ford will have developed a second-generation EcoBoost engine and will be creating and distributing it at high volumes. They also want to have lightweight materials and the next generation of hybrid and electric vehicles. The long-term plans seem promising, and a large difference in the amount of carbon dioxide produced by their vehicles should be visible in the far future.
While the Ford Motor Company is giving off the idea that they are completely geared toward environmental sustainability, there are members of the community who describe themselves as cautious optimists. Some of these cautious optimists question whether the Ford Motor Company is actually becoming environmentally conscious, or whether it is all just good public relations. Even though there are members of the community who don't trust Ford, Daniel Becker, director of the Sierra Club's Global Warming and Energy Program, says he “applauds Ford for recognizing the seriousness of global warming, acknowledging that its vehicles create a large part of the problem and committing to cut that pollution,” but adds, “Ford is accelerating the race for cleaner cars, but we're only in the first lap.” While the people who are cautious about trusting the Ford Motor Company are watching them closely, so far there has been no merit to their concerns. Ford has delivered on all of the short-term promises it has made and looks to continue on the path of environmental sustainability.
The Ford Motor Company has proven to be on the path of environmental sustainability. So far it has already implemented many immediate and near-future plans that have decreased its output of carbon dioxide by four percent. In the long term it plans to implement even more engine renovations and electric cars in order to further reduce its carbon dioxide emissions by about thirty percent. With its research into aerodynamics and engine innovations, Ford hopes to create vehicles with greatly improved efficiency compared to the modern-day vehicle. In the near future Ford appears poised to be the active leader in environmental sustainability among motor companies, and if this trend continues, carbon dioxide production across the industry will decrease and the world will be much better off.

In the first publicly disclosed deployment by a government agency of computing hardware based on specs that came out of the Open Compute Project, the Facebook-led open source hardware and data center design initiative, the US Department of Energy has contracted Penguin Computing to install supercomputing systems at three national labs.

The systems, called Tundra Extreme Scale, will support the work of the National Nuclear Security Administration, whose mission is “enhancing national security through the military application of nuclear science.” Among other things, the NNSA is charged with maintaining safety and reliability of the US nuclear weapons stockpile and drives nuclear propulsion technology for the US Navy.

Not only will the $39 million deployment of supercomputers at Los Alamos, Sandia, and Lawrence Livermore National Labs be the first publicly known deployment of OCP gear in the public sector, it will also be one of the largest deployments of OCP gear in the world, according to Penguin. The largest deployment is at Facebook data centers.

Another major deployment is the hardware in Rackspace data centers designed by Rackspace to support its cloud services.

Penguin, headquartered in Silicon Valley, occupies a niche within the OCP hardware market, selling OCP-based high-performance computing systems rather than the simple, “commodity” gear most other vendors in the space, such as Quanta and Hyve, sell.

The deal shows that companies like IBM and Cray, who have been mainstays in the government HPC market for many years, are facing a new major competitive threat.

Penguin expects the supercomputers at the national labs to achieve peak performance between 7 and 9 petaflops. One petaflop represents a quadrillion calculations per second.

The Tianhe-2 supercomputer in China, currently considered the world’s fastest supercomputer, is capable of 33.86 petaflops, according to Top500, the organization that ranks HPC systems biannually. An IBM Blue Gene at DoE’s Argonne National Lab, at 8.59 petaflops, is comparable in performance to the Penguin systems and ranks fifth on the most recent Top500 list.

Penguin’s Tundra Extreme Scale systems at the three national labs will be powered by Intel Xeon processors.

Removal: We have removed the specific SKU of the Intel Xeon processors that power the three systems. The SKU was included in Penguin Computing’s announcement, but an Intel spokesperson contacted DCK and asked us to remove the information from the story because the information was “still under NDA” and was included in the release by mistake.

The use of high-performance computing is continuing to grow. The critical nature of information and complex workloads have created a growing need for HPC systems. Through it all, compute density plays a big role in the number of parallel workloads we’re able to run. So, how are HPC, virtualization, and cloud computing playing together? Let’s take a look.

One variant of HPC infrastructure is vHPC (“v” stands for “virtual”). A typical HPC cluster runs a single operating system and software stack across all nodes. This could be great for scheduling jobs, but what if multiple people and groups are involved? What if a researcher needs their own piece of HPC space for testing, development, and data correlation? Virtualized HPC clusters enable sharing of compute resources, letting researchers “bring their own software.” You can then archive images, test against them, and maintain the ability for individual teams to fully customize their OS, research tools, and workload configurations.

Effectively, you are eliminating islands of compute and allowing the use of VMs in a shared environment, which removes another obstacle to centralization of HPC resources. These benefits can have an impact in fields like life sciences, finance, and education, to name just a few examples.

November 23, 2015

Ever wonder about that mysterious Content-Type tag? You know, the one you're supposed to put in HTML and you never quite know what it should be?

Did you ever get an email from your friends in Bulgaria with the subject line “???? ?????? ??? ????”?

I've been dismayed to discover just how many software developers aren't really completely up to speed on the mysterious world of character sets, encodings, Unicode, all that stuff. A couple of years ago, a beta tester for FogBUGZ was wondering whether it could handle incoming email in Japanese. Japanese? They have email in Japanese? I had no idea. When I looked closely at the commercial ActiveX control we were using to parse MIME email messages, we discovered it was doing exactly the wrong thing with character sets, so we actually had to write heroic code to undo the wrong conversion it had done and redo it correctly. When I looked into another commercial library, it, too, had a completely broken character code implementation. I corresponded with the developer of that package and he sort of thought they “couldn't do anything about it.” Like many programmers, he just wished it would all blow over somehow.

But it won't. When I discovered that the popular web development tool PHP has almost complete ignorance of character encoding issues, blithely using 8 bits for characters, making it darn near impossible to develop good international web applications, I thought, enough is enough.

So I have an announcement to make: if you are a programmer working in 2003 and you don't know the basics of characters, character sets, encodings, and Unicode, and I catch you, I'm going to punish you by making you peel onions for 6 months in a submarine. I swear I will.

And one more thing:

IT'S NOT THAT HARD.

In this article I'll fill you in on exactly what every working programmer should know. All that stuff about “plain text = ascii = characters are 8 bits” is not only wrong, it's hopelessly wrong, and if you're still programming that way, you're not much better than a medical doctor who doesn't believe in germs. Please do not write another line of code until you finish reading this article.

Before I get started, I should warn you that if you are one of those rare people who knows about internationalization, you are going to find my entire discussion a little bit oversimplified. I'm really just trying to set a minimum bar here so that everyone can understand what's going on and can write code that has a hope of working with text in any language other than the subset of English that doesn't include words with accents. And I should warn you that character handling is only a tiny portion of what it takes to create software that works internationally, but I can only write about one thing at a time so today it's character sets.

A Historical Perspective

The easiest way to understand this stuff is to go chronologically.

You probably think I'm going to talk about very old character sets like EBCDIC here. Well, I won't. EBCDIC is not relevant to your life. We don't have to go that far back in time.

Back in the semi-olden days, when Unix was being invented and K&R were writing The C Programming Language, everything was very simple. EBCDIC was on its way out. The only characters that mattered were good old unaccented English letters, and we had a code for them called ASCII which was able to represent every character using a number between 32 and 127. Space was 32, the letter “A” was 65, etc. This could conveniently be stored in 7 bits. Most computers in those days were using 8-bit bytes, so not only could you store every possible ASCII character, but you had a whole bit to spare, which, if you were wicked, you could use for your own devious purposes: the dim bulbs at WordStar actually turned on the high bit to indicate the last letter in a word, condemning WordStar to English text only. Codes below 32 were called unprintable and were used for cussing. Just kidding. They were used for control characters, like 7 which made your computer beep and 12 which caused the current page of paper to go flying out of the printer and a new one to be fed in.

And all was good, assuming you were an English speaker.

Because bytes have room for up to eight bits, lots of people got to thinking, “gosh, we can use the codes 128-255 for our own purposes.” The trouble was, lots of people had this idea at the same time, and they had their own ideas of what should go where in the space from 128 to 255. The IBM-PC had something that came to be known as the OEM character set which provided some accented characters for European languages and a bunch of line drawing characters… horizontal bars, vertical bars, horizontal bars with little dingle-dangles dangling off the right side, etc., and you could use these line drawing characters to make spiffy boxes and lines on the screen, which you can still see running on the 8088 computer at your dry cleaners'. In fact as soon as people started buying PCs outside of America all kinds of different OEM character sets were dreamed up, which all used the top 128 characters for their own purposes. For example on some PCs the character code 130 would display as é, but on computers sold in Israel it was the Hebrew letter Gimel (ג), so when Americans would send their résumés to Israel they would arrive as rגsumגs. In many cases, such as Russian, there were lots of different ideas of what to do with the upper-128 characters, so you couldn't even reliably interchange Russian documents.

Eventually this OEM free-for-all got codified in the ANSI standard. In the ANSI standard, everybody agreed on what to do below 128, which was pretty much the same as ASCII, but there were lots of different ways to handle the characters from 128 and on up, depending on where you lived. These different systems were called code pages. So for example in Israel DOS used a code page called 862, while Greek users used 737. They were the same below 128 but different from 128 up, where all the funny letters resided. The national versions of MS-DOS had dozens of these code pages, handling everything from English to Icelandic and they even had a few “multilingual” code pages that could do Esperanto and Galician on the same computer! Wow! But getting, say, Hebrew and Greek on the same computer was a complete impossibility unless you wrote your own custom program that displayed everything using bitmapped graphics, because Hebrew and Greek required different code pages with different interpretations of the high numbers.

Meanwhile, in Asia, even more crazy things were going on to take into account the fact that Asian alphabets have thousands of letters, which were never going to fit into 8 bits. This was usually solved by the messy system called DBCS, the “double byte character set” in which some letters were stored in one byte and others took two. It was easy to move forward in a string, but dang near impossible to move backwards. Programmers were encouraged not to use s++ and s-- to move backwards and forwards, but instead to call functions such as Windows' AnsiNext and AnsiPrev which knew how to deal with the whole mess.

But still, most people just pretended that a byte was a character and a character was 8 bits and as long as you never moved a string from one computer to another, or spoke more than one language, it would sort of always work. But of course, as soon as the Internet happened, it became quite commonplace to move strings from one computer to another, and the whole mess came tumbling down. Luckily, Unicode had been invented.

Unicode

Unicode was a brave effort to create a single character set that included every reasonable writing system on the planet and some make-believe ones like Klingon, too. Some people are under the misconception that Unicode is simply a 16-bit code where each character takes 16 bits and therefore there are 65,536 possible characters. This is not, actually, correct. It is the single most common myth about Unicode, so if you thought that, don't feel bad.

In fact, Unicode has a different way of thinking about characters, and you have to understand the Unicode way of thinking of things or nothing will make sense.

Until now, we've assumed that a letter maps to some bits which you can store on disk or in memory:

A → 0100 0001

In Unicode, a letter maps to something called a code point which is still just a theoretical concept. How that code point is represented in memory or on disk is a whole nuther story.

In Unicode, the letter A is a platonic ideal. It's just floating in heaven:

A

This platonic A is different than B, and different from a, but the same as A and A and A. The idea that A in a Times New Roman font is the same character as the A in a Helvetica font, but different from “a” in lower case, does not seem very controversial, but in some languages just figuring out what a letter is can cause controversy. Is the German letter ß a real letter or just a fancy way of writing ss? If a letter's shape changes at the end of the word, is that a different letter? Hebrew says yes, Arabic says no. Anyway, the smart people at the Unicode consortium have been figuring this out for the last decade or so, accompanied by a great deal of highly political debate, and you don't have to worry about it. They've figured it all out already.

Every platonic letter in every alphabet is assigned a magic number by the Unicode consortium which is written like this: U+0639. This magic number is called a code point. The U+ means “Unicode” and the numbers are hexadecimal. U+0639 is the Arabic letter Ain. The English letter A would be U+0041. You can find them all using the charmap utility on Windows 2000/XP or visiting the Unicode web site.

Data Structures

September 1, 2015

We talked about what data structures is all about. Essentially, it is about writing code that is efficient and doesn't eat up resources. In previous programming courses, we were taught how to program, but not how to program effectively and efficiently.

We learned about something called a LinkedList. It is basically an array except each element is in a random location in memory (instead of being lined up, like in an array). Matt showed us an example of this using a struct called node. Node contained two variables, a signed short int called value and a struct node pointer called next. Basically, each instance of struct node is an element in a LinkedList. Each element of this struct node will then point to the next instance in the LinkedList.

Our first project for this course is to create a program which will simulate a LinkedList except using an array (for now). The purpose of this project is so that we can get a refresher of C and so that we can start leading into data structures.

September 7, 2015

Today I finished the first project for dsi0. Here is the code:

#include <stdio.h>
 
int main() {
	// initialization of variables
	int loopMenu = 1;
	int numbers[21];
	numbers[0] = -1;
	// loop through all indexes in array and set it to zero
	int j;
	for(j = 1; j <= 20; j++) {
		numbers[j] = 0;
	}
	int currentIndex = 0;
	// will loop the main menu until the user requests to exit
	do {
		printf("\nMenu Options\n");
        	printf("1 = build list\n");
        	printf("2 = display list\n");
        	printf("3 = insert into list\n");
        	printf("4 = append into list\n");
        	printf("5 = obtain from list\n");
        	printf("6 = clear list\n");
        	printf("7 = quit\n");
        	int option = 0;
        	printf("\nEnter an option: ");
		// scans in option number
        	scanf("%d", &option);
		// checks which option it is
		if(option == 1) {
			printf("\n");
			// checks if the max amount of numbers has been built into array
			if(currentIndex == 20) { 
				printf("You have reached the limit of 20 numbers.\n");
			} else {
				int loopAppend = 1;
				// will loop until -1 is input or the limit is reached
                        	do {
                                	int input = 0;
                                	printf("Input a number to add to the list: ");
                                	scanf("%d", &input);
					// checks if they have reached the limit
                                	if(currentIndex == 20 && input != -1) {
						printf("You have reached the limit of 20 numbers.\n");
						loopAppend = 0;
						numbers[20] = -1;
                                	} else {
						numbers[currentIndex] = input;
						printf("numbers[%d] = %d\n", currentIndex, input);
						if(input == -1) {
							loopAppend = 0;
						} else {
							currentIndex++;
						}
					}
                        	} while(loopAppend == 1);
			}
		} else if(option == 2) {
			// checks if the list has any numbers
			if(numbers[0] == -1) {
				printf("Your list has no numbers.\n");
			} else {
				printf("List Contents:\n");
                        	int i;
				// loops through all indexes; prints them
                        	for(i = 0; i < 21; i++) {
                                	int number = numbers[i];
                                	if(number == -1) {
                                        	break;
                                	} else {
                                        	printf("numbers[%d] = %d\n", i, number);
                                	}
                        	}
			}
		} else if(option == 3) {
			int index = 0;
			printf("Enter index: ");
			scanf("%d", &index);
			if(index < 0 || index > currentIndex) {
				printf("The index must be between 0 and %d.\n", currentIndex);
			} else {
				int input = 0;
				printf("Enter the number you wish to insert: ");
				scanf("%d", &input);
				int i;
				// shifts the array
				for(i = currentIndex; i >= index; i--) {
					int numAt = numbers[i];
					int newIndex = i + 1;
					if(newIndex == 20) {
						numbers[20] = -1;
					} else {
						numbers[newIndex] = numAt;
					}
				}
				currentIndex++;
				if(currentIndex > 20) {
					currentIndex = 20;
				}
				// sets the input in for the destination index
				numbers[index] = input;
			}
                } else if(option == 4) {
			int index = 0;
			printf("Insert index: ");
			scanf("%d", &index);
			if(index < -1 || index > (currentIndex - 1)) {
				printf("The index must be between -1 and %d.", currentIndex - 1);
			} else {
				index++;
				int input = 0;
				printf("Enter the number you wish to append: ");
				scanf("%d", &input);
				int i;
				// shifts the array elements over
				for(i = currentIndex; i >= index; i--) {
					int numAt = numbers[i];
					int newIndex = i + 1;
					if(newIndex == 20) {
						numbers[20] = -1;
					} else {
						numbers[newIndex] = numAt;
					}
				}
				currentIndex++;
				if(currentIndex > 20) {
					currentIndex = 20;
				}
				// sets the input number in for the dest index
				numbers[index] = input;
			}
                } else if(option == 5) {
			int index = 0;
			printf("Insert index: ");
			scanf("%d", &index);
			// checks if the index is present in the list
			if(index < 0 || index >= currentIndex) {
				printf("The index must be between 0 and %d.", currentIndex - 1);
			} else {
				// prints what the number is in that index
				printf("numbers[%d] = %d\n", index, numbers[index]);
				int i;
				// removes the number at that index and shifts everything over
				for(i = index + 1; i <= currentIndex; i++) {
					int numAt = numbers[i];
					int newIndex = i - 1;
					numbers[newIndex] = numAt;
				}
				currentIndex--;
				if(currentIndex < 0) {
					currentIndex = 0;
					numbers[0] = -1;
				}
			}
                } else if(option == 6) {
			// sets the first (zero-th) index to -1
			numbers[0] = -1;
			currentIndex = 0;
			printf("The list has been cleared.\n");
                } else if(option == 7) {
			// exits...
			loopMenu = 0;
			printf("Exiting...\n");
                } else {
			printf("Incorrect menu option.\n");
		}
	} while(loopMenu == 1);
	return 0;
}

The project wasn't insanely difficult, but it did require some thinking, especially for the insert and append features of the program. Everything else wasn't very hard, though it took some care to make everything work together.

September 16, 2015

In data structures, we have a new project. Unfortunately though, I'm not allowed to post anything in regards to it (any work I did for the project) because it shouldn't be a “temptation” to others who are currently not finished with the project.

Basically, it involved drawing pictures and writing pseudo-code. We had to draw pictures of nodes and pointers in a singly-linked list. We had to follow the directions for how the list would be modified and show that on paper and in pseudo-code.

That is all we have really been doing in data structures so far.

September 21, 2015

We got a new project in data structures. This time, we aren't drawing a bunch of pictures! Instead, we applied those pictures to code. We began writing a node library for a singly-linked list.

The functions that we began to write were in cp.c, mk.c, and rm.c.

cp.c is for copying an existing node. mk.c is for creating a brand new node. rm.c is for removing a node and de-allocating it.

Creating a new node requires creating a new pointer to the Node structure: allocate the memory and assign the variable(s). Quite simple. For copying a Node, we had to create a new Node pointer (using the mknode function located in mk.c) and copy the variables from the input node into the new node. For removing a node, we had to use the free function to release its memory, then set the node pointer to NULL, which breaks off any and all ties to the node.
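A rough sketch of what those three functions might look like (the function and field names here are guesses based on the description above, not the actual class code):

```c
#include <stdlib.h>

struct node {
    short value;
    struct node *next;
};
typedef struct node Node;

/* mk.c: create a brand new node */
Node *mknode(short value) {
    Node *n = malloc(sizeof(Node));
    if (n != NULL) {
        n->value = value;
        n->next = NULL;
    }
    return n;
}

/* cp.c: copy an existing node into a freshly allocated one */
Node *cpnode(const Node *src) {
    if (src == NULL)
        return NULL;
    Node *n = mknode(src->value);   /* reuse mknode for allocation */
    return n;   /* whether to also copy next is a design choice; here the copy starts detached */
}

/* rm.c: free a node and return NULL so the caller can clear its pointer */
Node *rmnode(Node *n) {
    free(n);
    return NULL;
}
```

A caller would then write something like `tmp = rmnode(tmp);` so the pointer is cleared in the same step.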

After completing the node library, we had to complete certain applications that took advantage of it. These programs were the following:

  • node-app-arrtolist.c
  • node-app-display.c
  • node-app-display2.c

node-app-arrtolist.c

This application will convert a currently existing array of chars to use the node system we just created. Using the node library, we will create a singly-linked list. This involved first creating a starting node, to indicate where the list would begin. From there, we had to create other nodes which would link off of the starting node, to the next node, to the next, etc… The elements of the list had to be the elements of the currently existing array.

node-app-display.c

This application would display the contents of a singly-linked list. The only argument is the starting node of the list. From there, it would continue to access the “after” node pointer until this node pointer is equal to NULL, which indicates the end of the list; there is nothing past that point.

node-app-display2.c

This application does the same thing as the above application except the display portion is in its own function.

That is essentially all the project was. Along with that, we got a “make” system, to help make the project simpler to work with. For example, to compile all currently existing code, we would simply go to the root of the project and type the command “make”. If we wanted to clean out all currently existing compiled code, we would type the command “make clean” in the root of the project. This system also enables Matt Haas to push updates to the project, in case something comes up where a file needs changing. It's a very handy tool and will definitely come in use later down the road whilst working on other projects.

September 28, 2015

In data structures, I completed another project. This project was sll0. This project involved a new struct known as List. This List struct contained two different Node pointers, one called first, and one called last. These pointers do exactly what you would think: the Node pointer first points to the first Node in the list and the pointer last points at the last Node in the list. This struct made creating a list very easy, and made managing the variables a lot simpler.

The sll0 project involved us creating some new functions.

displayf.c

This file contains one function. This function effectively displays a list, from start to end. It involves getting the first Node in the list and looping through until the after Node pointer is NULL. With each iteration, it displays the contents of the current Node.

insert.c

This file contains one function. This function would insert a Node into the list before the Node specified. So the arguments would be a List, a currently existing Node in the specified List, and the Node you wish to insert.

Before insert: start → NODE(A) → NULL

After insert: start → NODE(B) → NODE(A) → NULL

mk.c

This file contains one function. This function will simply allocate memory for a new List.

pos.c

This file contains two different functions.

getpos

This function will get the position of a Node from a List. The function takes in two arguments, a List and a Node. If the number returned is a negative number, then that means there was an error of some sort (the List may have been NULL, the Node may have been NULL, or the Node may have not been in the specified List).

setpos

This function returns a Node at a specific position. Personally, I think this function should be renamed to “getnode”, because the name setpos made it a little confusing for me to understand exactly what the function was doing. It simply returns the Node at the specified location. If the Node returned is NULL, then there was an error (the list may have been NULL, the position provided may have been invalid, or the position may have been out of bounds).

All of these functions are meant to work together to make the applications that Matt wrote work properly.

October 5, 2015

I don't exactly remember what I did in data structures (more projects) so here's an essay I wrote for vegetarian experience.

The article which I chose was found on DailyMail.co.uk and is about when dozens of PETA supporters decided to protest against meat-eating on World Vegan Day. The way they protested is that they went to central London, stripped down, poured fake blood on their bodies, and then lay there in the streets. They also had signs there promoting a vegan lifestyle. Many pictures of this event are seen in the article. To me, the people associated with this protest seem to be very into a vegan lifestyle and seem to care a lot about it, which is a good thing for them. It shows that they are willing to protest for something that they believe will better this planet and better other people’s lives. It is also a peaceful protest, which makes it even better. Nobody was hurt in doing this and it sent a message to anyone who would happen to see the protest take place. It was also in a very good and public location. This shows that they have the bravery to do something like this; stripping almost nude in public to convey a message is not something that is very easy to do. As for whether I agree with this demonstration or not, I can’t really say. Personally, I would not do something like this. The reason I wouldn’t is that I think people everywhere already know of vegetarianism, and that if they had the interest in becoming a vegetarian, they would research the benefits of it on the internet, similar to how I am taking this class on vegetarianism to learn more about it. Instead of lying in the streets nude covered in fake blood, they should do something else which is more informational, rather than “hysterical” in other people’s eyes. Something like handing out flyers or creating posters which show the many benefits of vegetarianism. To me, that would send a clearer message.

And here's another essay. This is an essay on Fuzzy Logic that I had to write for Discrete Structures.

Fuzzy Logic is a concept of logic where instead of having 0 be false and 1 be true, there is an in-between. Anyone who has programmed before knows that there are variables called “boolean” variables, which can only be true or false, where true usually represents a one and false represents a zero. With Fuzzy Logic, you can choose any real number between zero and one. For example, the weatherman says that it will be very sunny outside today. However, I look outside and there are some clouds in the sky. I could say that the weatherman’s statement was 75% true, or 25% false. Fuzzy Logic extends boolean logic: rather than just two values, we have an infinite number of values to work with. These values are called degrees of truth.

One example of Fuzzy Logic which is used very often is a shower controller. The temperature of the water coming from a shower will not be strictly cold or strictly hot. There are many in between states of temperature. So, the water could be slightly cold, or it could be slightly warm. It shows how there aren’t just two separate states of the temperature of water, there are many different states.

Another example of Fuzzy Logic being used is in a washing machine. Every time you turn on the wash, the settings aren’t the same. The washing machine considers many different factors to decide how fast it spins and what temperature it should wash at. These factors could be how much clothing was put into the washing machine, or how dirty the water is with each cycle. The washing machine will operate in the most efficient and eco-friendly way possible.

One of the most famous examples of Fuzzy Logic being used is the Sendai Subway located in Japan. This subway system uses Fuzzy Logic to determine how fast it should go. It takes into consideration different factors, such as the weight of the passengers, where the subway is at that time, stuff like that. The subway system uses this information to determine the optimal travel speed for smooth and eco-friendly travel for all travelers. This subway was also the first notable use of Fuzzy Logic in a real life scenario.

So essentially, with Fuzzy Logic, we can input certain factors and data into a computer. The computer will then analyze this information according to a set of predefined rules. For example, we could have a rule for how dirty water is, or a rule for what temperature is considered very hot or very cold. The information is checked against all of these rules, and depending on which rules the information checks out on, the operation of the machine the computer is controlling will be modified to some extent. For example, back to the washing machine. There could be some information inputted that the water is very dirty. There could also be a rule that if the water is very dirty, then have the washer do 50 more cycles, or up the amount of detergent being used.

One more example of Fuzzy Logic being applied in the real world is in heating, ventilation, and air conditioning (HVAC) units. These systems will use thermostats controlled with Fuzzy Logic to control the flow of hot or cold air. This will make the whole system much more efficient, which will result in energy being saved.

October 12, 2015

I did some work on the new project in data structures. So far, I have only gotten one of the functions done, but I plan on finishing it up tomorrow at some point. Hopefully I will actually complete all of it in a timely fashion.

I don't really have anything else to write about, so here's an essay I wrote on transcendental numbers.

Start

A transcendental number is a real or complex number that is not algebraic. The best known transcendental numbers are pi and Euler's number (e). Even though we only know of a few transcendental numbers, they aren't that rare. Almost all real and complex numbers are transcendental. All transcendental numbers are irrational.

I suppose the best way to prove that a number is transcendental is by using a proof by contradiction. I watched a Numberphile video on transcendental numbers and the man in the video explains that an algebraic number will always have a formula. This formula can be used to find the said number. However, transcendental numbers do not have an algebraic formula. The examples mentioned above are e and Pi. Both of these numbers are transcendental because there is no algebraic formula to represent them. In the video, it was explained that to see if a number is algebraic, you try to get it down to zero by adding, subtracting, multiplying, or raising it to a power. All additions must be whole numbers. So, for example, you can't add 1.5, or subtract 6.75. I suppose that to prove that a number is transcendental you would have to do something like that and show that it can't be done.
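That “game” is really just asking whether the number is a root of a polynomial with integer coefficients. For instance, the square root of two wins the game in two moves:

```latex
x = \sqrt{2}
\quad\Rightarrow\quad x^2 = 2
\quad\Rightarrow\quad x^2 - 2 = 0
```

So \(\sqrt{2}\) is algebraic: it is a root of \(x^2 - 2\). A transcendental number like \(\pi\) or \(e\) satisfies no equation of the form \(a_n x^n + \cdots + a_1 x + a_0 = 0\) with integer coefficients (not all zero).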

Current numbers which are known to be transcendental are e and Pi, two very famous constants. It was proven that e is transcendental, and it was also proven that e raised to any nonzero algebraic power is transcendental. I'm not sure what the exact process is for finding out if a number is transcendental, whether it simply fails the game I described above or if there are other conditions. I did find some interesting theorems on transcendental numbers; however, I did not really understand them because I don't know a whole lot about the subject in the first place.

As for numbers which we aren't sure are transcendental, I tried searching for some but I couldn't find any. I think the main issue with transcendental numbers is finding the number in the first place. After the number is found, then you try to see if it is transcendental or not.

End

I had to write this essay for discrete structures. The idea of these kind of numbers is very interesting. What I want to know is the exact methods of proving these kind of numbers. I couldn't really find a definite answer when looking online, so I can only assume that it is pretty complicated.

Many people embark on the journey of following a whole-food, plant-based diet with the sole intention of improving their heath and enhancing their quality of life. And while improved physical wellbeing is a worthy goal in-and-of itself and its benefits are indisputable, it is by no means the only benefit of eating “plant-strong.” We cannot reduce eating to nothing more than what we put in our bodies any more than we can reduce foods to nothing more than their chemical properties (a whole apple, for instance, is not simply an aggregate of fiber, calories, and vitamins). Whether intentional or not, following a whole-food, plant-based diet is about much more than simply what we do – or do not – eat, and it therefore impacts much more than simply our physical health.

Following a whole-food, plant-based diet is about having the courage to step outside of the mainstream, animal-eating culture, a culture that seeks to keep us intellectually anesthetized and comfortably numb. (For more information on the mentality of the animal-eating culture, which I refer to as carnism, see carnism.com). It is about reclaiming our health and redefining the very meaning and nature of eating and food. It is about being open to ideas that challenge the myths of the dominant culture in which we have all been indoctrinated – to critically examine longstanding “truths” that have been drummed into us by our parents, teachers, doctors, and society. It is about questioning the authorities we have learned to place our trust in and thus questioning our own relationship with authority and truth. It is about resisting the pressure to conform to a seductive yet destructive status quo and having the strength to hold onto our convictions in the face of deep-seated resistance to our lifestyle. (How often have you thrown up your hands in perplexed exasperation when, for instance, your dangerously overweight loved one, who’s undergone triple bypass surgery, calls you crazy for suggesting they reduce their meat consumption? How often have you felt alienated at meals where otherwise conscientious, rational people cannot seem to remember or figure out how to prepare a plant-based meal so that you don’t have to starve?) Following a whole-food, plant-based diet is about saying yes to health, life, and truth – and therefore saying no to the beliefs and behaviors of the dominant, animal-eating culture.

The above were some interesting essays I read on vegetarianism during my vegetarianism online course. The course kind of sucked but it was informative to say the least. I hope you enjoyed reading my latest Opus entry.

October 26, 2015

As a computer scientist, there’s nothing that annoys me more than when my friends ask me for help setting up their wireless internet, or when my mom calls and asks why her laptop keeps freezing. I try to tell them that I’m not studying computer repairs or computer usage, I’m studying computer science.

But that doesn’t help, because nobody seems to know exactly what the term “computer science” means. When I urge my friends to take a computer science course, they shrug me off with comments like “I’m no good with computers” or “I don’t do science.” Assuming my friends aren’t just unadventurous, there must be some big misconceptions outside of the computer science community about what computer science is all about.

Computer scientists are concerned with questions like: How do you find the shortest route between two points on a map? How do you translate Spanish into English without a dictionary? How do you identify the genes that make up the human genome using fragments of a DNA sequence?

There’s a difference between the question, “How do you identify the genes that make up the human genome?” and the question, “What are the genes that make up the human genome?” The latter, a question posed by biologists, asks for a specific fact, while the former asks for a procedure which can produce that fact.

Consider any science: chemistry, biology, physics, or even one of the “soft” sciences like psychology. All are concerned with answering factual questions about the world around us. In computer science, the goal is not to figure out the answers to factual questions, but rather to figure out how to get answers. The procedure is the solution. While scientists want to figure out what is, computer scientists want to know how to.

This is not to say that scientists don’t ever need to know how to figure out the answers to their questions. The key distinction is that computer scientists care only about how to figure out the answer, and not what the answer is. Scientists, in some sense, either rely on computer science to help with their process (for instance, if they make use of data-analysis software) or are in part computer scientists themselves.

The distinction between questions of fact and questions of procedure leads naturally to a difference in methodology between scientists and computer scientists. When scientists come up with a possible answer to a question–a hypothesis–they try to prove or disprove it using experiments. Experiments are in essence tests to see whether a hypothesis matches the behavior of the natural world. If a hypothesis accounts for how the world behaves (or at least the behavior that the scientists can see), then it’s a useful theory.

We’re all familiar with this process from elementary school. It’s called the scientific method: you observe some occurrence, come up with a hypothesis about it, test your hypothesis with experiments, and then analyze the results. This is how scientists justify the answers to their factual questions, and it’s how our society generates knowledge.

Knowledge in computer science, however, doesn’t work the same way. Procedures don’t exist in the natural world–they’re devised by humans. When we come up with a procedure, we can’t just run experiments to see if it works. Although the procedure might be applied to data gathered from the real world, the procedure itself is not a part of nature. Think back to all the sciences I mentioned before. All of them seek knowledge about that which already exists. Procedures, however, are completely constructed–they only exist in the abstract.

For instance, consider the procedure used in a spell checker that recommends possible correct spellings when you make a typo. This procedure takes a sequence of letters and tries to find the closest match in a giant list of valid sequences, or as we normally call them, words. What separates this procedure from the real-world problem of correcting spelling is that the sequences don’t have to represent words–that’s just one possible application for the procedure. The procedure itself can be reused with other kinds of sequences. In fact, this very same procedure is used for the DNA sequencing problem I mentioned before.

Since the problems solved by computer scientists are defined separate from the real world, we can’t use the scientific method to analyze their validity. We can only analyze procedures within the realm of abstraction in which we have created them. Luckily, this type of reasoning is exactly why we have mathematical logic. Mathematicians, too, are concerned with the idea of truth in the abstract. Instead of running experiments, computer scientists define problems and procedures mathematically, and then analyze them using logic. This is the fundamental reason why computer science is not a science.

Given that the correctness of procedures is proved using mathematical logic, it might seem like computer science is really just a branch of mathematics, which it is, in some sense. In fact, much of the “math” we learn in school is actually computation.

Consider, for example, the problem of dividing two numbers. When presented with this problem, a mathematician might derive the properties of division, such as when there will be a remainder. A computer scientist, in contrast, would focus on figuring out how to perform the division.

The computer scientist might eventually come up with the long division algorithm. Just like any 4th grader, however, he wouldn’t want to perform the division by hand. Instead, he would write a series of instructions, or program, describing how to perform the calculation, and tell a computer to execute it.

Notice that this is the first time I’ve mentioned computers at all. That’s because there’s nothing fundamental about procedures that requires the use of computers. Computers aren’t the only tools that can be used to execute programs. For instance, elementary school students are perfectly capable of executing the long division algorithm. We use computers instead of small children because computers are fast and reliable (after all, that’s why we built them), while small children are adorably uncoordinated and prone to unexpected naps.

The great computer scientist Edsger Dijkstra summed it up best: “Computer science is no more about computers than astronomy is about telescopes.” Even though a complex ecosystem of programs has developed, allowing computers to serve a variety of purposes, computers are still nothing more than a tool for executing procedures. Computer science is about the procedures themselves, not so much the tools used to execute them.

At this point, though, I should say that I haven’t painted an entirely accurate picture of the field–or rather, I left out some parts. There are probably some computer scientists reading this who are thinking, “This doesn’t describe my work at all.”

While at its core, computer science really is the pure study of procedures in the abstract as I described, in reality, the field has grown to encompass a wide variety of pursuits. Some computer scientists are concerned mostly with designing intricate systems that rely heavily on the specifics of computer architecture. Others study human-computer interaction, which actually does use the scientific method to determine what types of interfaces work the best for computer users.

It would be easy to dismiss the outliers and say they are not true computer scientists, that their work falls under the umbrella of some related but fundamentally different field. But I think the breadth of study within computer science is not necessarily a bad thing. It doesn’t need to be strictly defined.

Within the computer science department at my university, there’s a huge variety of interests among the students and professors. The multitude of perspectives complement each other, and help the field grow.

In the end, it’s the rate of growth of the field that makes all this definition business so tricky. Computer science is still young, and always undergoing new growth spurts. It’s that awkward teenage boy at the school dance whose limbs are growing so fast that he can’t make them all move together harmoniously just yet.

November 2, 2015

The concept of adding integers has been around for thousands of years. However, the implementation of that concept in something like a calculator hasn’t been around quite that long. The best way to view the history of computational addition of integers is to look at how calculators originated. The very first known version of a calculator is something known as the abacus. “In the very beginning, of course was the abacus, a sort of hand operated mechanical calculator using beads on rods, first used by Sumerians and Egyptians around 2000 BC. The principle was simple, a frame holding a series of rods, with ten sliding beads on each. When all the beads had been slid across the first rod, it was time to move one across on the next, showing the number of tens, and thence to the next rod, showing hundreds, and so on” (). This device made adding less error-prone. The first real device which did not involve any real human interaction (other than saying what you want to add) came about in 1820, and it was called the Arithmometer. Up until around the 1930s, many different devices were created which all essentially did the same thing and used the same concepts. Eventually, the calculator was transformed into an electrical device, using electrical currents. It began with war, where a device was needed to constantly calculate the trajectory required to drop a bomb on Japanese warships. “All were basically mechanical devices using geared wheels and rotating cylinders, but producing electrical outputs that could be linked to weapon systems.” (). Shortly after, the Colossus was made, which was used as a code-breaking device. All it did was perform exclusive-or Boolean algorithms. Later on, the ENIAC (Electronic Numerical Integrator And Computer) was created. The ENIAC was a calculator capable of performing the four basic arithmetic functions.
“ENIAC was 1,000 times faster than electro-mechanical computers and could hold a ten-digit decimal number in memory. But to do this required 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed around 27 tonnes, took up 1800 square feet of floorspace and consumed as much power as a small town.” (). Afterwards, the valve and tube calculator was invented. After that came the transistor-age calculators. Each generation of calculators was less bulky and could compute faster. Eventually, we reached the era we currently reside in, the era of the microchip, which is what allows calculators to be so small and portable now.

Addition of integers is extremely important in regards to computer science because almost any program you write will contain some sort of integer addition. Even if you do not write something in your code which does integer addition itself, a function or method you call most likely does all the integer addition for you. Without integer addition, there are many simple things in computers we would not be able to accomplish. One example is keeping track of the time on a computer. Time is usually kept track of as a single number, which is represented as milliseconds. This number has to be incremented every millisecond, which would be really hard to do if we didn’t have integer addition. There are also loops in computer programs, specifically for loops. For loops usually increment or decrement a certain integer to keep track of how long the loop should continue for. Again, this would be difficult to accomplish without the use of integer addition.
Computers add integers together by using Boolean algebra. Boolean algebra is “a type of math that deals with bits instead of numbers” (). The concept behind any computations a computer performs is Boolean algebra. Engineers have created extremely small devices which will implement Boolean operations. “These little devices are called ‘logic gates’ and, analogous to the idealized mathematical versions we talked about, these physical devices have wires leading into them that signal the values of input bits (1 if the voltage on the wire is above some threshold and 0 otherwise) and a single wire leading out of them that gives the return value. By combining gates together in smart ways, they can be made to let you do incredible things like add numbers, write emails, play video games, chat with people around the world, and everything else you do with computing devices” (). Logic gates are the core of any computation a computer performs, including adding integers. A very important logic gate is known as XOR, which stands for exclusive-or. XOR will return a value of 0 if both inputs are the same and a value of 1 if both inputs are different. The important thing about XOR in regard to computation is that “these output values are exactly the same as the values of the right-most bit (called the ‘sum bit’) when adding two binary numbers” (). For example: XOR with inputs 0 and 0 will return 0. If you add binary numbers 0 and 0 together, you will get 0. The output is the same. If you XOR with inputs 0 and 1, you will get 1. When you add binary 0 and 1 together, you get 1. If you XOR with the inputs being 1 and 1, you will get an output of 0. If you add binary numbers 1 and 1 together, the sum bit will be a 0, and the carry bit will be a 1. So essentially, the output of XOR is the same as the sum bit when adding two single-digit binary numbers together. This gives us a device known as a half-adder.
The reason this is only a half-adder is that it can only add two single binary digits; it cannot accept a carry-over bit coming in. First off, we should discuss how to obtain the carry-over bit. The AND gate will output what the carry-over bit is, just like how XOR will output the sum bit. Now, in order to add this carry-over bit in the next column of digits, we need something called a full-adder. “A full-adder—which is basically just two half-adders cleverly stuck together—works just like a half-adder, except that it also adds the value of a carry bit to the two input bits. If we chain several of these full-adders together—so the output carry bit from one becomes the input carry bit to the next, and so on—we end up with what’s called a ripple carry adder which allows us to add not just two numbers, but as many as we want” ().

This is an essay I wrote for discrete structures. This essay is about how computers add numbers, as well as the history of calculators themselves. It is very interesting because it shows how calculators originated, and how they were eventually implemented into modern-day computing. Back then, calculators were the size of a briefcase. Now, they are smaller than your palm.

I hope that this essay was informational and fun to read. Thanks for reading.

November 9, 2015

In this essay, I will discuss the issues relating to the teaching of computer science that were raised in the article “A Debate on Teaching Computer Science” [Dijkstra et al 1989]. The structure of the paper is as follows: first I will briefly summarise Dijkstra's contribution and his reply. I will do this to illustrate what I consider to be his main points. I will then discuss the relevance of these points to the teaching of computer science. In the section after that I will summarise the points raised by the other contributors. As many points were repeated, I will give an overview of all the issues raised and discuss them. In the final section I will address some issues that were not raised and bring together my conclusions.

Dijkstra originally presented the talk at the ACM Computer Science Education Conference in February 1989 and it was decided to print the text of the talk in CACM with other computer scientists entering into the debate. The editor of the Communications of the ACM, Peter Denning introduces the debate by describing Dijkstra as challenging “some of the basic assumptions on which our curricula are based” [Dijkstra et al 1989].

Dijkstra's basic position is that computer science consists of two radical novelties and that this has implications for the teaching of computer science especially introductory programming courses for first year students (I will use the terms “first years” and “first year students” to replace the term “freshmen”). A radical novelty is such a sharp discontinuity in thought that existing approaches cannot be used to reason about it and it is necessary to approach a radical novelty with a blank mind. The two radical novelties in computer science are the depth of conceptual hierarchies that occur in computer science and the fact that computers are the first large scale digital devices. Radical novelties require much work to come to grips with, and people are not prepared to do this, so they pretend the radical novelties do not exist. Examples of this in computer science are software engineering and artificial intelligence.

Dijkstra investigates the scientific and educational consequences, by examining what computer science is. He reduces this to the manipulation of symbols by computers. In order for meaningful manipulations to be made, a program must be written. He then defines a program as an abstract symbol manipulator, which can be turned into a concrete symbol manipulator by adding a computer to it. Programs are elaborate formulae that must be derived by mathematics. Dijkstra hopes that symbolic calculation will become an alternative to human reasoning. Computing will go beyond mathematics and applied logic because it deals with the “effective use of formal methods” [Dijkstra et al 1989]. He points out that this view of computing science is not welcomed by many people, for various reasons.

Dijkstra makes a number of recommendations for education. Bugs should be called errors, since a program with an error is simply wrong, and lecturers should avoid anthropomorphic terminology: it leads students to compare the analog human with the discrete computer, which encourages operational reasoning, a tremendous waste of time. When attempting to prove something about a set, it is better to work from the definition than with each individual item in the set. This approach can be applied to programming as well: programs can be reasoned about without dealing with the specific behaviours of the program. The programmer's task is to prove that the program meets its functional specification. Dijkstra suggests that an introductory programming course for first years should consist of a boolean algebra component and a program-proving component. The language used would be a simple imperative language with no implementation, so that students cannot test their programs. Dijkstra acknowledges that these proposals are too radical for many. The responses he expects are that he is out of touch with reality, that the material is too difficult for first years, and that it would differ from what first years expect. Dijkstra states that students would quickly learn that they can master a new tool (the manipulation of uninterpreted formulae) which, although simple, gives them a power they never expected. He also states that teaching radical novelties is a way of guarding against dictatorships.
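To make the program-proving component concrete, the following is an illustrative sketch (not taken from the essay) of the style of exercise Dijkstra advocates: calculating the weakest precondition of a small guarded-command program to show that it meets its specification without ever executing it. The program and specification here are my own hypothetical example.

```latex
% Program S:  if x >= y -> m := x  []  y >= x -> m := y  fi
% Specification (postcondition) R:  m = max(x, y)
%
% Weakest preconditions of the two branches:
\begin{align*}
wp(m := x,\; R) &\equiv x = \max(x, y) \equiv x \ge y \\
wp(m := y,\; R) &\equiv y = \max(x, y) \equiv y \ge x
\end{align*}
% Dijkstra's rule for the alternative construct:
\begin{align*}
wp(S, R) &\equiv (x \ge y \lor y \ge x) \\
         &\quad \land\; (x \ge y \Rightarrow x \ge y)
               \;\land\; (y \ge x \Rightarrow y \ge x) \\
         &\equiv \mathit{true}
\end{align*}
% Since wp(S, R) = true, S establishes R from every initial state:
% the program is proved correct purely by formula manipulation.
```

The exercise is pure symbol manipulation in the boolean algebra taught in the first component of the course, which is presumably why Dijkstra pairs the two.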

In his reply to the panel's responses, Dijkstra points out that his proposal is only a recommendation for an introductory programming course for first years. He notes that the functional specification addresses the “pleasantness problem” (whether the product specified is the product wanted), while the proof addresses the “correctness problem” (whether the product implemented is the product specified), and that these two problems require different techniques. He admits that the choice of functional specification and notation is not clear. He addresses concerns about the possibility of a more formalised mathematics and gives a number of reasons for his belief that it will be developed: well-chosen formalisms can give short proofs; logic has not yet been given a chance to provide an alternative to human reasoning; heuristic guidance can be obtained by syntactic analysis of theorems; and informal mathematics is hard to teach because of things like “intuition”, whereas symbol manipulation will be easy to teach.

In this section, I will discuss some of the issues raised by Dijkstra. He has described the two radical novelties of computer science, and he uses this as a justification for approaching the discipline with a blank mind, because previous knowledge cannot help in understanding computer science. He later advances an approach to computer science that is mathematical, using the existing techniques of the predicate calculus to prove that programs meet their specifications. This would seem to contradict his argument that one cannot use the familiar to reason about a radical novelty.

It is generally accepted that restricting thinking to one particular framework is undesirable and leads to the formation of dictatorships, yet Dijkstra's argument is that students taking the introductory course should think only in the specified way. In my opinion, different approaches to a topic can only help comprehension of that topic. A point raised in a number of letters that appeared in later issues of Communications of the ACM is that different students approach new concepts in different ways, and that teaching should cater for this [Bernstein 1990; Herbison-Evans 1991]. Dijkstra also seems to have a general suspicion of tools, even those that can help students (or professionals) better understand a topic. A more pragmatic issue is that some students entering an introductory course at university will already have been exposed to programming, and therefore to operational thinking. How are these students to keep their thinking “pure” when doing Dijkstra's course? Dijkstra's approach to teaching seems to be one of training, not education. He wants to teach students doing an introductory course a rigid set of rules, and that set of rules only. This leaves no room for intuition, judgement and discussion, which all belong to education. He also emphasises a specific skill without grounding it in any larger context.

One of the participants, William Scherlis, raised the question of why an imperative language should be used if operational thinking is to be avoided. Luker raises the point, made by Turner and Backus, that variables and assignment statements in imperative languages make verification difficult [Luker 1989]. A rigorous formalism could instead be introduced using a functional programming language (or perhaps a formal language that does not correspond to a programming language at all), and an understanding of the use of formalisms could then be related to various issues in computer science and mathematics. This would give the course a broader area of application.
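The Turner/Backus point can be illustrated with a hypothetical sketch of my own (not drawn from Luker's paper): in a functional language a definition is an equation, so properties follow by substituting equals for equals, with no assignment statements or program state to track.

```latex
% Definitions (equations, not commands):
%   sum []       = 0
%   sum (x : xs) = x + sum xs
%
% Claim: sum (xs ++ ys) = sum xs + sum ys, by induction on xs.
\begin{align*}
\text{Base: }\; & sum([\,] \mathbin{+\!\!+} ys) = sum(ys)
                  = 0 + sum(ys) = sum([\,]) + sum(ys) \\
\text{Step: }\; & sum((x\!:\!xs) \mathbin{+\!\!+} ys)
                  = x + sum(xs \mathbin{+\!\!+} ys) \\
                & = x + (sum(xs) + sum(ys))
                  \quad \text{(induction hypothesis)} \\
                & = (x + sum(xs)) + sum(ys)
                  = sum(x\!:\!xs) + sum(ys)
\end{align*}
% By contrast, the equivalent imperative loop with an accumulator
% variable must be verified via a loop invariant over mutable state.
```

Such equational proofs are exactly the kind of symbol manipulation Dijkstra wants, which makes the choice of an imperative vehicle language all the more puzzling.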

Dijkstra states that learning to manipulate uninterpreted formulae would be satisfying for a first-year student as it would “give him a power that far surpasses his wildest dreams”. I believe that a course of this kind would consist of boring and repetitive work that would become mechanical, since it is training and not education. It might give a student a sense of power, but only in a limited domain, and as I understand the course Dijkstra has outlined, this knowledge could not be applied to other domains within computer science.

blog/fall2015/mmalik1/start.txt · Last modified: 2015/08/27 09:44 by 127.0.0.1