I have also been looking into Python. I see a lot of people writing Python scripts for their UNIX/Linux machines and I thought it would be nice if I could learn that, maybe do some neat stuff with that. We'll see.

====November 16, 2015====

The Ford Motor Company has many strategies to keep the environment stable. They have mainly focused on reducing the amount of carbon dioxide that their cars produce. The company is attempting to reduce the carbon dioxide output of its cars by thirty percent by 2025. They have also been creating more electric cars in the hope of reducing carbon dioxide output. The Ford Motor Company is using a multitude of methods to attempt to sustain the environmental resources that are left on the earth.

Ford has produced sustainable technologies and alternative fuel plans over the last few years in an attempt to reduce the amount of carbon dioxide that their cars produce. Most of their near-term strategies are in place while they are working on implementing their mid-term and long-term plans. They also have plans that they have already put in place to increase sustainability at the current date. Some of the technologies in place include EcoBoost engines, diesel use as market demands, and aerodynamics improvements.

In addition to the plans already introduced, the Ford Motor Company intends to introduce more plans that will increase environmental sustainability in the near future and beyond. Among the plans already introduced to help in the near future is increased use of hybrid technologies.

Ford Motor Company's long-term plans aim to reduce carbon dioxide production far into the future. The speculated time for these long-term changes to take effect is around 2050-2100, so quite a ways away. By this time Ford plans to have developed a second-generation EcoBoost engine and to be producing and distributing it at high volume. They also want to have lightweight materials and the next generation of hybrid and electric vehicles. The long-term plans seem promising, and a large difference in the amount of carbon dioxide produced by their vehicles should be visible in the far future.

While the Ford Motor Company gives off the idea that it is completely geared toward environmental sustainability, there are members of the community who describe themselves as cautious optimists. Some of these cautious optimists question whether the Ford Motor Company is actually becoming environmentally conscious, or whether it is all just good public relations. Even though there are members of the community who don't trust Ford, Daniel Becker, director of the Sierra Club's Global Warming and Energy Program, says he "applauds Ford for recognizing the seriousness of global warming, acknowledging that its vehicles create a large part of the problem and committing to cut that pollution."

The Ford Motor Company has proven to be on the path of environmental sustainability. So far it has already implemented many immediate and near-future plans that have decreased its output of carbon dioxide by four percent. In the long term it plans to implement even more engine renovations and electric cars in order to further reduce its carbon dioxide emissions by about thirty percent. With Ford's research into aerodynamics and engine innovation, they hope to create vehicles that will have greatly improved efficiency compared to the modern-day vehicle. In the near future Ford appears set to be the active leader in environmental sustainability among motor companies, and if this trend continues, carbon dioxide production across all motor companies will decrease and the world will be much better off.

In the first publicly disclosed deployment by a government agency of computing hardware based on specs that came out of the Open Compute Project, the Facebook-led open source hardware and data center design initiative, the US Department of Energy has contracted Penguin Computing to install supercomputing systems at three national labs.

The systems, called Tundra Extreme Scale, will support the work of the National Nuclear Security Administration.

Not only will the $39 million deployment of supercomputers at Los Alamos, Sandia, and Lawrence Livermore National Labs be the first publicly known deployment of OCP gear in the public sector, it will also be one of the largest deployments of OCP gear in the world, according to Penguin. The largest deployment is at Facebook data centers.

Another major deployment is the hardware in Rackspace data centers, designed by Rackspace to support its cloud services.

Penguin, headquartered in Silicon Valley, occupies a niche within the OCP hardware market, selling OCP-based high-performance computing systems rather than the simple, "commodity" gear most other vendors in the space, such as Quanta and Hyve, sell.

The deal shows that companies like IBM and Cray, which have been mainstays in the government HPC market for many years, are facing a major new competitive threat.

Penguin expects the supercomputers at the national labs to achieve peak performance between 7 and 9 petaflops. One petaflop represents a quadrillion calculations per second.

The Tianhe-2 supercomputer in China, currently considered the world's fastest supercomputer, delivers about 34 petaflops on the Linpack benchmark.

Penguin's Tundra Extreme Scale systems at the three national labs will be powered by Intel Xeon processors.

Removal: We have removed the specific SKU of the Intel Xeon processors that power the three systems. The SKU was included in Penguin Computing's announcement.

The use of high-performance computing is continuing to grow. The critical nature of information and complex workloads has created a growing need for HPC systems. Through it all, compute density plays a big role in the number of parallel workloads we're able to run. So, how are HPC and virtualization coming together?

One variant of HPC infrastructure is vHPC ("v" stands for "virtual"). A typical HPC cluster runs a single operating system and software stack across all nodes. This could be great for scheduling jobs, but what if multiple people and groups are involved? What if a researcher needs their own piece of HPC space for testing and development?

Effectively, virtualization carves an HPC cluster into isolated environments, so multiple users and groups can share the same physical nodes without stepping on one another.

====November 23, 2015====

Ever wonder about that mysterious Content-Type tag? You know, the one you're supposed to put in HTML and you never quite know what it should be?

Did you ever get an email from your friends in Bulgaria with the subject line "???? ?????? ??? ????"?

I've been dismayed to discover just how many software developers aren't really completely up to speed on the mysterious world of character sets, encodings, Unicode, all that stuff. A couple of years ago, a beta tester for FogBUGZ was wondering whether it could handle incoming email in Japanese. Japanese? They have email in Japanese? I had no idea. When I looked closely at the commercial ActiveX control we were using to parse MIME email messages, we discovered it was doing exactly the wrong thing with character sets, so we actually had to write heroic code to undo the wrong conversion it had done and redo it correctly. When I looked into another commercial library, it, too, had a completely broken character code implementation. I corresponded with the developer of that package and he sort of thought they "couldn't do anything about it." Like many programmers, he just wished it would all blow over somehow.

But it won't. When I discovered that the popular web development tool PHP has almost complete ignorance of character encoding issues, blithely using 8 bits for characters, making it darn near impossible to develop good international web applications, I thought, enough is enough.

So I have an announcement to make: if you are a programmer working in 2003 and you don't know the basics of characters, character sets, encodings, and Unicode, and I catch you, I'm going to punish you by making you peel onions for 6 months in a submarine. I swear I will.

And one more thing:

IT'S NOT THAT HARD.

In this article I'll fill you in on exactly what every working programmer should know. All that stuff about "plain text = ascii = characters are 8 bits" is not only wrong, it's hopelessly wrong, and if you're still programming that way, you're not much better than a medical doctor who doesn't believe in germs.

Before I get started, I should warn you that if you are one of those rare people who knows about internationalization, you are going to find my entire discussion a little bit oversimplified.

A Historical Perspective

The easiest way to understand this stuff is to go chronologically.

You probably think I'm going to talk about very old character sets like EBCDIC here. Well, I won't. EBCDIC is not relevant to your life. We don't have to go that far back in time.

Back in the semi-olden days, when Unix was being invented and K&R were writing The C Programming Language, everything was very simple. EBCDIC was on its way out. The only characters that mattered were good old unaccented English letters, and we had a code for them called ASCII which was able to represent every character using a number between 32 and 127. Space was 32, the letter "A" was 65, etc. This could conveniently be stored in 7 bits.
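
As a quick sanity check, Python's built-in ord() and chr() functions expose these ASCII code numbers directly:

<code python>
# ASCII assigns each character a number; ord() and chr() convert between them.
print(ord(' '))   # 32 - space
print(ord('A'))   # 65 - capital A
print(chr(97))    # 'a'
</code>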

And all was good, assuming you were an English speaker.

Because bytes have room for up to eight bits, lots of people got to thinking, "gosh, we can use the codes 128-255 for our own purposes."

Eventually this OEM free-for-all got codified in the ANSI standard. In the ANSI standard, everybody agreed on what to do below 128, which was pretty much the same as ASCII, but there were lots of different ways to handle the characters from 128 and on up, depending on where you lived. These different systems were called code pages. So for example in Israel DOS used a code page called 862, while Greek users used 737. They were the same below 128 but different from 128 up, where all the funny letters resided. The national versions of MS-DOS had dozens of these code pages, handling everything from English to Icelandic and they even had a few "multilingual" code pages that could do Esperanto and Galician on the same computer.
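
You can watch code pages disagree with each other from Python, whose codec library still ships cp862 and cp737; the same bytes come out as different letters:

<code python>
# The same raw bytes decode to different letters under different code pages.
raw = bytes([0x41, 0x80, 0x81])   # 0x41 is below 128, so it is 'A' everywhere
print(raw.decode('cp862'))        # Hebrew code page 862: 'A' then alef, bet
print(raw.decode('cp737'))        # Greek code page 737:  'A' then Alpha, Beta
</code>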

Meanwhile, in Asia, even more crazy things were going on to take into account the fact that Asian alphabets have thousands of letters, which were never going to fit into 8 bits. This was usually solved by the messy system called DBCS, the "double byte character set" in which some letters were stored in one byte and others took two.

But still, most people just pretended that a byte was a character and a character was 8 bits and as long as you never moved a string from one computer to another, or spoke more than one language, it would sort of always work. But of course, as soon as the Internet happened, it became quite commonplace to move strings from one computer to another, and the whole mess came tumbling down. Luckily, Unicode had been invented.

Unicode

Unicode was a brave effort to create a single character set that included every reasonable writing system on the planet and some make-believe ones like Klingon, too. Some people are under the misconception that Unicode is simply a 16-bit code where each character takes 16 bits and therefore there are 65,536 possible characters. This is not, actually, correct. It is the single most common myth about Unicode, so if you thought that, don't feel bad.
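
One quick way to convince yourself that 16 bits are not enough: plenty of assigned characters live above 65,535, as this short Python check shows:

<code python>
# MUSICAL SYMBOL G CLEF sits at U+1D11E, beyond the 16-bit limit of 65,535.
clef = '\U0001D11E'
print(ord(clef))                      # 119070
print(ord(clef) > 0xFFFF)             # True - too big for one 16-bit unit
print(len(clef.encode('utf-16-le')))  # 4 bytes: UTF-16 needs a surrogate pair
</code>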

In fact, Unicode has a different way of thinking about characters, and you have to understand the Unicode way of thinking of things or nothing will make sense.

Until now, we've assumed that a letter maps to some bits which you can store on disk or in memory:

A -> 0100 0001

In Unicode, a letter maps to something called a code point which is still just a theoretical concept. How that code point is represented in memory or on disk is a whole nuther story.

In Unicode, the letter A is a platonic ideal. It's just floating in heaven:

A

This platonic A is different than B, and different from a, but the same as A and A and A. The idea that A in a Times New Roman font is the same character as the A in a Helvetica font, but different from "a" in lower case, does not seem very controversial.

Every platonic letter in every alphabet is assigned a magic number by the Unicode consortium which is written like this: U+0639.
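
In Python you can look up that magic number, and the official name, for any character (U+0639 turns out to be an Arabic letter):

<code python>
import unicodedata

ch = '\u0639'
print(f"U+{ord(ch):04X}")     # U+0639
print(unicodedata.name(ch))   # ARABIC LETTER AIN
print(f"U+{ord('A'):04X}")    # U+0041 - the code point of the platonic A
</code>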

====Data Structures====
The above were some interesting essays I read on vegetarianism during my vegetarianism online course. The course kind of sucked but it was informative to say the least. I hope you enjoyed reading my latest Opus entry.

====October 26, 2015====

As a computer scientist, there’s nothing that annoys me more than when my friends ask me for help setting up their wireless internet, or when my mom calls and asks why her laptop keeps freezing. I try to tell them that I’m not studying computer repairs or computer usage, I’m studying computer science.

But that doesn’t help, because nobody seems to know exactly what the term “computer science” means. When I urge my friends to take a computer science course, they shrug me off with comments like “I’m no good with computers” or “I don’t do science.” Assuming my friends aren’t just unadventurous, the real problem seems to be that they don’t know what computer science actually is.

Computer scientists are concerned with questions like: How do you find the shortest route between two points on a map? How do you translate Spanish into English without a dictionary? How do you identify the genes that make up the human genome using fragments of a DNA sequence?

There’s a difference between the question, “How do you identify the genes that make up the human genome?” and the question, “What are the genes that make up the human genome?” The latter, a question posed by biologists, asks for a specific fact, while the former asks for a procedure which can produce that fact.

Consider any science: chemistry, biology, physics, or even one of the “soft” sciences like psychology. All are concerned with answering factual questions about the world around us. In computer science, the goal is not to figure out the answers to factual questions, but rather to figure out how to get answers. The procedure is the solution. While scientists want to figure out what is, computer scientists want to know how to.

This is not to say that scientists don’t ever need to know how to figure out the answers to their questions. The key distinction is that computer scientists care only about how to figure out the answer, and not what the answer is. Scientists, in some sense, either rely on computer science to help with their process (for instance, if they make use of data-analysis software) or are in part computer scientists themselves.

The distinction between questions of fact and questions of procedure leads naturally to a difference in methodology between scientists and computer scientists. When scientists come up with a possible answer to a question–a hypothesis–they try to prove or disprove it using experiments. Experiments are in essence tests to see whether a hypothesis matches the behavior of the natural world. If a hypothesis accounts for how the world behaves (or at least the behavior that the scientists can see), then it’s a useful theory.

We’re all familiar with this process from elementary school. It’s called the scientific method: you observe some occurrence, come up with a hypothesis about it, test your hypothesis with experiments, and draw conclusions that support or refute the hypothesis.

Knowledge in computer science, however, doesn’t work the same way. Procedures don’t exist in the natural world–they’re devised by humans. When we come up with a procedure, we can’t just run experiments to see if it works. Although the procedure might be applied to data gathered from the real world, the procedure itself is not a part of nature. Think back to all the sciences I mentioned before. All of them seek knowledge about that which already exists. Procedures, however, are completely constructed–they only exist in the abstract.

For instance, consider the procedure used in a spell checker that recommends possible correct spellings when you make a typo. This procedure takes a sequence of letters and tries to find the closest match in a giant list of valid sequences, or as we normally call them, words. What separates this procedure from the real world problem of correcting spelling is that the sequences don’t have to represent words–that’s just one possible application for the procedure. The procedure itself can be reused with other kinds of sequences. In fact, this very same procedure is used for the DNA sequencing problem I mentioned before.
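
As a rough sketch of that closest-match idea, Python’s standard library can already do it, and the same call works whether the sequences are words or DNA fragments:

<code python>
import difflib

# Closest match among words, as a spell checker would want:
words = ['apple', 'ample', 'apply', 'maple']
print(difflib.get_close_matches('appel', words, n=1))    # ['apple']

# The identical procedure applied to DNA-style sequences:
frags = ['GATTACA', 'GATTACC', 'CATTACA']
print(difflib.get_close_matches('GATTATA', frags, n=1))  # ['GATTACA']
</code>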

Since the problems solved by computer scientists are defined separate from the real world, we can’t use the scientific method to analyze their validity. We can only analyze procedures within the realm of abstraction in which we have created them. Luckily, this type of reasoning is exactly why we have mathematical logic. Mathematicians, after all, face the same situation: their objects of study are abstractions too, and logical proof is how they establish truths about them.

Given that the correctness of procedures is proved using mathematical logic, it might seem like computer science is really just a branch of mathematics, and in some sense it is. The difference, once again, lies in what each field cares about.

Consider, for example, the problem of dividing two numbers. When presented with this problem, a mathematician might derive the properties of division, such as when there will be a remainder. A computer scientist, in contrast, would focus on figuring out how to perform the division.

The computer scientist might eventually come up with the long division algorithm. Just like any 4th grader, however, he wouldn’t want to perform the division by hand. Instead, he would write a series of instructions–a program–that a computer could follow to perform the division.
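
A sketch of what those instructions might look like, using the digit-by-digit method straight from grade school:

<code python>
def long_division(dividend, divisor):
    """Grade-school long division: returns (quotient, remainder)."""
    quotient, remainder = 0, 0
    for digit in str(dividend):          # "bring down" one digit at a time
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

print(long_division(1234, 7))            # (176, 2): 7 * 176 + 2 == 1234
</code>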

Notice that this is the first time I’ve mentioned computers at all. That’s because there’s nothing fundamental about procedures that requires the use of computers. Computers aren’t the only tools that can be used to execute programs. For instance, elementary school students are perfectly capable of executing the long division algorithm. We use computers instead of small children because computers are fast and reliable (after all, that’s why we built them), while small children are adorably uncoordinated and prone to unexpected naps.

The great computer scientist Edsger Dijkstra summed it up best: “Computer science is no more about computers than astronomy is about telescopes.” Even though a complex ecosystem of programs has developed, allowing computers to serve a variety of purposes, computers are still nothing more than a tool for executing procedures. Computer science is about the procedures themselves, not so much the tools used to execute them.

At this point, though, I should say that I haven’t painted an entirely accurate picture of the field–or rather, I left out some parts. There are probably some computer scientists reading this who are thinking, “This doesn’t describe my work at all.”

While at its core, computer science really is the pure study of procedures in the abstract as I described, in reality, the field has grown to encompass a wide variety of pursuits. Some computer scientists are concerned mostly with designing intricate systems that rely heavily on the specifics of computer architecture. Others study human-computer interaction, which is as much about people as it is about procedures.

It would be easy to dismiss the outliers and say they are not true computer scientists, that their work falls under the umbrella of some related but fundamentally different field. But I think the breadth of study within computer science is not necessarily a bad thing. It doesn’t need to be strictly defined.

Within the computer science department at my university, there’s a huge variety of interests among the students and professors. The many perspectives complement each other, and help the field grow.

In the end, it’s the rate of growth of the field that makes all this definition business so tricky. Computer science is still young, and always undergoing new growth spurts. It’s that awkward teenage boy at the school dance whose limbs are growing so fast that he can’t make them all move together harmoniously just yet.

====November 2, 2015====

The concept of adding integers has been around for thousands of years. However, the implementation of that concept in something like a calculator hasn’t been around quite that long. The best way to view the history of computational addition of integers is to look at how calculators originated. The very first known version of a calculator is something known as the abacus. “In the very beginning, of course was the abacus, a sort of hand operated mechanical calculator using beads on rods, first used by Sumerians and Egyptians around 2000 BC. The principle was simple, a frame holding a series of rods, with ten sliding beads on each. When all the beads had been slid across the first rod, it was time to move one across on the next, showing the number of tens, and thence to the next rod, showing hundreds, and so on” (). This device made adding less error-prone. The first real device which did not involve any real human interaction (other than saying what you want to add) came about in 1820, and it was called the Arithmometer. Up until around the 1930s, many different devices were created which all essentially did the same thing and used the same concepts.

Eventually, the calculator was transformed into an electrical device, using electrical currents. It began with war, where a device was needed to constantly calculate the trajectory required to drop a bomb on Japanese warships. “All were basically mechanical devices using geared wheels and rotating cylinders, but producing electrical outputs that could be linked to weapon systems” (). Shortly after, the Colossus was made, which was used as a code-breaking device. All it did was perform exclusive-or Boolean algorithms. Later on, the ENIAC (Electronic Numerical Integrator And Computer) was created. The purpose of the ENIAC was a calculator which was capable of using the four basic arithmetic functions. “ENIAC was 1,000 times faster than electro-mechanical computers and could hold a ten-digit decimal number in memory. But to do this required 17,468 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors and around 5 million hand-soldered joints. It weighed around 27 tonnes, took up 1800 square feet of floorspace and consumed as much power as a small town” ().

Afterwards, the valve and tube calculator was invented, and after that came the transistor age calculators. Each generation of calculators was less bulky and could compute faster. Eventually, we reached the era we currently reside in, the era of the microchip, which is what allows calculators to be so small and portable now.

Addition of integers is extremely important in regards to computer science because almost any program you write will contain some sort of integer addition. Even if you do not write something in your code which does integer addition itself, a function or method you call most likely does all the integer addition for you. Without integer addition, there are many simple things in computers we would not be able to accomplish. One example is keeping track of the time on a computer. Time is usually kept track of as a single number, which is represented as milliseconds. This number has to be incremented every millisecond, which is nothing more than repeated integer addition.
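
You can peek at that single ever-growing number from Python (a small illustration; the exact epoch and units depend on the clock you ask for):

<code python>
import time

# The wall clock is one big integer count, advanced by repeated addition.
print(time.time_ns() // 1_000_000)   # milliseconds since January 1, 1970
</code>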

Computers add integers together by using Boolean algebra. Boolean algebra is “a type of math that deals with bits instead of numbers” (). The concept behind any computations a computer performs is Boolean algebra. Engineers have created extremely small devices which will implement Boolean operations. “These little devices are called ‘logic gates’ and, analogous to the idealized mathematical versions we talked about, these physical devices have wires leading into them that signal the values of input bits (1 if the voltage on the wire is above some threshold and 0 otherwise) and a single wire leading out of them that gives the return value. By combining gates together in smart ways, they can be made to let you do incredible things like add numbers, write emails, play video games, chat with people around the world, and everything else you do with computing devices” (). Logic gates are the cores of any computation a computer performs, including adding integers. A very important logic gate is known as XOR, which stands for exclusive-or. XOR will return a value of 0 if both inputs are the same and a value of 1 if the inputs are different. The main important thing about XOR in regards to computations is that “these output values are exactly the same as the values of the right-most bit (called the ‘sum bit’) when adding two binary numbers” (). For example: XOR with inputs 0 and 0 will return 0. If you add binary numbers 0 and 0 together, you will get 0. The output is the same. If you XOR with inputs 0 and 1, you will get 1. When you add binary 0 and 1 together, you get 1. If you XOR with the inputs being 1 and 1, you will get an output of 0. If you add binary numbers 1 and 1 together, the sum bit will be a 0, and the carry bit will be a 1. So essentially, an XOR gate produces the sum bit of binary addition, an AND gate produces the carry bit, and chaining these gates together lets a computer add numbers of any size.
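
Here is a minimal Python sketch of that idea, mirroring in software what the hardware does with gates (real adders are wired in silicon, of course):

<code python>
def half_adder(a, b):
    return a ^ b, a & b          # (sum bit, carry bit): XOR and AND gates

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2           # carry out if either stage carried

def add(x, y, width=8):
    """Ripple-carry addition of two unsigned integers, one bit at a time."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(5, 9))                 # 14, computed entirely with gate logic
</code>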

This is an essay I wrote for discrete structures. The essay is about how computers add numbers, as well as the history of calculators themselves. It is very interesting because it shows how calculators originated, and how they were eventually implemented into modern day computing. Back then, calculators were the size of a briefcase. Now, they are smaller than your palm.

I hope that this essay was informational and fun to read. Thanks for reading.

====November 9, 2015====

In this essay, I will discuss the issues relating to the teaching of computer science that were raised in the article "A Debate on Teaching Computer Science" [Dijkstra et al 1989]. The structure of the paper is as follows: first I will briefly summarise Dijkstra's talk and his reply. I will do this to illustrate what I consider to be his main points. I will then discuss the relevance of these points to the teaching of computer science. In the section after that I will summarise the points raised by the other contributors. As many points were repeated, I will give an overview of all the issues raised and discuss them. In the final section I will address some issues that were not raised and bring together my conclusions.

Dijkstra originally presented the talk at the ACM Computer Science Education Conference in February 1989 and it was decided to print the text of the talk in CACM with other computer scientists entering into the debate. The editor of the Communications of the ACM, Peter Denning, introduces the debate by describing Dijkstra as challenging "some of the basic assumptions on which our curricula are based" [Dijkstra et al 1989].

Dijkstra's main argument is that the computer is a radical novelty, and this has implications for the teaching of computer science, especially introductory programming courses for first year students (I will use the terms "first years" and "first year students" interchangeably). A radical novelty is such a sharp discontinuity with previous ways of thought that existing approaches cannot be used to reason about it, and it is necessary to approach a radical novelty with a blank mind. The two radical novelties in computer science are the depth of conceptual hierarchies that occur in computer science and the fact that computers are the first large scale digital devices. Radical novelties require much work to come to grips with, and people are not prepared to do this, so they pretend the radical novelties do not exist. Examples of this in computer science are software engineering and artificial intelligence.

Dijkstra investigates the scientific and educational consequences, starting from the question of what computer science is. He reduces this to the manipulation of symbols by computers. In order for meaningful manipulations to be made, a program must be written. He then defines a program as an abstract symbol manipulator, which can be turned into a concrete symbol manipulator by adding a computer to it. Programs are elaborate formulae that must be derived by mathematics. Dijkstra hopes that symbolic calculation will become an alternative to human reasoning. Computing will go beyond mathematics and applied logic because it deals with the "effective" use of formalisms. He acknowledges that this view of computing science is not welcomed by many people, for various reasons.

Dijkstra makes a number of recommendations for education. Bugs should be called errors, since a program with an error is wrong, and lecturers should avoid anthropomorphic terminology as it causes students to compare the analog human with the discrete computer, and this leads to operational reasoning, which is a tremendous waste of time. When attempting to prove something about a set, it is better to work from the definition than with each individual item in the set. The approach can be applied to programming as well, where programs can be reasoned about without dealing with specific behaviours of the program. The programmer's task is to prove that the program meets the functional specification. Dijkstra suggests that an introductory programming course for first years should consist of a boolean algebra component and a program proving component. The language that will be used will be a simple imperative language with no implementation, so that students cannot test their programs. Dijkstra argues that these proposals are too radical for many. The responses that he expects are that he is out of touch with reality, that the material is too difficult for first years, and that it would differ from what first years expect. Dijkstra states that students would quickly learn that they can master a new tool (that of manipulating uninterpreted formulae) that, although simple, gives them a power they never expected. Dijkstra also states that teaching radical novelties is a way of guarding against dictatorships.
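
To give a flavour of what "proving rather than testing" means, here is a small illustration of my own (not from the article): a program argued correct against its functional specification by case analysis alone, with no test runs:

<code python>
# Functional specification: return m such that
#   m >= a,  m >= b,  and  m is one of a, b.
def maximum(a, b):
    if a >= b:
        # Case a >= b: m = a, so m >= a trivially, m >= b by the guard,
        # and m is one of the inputs. Every clause of the spec holds.
        return a
    else:
        # Case a < b: m = b, so m >= b trivially, m > a by the guard,
        # and m is one of the inputs. Every clause of the spec holds.
        return b
</code>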

In his reply to the panel's comments, Dijkstra defends his proposal for an introductory programming course for first years. He notes that functional specifications address the "pleasantness problem" - is this the product we want - and the proof addresses the "correctness problem" - is the product specified - and that these two problems require different techniques. He admits that the choice of functional specification and notation is not clear. He addresses concerns about the possibility of a more formalised mathematics and gives a number of reasons for his belief that it will be developed. Well chosen formalisms can give short proofs, logic has not been given a chance to provide an alternative to human reasoning, heuristic guidance can be obtained by syntactic analysis of theorems, and informal mathematics is hard to teach because of things like "intuition".

In this section, I will discuss some of the issues raised by Dijkstra. He has described the two radical novelties of computer science and he uses this as a justification for approaching the discipline with a blank mind, because previous knowledge cannot help in the understanding of computer science. He later advances an approach to computer science that involves taking a mathematical approach, by using existing techniques of predicate calculus to prove that programs meet their specifications. This would seem to contradict his argument that one cannot use the familiar to reason about a radical novelty.

Generally it is accepted that to restrict thinking to one particular framework is undesirable, and leads to the formation of dictatorships. Dijkstra's proposal, however, implies that students doing his introductory course should only think in the specified way. In my opinion, different approaches to a topic can only help comprehension of that topic. A point raised in a number of letters that appeared in later issues of Communications of the ACM is that different students approach new concepts in different ways and that teaching should cater for this [Bernstein 1990; Herbison-Evans 1991]. Dijkstra also seems to have a general suspicion of tools, even those that can help students (or professionals) better understand a topic. A more pragmatic issue is that some students doing an introductory course at university will already have been exposed to programming and therefore operational thinking. How are these students to keep their thinking "pure"? Dijkstra's approach gives students doing an introductory course a rigid set of rules, and that set of rules only. This does not leave room for intuition, judgement and discussion, which all relate to education. He also emphasises the need for a specific skill without grounding it in any larger context.

One of the participants, Luker, argues that an introductory course should use a functional language if operational thinking is to be avoided. Luker raises the point made by Turner and Backus that variables and assignment statements in imperative languages make verification difficult [Luker 1989]. A rigorous formalism could be introduced using a functional programming language (or perhaps using a formal language that does not relate to a programming language) and an understanding of the use of formalisms could be related to various issues of computer science and mathematics. This would give a broader area of application for the course.

Dijkstra states that learning to manipulate uninterpreted formulae would be satisfying for a first year student as it would "give him a power that far surpasses his wildest dreams". I believe that a course of this kind would consist of boring and repetitive work that would become mechanical, as it is training and not education. It could give a student a sense of power, but only in a limited domain, and as I understand the course that Dijkstra has outlined, this knowledge could not be applied to other domains within computer science.