Corning Community College

CSCS2330 Discrete Structures

Project: Binary Data Tool (bdt1)

Objective

To apply our binary data knowledge and interactions in the creation of a useful tool for aiding us in the debugging of our dcfX endeavors.

Background

With our recent xxd(1) tool implementation in bdt0, we are going to continue down this rabbit hole a bit by writing a tool of particular value to our dcfX endeavors.

One of the things I noticed while helping debug was a frequent loss of place while looking at the hex output of encoded data. Seconds were lost relocating the points of difference, and over time, those lost seconds add up.

It would be uniquely useful if we had a way to highlight the first point (byte) of difference in two files, so we can then focus on why/how they are different, vs. devoting far too much time to discovering what is different.

Task

Your task is to write a custom binary difference visualizer, in a format not unlike that of xxd(1), but certainly different from the output format we strove for in bdt0.

The idea is to take 2 files as input, parse through those (ideally similar) files, until the first point of difference is found, at which point your tool will display:

  • the bytes leading up to the difference (in both files)
  • the byte of difference (highlighted/colored in some fashion)
  • the following bytes (for quick assessment if we have just a one-off issue, or something larger at play)
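To make the walk-through concrete, here is a minimal sketch (an illustration, not the required solution) of the lock-step scan that locates the first differing byte. It assumes the two file names arrive as command-line arguments and stops short of the actual row display, which is the heart of the assignment:

/* sketch only: walk two files in lock step until their bytes disagree */
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *file1  = NULL;
    FILE *file2  = NULL;
    int   byte1  = 0;
    int   byte2  = 0;
    long  offset = 0;

    if (argc >= 3)
    {
        file1 = fopen(*(argv + 1), "rb");
        file2 = fopen(*(argv + 2), "rb");
    }

    if (file1 != NULL && file2 != NULL)
    {
        byte1 = fgetc(file1);
        byte2 = fgetc(file2);

        /* keep reading while both files still have data and still agree */
        while (byte1 != EOF && byte2 != EOF && byte1 == byte2)
        {
            byte1  = fgetc(file1);
            byte2  = fgetc(file2);
            offset = offset + 1;
        }

        if (byte1 != byte2)
            fprintf(stdout, "first difference at offset 0x%08lx\n", offset);
        else
            fprintf(stdout, "no difference found\n");

        fclose(file1);
        fclose(file2);
    }

    return(0);
}

From that offset, the surrounding rows (the one before, the one containing, and the one following the difference) can be derived by rounding down to the nearest multiple of 16, in keeping with the 16-byte rows of xxd(1)-style output.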

Thought empowerment vs. thought slavery

Something I've noticed with many people who are so used to conforming to and following authority is that the thought of “questioning why things are” rarely enters the picture.

I've certainly seen plenty of examples… of people messing something up, and then proceeding to live with the mistake, maybe bothered by the inconvenience, but seemingly powerless to fix it.

The thing is, we are very much in control, and if the universe doesn't conform to our demands, we must simply realign the universe.

So here, while debugging binary data… instead of just going with the flow and inconveniencing ourselves, losing our place and wasting time elongating our debugging process, we will be writing a specialized tool that should assist us greatly in the dcfX debugging process.

The key is to identify an inconvenience. If we have a tool that helps, but is limited, is that a limitation we can live with, or can we improve our overall process by improving the tool (either by extending it, or by writing a new tool altogether)?

We've done this a bit with pipes… xxd(1) doesn't natively support capping its lines of display, so we've been using UNIX pipes to let commands like head(1) and tail(1) greatly enhance the utility of our xxd(1) output (versus haplessly scrolling through hundreds of lines of hex values). The thing is, how many of you would have done this if I had never shown you examples?
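For example (the file name here is purely illustrative), capping xxd(1) output to its first handful of lines looks like:

lab46:~/src/discrete/bdt1$ xxd somefile.bin | head -n 8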

So please, be on the lookout for limitations in the process- ANY process. Sometimes there is nothing we can really do, but other times, we definitely can. Don't just go with some mindless flow- constantly evaluate whatever process you are following:

  • does it suit you?
  • is it effective/efficient?
  • what is detracting from ideal efficiency?
  • what might improve the process?
    • is there an existing tool that could be brought into the fold?
      • have you investigated?
      • have you asked?
    • is there a new tool that can be written that would fill this niche?
      • what would it do?
      • would it be a burden to write?

There are constantly opportunities for enhancement of process. It is our job to identify strategic ones that can make significant gains. That's why we automate things with shell scripts, that's why we learn to solve problems, that's why we learn about different approaches to algorithms.

So these bdt# projects are a specific foray into this special case study of writing our own custom tool that can get a certain job done faster, reducing OUR particular need to keep tabs on something the computer is very much better at doing.

Implementation Restrictions

As our goal is not only to explore the more subtle concepts of computing but also to promote different methods of thinking (and arriving at solutions in seemingly different ways), one of the themes I have been harping on is stricter adherence to the structured programming philosophy. It isn't good enough just to crank out a solution if you remain blind to the many nuances of the tools we are using, so we will at times go out of our way to emphasize certain areas that may see less exposure (or that you might otherwise avoid because they are less familiar).

As such, the following implementation restrictions are also in place:

  • use any break or continue statements sparingly. I am not forbidding their use, but I also don't want this to turn into a lazy solution free-for-all. I am letting you use them, but with justification.
    • justification implies some thoughtful why/how style comments explaining how a particular use of one of these statements is effective and efficient (not: “I couldn't think of any other way to do it”).
  • absolutely NO infinite loops (while(1) or the like).
  • no forced redirection of the flow of the process (no seeking to the end of the file to grab a max size only to zip back somewhere else: deal with the data as you naturally encounter it).
  • All “arrays” must be declared and referenced using ONLY pointer notation, NO square brackets.
  • NO logic shunts (i.e. having an if statement nested inside a loop to bypass an undesirable iteration)- this should be handled by the loop condition! (see the short sketch after this list)
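As a small illustration of the last two restrictions (the buffer contents and sentinel value below are made up for the example), arrays can be handled entirely through pointer notation, with the "skip" test living in the loop condition rather than in a nested if:

/* illustration only: pointer-only array handling, no bracket notation,
 * and the stop/skip test folded into the loop condition                */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned char *buffer = NULL;
    int            index  = 0;

    buffer = (unsigned char *) malloc(sizeof(unsigned char) * 16);

    if (buffer != NULL)
    {
        /* fill with some sample values, referencing via pointer notation */
        for (index = 0; index < 16; index = index + 1)
            *(buffer + index) = (unsigned char) (index + 1);

        /* display bytes until the buffer ends or a sentinel (0x09) is hit:
         * that test belongs in the loop condition, not in an if "shunt"   */
        for (index = 0; index < 16 && *(buffer + index) != 0x09; index = index + 1)
            fprintf(stdout, "%02x ", *(buffer + index));

        fprintf(stdout, "\n");
        free(buffer);
    }

    return(0);
}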

Basically, I am going to loosen my grip on the implementation restrictions for this project: I would like you NOT to disappoint me. Write clean, effective code… show me that you have learned something from this class.

Program Specifications

For this project, I am looking for a minimum subset of functionality. But there are many potential improvements that can be made, which I would consider for bonus points.

Basic functionality

Your program should:

  • accept two files as command-line arguments (these would be the files you'd like to compare)
  • display the address/offset on the left just as xxd(1) does
  • display the row preceding the first identified byte of difference for the first, then second file
  • display the row containing (and coloring/highlighting) the identified byte of difference for the first, then second file
  • display the row following the identified byte of difference for the first, then second file

The focus is the FIRST byte of difference. The algorithm could get considerably trickier when dealing with additional differences (especially if extra bytes are involved in the difference).
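One common way to do the highlighting (an assumption on my part; any visible scheme satisfying the specification is fine) is to wrap the differing byte's hex digits in an ANSI color escape sequence:

/* illustration only: print one row with the differing byte in red on
 * terminals that honor ANSI escape sequences                          */
#include <stdio.h>

int main(void)
{
    unsigned char value = 0x04;   /* pretend this is the byte of difference */

    fprintf(stdout, "000000a0: 55aa 66bb ");
    fprintf(stdout, "\033[1;31m%02x\033[0m", value);   /* highlighted byte */
    fprintf(stdout, "01 77cc 88dd 99ee aaff 89af\n");

    return(0);
}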

Bonus opportunities

Some ideas to enhance your program for potential bonus points:

  • accept some sort of mode argument, a number, that would alter the behavior of your tool. Such as:
    • 0: display as project specifies
    • 1: display the two files' rows on separate lines (first file, newline, second file), instead of side by side on the same line.
    • additional modes as justified
  • accept numeric offset arguments, 1 for each file, to instruct your tool where they should start reading/comparing
    • this would be a way for your tool to natively support “additional” points of difference without needing an overly-complicated algorithm. You would be able to specify the starting points, from visual inspection on previous runs of the tool or xxd(1), which would add considerable debugging value.
    • this would likely require displaying the pertinent offsets for each file.
  • you could endeavor to explore some algorithmic enhancements to automatically detect additional points of difference. Note that this could be rather fragile, depending on the identified differences.
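If you pursue the offset-argument bonus, one way to honor a starting offset while still dealing with the data as you naturally encounter it (the argument order and names below are assumptions for the sketch) is simply to read and discard bytes up to the requested position:

/* illustration only: skip "start" bytes by reading them, rather than by
 * seeking around in the file                                            */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    FILE *fp    = NULL;
    long  start = 0;
    long  pos   = 0;
    int   byte  = 0;

    if (argc >= 3)
    {
        fp    = fopen(*(argv + 1), "rb");
        start = strtol(*(argv + 2), NULL, 0);   /* accepts 0x... or decimal */
    }

    if (fp != NULL)
    {
        byte = fgetc(fp);

        /* consume bytes until the requested offset (or EOF) is reached */
        while (byte != EOF && pos < start)
        {
            byte = fgetc(fp);
            pos  = pos + 1;
        }

        /* "byte" now holds the byte at the offset; the real tool would
         * begin its comparison from here                                */
        if (byte != EOF)
            fprintf(stdout, "byte at offset 0x%08lx: %02x\n", pos, byte);

        fclose(fp);
    }

    return(0);
}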

Output

A basic mockup (pictures coming soon) of desired output:

lab46:~/src/discrete/bdt1$ ./bdt1 in/sample0.txt in/sample0.off
00000090: 0011 2233 4455 6677 8899 aabb ccdd eeff | 0011 2233 4455 6677 8899 aabb ccdd eeff
000000a0: 55aa 66bb 0401 77cc 88dd 99ee aaff 89af | 55aa 66bb 0501 77cc 88dd 99ee aaff 89af
000000b0: 9988 7766 5544 3322 1100 ffee ddcc bbaa | 9988 7766 5544 3322 1100 ffee ddcc bbaa
lab46:~/src/discrete/bdt1$ 

Submission

To successfully complete this project, the following criteria must be met:

  • Code must compile cleanly (no warnings or errors)
    • Use the -Wall and -std=gnu99 flags when compiling.
  • Code must be nicely and consistently indented (you may use the indent tool)
  • Code must utilize the algorithm/approach presented above
  • Output must match the specifications presented above (when given the same inputs)
  • Code must be commented
    • be sure your comments reflect the how and why of what you are doing, not merely the what.
  • Track/version the source code in a repository
  • Submit a copy of your source code to me using the submit tool.
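For reference, a typical compile invocation using the flags noted above might look like this (the output file name is just an example):

lab46:~/src/discrete/bdt1$ gcc -Wall -std=gnu99 -o bdt1 bdt1.c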

To submit this program to me using the submit tool, run the following command at your lab46 prompt:

$ submit discrete bdt1 bdt1.c
Submitting discrete project "bdt1":
    -> bdt1.c(OK)

SUCCESSFULLY SUBMITTED

You should get some sort of confirmation indicating successful submission if all went according to plan. If not, check for typos and/or locational mismatches.
