Working On Assignments

You will work on the assignments for CS107 on the myth machines, which you access remotely. You'll initially make a copy of the starter project to modify, use command-line tools to edit and debug your code, and use some 107-specific tools like Sanity Check and Submit to test and submit your work. See below for the common steps you'll use to work on assignments.

Logging Into Myth

You will work on your programs in CS107 remotely on the myth machines, which are pre-installed with all the necessary development tools. Check out the Resources page for a guide on how to log into Myth remotely.

Starting An Assignment

For each assignment, you will first "clone" a copy of the assignment starter files into your own directory so that you can modify them. Some assignments have randomized or user-specific data, so each student has their own starter copy to clone. The assignments are managed using a "version control system" called git; we will not focus on git in CS107, but feel free to look into how it works if you're interested.

Note: Do not put any CS107 assignments online on GitHub or any other publicly available website. This is a violation of the Stanford Honor Code.

To clone a copy of the assignment, first navigate to the directory where you would like to store the copy of the assignment. You may wish to create a CS107 folder, for instance, in your personal AFS space, using the mkdir command (see the Resources page!). Then, use the git clone command as follows:

$ git clone /afs/ir/class/cs107/repos/assign0/$USER assign0
Cloning into 'assign0'...

done.

This will make a folder named assign0 that you can then go into to start working on the assignment. Type the above command exactly as shown, including the odd-looking $USER, but replace both occurrences of assign0 with assign1, assign2, etc., depending on which assignment you are cloning. Now you can start working on the assignment!

Working On The Assignment

You'll use a variety of tools to work on the assignment, such as gdb and make. Check out the resources page for an overview of how to use each of these tools.

Using Sanity Check For Testing

As part of each assignment, we provide a testing program called "sanity check" that verifies you have modified certain required files, compares your program's output to that of the provided sample executable, and reports any discrepancies, allowing you to detect and address issues before you submit. It also allows you to add your own custom tests.

To run the sanity check tool for a given assignment with our provided tests, first navigate to the directory containing the assignment you would like to test. Then, execute the tools/sanitycheck command as follows:

$ cd cs107/assign0
$ tools/sanitycheck
Will run default sanity check for assign0 in current directory ~/cs107/assign0.

+++ Test 1-CheckReadme on ~/cs107/assign0
Descr:   verify file(s) have changed from starter
NOT OK:  Deferred

This project passes 0 of the 1 default sanity check cases.
$

In the output, if a test fails, it will indicate either "MISMATCH" or "NOT OK". MISMATCH indicates that your program successfully ran to completion but the output it produced did not match the output produced by the sample. NOT OK reports that your program did not successfully complete (exited due to a fatal error or timed out) and its output was not compared to the sample.

  • Passing sanity check suggests the autotester won't have problems interpreting your output, and that's good. If your output doesn't match, fix it to meet the required format so that it is not misjudged in grading. To earn proper credit, your program must conform to the output specification given in the assignment writeup and match the behavior of our sample executable. Minor variations, such as differing amounts of whitespace, can usually be ignored, but changing the format, reordering output, or leaving behind stray debugging print statements will thwart the autotester and cause your program to be marked wrong. If sanitycheck fails a test before you submit, you will lose points when we re-run that test during grading. Passing sanitycheck does not mean your assignment is correct, but failing a test does mean it is not.
  • There are a few situations, such as allowed latitude in the spec or equivalent re-wording of error messages, where a mismatch is not actually an error; that is, the program's behavior is a valid alternative to the sample, but sanity check doesn't know that. The autotester defers these cases to the judgment of the grading CA, who determines whether such mismatches are true failures or harmless variation.
  • You can run sanity check as many times as you need. Our submit tool will even encourage one final run before you submit.
  • An additional benefit of running sanitycheck early and often is that it makes a snapshot of your code as a safety precaution. This backup replaces the original starter code generated for you, so you can re-clone the assignment using the same git clone command and get the last code you backed up. (If you'd like more tools to manage/restore backups, take a look at the resources page on the .backup folder, as well as the optional page on how to use git to manage versions of your work.)

Using Sanity Check With Your Own Custom Tests

The default tests supplied for sanity check may not be particularly rigorous or comprehensive, so you will want to supplement them with additional tests. You can create inputs of your own and write them into custom tests to be run by the sanitycheck tool. Create a text file using this format:

# File: custom_tests
# ------------------
# This file contains a list of custom tests to be run by the sanity check tool.
# Each custom test is given on a single line using format:
#
#     executable  arg(s)
#
# The executable is the name of the program to run (e.g. mygrep or mywhich)
# The args are optional. If given, they are treated as a sequence of space-separated
# command-line arguments with which to invoke the executable program.
#
# For each custom test, sanity check will invoke your executable program and the
# solution program (using same command-line arguments), compare the two
# outputs to verify if they match, and report the outcome.
#
# Blank lines and comment lines beginning with # are ignored.
#
# Below is an example custom test, edit as desired.

myprogram arg1 arg2

To run your custom tests, invoke sanitycheck with its optional argument, which is the name of the custom test file:

$ tools/sanitycheck custom_tests

When invoked with an argument, sanity check will use the test cases from the named file instead of the standard ones. For each custom test listed in the file, sanity check runs the sample solution with the given command-line arguments and captures its output, then runs your program with the same arguments to capture its output, and finally compares the two results and reports any mismatches. For more information about recommended testing strategies, take a look at the software testing strategies page linked to from the assignments page.

Submitting An Assignment

Once you've finished working on an assignment, it's time to submit! The tools/submit command lets you submit your work right from myth. The submit tool verifies your project's readiness for submission: it builds the project with make to ensure there are no build failures and offers you the option to run sanity check. If any part of verification fails, the submission is rejected, and you must fix the issues and try submitting again. Here's an example of using this command.

$ cd cs107/assign0
$ tools/submit
This tool submits the repo in the current directory for grading.
Current directory is ~/cs107/assign0

We recommend verifying your output is conformant using sanity check.
Would you like to run sanity check right now? [y/n]:y
...etc

  • If verification passes and submissions are being accepted, the project is submitted and a confirmation message indicates success. If the deadline has passed and grace period expired, the submission is rejected.
  • To submit an updated version, just repeat the same steps. Only your most recent submission is graded.
  • Submitting performs the same backup process as sanity check.
  • If you run into a submit failure that you cannot resolve, please seek help from the course staff. We recommend that you make a test submission well in advance of the deadline to confirm things will go smoothly when the time comes.

Submission deadlines are firm. Cutting it too close risks landing on the wrong side of the deadline -- don't let this happen to you! Submit early to give yourself a safety cushion and avoid last-minute stress.

Assignment Tips

Be cautious with C: C is designed for high efficiency and unrestricted programmer control, with no emphasis on safety and little support for high-level abstractions. A C compiler won't complain about such things as uninitialized variables, narrowing conversions, or functions that fail to return a needed value. C has no runtime error support, which means no helpful messages when your code accesses an array out of bounds or dereferences an invalid pointer; such errors compile and execute with surprising results. Keep an eye out for problems that you may have previously depended on the language to detect for you.

Memory and pointers: Bugs related to memory and/or pointers can be tricky to resolve. Make sure you understand every part of the code you write or change. Also keep in mind that the observable effects of a memory error can come at a place and time far removed from the root cause (i.e. running off the end of an array may "work fine" until you later read the contents of a supposedly unrelated variable). gdb and Valgrind can be extremely helpful in resolving these kinds of bugs. In particular, Valgrind is useful throughout the programming process, not just at the end. Valgrind reports on two types of memory issues: errors and leaks. Memory errors are toxic and should be found and fixed without delay. Memory leaks are of less concern and can be ignored early in development. Given that a wrong deallocation can wreak havoc, we recommend you write the initial code with all free() calls commented out. Much later, after you have finished the core functionality and turn your attention to polishing, add in the free calls one at a time, run under Valgrind, and iterate until you verify complete and proper deallocation. Check out the resources page for overviews of each of these tools.

Use good style from the start: Begin with good decomposition, rather than adding it later. Sketch each function's role and have a rough idea of its inputs and outputs. A function should be designed to complete one well-defined task; if you can't describe the function's role in a sentence or two, it may be doing too much and should be decomposed further. Commenting the function before you write the code can help you clarify your design (what the function does, what inputs it takes, what outputs it produces, and how it will be used). Start with good variable names, rather than going through and changing them later. Using good style the first time makes your code better designed, easier to understand, and easier to debug.

Understand your code: At every step, you want to ensure that you understand the code you are writing, what it does, and how it works. Don't make changes without understanding why you are making them, and what the result will be.

Test: Use our recommended testing techniques to incrementally develop your program, test at each step, and always have a working program.

Get help if you need it!: 107 has a lot of helpful resources, including written materials on the web site, textbook readings, lectures, labs, the online discussion forum, helper hours, and more. We are happy to help or answer your questions!

Frequently Asked Questions

How can I reproduce/debug a problem that appears during sanity check?

Look through the sanity check output to find the command being executed:

Command: ./mygrep fun /afs/ir/class/cs107/samples/assign1/hymn

Run that same command (in shell, gdb, or Valgrind) to replicate the situation being tested. You can also view the file contents (such as the hymn file in the above command) to better understand what is being tested.

Is it possible to write a custom test to verify Valgrind correctness or memory/time efficiency?

Unfortunately not; custom sanity check tests compare on output only. You will need to supplement with other forms of testing to verify those additional requirements.

Can I submit a program that doesn't pass sanity check?

We strongly recommend that you resolve any sanity check failures before submitting, but the submit tool will not force you to do so. To submit a project that doesn't pass sanity check, respond no when asked if you want to run sanity check, and the project will be submitted without that check.

How can I verify that my submission was successful?

Your gradebook page (accessible from the navigation bar at the top) lists the timestamp of the most recent submission we have received.

Although it is not necessary, if you would like to triple-check, you can view the contents of your submission by re-cloning your class repo. For example, navigate to your home directory and git clone /afs/ir/class/cs107/repos/assignN/$USER mysubmission. This will create a mysubmission directory that contains the files you submitted for assignN (be sure to replace N with the assignment number). If you're satisfied that everything is as intended in mysubmission, then you're done and you can delete the mysubmission directory. If not, figure out what's not right, fix it, and submit again.