+Godmar says:
+
+- In Project 2, we're missing tests that pass system calls arguments
+that span multiple pages, where some of the pages are mapped and some
+are not.  An implementation that checks only the first page, rather
+than every page that can be touched during a call to read()/write(),
+passes all of the tests.
+
+- We need some tests that verify that illegal accesses lead to process
+termination.  I have written some and will add them.  In P2, obviously,
+this would require that the students break this functionality, since
+the page directory is initialized for them; still, it would be good
+to have.
+
+- There does not appear to be a test that checks that they close all
+fd's on exit. Idea: add statistics & self-diagnostics code to palloc.c
+and malloc.c. Self-diagnostics code could be used for debugging.
+The statistics code would report how much kernel memory is free.
+Add a system call "get_kernel_memory_information". User programs
+could engage in a variety of activities and notice leaks by checking
+the kernel memory statistics.
+
+From: Godmar Back <godmar@gmail.com>
+Subject: on caching in project 4
+To: Ben Pfaff <blp@cs.stanford.edu>
+Date: Mon, 9 Jan 2006 20:58:01 -0500
+
+here's an idea for future semesters.
+
+I'm in the middle of project 4.  I've started by implementing a buffer
+cache and plugging it into the existing filesystem.  Along the way I
+wondered how we could test the cache.
+
+Maybe one could adopt a testing strategy similar to the one used for
+the MLFQS scheduler in project 1: add functions "get_cache_accesses()"
+and "get_cache_hits()".  Then create a version of pintos that records
+access traces for a to-be-determined workload.  Run an off-line
+analysis to determine how many hits a perfect cache would achieve
+(MAX) and how many, say, an LRU strategy would achieve (MIN).  Then
+add a fudge factor to account for different indexing strategies and
+test that the reported number of cache hits/accesses falls within
+(MIN, MAX) +/- the fudge factor.
+
+(As an aside - I am curious why you chose a clock-style algorithm
+rather than the more straightforward LRU for the buffer cache in your
+sample solution.  Is there a reason for that?  I was curious whether
+it made a difference, so I implemented LRU in your cache
+implementation, ran the project 4 test workload, and printed cache
+hits/accesses.
+I found that for that workload the clock-based algorithm performs
+almost identically to LRU (within about 1%, though I ran
+nondeterministically under QEMU).  I then reduced the cache size to 32
+blocks and again found the same performance, which raises the
+suspicion that the test workload may not force any cache replacement,
+in which case the eviction strategy doesn't matter.)
+