Godmar says:

- In Project 2, we're missing tests that pass arguments to system calls
that span multiple pages, where some are mapped and some are not.
An implementation that only checks the first page, rather than all pages
that can be touched during a call to read()/write(), passes all tests.
(See the sketch after this list.)

- In Project 2, we're missing a test that would fail if they assumed
that contiguous user-virtual addresses are laid out contiguously
in physical memory. The loading code should ensure that non-contiguous
physical pages are allocated for the data segment (at least).

- Need some tests that check that illegal accesses lead to process
termination. I have written some and will add them. In P2, obviously,
this would require that the students break this functionality, since
the page directory is initialized for them; still, it would be good
to have.

- There does not appear to be a test that checks that they close all
fds on exit. Idea: add statistics and self-diagnostics code to palloc.c
and malloc.c. The self-diagnostics code could be used for debugging;
the statistics code would report how much kernel memory is free.
Add a system call "get_kernel_memory_information". User programs
could engage in a variety of activities and notice leaks by checking
the kernel memory statistics.
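
For the first item above, here is a minimal sketch of the per-page
check that such a test should force.  validate_user_buffer() is a
hypothetical helper; is_user_vaddr(), pg_round_down(), PGSIZE,
pagedir_get_page(), and thread_current() are the existing Pintos
interfaces:

#include <stdbool.h>
#include <stdint.h>
#include "threads/thread.h"     /* thread_current() */
#include "threads/vaddr.h"      /* is_user_vaddr(), pg_round_down(), PGSIZE */
#include "userprog/pagedir.h"   /* pagedir_get_page() */

/* Returns true only if every page touched by [buffer, buffer + size)
   is a mapped user page; checking just the first page is not enough
   when the buffer spans a page boundary. */
static bool
validate_user_buffer (const void *buffer, unsigned size)
{
  const uint8_t *start = buffer;
  const uint8_t *p;

  if (size == 0)
    return true;
  if (!is_user_vaddr (start) || !is_user_vaddr (start + size - 1))
    return false;
  for (p = pg_round_down (start); p < start + size; p += PGSIZE)
    if (pagedir_get_page (thread_current ()->pagedir, p) == NULL)
      return false;             /* Unmapped page in the middle of the buffer. */
  return true;
}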

From: "Godmar Back" <godmar@gmail.com>
Subject: multiple threads waking up at same clock tick
To: "Ben Pfaff" <blp@cs.stanford.edu>
Date: Wed, 1 Mar 2006 08:14:47 -0500

Greg Benson points out another potential TODO item for P1.

----
One thing I recall:

The alarm tests do not test to see whether multiple threads are woken
up if their timers have expired. That is, students can write a solution
that just wakes up the first thread on the sleep queue rather than
checking for additional threads. Of course, the next thread will be
woken up on the next tick. Also, this might be hard to test.

---
Way to test this: (from Godmar Back)

Thread A, with high priority, spins until 'ticks' changes, then calls
timer_sleep(X). Thread B, with lower priority, is then resumed; it
calls set_priority to make its priority equal to that of thread A,
then calls timer_sleep(X), all of that before the next clock interrupt
arrives.

On wakeup, each thread records its wake-up time and calls yield
immediately, forcing the scheduler to switch to the other
equal-priority thread. Both wake-up times must be the same (and match
the planned wake-up time).
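
A rough sketch of that test in the style of the existing alarm tests,
assuming the priority scheduler already preempts when a higher-priority
thread is created (the test function, thread body, and SLEEP_TICKS
value are hypothetical; thread_create(), thread_set_priority(),
thread_yield(), timer_ticks(), timer_sleep(), PRI_DEFAULT, and msg()
are the existing Pintos interfaces):

#include "tests/threads/tests.h"
#include "threads/thread.h"
#include "devices/timer.h"

#define SLEEP_TICKS 10

static int64_t wake_a, wake_b;

static void
thread_a_func (void *aux UNUSED)
{
  int64_t start = timer_ticks ();
  while (timer_ticks () == start)   /* Spin until 'ticks' changes. */
    continue;
  timer_sleep (SLEEP_TICKS);
  wake_a = timer_ticks ();          /* Record wake-up time. */
  thread_yield ();                  /* Switch to the other equal-priority thread. */
}

void
test_alarm_simultaneous_wakeup (void)
{
  /* The main thread plays the role of thread B.  Thread A has higher
     priority, so it preempts us immediately and runs until it sleeps. */
  thread_create ("A", PRI_DEFAULT + 1, thread_a_func, NULL);

  /* We resume here once A has called timer_sleep().  Match A's
     priority and sleep for the same interval, before the next tick. */
  thread_set_priority (PRI_DEFAULT + 1);
  timer_sleep (SLEEP_TICKS);
  wake_b = timer_ticks ();
  thread_yield ();

  msg ("same wake-up tick: %s", wake_a == wake_b ? "yes" : "no");
}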

PS:
I actually tested it and it's hard to pass with the current ips setting.
The bounds on how quickly a thread would need to be able to return after
sleep appear too tight. Need another idea.

From: "Godmar Back" <godmar@gmail.com>

For reasons I don't currently understand, some of our students seem
hesitant to include each thread in a second "all-threads" list and are
looking for ways to implement the advanced scheduler without one.

Currently, I believe, all tests for the mlfqs are such that all
threads are either ready or sleeping in timer_sleep(). This allows for
an incorrect implementation in which recent_cpu and priorities are
updated only for those threads that are on the alarm list or the ready
list.

The TODO item would be a test in which a thread blocks on a semaphore,
lock, or condition variable and has its recent_cpu decay to zero; the
test then checks that it is scheduled right after the unlock/up/signal.
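
A rough sketch of such a test (the test function, thread bodies, and
timing constants are hypothetical; sema_init()/sema_down()/sema_up(),
thread_create(), timer_sleep(), timer_ticks(), TIMER_FREQ,
thread_mlfqs, and msg() are the existing Pintos interfaces):

#include "tests/threads/tests.h"
#include "threads/thread.h"
#include "threads/synch.h"
#include "devices/timer.h"

static struct semaphore sema;
static volatile bool done;

static void
sleeper (void *aux UNUSED)
{
  /* Burn CPU first so recent_cpu is high, then block.  While blocked,
     a correct implementation decays recent_cpu toward zero; a broken
     one that only updates ready/sleeping threads leaves it high. */
  int64_t start = timer_ticks ();
  while (timer_ticks () < start + 5 * TIMER_FREQ)
    continue;
  sema_down (&sema);
  msg ("sleeper scheduled");        /* Should appear before "main resumed". */
  done = true;
}

static void
spinner (void *aux UNUSED)
{
  while (!done)                     /* Keep the CPU busy in the meantime. */
    thread_yield ();
}

void
test_mlfqs_block (void)
{
  ASSERT (thread_mlfqs);
  sema_init (&sema, 0);
  done = false;
  thread_create ("sleeper", PRI_DEFAULT, sleeper, NULL);
  thread_create ("spinner", PRI_DEFAULT, spinner, NULL);
  timer_sleep (30 * TIMER_FREQ);    /* Let the sleeper's recent_cpu decay. */
  sema_up (&sema);
  thread_yield ();                  /* A correct scheduler runs the sleeper now. */
  msg ("main resumed");
}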

From: "Godmar Back" <godmar@gmail.com>
Subject: set_priority & donation - a TODO item
To: "Ben Pfaff" <blp@cs.stanford.edu>
Date: Mon, 20 Feb 2006 22:20:26 -0500

Ben,

It seems that there are currently no tests that check the proper
behavior of thread_set_priority() when called by a thread that is
running under priority donation. The proper behavior, I assume, is to
temporarily drop the donation if the newly set priority is higher, and
to reassume the donation should the thread subsequently set its own
priority to a level lower than a still-active donation.

 - Godmar
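
A minimal sketch of that semantics, assuming a design that tracks the
thread's own base priority separately from the highest active donation
(the fields base_priority and donated_priority are hypothetical; struct
thread, thread_current(), and thread_yield() are from Pintos):

void
thread_set_priority (int new_priority)
{
  struct thread *t = thread_current ();

  t->base_priority = new_priority;

  /* The donation only matters while it exceeds the thread's own
     setting, so the effective priority is the maximum of the two. */
  t->priority = t->donated_priority > t->base_priority
                ? t->donated_priority
                : t->base_priority;

  /* If this lowered our effective priority, give way to any
     higher-priority ready thread. */
  thread_yield ();
}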

From: Godmar Back <godmar@gmail.com>
Subject: project 4 question/comment regarding caching inode data
To: Ben Pfaff <blp@cs.stanford.edu>
Date: Sat, 14 Jan 2006 15:59:33 -0500

Ben,

In section 6.3.3 in the P4 FAQ, you write:

"You can store a pointer to inode data in struct inode, if you want,"

Should you point out that if they indeed do that, they likely wouldn't
be able to support more than 64 open inodes systemwide at any given
point in time?

(This seems like a rather strong limitation; do your current tests
open more than 64 files?
It would also point to an obvious way to make the projects harder by
specifically disallowing that inode data be locked in memory during
the entire time an inode is kept open.)

 - Godmar

From: Godmar Back <godmar@gmail.com>
Subject: on caching in project 4
To: Ben Pfaff <blp@cs.stanford.edu>
Date: Mon, 9 Jan 2006 20:58:01 -0500

Here's an idea for future semesters.

I'm in the middle of project 4; I've started by implementing a buffer
cache and plugging it into the existing filesystem. Along the way I
was wondering how we could test the cache.

Maybe one could adopt a testing strategy similar to the one used in
project 1 for the MLFQS scheduler: add functions "get_cache_accesses()"
and "get_cache_hits()". Then create a version of pintos that creates
access traces for a to-be-determined workload. Run an off-line analysis
that would determine how many hits a perfect cache would have (MAX) and
how many, say, an LRU strategy would give (MIN). Then add a fudge factor
to account for different index strategies and test that the reported
number of cache hits/accesses is within (MIN, MAX) +/- fudge factor.
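
A rough sketch of what the user-level check could look like (the
syscalls cache_accesses() and cache_hits(), the run_workload() helper,
and the MIN/MAX/FUDGE numbers are all hypothetical; msg() and fail()
are the existing Pintos test library functions):

#include <syscall.h>
#include "tests/lib.h"
#include "tests/main.h"

/* Hypothetical additions: two new syscalls plus the workload driver. */
int cache_accesses (void);
int cache_hits (void);
void run_workload (void);

#define MIN_HITS 900           /* Off-line LRU result for the workload. */
#define MAX_HITS 1100          /* Off-line optimal result for the workload. */
#define FUDGE 50               /* Slack for different index/eviction strategies. */

void
test_main (void)
{
  int hits, accesses;

  run_workload ();             /* The agreed-on access trace. */
  hits = cache_hits ();
  accesses = cache_accesses ();

  msg ("accesses: %d, hits: %d", accesses, hits);
  if (hits < MIN_HITS - FUDGE || hits > MAX_HITS + FUDGE)
    fail ("hit count %d outside [%d, %d]",
          hits, MIN_HITS - FUDGE, MAX_HITS + FUDGE);
}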

(As an aside, I am curious why you chose to use a clock-style
algorithm rather than the more straightforward LRU for your buffer
cache implementation in your sample solution. Is there a reason for
that? I was curious to see if it made a difference, so I implemented
LRU for your cache implementation, ran the test workload of project 4,
and printed cache hits/accesses.
I found that for that workload, the clock-based algorithm performs
almost identically to LRU (within about 1%, but I ran
nondeterministically with QEMU). I then reduced the cache size to 32
blocks and found the same performance again, which raises the
suspicion that the test workload might not force any cache
replacement, so the eviction strategy doesn't matter.)

Godmar Back <godmar@gmail.com> writes:

> in your sample solution to P4, dir_reopen does not take any locks when
> changing a directory's open_cnt. This looks like a race condition to
> me, considering that dir_reopen is called from process_execute without
> any filesystem locks held.
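
A possible fix sketch, assuming the solution already has a global
filesystem lock (fs_lock here is hypothetical; dir_open(),
inode_reopen(), and the struct dir layout are from the Pintos base
code):

struct dir *
dir_reopen (struct dir *dir)
{
  struct dir *d;

  /* Serialize the open_cnt update against concurrent callers such as
     process_execute(). */
  lock_acquire (&fs_lock);
  d = dir_open (inode_reopen (dir->inode));
  lock_release (&fs_lock);
  return d;
}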

* Get rid of rox--causes more trouble than it's worth