Godmar says:

- In Project 2, we're missing tests that pass arguments to system calls
that span multiple pages, where some pages are mapped and some are not.
An implementation that only checks the first page, rather than all pages
that can be touched during a call to read()/write(), passes all tests.

- In Project 2, we're missing a test that would fail if an
implementation assumed that contiguous user virtual addresses are laid
out contiguously in physical memory. The loading code should ensure
that non-contiguous physical pages are allocated (at least for the
data segment).

- We need tests that check that illegal memory accesses lead to
process termination. I have written some and will add them. In P2,
obviously, these tests would only fail if the students broke this
functionality, since the page directory is initialized for them;
still, it would be good to have them.

- There does not appear to be a test that checks that they close all
fds on exit. Idea: add statistics and self-diagnostics code to
palloc.c and malloc.c. The self-diagnostics code could be used for
debugging; the statistics code would report how much kernel memory is
free. Add a system call "get_kernel_memory_information". User programs
could engage in a variety of activities and notice leaks by checking
the kernel memory statistics.

From: "Godmar Back" <godmar@gmail.com>
Subject: multiple threads waking up at same clock tick
To: "Ben Pfaff" <blp@cs.stanford.edu>
Date: Wed, 1 Mar 2006 08:14:47 -0500

Greg Benson points out another potential TODO item for P1.

----
One thing I recall:

The alarm tests do not check whether multiple threads are woken up
when their timers expire at the same tick. That is, students can write
a solution that just wakes up the first thread on the sleep queue
rather than checking for additional expired threads. Of course, the
next thread will be woken up on the next tick. Also, this might be
hard to test.

---
Way to test this (from Godmar Back):

Thread A, with high priority, spins until 'ticks' changes, then calls
timer_sleep(X). Thread B, with lower priority, is then resumed, calls
thread_set_priority() to make its priority equal to that of thread A,
then calls timer_sleep(X), all of that before the next clock interrupt
arrives.

On wakeup, each thread records its wake-up time and calls
thread_yield() immediately, forcing the scheduler to switch to the
other equal-priority thread. Both wake-up times must be the same (and
match the planned wake-up time).

PS:
I actually tested it, and it's hard to pass with the current ips
setting. The bounds on how quickly a thread would need to be able to
return after sleeping appear too tight. Need another idea.

From: "Godmar Back" <godmar@gmail.com>

For reasons I don't currently understand, some of our students seem
hesitant to include each thread in a second "all threads" list and are
looking for ways to implement the advanced scheduler without one.

Currently, I believe, all tests for the mlfqs are such that all
threads are either ready or sleeping in timer_sleep(). This allows an
incorrect implementation in which recent_cpu and priorities are
updated only for those threads that are on the alarm list or the
ready list.

The TODO item would be a test in which a thread blocks on a semaphore,
lock, or condition variable, has its recent_cpu decay to zero, and is
then checked to be scheduled right after the unlock/up/signal.

From: "Godmar Back" <godmar@gmail.com>
Subject: set_priority & donation - a TODO item
To: "Ben Pfaff" <blp@cs.stanford.edu>
Date: Mon, 20 Feb 2006 22:20:26 -0500

Ben,

It seems that there are currently no tests that check the proper
behavior of thread_set_priority() when called by a thread that is
running under priority donation. The proper behavior, I assume, is to
temporarily drop the donation if the newly set priority is higher, and
to reassume the donation should the thread subsequently set its own
priority to a level lower than a still-active donation.

 - Godmar

From: Godmar Back <godmar@gmail.com>
Subject: project 4 question/comment regarding caching inode data
To: Ben Pfaff <blp@cs.stanford.edu>
Date: Sat, 14 Jan 2006 15:59:33 -0500

Ben,

In section 6.3.3 of the P4 FAQ, you write:

"You can store a pointer to inode data in struct inode, if you want,"

Should you point out that if they indeed do that, they likely wouldn't
be able to support more than 64 open inodes system-wide at any given
point in time?

(This seems like a rather strong limitation; do your current tests
open more than 64 files? It would also point to an obvious way to make
the projects harder: specifically disallow that inode data be locked
in memory during the entire time an inode is kept open.)

 - Godmar
