- In Project 2, we're missing tests that pass arguments to system calls
  spanning multiple pages, where some pages are mapped and some are not.
  An implementation that checks only the first page, rather than every
  page that can be touched during a call to read()/write(), passes all
  tests.

- Need some tests that check that illegal accesses lead to process
  termination.  I have written some, will add them.  In P2, obviously,
  this would require that the students break this functionality, since
  the page directory is initialized for them; still, it would be good
  to have such tests.

- There does not appear to be a test that checks that processes close
  all their fds on exit.  Idea: add statistics & self-diagnostics code
  to palloc.c and malloc.c.  The self-diagnostics code could be used
  for debugging; the statistics code would report how much kernel
  memory is free.  Add a system call "get_kernel_memory_information".
  User programs could engage in a variety of activities and notice
  leaks by checking the kernel memory statistics.

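The leak check a user program would perform under this idea can be sketched as follows; get_kernel_memory_information() is the proposed (not yet existing) syscall, stubbed out here with a counter so the sketch is self-contained.

```c
#include <assert.h>

/* Stub standing in for the proposed get_kernel_memory_information()
   system call; in a real test it would report free kernel pages. */
static long free_kernel_pages = 1024;
static long get_kernel_memory_information(void) {
  return free_kernel_pages;
}

/* Returns 1 if ACTIVITY leaked kernel memory, judged by comparing
   the free-page count before and after it runs. */
static int leaks_kernel_memory(void (*activity)(void)) {
  long before = get_kernel_memory_information();
  activity();
  return get_kernel_memory_information() < before;
}

/* Fake activities for the sketch: one leaks a page, one doesn't. */
static void clean_activity(void) { }
static void leaky_activity(void) { free_kernel_pages -= 1; }
```

A real test would substitute activities like opening files and exiting without closing them, then fail the implementation if the free-page count drops.
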
From: "Godmar Back" <godmar@gmail.com>
Subject: set_priority & donation - a TODO item
To: "Ben Pfaff" <blp@cs.stanford.edu>
Date: Mon, 20 Feb 2006 22:20:26 -0500

it seems that there are currently no tests that check the proper
behavior of thread_set_priority() when called by a thread that is
running under priority donation.  The proper behavior, I assume, is to
temporarily drop the donation if the set priority is higher, and to
reassume the donation should the thread subsequently set its own
priority again to a level that's lower than a still-active donation.

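Under that assumed semantics, a thread's effective priority is just the maximum of its base priority and its highest active donation; a minimal sketch (function and variable names invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Effective priority under the semantics described above: a base
   priority set below a still-active donation never hides it, but a
   base priority set above all donations takes effect immediately. */
static int effective_priority(int base, const int *donations, size_t n) {
  int eff = base;
  for (size_t i = 0; i < n; i++)
    if (donations[i] > eff)
      eff = donations[i];
  return eff;
}
```

A test could have a donor lock-holder call thread_set_priority() with values above and below the donation and check the scheduler honors exactly this max.
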
From: Godmar Back <godmar@gmail.com>
Subject: on caching in project 4
To: Ben Pfaff <blp@cs.stanford.edu>
Date: Mon, 9 Jan 2006 20:58:01 -0500

here's an idea for future semesters.

I'm in the middle of project 4; I've started by implementing a buffer
cache and plugging it into the existing filesystem.  Along the way I
was wondering how we could test the cache.

Maybe one could adopt a testing strategy similar to the one used in
project 1 for the MLFQS scheduler: add a function "get_cache_accesses()"
and a function "get_cache_hits()".  Then create a version of pintos
that creates access traces for a to-be-determined workload.  Run an
off-line analysis to determine how many hits a perfect cache would
have (MAX) and how many, say, an LRU strategy would give (MIN).  Then
add a fudge factor to account for different index strategies, and test
that the reported ratio of cache hits to accesses lies within
[MIN, MAX] +/- the fudge factor.

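The final check reduces to a one-line predicate; a sketch, where get_cache_hits()/get_cache_accesses() are the hypothetical syscalls named above and min_rate/max_rate come from the off-line LRU and perfect-cache analyses:

```c
#include <assert.h>

/* Accept a hit rate that falls inside the analytically determined
   [min_rate, max_rate] band, widened by a fudge factor to absorb
   differences in index strategy. */
static int hit_rate_acceptable(long hits, long accesses,
                               double min_rate, double max_rate,
                               double fudge) {
  double rate = (double) hits / (double) accesses;
  return rate >= min_rate - fudge && rate <= max_rate + fudge;
}
```

Rejecting rates above MAX + fudge also catches implementations that miscount hits rather than merely caching poorly.
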
(As an aside, I am curious why you chose to use a clock-style
algorithm rather than the more straightforward LRU for the buffer
cache in your sample solution.  Is there a reason for that?  I was
curious to see if it made a difference, so I implemented LRU for your
cache implementation, ran the test workload of project 4, and printed
cache hits/accesses.  I found that for that workload, the clock-based
algorithm performs almost identically to LRU (within about 1%, but I
ran nondeterministically with QEMU).  I then reduced the cache size to
32 blocks and found the same performance again, which raises the
suspicion that the test workload might not force any cache
replacement, so the eviction strategy doesn't matter.)

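For reference, the clock (second-chance) replacement policy being compared with LRU above can be sketched over a toy cache like this; the cache size and counters are invented for illustration:

```c
#include <assert.h>

/* Minimal clock (second-chance) replacement over a tiny cache.
   Block -1 marks an empty slot. */
#define SLOTS 2
static struct { int block; int ref; } cache[SLOTS] = {
  { -1, 0 }, { -1, 0 }
};
static int hand;
static long hits, accesses;

static void cache_access(int block) {
  accesses++;
  for (int i = 0; i < SLOTS; i++)
    if (cache[i].block == block) {
      cache[i].ref = 1;              /* second chance on next sweep */
      hits++;
      return;
    }
  /* Miss: advance the hand, clearing reference bits, until a slot
     with ref == 0 turns up; evict it. */
  while (cache[hand].ref) {
    cache[hand].ref = 0;
    hand = (hand + 1) % SLOTS;
  }
  cache[hand].block = block;
  cache[hand].ref = 1;
  hand = (hand + 1) % SLOTS;
}
```

Clock approximates LRU with a single reference bit per slot instead of full recency ordering, which is why the two behave nearly identically on workloads without much replacement pressure.
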
* Get rid of rox--causes more trouble than it's worth.

* Reconsider command line arg style--confuses everyone.

* Finish writing tour.

* Get rid of mmap syscall, add sbrk.

* page-linear and page-shuffle VM tests do not use enough memory to
  force eviction.  Should increase their memory consumption.

* Add FS persistence test(s).

* process_death test needs improvement.

* Improve automatic interpretation of exception messages.

- Mark read-only pages as actually read-only in the page table.  Or,
  since this was consistently rated as the easiest project by the
  students, require them to do it.

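Making a page read-only in the x86 page table just means leaving the writable bit out of its PTE; a sketch of the flag arithmetic (flag values per the IA-32 spec, matching the PTE_P/PTE_W constants in Pintos's threads/pte.h):

```c
#include <assert.h>
#include <stdint.h>

#define PTE_P 0x1   /* present */
#define PTE_W 0x2   /* writable; clear it for read-only pages */

/* Build a 32-bit PTE mapping the 4 kB page at physical address
   PADDR, read-only unless WRITABLE is set. */
static uint32_t make_pte(uint32_t paddr, int writable) {
  uint32_t pte = (paddr & ~0xfffu) | PTE_P;
  if (writable)
    pte |= PTE_W;
  return pte;
}
```

With PTE_W clear, a user-mode store to the page traps to the page-fault handler, which is exactly the behavior a test would look for.
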
- Don't provide a per-process pagedir implementation, but only a
  single-process implementation, and require students to implement
  the separation?  This project was rated as the easiest, after all.
  Alternately, we could just remove the synchronization on pid
  selection and check that students fix it.

- Need a better way to measure the performance improvement of the
  buffer cache.  Some students reported that their system was slower
  with the cache--likely, Bochs doesn't simulate a disk with a
  realistic speed.

- Add "Digging Deeper" sections that describe the nitty-gritty x86
  details for the benefit of those interested.

- Add explanations of what "real" OSes do to give students some
  perspective.

. Low-level x86 stuff, like paged page tables.

. Specifics on how to implement sbrk, malloc.

. opendir/readdir/closedir

. everything needed for getcwd()

To add partition support:

- Find four partition types that are more or less unused and choose to
  use them for Pintos.  (This is implemented.)

- Bootloader reads the partition tables of all BIOS devices to find
  the first that has the "Pintos kernel" partition type.  (This is
  implemented.)  Ideally the bootloader would make sure there is
  exactly one such partition, but I didn't implement that yet.

- Bootloader reads the kernel into memory at 1 MB using BIOS calls.
  (This is implemented.)

- Kernel arguments have to go into a separate sector because the
  bootloader is otherwise too big to fit now?  (I don't recall whether
  I did anything about this.)

- At boot, the kernel also scans the partition tables of all the disks
  it can find, looking for the ones with the four Pintos partition
  types (perhaps not all exist).  After that, it makes them available
  to the rest of the kernel (and doesn't allow access to other
  devices, for safety).

- "pintos" and "pintos-mkdisk" need to write a partition table to the
  disks that they create.  "pintos-mkdisk" will need to take a new
  parameter specifying the type.  (I might have partially implemented
  this; I don't remember.)

- "pintos" should insist on finding a partition header on disks handed
  to it.

- Need some way for "pintos" to assemble multiple disks or partitions
  into a single image that can be copied directly to a USB block
  device.  (I don't know whether I came up with a good solution yet,
  or whether I implemented any of it.)

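The partition-table scans described above (in both the bootloader and the kernel) boil down to walking the four 16-byte MBR entries in each drive's boot sector; a host-side sketch, where the 0x20 type byte is only a placeholder for whichever "Pintos kernel" type value actually gets chosen:

```c
#include <assert.h>
#include <stdint.h>

/* MBR layout: four 16-byte partition entries start at offset 446 of
   the 512-byte boot sector; the partition type byte sits at offset 4
   within each entry. */
#define PART_TABLE_OFS  446
#define PART_ENTRY_SIZE 16
#define PART_TYPE_OFS   4

/* Returns the index (0..3) of the first partition entry with the
   given type byte, or -1 if none matches. */
static int find_partition(const uint8_t sector[512], uint8_t type) {
  for (int i = 0; i < 4; i++)
    if (sector[PART_TABLE_OFS + i * PART_ENTRY_SIZE + PART_TYPE_OFS]
        == type)
      return i;
  return -1;
}
```

The "exactly one such partition" sanity check mentioned above would amount to continuing the scan after the first match and failing if a second one turns up.
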
To add USB support:

- Needs to be able to scan the PCI bus for a UHCI controller.  (I
  implemented this partially.)

- May want to be able to initialize USB controllers over CardBus
  bridges.  I don't know whether this requires additional work or if
  it's useful enough to warrant the extra work.  (It's of special
  interest for me because I have a laptop that only has USB via
  CardBus.)

- There are many protocol layers involved: SCSI over USB-Mass Storage
  over USB over UHCI over PCI.  (I may be forgetting one.)  I don't
  know yet whether it's best to separate the layers or to merge (some
  of) them.  I think that a simple and clean organization should be a
  goal.

- VMware can likely be used for testing because it can expose host USB
  devices as guest USB devices.  This is safer and more convenient
  than using real hardware for testing.

- Should test with a variety of USB keychain devices, because there
  seems to be wide variation among them, especially in the SCSI
  protocols they support.  Should try to use a "lowest common
  denominator" SCSI protocol, if any such thing really exists.

- Might want to add a feature whereby kernel arguments can be given
  interactively, rather than passed on-disk.  Needs some thought.