-is made to read or write a block, check to see if it is stored in the
-cache, and if so, fetch it immediately from the cache without going to
-disk. (Otherwise, fetch the block from disk into cache, evicting an
-older entry if necessary.) You are limited to a cache no greater than
-64 sectors in size. Be sure to choose an intelligent cache
-replacement algorithm. Experiment to see what combination of accessed,
-dirty, and other information results in the best performance, as
-measured by the number of disk accesses. (For example, metadata is
-generally more valuable to cache than data.) Document your
-replacement algorithm in your design document.
-
-The provided file system code uses a ``bounce buffer'' in @struct{file}
-to translate the disk's sector-by-sector interface into the system call
-interface's byte-by-byte interface. It needs per-file buffers because,
-without them, there's no other good place to put sector
-data.@footnote{The stack is not a good place because large objects
-should not be allocated on the stack. A 512-byte sector is pushing the
-limit there.} As part of implementing the buffer cache, you should get
-rid of these bounce buffers. Instead, copy data into and out of sectors
-in the buffer cache directly. You will probably need some
-synchronization to prevent sectors from being evicted from the cache
-while you are using them.
-
-In addition to the basic file caching scheme, your implementation
-should also include the following enhancements:
-
-@table @b
-@item write-behind:
-Instead of always immediately writing modified data to disk, dirty
-blocks can be kept in the cache and written out sometime later. Your
-buffer cache should write behind whenever a block is evicted from the
-cache.
-
-@item read-ahead:
-Your buffer cache should automatically fetch the next block of a file
-into the cache when one block of a file is read, in case that block is
-about to be read.
-@end table
-
-For each of these three optimizations, design a file I/O workload that
-is likely to benefit from the enhancement, explain why you expect it
-to perform better than on the original file system implementation, and
-demonstrate the performance improvement.
+is made to read or write a block, check to see if it is in the
+cache, and if so, use the cached data without going to
+disk. Otherwise, fetch the block from disk into cache, evicting an
+older entry if necessary. You are limited to a cache no greater than 64
+sectors in size.
+
+Be sure to choose an intelligent cache replacement algorithm.
+Experiment to see what combination of accessed, dirty, and other
+information results in the best performance, as measured by the number
+of disk accesses. For example, metadata is generally more valuable to
+cache than data.
+
+You can keep a cached copy of the free map permanently in memory if you
+like. It doesn't have to count against the cache size.
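The cache described above (bounded table, hit check, fetch on miss, eviction with write-behind) can be sketched in C. This is a minimal illustration, not the provided Pintos code: the names (`cache_read`, `struct cache_entry`, and so on) are invented, a simulated in-memory "disk" stands in for the real block device, and eviction uses a plain second-chance clock that prefers to keep recently used blocks and writes dirty victims behind. A real implementation would also weigh metadata against data, as the text suggests, and would add the synchronization needed to keep a sector from being evicted while in use, which this sketch omits.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 512
#define CACHE_SIZE 64                 /* Hard limit from the assignment. */
#define DISK_BLOCKS 256               /* Size of the simulated disk. */

static uint8_t disk[DISK_BLOCKS][BLOCK_SIZE];  /* Fake backing store. */
static int disk_read_count, disk_write_count;  /* Measures disk accesses. */

struct cache_entry {
  int sector;                         /* Sector held, or -1 if slot free. */
  bool accessed, dirty;
  uint8_t data[BLOCK_SIZE];
};

static struct cache_entry cache[CACHE_SIZE];
static int clock_hand;

void cache_init(void) {
  for (int i = 0; i < CACHE_SIZE; i++)
    cache[i].sector = -1;
  clock_hand = disk_read_count = disk_write_count = 0;
}

/* Pick a victim with the clock (second-chance) algorithm: recently
   accessed entries get one pass, and a dirty victim is written back
   to disk before its slot is reused (write-behind on eviction). */
static struct cache_entry *evict(void) {
  for (;;) {
    struct cache_entry *e = &cache[clock_hand];
    clock_hand = (clock_hand + 1) % CACHE_SIZE;
    if (e->accessed) {
      e->accessed = false;            /* Second chance. */
    } else {
      if (e->dirty) {
        memcpy(disk[e->sector], e->data, BLOCK_SIZE);
        disk_write_count++;
      }
      return e;
    }
  }
}

/* Return the slot holding SECTOR, fetching from disk on a miss. */
static struct cache_entry *cache_get(int sector) {
  struct cache_entry *free_slot = NULL;
  for (int i = 0; i < CACHE_SIZE; i++) {
    if (cache[i].sector == sector)
      return &cache[i];               /* Hit: no disk access. */
    if (cache[i].sector == -1 && free_slot == NULL)
      free_slot = &cache[i];
  }
  struct cache_entry *e = free_slot != NULL ? free_slot : evict();
  memcpy(e->data, disk[sector], BLOCK_SIZE);
  disk_read_count++;
  e->sector = sector;
  e->dirty = false;
  return e;
}

/* Whole-sector reads and writes go through the cache with no bounce
   buffer: callers copy directly into or out of the cached data. */
void cache_read(int sector, uint8_t *buf) {
  struct cache_entry *e = cache_get(sector);
  e->accessed = true;
  memcpy(buf, e->data, BLOCK_SIZE);
}

void cache_write(int sector, const uint8_t *buf) {
  struct cache_entry *e = cache_get(sector);
  e->accessed = true;
  e->dirty = true;
  memcpy(e->data, buf, BLOCK_SIZE);
}
```

With `disk_read_count` and `disk_write_count` as the metric, repeated access to a hot sector costs one disk read total, and a dirty block costs no write until it is evicted, which is the behavior the replacement-policy experiments are meant to measure.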