1 @node Project 3--Virtual Memory, Project 4--File Systems, Project 2--User Programs, Top
2 @chapter Project 3: Virtual Memory
4 By now you should be familiar with the inner workings of Pintos.
You've already come a long way: your OS can handle multiple
6 threads of execution with proper synchronization, and can load
7 multiple user programs at once. However, when loading user programs,
8 your OS is limited by how much main memory the simulated machine has.
9 In this assignment, you will remove that limitation.
11 You will be using the @file{vm} directory for this project. The
12 @file{vm} directory contains only the @file{Makefile}s. The only
13 change from @file{userprog} is that this new @file{Makefile} turns on
the setting @option{-DVM}. All the code you write will either go into
newly written files (e.g.@: if you choose to implement your paging code
in its own source files) or will be modifications to pre-existing code
(e.g.@: you will change the behavior of @file{process.c} significantly).
20 There are only a couple of source files you will probably be
21 encountering for the first time:
@table @file
@item devices/disk.h
Provides access to the physical disk, abstracting away the rather
awful IDE interface.
@end table
30 You will be building this assignment on the last one. It will benefit
31 you to get your project 2 in good working order before this assignment
32 so those bugs don't keep haunting you.
34 All the test programs from the previous project should also work with
35 this project. You should also write programs to test the new features
36 introduced in this project.
38 You will continue to handle Pintos disks and file systems the same way
39 you did in the previous assignment (@pxref{Using the File System}).
41 Your submission should define @code{THREAD_JOIN_IMPLEMENTED} in
42 @file{constants.h} (@pxref{Conditional Compilation}).
47 * Disk as Backing Store::
48 * Memory Mapped Files::
50 * Problem 3-1 Page Table Management::
51 * Problem 3-2 Paging To and From Disk::
52 * Problem 3-3 Memory Mapped Files::
53 * Virtual Memory FAQ::
57 @section A Word about Design
Note that, in addition to getting virtual
60 memory working, this assignment is also meant to be an open-ended
61 design problem. We will expect you to come up with a design that
62 makes sense. You will have the freedom to choose how to handle page
63 faults, how to organize the swap disk, how to implement paging, etc.
64 In each case, we will expect you to provide a defensible justification
65 in your design documentation as to why your choices are reasonable.
66 You should evaluate your design on all the available criteria: speed
67 of handling a page fault, space overhead in memory, minimizing the
68 number of page faults, simplicity, etc.
70 In keeping with this, you will find that we are going to say as little
71 as possible about how to do things. Instead we will focus on what end
72 functionality we require your OS to support.
77 For the last assignment, whenever a context switch occurred, the new
78 process would install its own page table into the machine. The page
79 table contained all the virtual-to-physical translations for the
80 process. Whenever the processor needed to look up a translation, it
81 consulted the page table. As long as the process only accessed
memory that it owned, all was well. If the process accessed
83 memory it didn't own, it ``page faulted'' and @func{page_fault}
84 terminated the process.
86 When we implement virtual memory, the rules have to change. A page
87 fault is no longer necessarily an error, since it might only indicate
88 that the page must be brought in from a disk file or from swap. You
will have to implement a more sophisticated page fault handler to
handle these cases.
92 On the 80@var{x}86, the page table format is fixed by hardware. We
93 have provided code for managing page tables for you to use in
94 @file{userprog/pagedir.c}. The functions in there should provide an
95 abstract interface to all the page table functionality that you need
96 to complete the project. However, you may still find it worthwhile to
97 understand a little about the hardware page table format, so we'll go
into a little detail about that in this section.
100 The top-level paging data structure is a 4 kB page called the ``page
101 directory'' (PD) arranged as an array of 1,024 32-bit page directory
102 entries (PDEs), each of which represents 4 MB of virtual memory. Each
103 PDE may point to the physical address of another 4 kB page called a
104 ``page table'' (PT) arranged in the same fashion as an array of 1,024
105 32-bit page table entries (PTEs), each of which translates a single 4
106 kB virtual page into physical memory.
108 Thus, translation of a virtual address into a physical address follows
109 the three-step process illustrated in the diagram
110 below:@footnote{Actually, virtual to physical translation on the
111 80@var{x}86 architecture happens via an intermediate ``linear
address,'' but Pintos (and most other 80@var{x}86 OSes) sets up the CPU
so that linear and virtual addresses are one and the same, so you
114 can effectively ignore this CPU feature.}
@enumerate 1
@item
The top 10 bits of the virtual address (bits 22:32) are used to index
119 into the page directory. If the PDE is marked ``present,'' the
120 physical address of a page table is read from the PDE thus obtained.
121 If the PDE is marked ``not present'' then a page fault occurs.
@item
The next 10 bits of the virtual address (bits 12:22) are used to index
125 into the page table. If the PTE is marked ``present,'' the physical
126 address of a data page is read from the PTE thus obtained. If the PTE
127 is marked ``not present'' then a page fault occurs.
@item
The bottom 12 bits of the virtual address (bits 0:12) are added to the
data page's physical base address, producing the final physical
address.
@end enumerate

@example
 31                  22 21                  12 11                    0
+----------------------+----------------------+----------------------+
| Page Directory Index |   Page Table Index   |      Page Offset     |
+----------------------+----------------------+----------------------+
           |                      |                       |
           v                      v                       v
    Page Directory            Page Table                Data Page
  (1,024 PDEs; the         (1,024 PTEs; the         (4 kB; the offset
  selected PDE holds       selected PTE holds       selects the byte
  the physical address     the physical address     within the page,
  of a page table)         of a data page)          yielding the final
                                                    physical address)
@end example
168 Header @file{threads/mmu.h} has useful functions for various
169 operations on virtual addresses. You should look over the header
170 yourself, but its most important functions include these:
@table @code
@item pd_no(@var{va})
174 Returns the page directory index in virtual address @var{va}.
176 @item pt_no(@var{va})
177 Returns the page table index in virtual address @var{va}.
179 @item pg_ofs(@var{va})
180 Returns the page offset in virtual address @var{va}.
182 @item pg_round_down(@var{va})
183 Returns @var{va} rounded down to the nearest page boundary, that is,
184 @var{va} but with its page offset set to 0.
186 @item pg_round_up(@var{va})
Returns @var{va} rounded up to the nearest page boundary.
@end table
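
For example, a page fault handler will typically need to know which
page a faulting address falls in. Below is a minimal sketch using the
functions above; the function name @code{show_translation} and the use
of @func{printf} are just for illustration.

@example
#include <stdio.h>
#include "threads/mmu.h"

/* Prints how the virtual address VA splits into a page directory
   index, a page table index, and a page offset, and where the
   containing page starts. */
static void
show_translation (const void *va)
@{
  printf ("va %p: PD index %zu, PT index %zu, offset %zu\n",
          va, (size_t) pd_no (va), (size_t) pt_no (va),
          (size_t) pg_ofs (va));
  printf ("containing page starts at %p\n", pg_round_down (va));
@}
@end example
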
190 @node Disk as Backing Store
191 @section Disk as Backing Store
193 In VM systems, since memory is less plentiful than disk, you will
194 effectively use memory as a cache for disk. Looking at it from
195 another angle, you will use disk as a backing store for memory. This
196 provides the abstraction of an (almost) unlimited virtual memory size.
197 Part of your task in this project is to do this, with the additional
198 constraint that your performance should be close to that provided by
199 physical memory. You will use the page tables' ``dirty'' bits to
200 denote whether pages need to be written back to disk when they're
201 evicted from main memory and the ``accessed'' bit for page replacement
algorithms. Whenever the hardware writes to a page, it sets that page's
dirty bit, and whenever it reads or writes a page, it sets its accessed bit.
205 As with any caching system, performance depends on the policy used to
206 decide which things are kept in memory and which are only stored on
207 disk. On a page fault, the kernel must decide which page to replace.
208 Ideally, it will throw out a page that will not be referenced for a
209 long time, keeping in memory those pages that are soon to be
210 referenced. Another consideration is that if the replaced page has
211 been modified, the page must be first saved to disk before the needed
212 page can be brought in. Many virtual memory systems avoid this extra
213 overhead by writing modified pages to disk in advance, so that later
214 page faults can be completed more quickly (but you do not have to
215 implement this optimization).
217 @node Memory Mapped Files
218 @section Memory Mapped Files
220 The traditional way to access the file system is via @code{read} and
221 @code{write} system calls, but that requires an extra level of copying
222 between the kernel and the user level. A secondary interface is
223 simply to ``map'' the file into the virtual address space. The
224 program can then use load and store instructions directly on the file
225 data. (An alternative way of viewing the file system is as ``durable
226 memory.'' Files just store data structures. If you access data
227 structures in memory using load and store instructions, why not access
228 data structures in files the same way?)
230 Memory mapped files are typically implemented using system calls. One
231 system call maps the file to a particular part of the address space.
232 For example, one might conceptually map the file @file{foo}, which is
1,000 bytes long, starting at address 5000. Assuming that nothing else is already
235 at virtual addresses 5000@dots{}6000, any memory accesses to these
236 locations will access the corresponding bytes of @file{foo}.
238 A consequence of memory mapped files is that address spaces are
239 sparsely populated with lots of segments, one for each memory mapped
240 file (plus one each for code, data, and stack). You will implement
241 memory mapped files in problem 3-3. You should
242 design your solutions to problems 3-1 and 3-2 to anticipate this.
@section Stack Growth

In project 2, the stack was a single page at the top of the user
248 virtual address space. The stack's location does not change in this
249 project, but your kernel should allocate additional pages to the stack
250 on demand. That is, if the stack grows past its current bottom, the
251 system should allocate additional pages for the stack as necessary
(unless those pages are unavailable because they are in use by another
segment).
255 It is impossible to predict how large the stack will grow at compile
time, so we must allocate pages as necessary. You should allocate
additional pages only for accesses that ``appear'' to be stack accesses. You must
258 devise a heuristic that attempts to distinguish stack accesses from
259 other accesses. Document and explain the heuristic in your
260 design documentation.
262 The first stack page need not be loaded lazily. You can initialize it
263 with the command line at load time, with no need to wait for it to be
264 faulted in. Even if you did wait, the very first instruction in the
265 user program is likely to be one that faults in the page.
The user program's current stack pointer is in the @struct{intr_frame}'s
@code{esp} member.
275 Only buggy user programs write to memory within the stack but below the
stack pointer. This is because more advanced OSes may interrupt a
process at any time to deliver a ``signal,'' which uses the stack.
280 The 80@var{x}86 @code{push} instruction may cause a page fault 4 bytes
281 below the stack pointer, because it checks access permissions before it
282 adjusts the stack pointer. (Otherwise, the instruction would not be
283 restartable in a straightforward fashion.)
286 Similarly, the 80@var{x}86 @code{pusha} instruction, which pushes all 32
287 bytes of the 8 general-purpose registers at once, may cause a page fault
288 32 bytes below the stack pointer.
Most OSes impose some sort of limit on the stack size. Sometimes it is
user-adjustable.
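
For example, one defensible heuristic is to treat a fault as a stack
access only if the faulting address is no more than 32 bytes below the
saved user stack pointer (allowing for @code{pusha}) and lies within a
fixed maximum stack size below @code{PHYS_BASE}. Below is a minimal
sketch; the helper name and the 8 MB limit are chosen purely for
illustration.

@example
#include <stdbool.h>
#include <stdint.h>
#include "threads/mmu.h"                /* PHYS_BASE. */

#define STACK_MAX (8 * 1024 * 1024)     /* Example limit: 8 MB. */

/* Returns true if FAULT_ADDR looks like a stack access, given the
   user stack pointer ESP saved in the interrupt frame. */
static bool
looks_like_stack_access (const void *fault_addr, const void *esp)
@{
  const uint8_t *addr = fault_addr;
  const uint8_t *sp = esp;
  const uint8_t *top = (uint8_t *) PHYS_BASE;

  return addr >= sp - 32                /* Covers push (4) and pusha (32). */
         && addr < top                  /* Must be a user address... */
         && top - addr <= STACK_MAX;    /* ...within the stack limit. */
@}
@end example
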
295 @node Problem 3-1 Page Table Management
296 @section Problem 3-1: Page Table Management
298 Implement page directory and page table management to support virtual
memory. You will need data structures to accomplish the following
tasks:

@itemize @bullet
@item
Some way of translating in software from virtual page frames to
physical page frames. Consider using a hash table (@pxref{Hash
Table}).
308 It is possible to do this translation without adding a new data
309 structure, by modifying the code in @file{userprog/pagedir.c}. However,
310 if you do that you'll need to carefully study and understand section 3.7
in @bibref{IA32-v3}, and in practice it is probably easier to add a new
data structure.

@item
Some way of finding a page on disk if it is not in memory. You won't
need this data structure until problem 3-2, but planning ahead is a
good idea.
319 You can generalize the virtual-to-physical page table, so that it allows
320 you to locate a page wherever it is in physical memory or on disk, or
321 you can make this a separate table.
@item
Some way of translating from physical page frames back to virtual page
frames, so that when you evict a physical page from its frame, you can
invalidate its translation(s).
@end itemize
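
As a concrete starting point, the sketch below shows one possible shape
for these structures: a per-process hash of virtual-page entries keyed
by user virtual address, plus a global table of physical frames. All of
the type and field names are made up for illustration; the hash table
itself is the one in @file{lib/kernel/hash.h}, described in the FAQ
below.

@example
#include <hash.h>
#include <list.h>
#include <stdbool.h>

struct thread;                  /* Forward declaration. */
struct frame;

/* Where a virtual page's contents currently live. */
enum page_location @{ PAGE_FRAME, PAGE_FILE, PAGE_SWAP, PAGE_ZERO @};

/* Supplemental page table entry, one per user virtual page,
   kept in a per-process hash keyed by UADDR. */
struct vm_page
  @{
    void *uaddr;                /* User virtual address (page-aligned). */
    bool writable;              /* Is the page writable? */
    enum page_location where;   /* Frame, executable file, swap, or zero. */
    struct frame *frame;        /* Physical frame, if resident. */
    hash_elem elem;             /* Element in the process's page hash. */
  @};

/* Frame table entry, one per physical page in the user pool, used
   to pick eviction victims and invalidate old translations. */
struct frame
  @{
    void *kaddr;                /* Kernel virtual address of the frame. */
    struct vm_page *page;       /* Virtual page stored here, if any. */
    struct thread *owner;       /* Process that owns that page. */
    struct list_elem elem;      /* Element in the global frame list. */
  @};
@end example
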
329 The page fault handler, @func{page_fault} in
330 @file{threads/exception.c}, needs to do roughly the following:
@enumerate 1
@item
Locate the page backing the virtual
335 address that faulted. It might be in the file system, in swap,
336 or it might be an invalid virtual address.
337 If you implement sharing, it might even
already be in physical memory and just not set up in the page table.
340 If the virtual address is invalid, that is, if there's nothing
341 assigned to go there, or if the virtual address is above
342 @code{PHYS_BASE}, meaning that it belongs to the kernel instead of the
343 user, then the process's memory access must be disallowed. You should
terminate the process at this point, being sure to free all of its
resources.

@item
If the page is not in physical memory, fetch it by appropriate means.
349 If necessary to make room, first evict some other page from memory.
350 (When you do that you need to first remove references to the page from
351 any page table that refers to it.)
@item
Point the page table entry for the faulting virtual address to the
physical page. You can use the functions in @file{userprog/pagedir.c}.
@end enumerate
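
In code, the handler might be organized roughly as sketched below. The
helpers (@code{page_lookup}, @code{frame_alloc}, @code{page_fill}, and
@code{process_terminate}) are hypothetical placeholders for code you
would write, not existing Pintos functions; @func{pagedir_set_page} and
@func{thread_current} do exist.

@example
/* Rough outline of handling a user page fault at FAULT_ADDR. */
static void
handle_user_fault (void *fault_addr)
@{
  void *upage = pg_round_down (fault_addr);
  struct vm_page *p = page_lookup (thread_current (), upage);

  if (p == NULL || fault_addr >= PHYS_BASE)
    @{
      /* Nothing is supposed to be at this address, or it is a
         kernel address: kill the process and free its resources.
         (process_terminate does not return.) */
      process_terminate ();
    @}

  if (p->frame == NULL)
    @{
      /* Not resident: get a frame (evicting another page if
         necessary) and fill it from a file, from swap, or with
         zeros, according to the supplemental page table. */
      p->frame = frame_alloc (p);
      page_fill (p);
    @}

  /* Install the translation so the faulting access can be retried. */
  pagedir_set_page (thread_current ()->pagedir, upage,
                    p->frame->kaddr, p->writable);
@}
@end example
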
358 You'll need to modify the ELF loader in @file{userprog/process.c} to
359 do page table management according to your new design. As supplied,
360 it reads all the process's pages from disk and initializes the page
361 tables for them at the same time. For testing purposes, you'll
362 probably want to leave the code that reads the pages from disk, but
363 use your new page table management code to construct the page tables
364 only as page faults occur for them.
366 You should use the @func{palloc_get_page} function to get the page
367 frames that you use for storing user virtual pages. Be sure to pass
368 the @code{PAL_USER} flag to this function when you do so, because that
369 allocates pages from a ``user pool'' separate from the ``kernel pool''
370 that other calls to @func{palloc_get_page} make.
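
For example, a thin wrapper over @func{palloc_get_page} might look like
the sketch below; @code{evict_some_frame} stands in for the eviction
path you will design in problem 3-2 and is not an existing function.

@example
#include "threads/palloc.h"

static void evict_some_frame (void);    /* Hypothetical; see problem 3-2. */

/* Obtains a free frame from the user pool, evicting some page
   first if the pool is exhausted. */
static void *
frame_get (void)
@{
  void *kpage = palloc_get_page (PAL_USER);
  if (kpage == NULL)
    @{
      evict_some_frame ();              /* Write a victim out and free it. */
      kpage = palloc_get_page (PAL_USER);
    @}
  return kpage;
@}
@end example
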
372 There are many possible ways to implement virtual memory. The above
373 is simply an outline of our suggested implementation.
375 @node Problem 3-2 Paging To and From Disk
376 @section Problem 3-2: Paging To and From Disk
378 Implement paging to and from files and the swap disk. You may use the
disk on the @code{hd1:1} interface as the swap disk, using the disk
interface prototyped in @file{devices/disk.h}. From the @file{vm/build}
381 directory, use the command @code{pintos make-disk swap.dsk @var{n}} to
382 create an @var{n} MB swap disk named @file{swap.dsk}. Afterward,
@file{swap.dsk} will automatically be attached when you run
@command{pintos}.
386 You will need routines to move a page from memory to disk and from
387 disk to memory, where ``disk'' is either a file or the swap disk. If
388 you do everything correctly, your VM should still work when you
389 implement your own file system for the next assignment.
391 You will need a way to track pages which are used by a process but
392 which are not in physical memory, to fully handle page faults. Pages
393 that you write to swap should not be constrained to be in sequential
394 order. You will also need a way to track all of the physical memory
395 pages, to find an unused one when needed, or to evict a page
396 when memory is needed but no empty pages are available. The data
structures that you designed for problem 3-1 should do most of the work for
you.
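
One common arrangement, sketched below, treats the swap disk as an
array of page-sized slots and tracks free slots with a bitmap from
@file{lib/kernel/bitmap.h}. The function and variable names are
illustrative, and error handling (e.g.@: a full swap disk) is omitted.

@example
#include <bitmap.h>
#include <stdint.h>
#include "devices/disk.h"
#include "threads/mmu.h"                /* PGSIZE. */

#define SECTORS_PER_PAGE (PGSIZE / DISK_SECTOR_SIZE)

static struct disk *swap_disk;          /* The hd1:1 disk. */
static struct bitmap *swap_map;         /* One bit per page-sized slot. */

/* Finds the swap disk and creates its free-slot bitmap. */
static void
swap_init (void)
@{
  swap_disk = disk_get (1, 1);
  swap_map = bitmap_create (disk_size (swap_disk) / SECTORS_PER_PAGE);
@}

/* Writes the page at KPAGE into a free swap slot and returns the
   slot number for a later swap_in(). */
static size_t
swap_out (const void *kpage)
@{
  size_t slot = bitmap_scan_and_flip (swap_map, 0, 1, false);
  for (int i = 0; i < SECTORS_PER_PAGE; i++)
    disk_write (swap_disk, slot * SECTORS_PER_PAGE + i,
                (const uint8_t *) kpage + i * DISK_SECTOR_SIZE);
  return slot;
@}

/* Reads slot SLOT back into KPAGE and marks the slot free again. */
static void
swap_in (size_t slot, void *kpage)
@{
  for (int i = 0; i < SECTORS_PER_PAGE; i++)
    disk_read (swap_disk, slot * SECTORS_PER_PAGE + i,
               (uint8_t *) kpage + i * DISK_SECTOR_SIZE);
  bitmap_reset (swap_map, slot);
@}
@end example
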
400 You will need a page replacement algorithm. The hardware sets the
401 accessed and dirty bits when it accesses memory. You can gain access
402 to this information using the functions prototyped in
403 @file{userprog/pagedir.h}. You should be able to take advantage of
404 this information to implement some algorithm which attempts to achieve
LRU-type behavior. We expect your algorithm to perform at least as
406 well as a reasonable implementation of the second-chance (clock)
407 algorithm. You will need to show in your test cases the value of your
408 page replacement algorithm by demonstrating for some workload that it
409 pages less frequently using your algorithm than using some inferior
410 page replacement policy. The canonical example of a poor page
411 replacement policy is random replacement.
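
For example, a second-chance (clock) sweep over the frame table might
use @func{pagedir_is_accessed} and @func{pagedir_set_accessed} roughly
as sketched below; the frame table, its fields, and
@code{next_frame_in_clock_order} are the hypothetical structures from
problem 3-1, not existing Pintos code.

@example
#include <stdint.h>
#include "threads/thread.h"
#include "userprog/pagedir.h"

/* Picks an eviction victim: sweep the frames in circular order,
   clearing the accessed bit of recently used pages (their ``second
   chance'') and evicting the first page not recently used. */
static struct frame *
clock_pick_victim (void)
@{
  for (;;)
    @{
      struct frame *f = next_frame_in_clock_order ();
      uint32_t *pd = f->owner->pagedir;

      if (pagedir_is_accessed (pd, f->page->uaddr))
        pagedir_set_accessed (pd, f->page->uaddr, false);
      else
        return f;               /* Not referenced recently: evict it. */
    @}
@}
@end example
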
413 You must write your code so that we can choose a page replacement policy
414 at compile time. By default, the LRU-like algorithm must be in effect,
415 but we must be able to choose random replacement by inserting the line
416 @code{#define RANDOM_REPLACEMENT 1} in @file{constants.h}.
417 @xref{Conditional Compilation}, for details.
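
For example, the point in your code that picks a victim frame might
read as follows (both helper functions here are hypothetical):

@example
#ifdef RANDOM_REPLACEMENT
  struct frame *victim = pick_random_frame ();
#else
  struct frame *victim = clock_pick_victim ();  /* LRU-approximating default. */
#endif
@end example
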
419 Since you will already be paging from disk, you should implement a
420 ``lazy'' loading scheme for new processes. When a process is created,
421 it will not run immediately. Therefore, it doesn't make sense to load
422 all its code, data, and stack into memory when the process is created,
423 since it might incur additional disk accesses to do so (if it gets
424 paged out before it runs). When loading a new process, you should
425 leave most pages on disk, and bring them in as demanded when the
426 program begins running. Your VM system should also use the executable
427 file itself as backing store for read-only segments, since these
428 segments won't change.
430 There are a few special cases. Look at the loop in
431 @func{load_segment} in @file{userprog/process.c}. Each time
432 around the loop, @code{read_bytes} represents the number of bytes to
433 read from the executable file and @code{zero_bytes} represents the number
434 of bytes to initialize to zero following the bytes read. The two
always sum to @code{PGSIZE}. The page handling depends on these
variables:

@itemize @bullet
@item
If @code{read_bytes} equals @code{PGSIZE}, the page should be demand
441 paged from disk on its first access.
@item
If @code{zero_bytes} equals @code{PGSIZE}, the page does not need to
be read from disk at all because it is all zeroes. You should handle
such pages by creating a new page consisting of all zeroes at the
first page fault.

@item
If neither @code{read_bytes} nor @code{zero_bytes} equals
451 @code{PGSIZE}, then part of the page is to be read from disk and the
452 remainder zeroed. This is a special case. You are allowed to handle
453 it by reading the partial page from disk at executable load time and
454 zeroing the rest of the page. This is the only case in which we will
455 allow you to load a page in a non-``lazy'' fashion. Many real OSes
such as Linux do not load partial pages lazily.
@end itemize
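
One way to set this up, sketched below, is to have @func{load_segment}
record each page's file offset, @code{read_bytes}, and @code{zero_bytes}
in the supplemental page table instead of reading the data immediately,
then do the actual read when the page faults in. Here
@code{page_record} and the extra @code{struct vm_page} fields
(@code{file}, @code{ofs}, @code{read_bytes}, @code{zero_bytes}) are
hypothetical additions to the problem 3-1 structures, while
@func{file_read_at} is from @file{filesys/file.h}.

@example
/* In the load_segment() loop, instead of reading the page now,
   just record where its data lives (page_record is hypothetical): */
page_record (upage, file, ofs, read_bytes, zero_bytes, writable);
ofs += read_bytes;

/* Later, in the page fault handler, for a page P recorded this way
   and a freshly allocated frame KPAGE: */
file_read_at (p->file, kpage, p->read_bytes, p->ofs);
memset ((uint8_t *) kpage + p->read_bytes, 0, p->zero_bytes);
@end example
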
459 Incidentally, if you have trouble handling the third case above, you
460 can eliminate it temporarily by linking the test programs with a
461 special ``linker script.'' Read @file{tests/userprog/Makefile} for
462 details. We will not test your submission with this special linker
463 script, so the code you turn in must properly handle all cases.
465 For extra credit, you may implement sharing: when multiple processes
466 are created that use the same executable file, share read-only pages
467 among those processes instead of creating separate copies of read-only
468 segments for each process. If you carefully designed your data
469 structures in problem 3-1, sharing of read-only pages should not make this
470 part significantly harder.
472 @node Problem 3-3 Memory Mapped Files
473 @section Problem 3-3: Memory Mapped Files
475 Implement memory mapped files.
477 You will need to implement the following system calls:
@table @code
@item bool mmap (int @var{fd}, void *@var{addr}, unsigned @var{length})
483 Maps the file open as @var{fd} into the process's address space
484 starting at @var{addr} for @var{length} bytes. Returns true if
485 successful, false on failure. Failure cases include the following:
@itemize @bullet
@item
@var{addr} is not page-aligned.

@item
@var{length} is not positive.

@item
The range of pages mapped overlaps any existing set of mapped pages,
including the stack or pages mapped at executable load time.
@end itemize
499 @var{length} is treated as if it were rounded up to the nearest
500 multiple of the page size, that is, as if the first statement in the
501 system call's implementation were
@example
length = ROUND_UP (length, PGSIZE);
@end example
505 (The @code{ROUND_UP} macro is defined in @file{<round.h>}.)
506 The remainder of this description assumes that this has been done.
508 If @var{length} is less than @var{fd}'s length, you should only map
509 the first @var{length} bytes of the file. If @var{length} is greater
510 than @var{fd}'s length, when the file's length is also rounded up to a
511 page multiple, the call should fail. Ideally it would extend the
512 file, but our file system does not yet support growing files.
514 If @var{length} is greater than @var{fd}'s (unrounded) length, then some
515 bytes in the final mapped page ``stick out'' beyond the end of the
516 file. Set these bytes to zero when the page is faulted in from
517 disk, and discard them when the page is written back to disk.
519 Your VM system should use the @code{mmap}'d file itself as
backing store for the mapped segment. That is, if a page mapped by
@code{mmap} is evicted, write it to the file it was mapped from.
522 (In fact, you may choose to implement executable mappings as a special
523 case of file mappings.)
@item bool munmap (void *@var{addr}, unsigned @var{length})
528 Unmaps @var{length} bytes starting at @var{addr}. Returns true on
529 success, false on failure. Failure cases include the following:
@itemize @bullet
@item
@var{addr} is not page-aligned.

@item
@var{length} is not positive.

@item
One or more pages within the range to be unmapped were not mapped
using the @code{mmap} system call.
@end itemize
543 As with @code{mmap}, @var{length} is treated as if it were rounded up
544 to the nearest multiple of the page size.
546 It is valid to unmap only some of the pages that were mapped in a
547 previous system call.
550 All mappings are implicitly unmapped when a process exits, whether via
551 @code{exit} or by any other means. When a file is unmapped, whether
552 implicitly or explicitly, all outstanding changes are written to the
file, and the pages are removed from the process's list of used
virtual pages.
@end table
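
One simple way to keep track of these mappings, sketched below, is a
per-process list of mapped regions that @code{munmap} and process exit
walk to write back dirty pages. All names here are illustrative;
@func{pagedir_get_page}, @func{pagedir_is_dirty},
@func{pagedir_clear_page}, and @func{file_write_at} are existing Pintos
functions.

@example
#include <list.h>
#include <stdint.h>
#include "filesys/file.h"
#include "threads/mmu.h"        /* PGSIZE. */
#include "threads/thread.h"
#include "userprog/pagedir.h"

/* One region created by mmap. */
struct mmap_region
  @{
    struct file *file;          /* File backing the mapping. */
    void *addr;                 /* Start address (page-aligned). */
    unsigned length;            /* Length in bytes, rounded up to pages. */
    struct list_elem elem;      /* Element in the process's mapping list. */
  @};

/* Removes mapping R, writing dirty pages back to the file first.
   (file_write_at will not grow the file, so bytes that ``stick out''
   past the end of the file are discarded, as required.) */
static void
mmap_region_remove (struct mmap_region *r)
@{
  uint32_t *pd = thread_current ()->pagedir;
  unsigned ofs;

  for (ofs = 0; ofs < r->length; ofs += PGSIZE)
    @{
      void *upage = (uint8_t *) r->addr + ofs;
      if (pagedir_get_page (pd, upage) != NULL)
        @{
          if (pagedir_is_dirty (pd, upage))
            file_write_at (r->file, upage, PGSIZE, ofs);
          pagedir_clear_page (pd, upage);
        @}
    @}
  list_remove (&r->elem);
@}
@end example
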
556 @node Virtual Memory FAQ
@section Virtual Memory FAQ

@b{Do we need a working project 2 to implement project 3?}

This project builds directly on project 2, so you should get project 2
into good working order before you start (see the note at the beginning
of this chapter).

567 @b{How do I use the hash table provided in @file{lib/kernel/hash.c}?}
569 First, you need to embed a @code{hash_elem} object as a member of the
570 object that the hash table will contain. Each @code{hash_elem} allows
the object to be a member of at most one hash table at a given time. All
572 the hash table functions that deal with hash table items actually use
573 the address of a @code{hash_elem}. You can convert a pointer to a
574 @code{hash_elem} member into a pointer to the structure in which
the member is embedded using the @code{hash_entry} macro.
577 Second, you need to decide on a key type. The key should be something
578 that is unique for each object, because a given hash table may not
579 contain two objects with equal keys. Then you need to write two
580 functions. The first is a @dfn{hash function} that converts a key
581 into an integer. Some sample hash functions that you can use or just
582 examine are given in @file{lib/kernel/hash.c}. The second function
needed is a @dfn{comparison function} that compares a pair of elements and returns
584 true if the first is less than the second. These two functions have
585 to be compatible with the prototypes for @code{hash_hash_func} and
586 @code{hash_less_func} in @file{lib/kernel/hash.h}.
588 Here's a quick example. Suppose you want to put @struct{thread}s
589 in a hash table. First, add a @code{hash_elem} to the thread
590 structure by adding a line to its definition:
@example
hash_elem h_elem;               /* Hash table element. */
@end example
596 We'll choose the @code{tid} member in @struct{thread} as the key,
597 and write a hash function and a comparison function:
@example
/* Returns a hash for E. */
static unsigned
thread_hash (const hash_elem *e, void *aux UNUSED)
@{
  struct thread *t = hash_entry (e, struct thread, h_elem);
  return hash_int (t->tid);
@}

/* Returns true if A's tid is less than B's tid. */
static bool
thread_less (const hash_elem *a_, const hash_elem *b_,
             void *aux UNUSED)
@{
  struct thread *a = hash_entry (a_, struct thread, h_elem);
  struct thread *b = hash_entry (b_, struct thread, h_elem);
  return a->tid < b->tid;
@}
@end example
Then we can create a hash table like this:

@example
struct hash threads;

hash_init (&threads, thread_hash, thread_less, NULL);
@end example

Finally, if @code{@var{t}} is a pointer to a @struct{thread},
then we can insert it into the hash table with:

@example
hash_insert (&threads, &@var{t}->h_elem);
@end example
634 If you have any other questions about hash tables, the CS109
635 and CS161 textbooks have good chapters on them, or you can come
to any of the TAs' office hours for further clarification.
@b{What are the @var{aux} parameters to the hash table functions good
for?}
642 In simple cases you won't have any need for the @var{aux} parameters.
643 In these cases you can just pass a null pointer to @func{hash_init}
644 for @var{aux} and ignore the values passed to the hash function and
645 comparison functions. (You'll get a compiler warning if you don't use
646 the @var{aux} parameter, but you can turn that off with the
647 @code{UNUSED} macro, as shown above, or you can just ignore it.)
649 @var{aux} is useful when you have some property of the data in the
650 hash table that's both constant and needed for hashing or comparisons,
651 but which is not stored in the data items themselves. For example, if
652 the items in a hash table contain fixed-length strings, but the items
653 themselves don't indicate what that fixed length is, you could pass
654 the length as an @var{aux} parameter.
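
For example, if each item's key is a fixed-length string that is not
null-terminated, and the length is known only to the code that sets up
the table, the length can be passed through @var{aux}. The sketch below
uses @func{hash_bytes} from @file{lib/kernel/hash.h}; the struct and
field names are made up.

@example
#include <hash.h>

struct item
  @{
    const char *key;            /* Fixed-length, not null-terminated. */
    hash_elem elem;
  @};

/* Hashes E's key, whose length was passed to hash_init() as AUX. */
static unsigned
item_hash (const hash_elem *e, void *aux)
@{
  size_t key_len = *(size_t *) aux;
  struct item *i = hash_entry (e, struct item, elem);
  return hash_bytes (i->key, key_len);
@}
@end example

A matching comparison function would read the same @var{aux} value.
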
657 @b{The current implementation of the hash table does not do something
658 that we need it to do. What gives?}
660 You are welcome to modify it. It is not used by any of the code we
661 provided, so modifying it won't affect any code but yours. Do
662 whatever it takes to make it work the way you want.
665 @b{What controls the layout of user programs?}
667 The linker is responsible for the layout of a user program in
668 memory. The linker is directed by a ``linker script'' which tells it
669 the names and locations of the various program segments. You can
670 learn more about linker scripts by reading the ``Scripts'' chapter in
671 the linker manual, accessible via @samp{info ld}.
675 * Problem 3-1 and 3-2 FAQ::
676 * Problem 3-3 Memory Mapped File FAQ::
679 @node Problem 3-1 and 3-2 FAQ
680 @subsection Problem 3-1 and 3-2 FAQ
@b{Does the virtual memory system need to support growth of the data
segment?}
687 No. The size of the data segment is determined by the linker. We
688 still have no dynamic allocation in Pintos (although it is possible to
689 ``fake'' it at the user level by using memory-mapped files). However,
690 implementing it would add little additional complexity to a
691 well-designed system.
694 @b{Why do I need to pass @code{PAL_USER} to @func{palloc_get_page}
695 when I allocate physical page frames?}@anchor{Why PAL_USER?}
697 You can layer some other allocator on top of @func{palloc_get_page}
698 if you like, but it should be the underlying mechanism, directly or
699 indirectly, for two reasons. First, running out of pages in the user
700 pool just causes user programs to page, but running out of pages in
701 the kernel pool will cause all kinds of problems, because many kernel
702 functions depend on being able to allocate memory. Second, you can
703 use the @option{-ul} option to @command{pintos} to limit the size of
704 the user pool, which makes it easy to test your VM implementation with
705 various user memory sizes.
708 @node Problem 3-3 Memory Mapped File FAQ
709 @subsection Problem 3-3: Memory Mapped File FAQ
713 @b{How do we interact with memory-mapped files?}
715 Let's say you want to map a file called @file{foo} into your address
716 space at address @t{0x10000000}. You open the file, determine its
717 length, and then use @code{mmap}:
@example
void *addr = (void *) 0x10000000;
int fd = open ("foo");
int length = filesize (fd);
if (mmap (fd, addr, length))
  printf ("success!\n");
@end example
733 Suppose @file{foo} is a text file and you want to print the first 64
734 bytes on the screen (assuming, of course, that the length of the file
735 is at least 64). Without @code{mmap}, you'd need to allocate a
736 buffer, use @code{read} to get the data from the file into the buffer,
737 and finally use @code{write} to put the buffer out to the display. But
with the file mapped into your address space, you can directly address
it like so:

@example
write (STDOUT_FILENO, addr, 64);
@end example
745 Similarly, if you wanted to replace the first byte of the file,
all you need to do is:

@example
*(char *) addr = 'b';
@end example

When you're done using the memory-mapped file, you simply unmap it:

@example
munmap (addr, length);
@end example
760 @b{What if two processes memory-map the same file?}
762 There is no requirement in Pintos that the two processes see
763 consistent data. Unix handles this by making the processes share the
764 same physical page, but the @code{mmap} system call also has an
765 argument allowing the client to specify whether the page is shared or
766 private (i.e.@: copy-on-write).
769 @b{What happens if a user removes a @code{mmap}'d file?}
771 You should follow the Unix convention and the mapping should still be
772 valid. @xref{Removing an Open File}, for more information.
775 @b{What if a process writes to a page that is memory-mapped, but the
776 location written to in the memory-mapped page is past the end
777 of the memory-mapped file?}
779 Can't happen. @code{mmap} checks that the mapped region is within the
780 file's length and Pintos provides no way to shorten a file. (Until
781 project 4, there's no way to extend a file either.) You can remove a
782 file, but the mapping remains valid (see the previous question).
785 @b{Do we have to handle memory mapping @code{stdin} or @code{stdout}?}
787 No. Memory mapping implies that a file has a length and that a user
788 can seek to any location in the file. Since the console device has
789 neither of these properties, @code{mmap} should return false when the
790 user attempts to memory map a file descriptor for the console device.
793 @b{What happens when a process exits with mapped files?}
795 When a process finishes, each of its mapped files is implicitly
796 unmapped. When a process @code{mmap}s a file and then writes into the
area for the file, it is making the assumption that the changes will be
written back to the file.
@b{If a user closes a mapped file, should it be automatically
unmapped?}
804 No, once created the mapping is valid until @code{munmap} is called
805 or the process exits.