X-Git-Url: https://pintos-os.org/cgi-bin/gitweb.cgi?a=blobdiff_plain;f=doc%2Fthreads.texi;h=706f76b7f035e5b7c0e17755dc5c7750aed4d34d;hb=837e5b7fb902bd749106309ef76a5276c73ca34c;hp=d83b52f4525770b2a89d7768713426e1ad1821b0;hpb=225e6b43b823eec0f3eef093f697c0b62538344f;p=pintos-anon

diff --git a/doc/threads.texi b/doc/threads.texi
index d83b52f..706f76b 100644
--- a/doc/threads.texi
+++ b/doc/threads.texi
@@ -1,4 +1,4 @@
-@node Project 1--Threads, Project 2--User Programs, Pintos Tour, Top
+@node Project 1--Threads
 @chapter Project 1: Threads
 
 In this assignment, we give you a minimally functional thread system.
@@ -12,9 +12,9 @@ side. Compilation should be done in the @file{threads} directory.
 Before you read the description of this project, you should read all of
 the following sections: @ref{Introduction}, @ref{Coding Standards},
 @ref{Debugging Tools}, and @ref{Development Tools}. You should at least
-skim the material in @ref{Threads Tour} and especially
-@ref{Synchronization}. To complete this project you will also need to
-read @ref{4.4BSD Scheduler}.
+skim the material from @ref{Pintos Loading} through @ref{Memory
+Allocation}, especially @ref{Synchronization}. To complete this project
+you will also need to read @ref{4.4BSD Scheduler}.
 
 @menu
 * Project 1 Background::
@@ -137,10 +137,10 @@ here. @xref{Kernel Initialization}, for details.
 
 @item thread.c
 @itemx thread.h
-Basic thread support. Much of your work will take place in these
-files. @file{thread.h} defines @struct{thread}, which you are likely
-to modify in all four projects. See @ref{struct thread} and @ref{Thread
-Support} for more information.
+Basic thread support. Much of your work will take place in these files.
+@file{thread.h} defines @struct{thread}, which you are likely to modify
+in all four projects. See @ref{struct thread} and @ref{Threads} for
+more information.
 
 @item switch.S
 @itemx switch.h
@@ -178,10 +178,11 @@ four projects. @xref{Synchronization}, for more information.
 Functions for I/O port access. This is mostly used by source code in
 the @file{devices} directory that you won't have to touch.
 
-@item mmu.h
-Functions and macros related to memory management, including page
-directories and page tables. This will be more important to you in
-project 3. For now, you can ignore it.
+@item vaddr.h
+@itemx pte.h
+Functions and macros for working with virtual addresses and page table
+entries. These will be more important to you in project 3. For now,
+you can ignore them.
 
 @item flags.h
 Macros that define a few bits in the 80@var{x}86 ``flags'' register.
@@ -216,14 +217,24 @@ call this code yourself.
 @item serial.c
 @itemx serial.h
 Serial port driver. Again, @func{printf} calls this code for you,
-so you don't need to do so yourself. Feel free to look through it if
-you're curious.
+so you don't need to do so yourself.
+It handles serial input by passing it to the input layer (see below).
 
 @item disk.c
 @itemx disk.h
 Supports reading and writing sectors on up to 4 IDE disks. This
 won't actually be used until project 2.
 
+@item kbd.c
+@itemx kbd.h
+Keyboard driver. Handles keystrokes, passing them to the input layer
+(see below).
+
+@item input.c
+@itemx input.h
+Input layer. Queues input characters passed along by the keyboard or
+serial drivers.
+
 @item intq.c
 @itemx intq.h
 Interrupt queue, for managing a circular queue that both kernel
@@ -608,13 +619,13 @@ to cause many of the tests to fail.
 You are probably looking at a backtrace that looks something like this:
 
 @example
-0xc0108810: debug_panic (../../lib/kernel/debug.c:32)
-0xc010a99f: pass (../../tests/threads/tests.c:93)
-0xc010bdd3: test_mlfqs_load_1 (../../tests/threads/mlfqs-load-1.c:33)
-0xc010a8cf: run_test (../../tests/threads/tests.c:51)
-0xc0100452: run_task (../../threads/init.c:283)
-0xc0100536: run_actions (../../threads/init.c:333)
-0xc01000bb: main (../../threads/init.c:137)
+0xc0108810: debug_panic (lib/kernel/debug.c:32)
+0xc010a99f: pass (tests/threads/tests.c:93)
+0xc010bdd3: test_mlfqs_load_1 (...threads/mlfqs-load-1.c:33)
+0xc010a8cf: run_test (tests/threads/tests.c:51)
+0xc0100452: run_task (threads/init.c:283)
+0xc0100536: run_actions (threads/init.c:333)
+0xc01000bb: main (threads/init.c:137)
 @end example
 
 This is just confusing output from the @command{backtrace} program. It
@@ -675,7 +686,7 @@ list.
 
 @item If the highest-priority thread yields, does it continue running?
 
-Yes. As long as there is a single highest-priority thread, it continues
+Yes. If there is a single highest-priority thread, it continues
 running until it blocks or finishes, even if it calls
 @func{thread_yield}.
 If multiple threads have the same highest priority,
@@ -696,6 +707,13 @@ priority to @var{L}. @var{L} releases the lock and thus loses the CPU and
 is moved to the ready queue. Now @var{L}'s old priority is restored
 while it is in the ready queue.
 
+@item Can a thread's priority change while it is blocked?
+
+Yes. While a thread that has acquired lock @var{L} is blocked for any
+reason, its priority can increase through priority donation if a
+higher-priority thread attempts to acquire @var{L}. This case is
+checked by the @code{priority-donate-sema} test.
+
 @item Can a thread added to the ready list preempt the processor?
 
 Yes. If a thread added to the ready list has higher priority than the
@@ -706,9 +724,11 @@ preempting whatever thread is currently running.
 
 @item How does @func{thread_set_priority} affect a thread receiving donations?
 
-It should do something sensible, but no particular behavior is
-required. None of the test cases call @func{thread_set_priority} from a
-thread while it is receiving a priority donation.
+It sets the thread's base priority. The thread's effective priority
+becomes the higher of the newly set priority and the highest donated
+priority. When the donations are released, the thread's priority
+becomes the one set through the function call. This behavior is checked
+by the @code{priority-donate-lower} test.
 
 @item Calling @func{printf} in @func{sema_up} or @func{sema_down} reboots!
 
@@ -730,9 +750,6 @@ scheduler at the same time.
 
 @item Can I use one queue instead of 64 queues?
 
-Yes, that's fine. It's easiest to describe the algorithm in terms of 64
-separate queues, but that doesn't mean you have to implement it that
-way.
-
-If you use a single queue, it should probably be sorted.
+Yes. In general, your implementation may differ from the description,
+as long as its behavior is the same.
 @end table
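
For the @func{thread_set_priority} answer above, here is a minimal
sketch of the ``higher of the base and donated priorities'' idea.
The @code{base_priority} and @code{donated_priority} members are
hypothetical additions to @struct{thread} (the stock structure defines
neither), and synchronization is omitted; @func{thread_current},
@func{thread_yield}, @code{PRI_MIN}, and the @code{priority} member are
the ones already in @file{threads/thread.h} and @file{threads/thread.c}.

@example
/* Sketch only: assumes two new members in struct thread,
   `base_priority' (set by thread_set_priority()) and
   `donated_priority' (the highest outstanding donation, or
   PRI_MIN when there are no donors). */
static int
effective_priority (const struct thread *t)
@{
  return t->base_priority > t->donated_priority
         ? t->base_priority : t->donated_priority;
@}

void
thread_set_priority (int new_priority)
@{
  struct thread *t = thread_current ();

  t->base_priority = new_priority;
  t->priority = effective_priority (t);

  /* If this call lowered our effective priority, let the scheduler
     pick a higher-priority ready thread; yielding unconditionally
     is a simple way to get that behavior. */
  thread_yield ();
@}
@end example

For the final question above, one way a single ready list can stand in
for the 64 queues is to have @func{next_thread_to_run} pick the
maximum-priority thread with @func{list_max} from
@file{lib/kernel/list.c}. The comparator @func{thread_priority_less} is
a hypothetical helper, not part of the stock sources; @code{ready_list}
and @code{idle_thread} are the static variables already defined in
@file{threads/thread.c}.

@example
/* Hypothetical comparator: returns true if A's priority is
   less than B's. */
static bool
thread_priority_less (const struct list_elem *a,
                      const struct list_elem *b, void *aux UNUSED)
@{
  return list_entry (a, struct thread, elem)->priority
         < list_entry (b, struct thread, elem)->priority;
@}

/* One possible next_thread_to_run() built on a single, unsorted
   ready_list rather than 64 queues. */
static struct thread *
next_thread_to_run (void)
@{
  if (list_empty (&ready_list))
    return idle_thread;
  else
    @{
      struct list_elem *max = list_max (&ready_list,
                                        thread_priority_less, NULL);
      list_remove (max);
      return list_entry (max, struct thread, elem);
    @}
@}
@end example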