-@node 4.4BSD Scheduler, Coding Standards, References, Top
+@node 4.4BSD Scheduler
@appendix 4.4@acronym{BSD} Scheduler
@iftex
with all these requirements simultaneously.
For project 1, you must implement the scheduler described in this
-appendix. Our scheduler resembles the one described in @bibref{4.4BSD},
+appendix. Our scheduler resembles the one described in @bibref{McKusick},
which is one example of a @dfn{multilevel feedback queue} scheduler.
This type of scheduler maintains several queues of ready-to-run threads,
where each queue holds threads with a different priority. At any given
threads start with a @var{nice} value inherited from their parent
thread. You must implement the functions described below, which are for
use by test programs. We have provided skeleton definitions for them in
-@file{threads/thread.c}. by test programs
+@file{threads/thread.c}.
@deftypefun int thread_get_nice (void)
Returns the current thread's @var{nice} value.
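A minimal sketch of how this getter might look. It assumes @struct{thread} has been extended with an @code{int nice} member, which the stock skeleton does not have, and stubs out @func{thread_current} so the fragment stands alone:

```c
#include <assert.h>

/* Hypothetical, pared-down thread struct: the real one is declared in
   threads/thread.h and has many more members.  The `nice' member is an
   assumed extension, not part of the stock skeleton. */
struct thread
  {
    int nice;                   /* Niceness, -20 to 20. */
  };

static struct thread running = { 0 };

/* Stand-in for Pintos's thread_current (): returns the running thread. */
static struct thread *
thread_current (void)
{
  return &running;
}

/* Returns the current thread's nice value. */
int
thread_get_nice (void)
{
  return thread_current ()->nice;
}
```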
Our scheduler has 64 priorities and thus 64 ready queues, numbered 0
(@code{PRI_MIN}) through 63 (@code{PRI_MAX}). Lower numbers correspond
-to @emph{higher} priorities, so that priority 0 is the highest priority
-and priority 63 is the lowest. Thread priority is calculated initially
+to lower priorities, so that priority 0 is the lowest priority
+and priority 63 is the highest. Thread priority is calculated initially
at thread initialization. It is also recalculated once every fourth
clock tick, for every thread. In either case, it is determined by
the formula
-@center @t{@var{priority} = (@var{recent_cpu} / 4) + (@var{nice} * 2)},
+@center @t{@var{priority} = @code{PRI_MAX} - (@var{recent_cpu} / 4) - (@var{nice} * 2)},
@noindent where @var{recent_cpu} is an estimate of the CPU time the
thread has used recently (see below) and @var{nice} is the thread's
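The formula above can be sketched in C. This assumes a 17.14 fixed-point representation for @var{recent_cpu} (the choice of representation is yours), truncates toward zero, and clamps the result into the valid range; it is a sketch, not the required implementation:

```c
#include <assert.h>

#define PRI_MIN 0                       /* Lowest priority. */
#define PRI_MAX 63                      /* Highest priority. */

/* 17.14 fixed point: the low 14 bits are the fraction.  One common
   choice; any representation with enough precision works. */
#define F (1 << 14)

/* Computes priority = PRI_MAX - (recent_cpu / 4) - (nice * 2), where
   RECENT_CPU is fixed-point and NICE is an integer, then clamps the
   result to the valid priority range. */
static int
calc_priority (int recent_cpu, int nice)
{
  int priority = PRI_MAX - recent_cpu / 4 / F - nice * 2;
  if (priority < PRI_MIN)
    priority = PRI_MIN;
  else if (priority > PRI_MAX)
    priority = PRI_MAX;
  return priority;
}
```

Note that a large @var{recent_cpu} or a negative @var{nice} can push the raw result outside the valid range, so the clamp is not optional.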
@math{f(t)} has a weight of approximately @math{1/e} at time @math{t+k},
approximately @am{1/e^2, 1/e**2} at time @am{t+2k, t+2*k}, and so on.
From the opposite direction, @math{f(t)} decays to weight @math{w} at
-@am{t = \log_aw, t = ln(w)/ln(a)}.
+time @am{t + \log_a w, t + ln(w)/ln(a)}.
The initial value of @var{recent_cpu} is 0 in the first thread
created, or the parent's value in other new threads. Each time a timer
interrupt occurs, @var{recent_cpu} is incremented by 1 for the running
-thread only. In addition, once per second the value of @var{recent_cpu}
+thread only, unless the idle thread is running. In addition, once per
+second the value of @var{recent_cpu}
is recalculated for every thread (whether running, ready, or blocked),
using this formula:
The value of @var{recent_cpu} can be negative for a thread with a
negative @var{nice} value. Do not clamp negative @var{recent_cpu} to 0.
+You may need to think about the order of calculations in this formula.
+We recommend computing the coefficient of @var{recent_cpu} first, then
+multiplying. Some students have reported that multiplying
+@var{load_avg} by @var{recent_cpu} directly can cause overflow.
+
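In the usual Pintos formulation, the once-per-second recalculation is @t{@var{recent_cpu} = (2*@var{load_avg})/(2*@var{load_avg} + 1) * @var{recent_cpu} + @var{nice}}. Assuming 17.14 fixed point, computing the coefficient first might look like the sketch below; the 64-bit intermediates in the multiply and divide are what guard against overflow:

```c
#include <stdint.h>
#include <assert.h>

#define F (1 << 14)             /* 17.14 fixed point (assumed). */

/* Multiply or divide two fixed-point values, widening to 64 bits so
   the intermediate product cannot overflow. */
static int fix_mul (int x, int y) { return (int64_t) x * y / F; }
static int fix_div (int x, int y) { return (int64_t) x * F / y; }

/* recent_cpu = (2*load_avg)/(2*load_avg + 1) * recent_cpu + nice.
   RECENT_CPU and LOAD_AVG are fixed-point; NICE is an integer.
   The coefficient is computed first, as recommended above. */
static int
update_recent_cpu (int recent_cpu, int load_avg, int nice)
{
  int coeff = fix_div (2 * load_avg, 2 * load_avg + 1 * F);
  return fix_mul (coeff, recent_cpu) + nice * F;
}
```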
You must implement @func{thread_get_recent_cpu}, for which there is a
skeleton in @file{threads/thread.c}.
Every thread has a @var{nice} value between -20 and 20 directly under
its control. Each thread also has a priority, from 0
(@code{PRI_MIN}) through 63 (@code{PRI_MAX}), which is recalculated
-using the following formula whenever the value of either term changes:
+using the following formula every fourth tick:
-@center @t{@var{priority} = (@var{recent_cpu} / 4) + (@var{nice} * 2)}.
+@center @t{@var{priority} = @code{PRI_MAX} - (@var{recent_cpu} / 4) - (@var{nice} * 2)}.
@var{recent_cpu} measures the amount of CPU time a thread has received
``recently.'' On each timer tick, the running thread's @var{recent_cpu}