--- /dev/null
+@node Multilevel Feedback Scheduling, , , Project 1--Threads
+@section Multilevel Feedback Scheduling
+
+This section gives a brief overview of the behavior of the Solaris 2.6
+Time-Sharing (TS) scheduler, an example of a Multilevel Feedback Queue
+scheduler. The information in this handout, in conjunction with that
+given in lecture, should be used to answer Problem 1-4. The end of
+this document specifies in more detail which aspects of the Solaris
+scheduler you should implement.
+
+The goal of a multilevel feedback queue scheduler is to fairly and
+efficiently schedule a mix of processes with a variety of execution
+characteristics. By controlling how a process moves between priority
+levels, processes with different characteristics can be scheduled as
+appropriate. Priority-based schedulers attempt to provide a
+compromise between several desirable metrics (e.g.@: response time for
+interactive jobs, throughput for compute-intensive jobs, and fair
+allocations for all jobs).
+
+The queues in the system are ranked according to priority. Processes
+waiting in higher priority queues are always scheduled over those in
+lower priority queues. Processes at the same priority are usually
+scheduled in a round-robin fashion.
+
+Such schedulers tend to be preemptive in order to support interactive
+processes. That is, when a higher priority process becomes runnable
+while a lower priority process is running on the CPU, the higher
+priority process is scheduled immediately.
+
+@menu
+* Scheduling in Solaris::
+* Class Independent Functionality::
+* Time-Sharing Scheduling Class::
+* Dispatch Table::
+* Implementation::
+* Fairness::
+* Project Requirements::
+@end menu
+
+@node Scheduling in Solaris
+@subsection Scheduling in Solaris
+
+The Solaris operating system is based on Unix System V Release 4
+(SVR4). Scheduling in Solaris, as in all SVR4-based schedulers, is
+performed at two levels: class-independent routines and
+class-dependent routines. Class-independent routines are those that
+are responsible for dispatching and preempting processes (the
+low-level mechanisms). Class-dependent routines are those that are
+responsible for setting the priority of each of its processes (the
+high-level policy).
+
+By default, Solaris supports three scheduling classes: time-sharing
+(TS), real-time (RT), and system (SYS). Users with root privileges
+can easily implement and add new scheduling classes by adhering to a
+predefined interface. Each scheduling class gives each of its
+processes a priority, the range of which is shown below:
+
+@multitable {Scheduling Class} {Priorities}
+@item Scheduling Class @tab Priorities
+@item Real-Time @tab 100-159
+@item System @tab 60-99
+@item Time-Sharing @tab 0-59
+@end multitable
+
+As long as a user has the correct privileges, he or she can submit
+jobs to any scheduling class. By default, jobs are executed in the same
+scheduling class as the parent process that forked the job. Since
+your shell is running in the time-sharing class, all of your jobs run
+by default in the time-sharing class.
+
+See the man pages for @command{priocntl} on any machine running
+Solaris for information on how to submit jobs to different scheduling
+classes. However, since you probably don't have root privileges on
+your machine, you won't be able to do much.
+
+To see the scheduling class of each process in the system, run
+@samp{ps -edafc}. (@samp{-c} is the flag that shows the scheduling
+class.) The fourth column of the output shows the scheduling class of
+each process. Most jobs will be running in the TS class, with a few
+(owned by root) running in the SYS class.
+
+@example
+elaine1:~> ps -edafc
+ UID PID PPID CLS PRI STIME TTY TIME CMD
+ root 0 0 SYS 96 Aug 01 ? 0:00 sched
+ root 1 0 TS 58 Aug 01 ? 1:06 /etc/init -
+ root 2 0 SYS 98 Aug 01 ? 0:02 pageout
+ root 3 0 SYS 60 Aug 01 ? 15:22 fsflush
+ root 245 239 TS 59 Aug 01 ? 0:00 ttymon
+ root 181 1 TS 48 Aug 01 ? 0:00 sendmail -q15m
+ root 239 1 TS 59 Aug 01 ? 0:00 sac -t 300
+ root 96 1 TS 58 Aug 01 ? 0:00 rpcbind
+ root 125 1 TS 59 Aug 01 ? 0:32 syslogd
+@end example
+
+In this document, we only discuss the Solaris time-sharing (TS)
+class. Note the priorities of each of the processes, as listed in the
+fifth column.
+
+@node Class Independent Functionality
+@subsection Class Independent Functionality
+
+The class independent routines arbitrate across the scheduling
+classes. This involves three basic responsibilities.
+
+@itemize @bullet
+@item
+The process with the highest priority must be dispatched, and the
+state of the preempted process saved. (A minimal sketch of this
+selection step follows this list.)
+
+@item
+The class-independent functions must notify the class-dependent
+routines when the state of their processes changes (for example, at
+creation and termination, when a process changes from blocked to
+runnable or from runnable to blocked, and when a 10@dmn{ms} timer
+expires).
+
+@item
+Processes must be moved between priority queues in the
+class-independent data structures, as directed by their scheduling
+classes, and must be moved between blocked and ready queues.
+@end itemize
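+
+As a concrete illustration of the first responsibility, here is a
+minimal sketch of how the class-independent code might select the
+next thread to run, assuming Pintos's kernel list primitives
+(@code{list_empty}, @code{list_pop_front}, @code{list_entry}). The
+names @code{NUM_PRIS}, @code{ready_queues}, and
+@code{pick_next_thread} are illustrative, not actual Solaris or
+Pintos identifiers.
+
+@example
+/* Minimal sketch of the class-independent selection step.
+   NUM_PRIS, ready_queues[], and pick_next_thread() are illustrative
+   names, not actual Solaris or Pintos identifiers. */
+#define NUM_PRIS 160              /* 0-59 TS, 60-99 SYS, 100-159 RT */
+
+static struct list ready_queues[NUM_PRIS];  /* one queue per priority */
+
+static struct thread *
+pick_next_thread (void)
+@{
+  int pri;
+
+  /* Scan from the highest priority downward; equal-priority threads
+     run round-robin because each queue is popped from the front and
+     threads are later reinserted at the back. */
+  for (pri = NUM_PRIS - 1; pri >= 0; pri--)
+    if (!list_empty (&ready_queues[pri]))
+      return list_entry (list_pop_front (&ready_queues[pri]),
+                         struct thread, elem);
+
+  return idle_thread;             /* nothing else is runnable */
+@}
+@end example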
+
+@node Time-Sharing Scheduling Class
+@subsection Time-Sharing Scheduling Class
+
+The time-sharing scheduler in Solaris is an example of a multi-level
+feedback queue scheduler. A job begins at priority 29. Compute-bound
+jobs then filter down to the lower priorities, where they are
+scheduled less frequently, but for longer time-slices. Interactive
+jobs propagate to the higher priorities, where they are scheduled
+whenever they have work to perform, on the assumption that they will
+soon relinquish the processor again. In the TS scheduler, the
+priority of a process is lowered after it consumes its allocated
+time-slice. Its priority is raised if it has not consumed its
+time-slice before a starvation interval expires.
+
+@node Dispatch Table
+@subsection Dispatch Table
+
+The durations of the time-slices, the changes in priorities, and the
+starvation interval are specified in a user-tunable dispatch table.
+The system administrator (or anyone with root privileges) can change
+the values in this table, thus configuring how the time-sharing
+scheduler manages its jobs. While this has the noble intention of
+allowing different systems to tune the scheduler to better handle
+their workloads, in reality no one really knows how to configure these
+tables well. Therefore, we will focus on the default dispatch table.
+
+To see how this table is configured in your system, run
+@samp{dispadmin -c TS -g}. You should see something like the table
+shown below. Looking at the man pages on @command{dispadmin} and
+@command{ts_dptbl} may also be helpful.
+
+@multitable {@code{ts_quantum}} {@code{ts_tqexp}} {@code{ts_slpret}} {@code{ts_maxwait}} {@code{ts_lwait}} {priority}
+@item @code{ts_quantum} @tab @code{ts_tqexp} @tab @code{ts_slpret} @tab @code{ts_maxwait} @tab @code{ts_lwait} @tab priority
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 0
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 1
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 2
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 3
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 4
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 5
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 6
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 7
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 8
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 9
+@item 160 @tab 0 @tab 51 @tab 0 @tab 51 @tab 10
+@item 160 @tab 1 @tab 51 @tab 0 @tab 51 @tab 11
+@item 160 @tab 2 @tab 51 @tab 0 @tab 51 @tab 12
+@item 160 @tab 3 @tab 51 @tab 0 @tab 51 @tab 13
+@item 160 @tab 4 @tab 51 @tab 0 @tab 51 @tab 14
+@item 160 @tab 5 @tab 51 @tab 0 @tab 51 @tab 15
+@item 160 @tab 6 @tab 51 @tab 0 @tab 51 @tab 16
+@item 160 @tab 7 @tab 51 @tab 0 @tab 51 @tab 17
+@item 160 @tab 8 @tab 51 @tab 0 @tab 51 @tab 18
+@item 160 @tab 9 @tab 51 @tab 0 @tab 51 @tab 19
+@item 120 @tab 10 @tab 52 @tab 0 @tab 52 @tab 20
+@item 120 @tab 11 @tab 52 @tab 0 @tab 52 @tab 21
+@item 120 @tab 12 @tab 52 @tab 0 @tab 52 @tab 22
+@item 120 @tab 13 @tab 52 @tab 0 @tab 52 @tab 23
+@item 120 @tab 14 @tab 52 @tab 0 @tab 52 @tab 24
+@item 120 @tab 15 @tab 52 @tab 0 @tab 52 @tab 25
+@item 120 @tab 16 @tab 52 @tab 0 @tab 52 @tab 26
+@item 120 @tab 17 @tab 52 @tab 0 @tab 52 @tab 27
+@item 120 @tab 18 @tab 52 @tab 0 @tab 52 @tab 28
+@item 120 @tab 19 @tab 52 @tab 0 @tab 52 @tab 29
+@item 80 @tab 20 @tab 53 @tab 0 @tab 53 @tab 30
+@item 80 @tab 21 @tab 53 @tab 0 @tab 53 @tab 31
+@item 80 @tab 22 @tab 53 @tab 0 @tab 53 @tab 32
+@item 80 @tab 23 @tab 53 @tab 0 @tab 53 @tab 33
+@item 80 @tab 24 @tab 53 @tab 0 @tab 53 @tab 34
+@item 80 @tab 25 @tab 54 @tab 0 @tab 54 @tab 35
+@item 80 @tab 26 @tab 54 @tab 0 @tab 54 @tab 36
+@item 80 @tab 27 @tab 54 @tab 0 @tab 54 @tab 37
+@item 80 @tab 28 @tab 54 @tab 0 @tab 54 @tab 38
+@item 80 @tab 29 @tab 54 @tab 0 @tab 54 @tab 39
+@item 40 @tab 30 @tab 55 @tab 0 @tab 55 @tab 40
+@item 40 @tab 31 @tab 55 @tab 0 @tab 55 @tab 41
+@item 40 @tab 32 @tab 55 @tab 0 @tab 55 @tab 42
+@item 40 @tab 33 @tab 55 @tab 0 @tab 55 @tab 43
+@item 40 @tab 34 @tab 55 @tab 0 @tab 55 @tab 44
+@item 40 @tab 35 @tab 56 @tab 0 @tab 56 @tab 45
+@item 40 @tab 36 @tab 57 @tab 0 @tab 57 @tab 46
+@item 40 @tab 37 @tab 58 @tab 0 @tab 58 @tab 47
+@item 40 @tab 38 @tab 58 @tab 0 @tab 58 @tab 48
+@item 40 @tab 39 @tab 58 @tab 0 @tab 59 @tab 49
+@item 40 @tab 40 @tab 58 @tab 0 @tab 59 @tab 50
+@item 40 @tab 41 @tab 58 @tab 0 @tab 59 @tab 51
+@item 40 @tab 42 @tab 58 @tab 0 @tab 59 @tab 52
+@item 40 @tab 43 @tab 58 @tab 0 @tab 59 @tab 53
+@item 40 @tab 44 @tab 58 @tab 0 @tab 59 @tab 54
+@item 40 @tab 45 @tab 58 @tab 0 @tab 59 @tab 55
+@item 40 @tab 46 @tab 58 @tab 0 @tab 59 @tab 56
+@item 40 @tab 47 @tab 58 @tab 0 @tab 59 @tab 57
+@item 40 @tab 48 @tab 58 @tab 0 @tab 59 @tab 58
+@item 20 @tab 49 @tab 59 @tab 32000 @tab 59 @tab 59
+@end multitable
+
+You will see one row for every priority in the scheduling class, from
+0 to 59. For each priority, there are five columns:
+
+@table @code
+@item ts_quantum
+Length of the time-slice. In the actual table, this value is specified
+in 10@dmn{ms} clock ticks, but in the output from running
+@command{dispadmin}, the value is specified in units of 1@dmn{ms}.
+
+@item ts_tqexp
+Priority level of the new queue on which to place a process if it
+exceeds its time quantum. Normally this field links to a lower
+priority time-sharing level.
+
+@item ts_slpret
+The new, generally increased, priority to adopt when the job returns
+from sleeping (i.e.@: from the blocked queue) if @code{ts_dispwait}
+exceeds @code{ts_maxwait}.
+
+@item ts_maxwait
+A per-process counter named @code{ts_dispwait} is zeroed each time a
+process is placed back on the dispatcher queue after its time quantum
+expires or when it is awakened; it is not zeroed when the process is
+preempted by a higher-priority process. This counter is incremented
+once per second. If a process's @code{ts_dispwait} exceeds its
+priority's @code{ts_maxwait}, then the process's priority is changed
+to @code{ts_lwait}. This prevents starvation.
+
+@item ts_lwait
+The new, generally increased, priority to adopt if the starvation
+timer expires before the job consumes its time-slice (i.e.@: if
+@code{ts_dispwait} exceeds @code{ts_maxwait}).
+@end table
+
+In this table, the priority of jobs ranges from a high of 59 down to
+0. Time-slices begin at 20@dmn{ms} at the highest priority and
+gradually increase in duration up to 200@dmn{ms} at the lowest
+priorities. Generally, the priority of a process decreases by 10
+levels after it consumes its time-slice. The priority of a process is
+increased to 50 or above when the starvation timer expires.
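+
+For example, reading off the row for priority 29 above: a priority 29
+job that consumes its full 120@dmn{ms} quantum drops to priority 19
+(its @code{ts_tqexp}), while a priority 29 job whose
+@code{ts_dispwait} exceeds @code{ts_maxwait} is raised to priority 52
+(its @code{ts_lwait}). The sketch below shows one way such a table
+could be represented in C. The declarations loosely follow the
+spirit of Solaris's @code{ts_dptbl}, but they are assumptions for
+illustration, not the exact kernel definitions.
+
+@example
+/* One row of the time-sharing dispatch table.  The field names match
+   the columns above; the declarations themselves are an illustrative
+   sketch, not the exact Solaris definition.  ts_quantum is kept in
+   milliseconds here, matching the dispadmin output (the kernel table
+   stores it in 10 ms clock ticks). */
+struct ts_dispent
+  @{
+    int ts_quantum;       /* length of time-slice, in ms */
+    int ts_tqexp;         /* new priority after quantum expires */
+    int ts_slpret;        /* new priority on wakeup after starving */
+    int ts_maxwait;       /* seconds of ts_dispwait before a boost */
+    int ts_lwait;         /* new priority when starvation timer fires */
+  @};
+
+/* A few rows of the default table, indexed by priority; the remaining
+   rows would be filled in from the table above. */
+static struct ts_dispent ts_dptbl[60] =
+  @{
+    /*       quantum  tqexp  slpret  maxwait  lwait */
+    [0]  = @{  200,      0,     50,       0,    50 @},
+    [10] = @{  160,      0,     51,       0,    51 @},
+    [29] = @{  120,     19,     52,       0,    52 @},
+    [59] = @{   20,     49,     59,   32000,    59 @},
+  @};
+@end example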
+
+@node Implementation
+@subsection Implementation
+
+For each job in the TS class, the following data structure is
+maintained (we've removed a few of the fields for simplicity):
+
+@example
+/*
+ * time-sharing class specific thread structure
+ */
+typedef struct tsproc @{
+ long ts_timeleft; /* time remaining in quantum */
+ long ts_dispwait; /* number of seconds since */
+ /* start of quantum (not reset */
+ /* upon preemption) */
+ pri_t ts_pri; /* priority (0-59) */
+ kthread_t *ts_tp; /* pointer to thread */
+ struct tsproc *ts_next; /* link to next tsproc on list */
+ struct tsproc *ts_prev; /* link to previous tsproc */
+@} tsproc_t;
+@end example
+
+The @code{kthread_t} structure tracks the necessary information to
+context-switch to and from this process. This structure is kept
+separate from the time-sharing class in order to separate the
+mechanisms of the dispatcher from the policies of the scheduler.
+
+There are seven interesting routines in the TS class:
+
+@table @code
+@item ts_enterclass(thread *@var{t})
+Called when a new thread is added to the TS class. It initializes a
+@code{tsproc} structure for this process and adds it to the list of
+processes.
+
+@item ts_exitclass(thread *@var{t})
+Called when the thread terminates or exits the class. The
+@code{tsproc} structure is removed from the list of processes.
+
+@item ts_tick(thread *@var{t})
+Called once every 10@dmn{ms} with a pointer to the currently running
+thread. The @code{ts_timeleft} variable of the running thread is
+decremented by one. If @code{ts_timeleft} reaches zero, then its new
+priority becomes its old priority's @code{ts_tqexp}, its timeslice is
+reset, and @code{ts_dispwait} is zeroed. The thread is then added to
+the back of the appropriate priority queue and a new job is scheduled.
+(A sketch of this routine, together with @code{ts_update}, appears
+after this list.)
+
+@item ts_update()
+Called once a second to check each job in the class for starvation.
+The routine increments the @code{ts_dispwait} counter of every process
+in the class (even those that are blocked) by one. If the job is on
+the ready queue (i.e.@: the job is neither running nor blocked) and
+its @code{ts_dispwait} exceeds @code{ts_maxwait}, then its priority
+and @code{ts_dispwait} (but not @code{ts_timeleft}) are reset. This
+may involve rearranging the priority queues.
+
+@item ts_sleep(thread *@var{t})
+Called when the thread blocks (e.g.@: due to I/O or synchronization).
+The TS routine does not need to do anything in this circumstance, but
+the dispatcher, or class-independent routines, must add the thread to
+the blocked queue and schedule a new thread.
+
+@item ts_wakeup(thread *@var{t})
+Called when the blocked thread becomes ready. If @code{ts_dispwait}
+for the process is greater than its priority's @code{ts_maxwait}, then
+its priority is set to @code{ts_slpret}, its timeslice
+(@code{ts_timeleft}) is reset, and @code{ts_dispwait} is zeroed. If
+the priority of this job is higher than that of the running job, it
+preempts the currently running job. Otherwise the newly awoken job is
+added to the back of its priority queue.
+
+@item ts_preempt(thread *@var{t})
+Called when the thread is preempted by a higher priority thread. The
+preempted thread is added to the front of its priority queue.
+@end table
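+
+To make the interplay between these routines and the dispatch table
+concrete, here is a rough sketch of @code{ts_tick} and
+@code{ts_update}. It is written in the style of the code you might
+write, not taken from the Solaris source: the helpers
+@code{requeue_at_priority()} and @code{thread_is_ready()} and the
+list head @code{tsproc_list} are hypothetical, the sketches take a
+@code{tsproc_t} directly rather than a thread pointer for simplicity,
+and they reuse the illustrative @code{ts_dptbl} sketched earlier.
+
+@example
+/* Rough sketches only; requeue_at_priority(), thread_is_ready(), and
+   tsproc_list are hypothetical names. */
+
+/* Called every 10 ms for the currently running thread's tsproc. */
+static void
+ts_tick (tsproc_t *ts)
+@{
+  if (--ts->ts_timeleft <= 0)
+    @{
+      ts->ts_pri = ts_dptbl[ts->ts_pri].ts_tqexp;   /* drop priority */
+      ts->ts_timeleft = ts_dptbl[ts->ts_pri].ts_quantum / 10;
+                                    /* new quantum, ms -> 10 ms ticks */
+      ts->ts_dispwait = 0;
+      requeue_at_priority (ts, ts->ts_pri);  /* back of the new queue */
+      /* ...the class-independent code then dispatches a new job. */
+    @}
+@}
+
+/* Called once a second to apply the starvation boost. */
+static void
+ts_update (void)
+@{
+  tsproc_t *ts;
+
+  for (ts = tsproc_list; ts != NULL; ts = ts->ts_next)
+    @{
+      ts->ts_dispwait++;                     /* even if blocked */
+      if (thread_is_ready (ts->ts_tp)    /* neither running nor blocked */
+          && ts->ts_dispwait > ts_dptbl[ts->ts_pri].ts_maxwait)
+        @{
+          ts->ts_pri = ts_dptbl[ts->ts_pri].ts_lwait;  /* boost */
+          ts->ts_dispwait = 0;               /* ts_timeleft untouched */
+          requeue_at_priority (ts, ts->ts_pri);
+        @}
+    @}
+@}
+@end example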
+
+@node Fairness
+@subsection Fairness
+
+The Solaris time-sharing scheduler approximates fair allocations by
+decreasing the priority of a job the more that it is scheduled.
+Therefore, a job that is runnable relatively infrequently remains at a
+higher priority and is scheduled over lower priority jobs. However,
+due to the configuration of the default dispatch table (in which the
+starvation interval, @code{ts_maxwait}, is set to zero for every
+priority but the highest), the priority of every process on the ready
+queue is raised once a second, regardless of whether or not the
+process is actually starving. Thus, the allocation history of each
+process is erased every second and compute-bound processes tend to
+acquire more than their fair share of the resources.
+
+This behavior is illustrated in the graph below for three competing
+jobs that relinquish the processor at different rates while waiting
+for I/O to complete: a coarse job that rarely relinquishes the CPU, a
+medium job that does so more frequently, and a fine job that often
+relinquishes the CPU. The graph shows a typical snapshot over a five
+second execution interval. As described, each second the priority of
+all three jobs is raised to level 50 or higher. As a job executes and
+consumes its timeslice, its priority is lowered about ten levels.
+Since the coarse job runs more frequently, it drops in priority at a
+faster rate than the other two jobs.
+
+@image{mlfqs1}
+
+The impact of this policy on the relative execution times of the three
+applications is shown in the next graph below. Because the coarse
+application acquires more CPU time, it finishes its work earlier than
+the other applications, even though all three jobs require the same
+amount of time in a dedicated environment.
+
+@image{mlfqs2}
+
+@node Project Requirements
+@subsection Project Requirements
+
+For your project, you need to implement code that is similar in
+functionality to the Solaris TS scheduler, but your code does not have
+to be structured in the same way. Implement your code in whatever
+manner you find easiest when interfacing with the existing Pintos
+code.
+
+Specifically, you are not required to separate the class-independent
+and class-dependent scheduling functionality. You do not need to
+support multiple scheduling classes. You can implement any routines
+that you feel are necessary, not just the seven functions we
+specifically listed. You can pass any parameters that you find
+helpful.
+
+However, it is important that your approach be table-driven, with a
+user-configurable dispatch table. Your table should be initialized to
+the default values in Solaris 2.6, but you may also want to experiment
+with different configurations. To demonstrate the functionality of
+your scheduler, you may want to print out the change in priorities of
+several different competing processes, as in the first graph above.
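+
+As one illustration of what ``user-configurable'' might mean in
+practice, the short sketch below exposes a setter that a test or
+boot-time code could call to overwrite individual rows before threads
+start competing. It reuses the hypothetical @code{ts_dptbl} and
+@code{struct ts_dispent} sketched earlier, along with Pintos's
+@code{ASSERT}, and is only one of many reasonable designs.
+
+@example
+/* Hypothetical interface for reconfiguring the dispatch table at run
+   time; ts_dptbl and struct ts_dispent are the illustrative
+   declarations sketched earlier. */
+void
+ts_set_dispent (int pri, const struct ts_dispent *row)
+@{
+  ASSERT (pri >= 0 && pri < 60);
+  ts_dptbl[pri] = *row;
+@}
+@end example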
+