From 70125b0764332aaada1144956ec1c62d51726d21 Mon Sep 17 00:00:00 2001
From: Ben Pfaff
Date: Sun, 12 Sep 2004 05:45:47 +0000
Subject: [PATCH] Describe multi-level feedback queue scheduler.

---
 doc/Makefile     |  18 ++-
 doc/mlfqs.texi   | 391 +++++++++++++++++++++++++++++++++++++++++++++++
 doc/mlfqs1-1.pts |  28 ++++
 doc/mlfqs1-2.pts |  24 +++
 doc/mlfqs1-3.pts |  24 +++
 doc/mlfqs1.jgr   |  22 +++
 doc/mlfqs1i      |  10 ++
 doc/mlfqs2-1.pts |   3 +
 doc/mlfqs2-2.pts |   8 +
 doc/mlfqs2-3.pts |   7 +
 doc/mlfqs2.jgr   |  22 +++
 doc/mlfqs2i      |   8 +
 doc/threads.texi |  15 +-
 13 files changed, 571 insertions(+), 9 deletions(-)
 create mode 100644 doc/mlfqs.texi
 create mode 100644 doc/mlfqs1-1.pts
 create mode 100644 doc/mlfqs1-2.pts
 create mode 100644 doc/mlfqs1-3.pts
 create mode 100644 doc/mlfqs1.jgr
 create mode 100755 doc/mlfqs1i
 create mode 100644 doc/mlfqs2-1.pts
 create mode 100644 doc/mlfqs2-2.pts
 create mode 100644 doc/mlfqs2-3.pts
 create mode 100644 doc/mlfqs2.jgr
 create mode 100755 doc/mlfqs2i

diff --git a/doc/Makefile b/doc/Makefile
index 138cf35..f053ce9 100644
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -1,7 +1,19 @@
-TEXIS = projects.texi threads.texi userprog.texi filesys.texi vm.texi
+TEXIS = projects.texi threads.texi mlfqs.texi userprog.texi \
+filesys.texi vm.texi
+
+all: projects.html projects.info
+
+projects.html: $(TEXIS) mlfqs1.png mlfqs2.png
+	texi2html -toc_file=$@ -split=chapter -no-sec_nav -no-menu $<
 
 projects.info: $(TEXIS)
 	makeinfo $<
 
-projects.html: $(TEXIS)
-	texi2html -toc_file=$@ -split=chapter -no-sec_nav -no-menu $<
+%.eps: %.jgr
+	jgraph $< > $@.tmp && mv $@.tmp $@
+
+%.png: %.eps
+	convert $< $@
+
+clean:
+	rm -f *.info *.html *.eps *.png
diff --git a/doc/mlfqs.texi b/doc/mlfqs.texi
new file mode 100644
index 0000000..8d7fc4e
--- /dev/null
+++ b/doc/mlfqs.texi
@@ -0,0 +1,391 @@
+@node Multilevel Feedback Scheduling, , , Project 1--Threads
+@section Multilevel Feedback Scheduling
+
+This section gives a brief overview of the behavior of the Solaris 2.6
+Time-Sharing (TS) scheduler, an example of a Multilevel Feedback Queue
+scheduler.  The information in this handout, in conjunction with that
+given in lecture, should be used to answer Problem 1-4.  The end of
+this document specifies in more detail which aspects of the Solaris
+scheduler you should implement.
+
+The goal of a multilevel feedback queue scheduler is to fairly and
+efficiently schedule a mix of processes with a variety of execution
+characteristics.  By controlling how a process moves between priority
+levels, the scheduler can treat processes with different
+characteristics as appropriate.  Priority-based schedulers attempt to
+provide a compromise between several desirable metrics (e.g.@:
+response time for interactive jobs, throughput for compute-intensive
+jobs, and fair allocations for all jobs).
+
+The queues in the system are ranked according to priority.  Processes
+waiting in higher priority queues are always scheduled over those in
+lower priority queues.  Processes at the same priority are usually
+scheduled in a round-robin fashion.
+
+Such schedulers tend to be preemptive in order to support interactive
+processes.  That is, when a higher priority process becomes runnable,
+it immediately preempts any lower priority process running on the CPU.
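+
+To make this concrete, the following sketch shows in C how a
+dispatcher might pick the next process to run from an array of
+per-priority queues.  This is an illustration for this handout only,
+not Solaris code; the names @code{run_queue} and @code{pick_next} are
+invented here:
+
+@example
+#define PRI_CNT 60                  /* priorities 0 through 59 */
+
+struct process;                     /* defined elsewhere */
+
+struct run_queue
+  @{
+    struct process *head[PRI_CNT];  /* one FIFO list per priority */
+  @};
+
+/* Returns the first process on the highest priority nonempty queue,
+   or a null pointer if every queue is empty.  Taking processes from
+   the front and reinserting them at the back implements round-robin
+   scheduling within each priority level. */
+struct process *
+pick_next (struct run_queue *rq)
+@{
+  int pri;
+
+  for (pri = PRI_CNT - 1; pri >= 0; pri--)
+    if (rq->head[pri] != NULL)
+      return rq->head[pri];
+  return NULL;
+@}
+@end example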
+
+@menu
+* Scheduling in Solaris::
+* Class Independent Functionality::
+* Time-Sharing Scheduling Class::
+* Dispatch Table::
+* Implementation::
+* Fairness::
+* Project Requirements::
+@end menu
+
+@node Scheduling in Solaris
+@subsection Scheduling in Solaris
+
+The Solaris operating system is based on Unix System V Release 4
+(SVR4).  Scheduling in Solaris, as in all SVR4-based schedulers, is
+performed at two levels: class-independent routines and
+class-dependent routines.  Class-independent routines are those that
+are responsible for dispatching and preempting processes (the
+low-level mechanisms).  Class-dependent routines are those that are
+responsible for setting the priority of each of their processes (the
+high-level policy).
+
+By default, Solaris supports three scheduling classes: time-sharing
+(TS), real-time (RT), and system (SYS).  Users with root privileges
+can easily implement and add new scheduling classes by adhering to a
+predefined interface.  Each scheduling class gives each of its
+processes a priority, the range of which is shown below:
+
+@multitable {Scheduling Class} {Priorities}
+@item Scheduling Class @tab Priorities
+@item Real-Time @tab 100-159
+@item System @tab 60-99
+@item Time-Sharing @tab 0-59
+@end multitable
+
+As long as a user has the correct privileges, he or she can submit
+jobs to any scheduling class.  By default, jobs are executed in the
+same scheduling class as the parent process that forked the job.
+Since your shell is running in the time-sharing class, all of your
+jobs run by default in the time-sharing class.
+
+See the man pages for @command{priocntl} on any machine running
+Solaris for information on how to submit jobs to different scheduling
+classes.  However, since you probably don't have root privileges on
+your machine, you won't be able to do much.
+
+To see the scheduling class of each process in the system, run
+@samp{ps -edafc}.  (@samp{-c} is the flag that shows the scheduling
+class.)  The fourth column shows the scheduling class of the running
+process.  Most jobs will be running in the TS class, with a few (owned
+by root) running in the SYS class.
+
+@example
+elaine1:~> ps -edafc
+     UID   PID  PPID  CLS PRI    STIME TTY      TIME CMD
+    root     0     0  SYS  96   Aug 01 ?        0:00 sched
+    root     1     0   TS  58   Aug 01 ?        1:06 /etc/init -
+    root     2     0  SYS  98   Aug 01 ?        0:02 pageout
+    root     3     0  SYS  60   Aug 01 ?       15:22 fsflush
+    root   245   239   TS  59   Aug 01 ?        0:00 ttymon
+    root   181     1   TS  48   Aug 01 ?        0:00 sendmail -q15m
+    root   239     1   TS  59   Aug 01 ?        0:00 sac -t 300
+    root    96     1   TS  58   Aug 01 ?        0:00 rpcbind
+    root   125     1   TS  59   Aug 01 ?        0:32 syslogd
+@end example
+
+In this document, we only discuss the Solaris time-sharing (TS)
+class.  Note the priorities of each of the processes, as listed in the
+fifth column.
+
+@node Class Independent Functionality
+@subsection Class Independent Functionality
+
+The class-independent routines arbitrate across the scheduling
+classes.  This involves three basic responsibilities.
+
+@itemize @bullet
+@item
+The process with the highest priority must be dispatched, and the
+state of the preempted process saved.
+
+@item
+The class-independent functions must notify the class-dependent
+routines when the state of one of their processes changes (for
+example, at creation and termination, when a process changes from
+blocked to runnable, or runnable to blocked, and when a 10@dmn{ms}
+timer expires).
+
+@item
+Processes must be moved between priority queues in the
+class-independent data structures, as directed by their scheduling
+classes, and must be moved between blocked and ready queues.
+@end itemize
+
+@node Time-Sharing Scheduling Class
+@subsection Time-Sharing Scheduling Class
+
+The time-sharing scheduler in Solaris is an example of a multi-level
+feedback queue scheduler.  A job begins at priority 29.  Compute-bound
+jobs then filter down to the lower priorities, where they are
+scheduled less frequently, but for longer time-slices.  Interactive
+jobs propagate to the higher priorities, where they are scheduled
+whenever they have work to perform, on the assumption that they will
+soon relinquish the processor again.  In the TS scheduler, the
+priority of a process is lowered after it consumes its allocated
+time-slice.  Its priority is raised if it has not consumed its
+time-slice before a starvation interval expires.
+
+@node Dispatch Table
+@subsection Dispatch Table
+
+The durations of the time-slices, the changes in priorities, and the
+starvation interval are specified in a user-tunable dispatch table.
+The system administrator (or anyone with root privileges) can change
+the values in this table, thus configuring how the time-sharing
+scheduler manages its jobs.  While this has the noble intention of
+allowing different systems to tune the scheduler to better handle
+their workloads, in reality no one really knows how to configure these
+tables well.  Therefore, we will focus on the default dispatch table.
+
+To see how this table is configured in your system, run
+@samp{dispadmin -c TS -g}.  You should see something like the table
+shown below.  Looking at the man pages for @command{dispadmin} and
+@command{ts_dptbl} may also be helpful.
+
+@multitable {@code{ts_quantum}} {@code{ts_tqexp}} {@code{ts_slpret}} {@code{ts_maxwait}} {@code{ts_lwait}} {priority}
+@item @code{ts_quantum} @tab @code{ts_tqexp} @tab @code{ts_slpret} @tab @code{ts_maxwait} @tab @code{ts_lwait} @tab priority
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 0
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 1
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 2
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 3
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 4
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 5
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 6
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 7
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 8
+@item 200 @tab 0 @tab 50 @tab 0 @tab 50 @tab 9
+@item 160 @tab 0 @tab 51 @tab 0 @tab 51 @tab 10
+@item 160 @tab 1 @tab 51 @tab 0 @tab 51 @tab 11
+@item 160 @tab 2 @tab 51 @tab 0 @tab 51 @tab 12
+@item 160 @tab 3 @tab 51 @tab 0 @tab 51 @tab 13
+@item 160 @tab 4 @tab 51 @tab 0 @tab 51 @tab 14
+@item 160 @tab 5 @tab 51 @tab 0 @tab 51 @tab 15
+@item 160 @tab 6 @tab 51 @tab 0 @tab 51 @tab 16
+@item 160 @tab 7 @tab 51 @tab 0 @tab 51 @tab 17
+@item 160 @tab 8 @tab 51 @tab 0 @tab 51 @tab 18
+@item 160 @tab 9 @tab 51 @tab 0 @tab 51 @tab 19
+@item 120 @tab 10 @tab 52 @tab 0 @tab 52 @tab 20
+@item 120 @tab 11 @tab 52 @tab 0 @tab 52 @tab 21
+@item 120 @tab 12 @tab 52 @tab 0 @tab 52 @tab 22
+@item 120 @tab 13 @tab 52 @tab 0 @tab 52 @tab 23
+@item 120 @tab 14 @tab 52 @tab 0 @tab 52 @tab 24
+@item 120 @tab 15 @tab 52 @tab 0 @tab 52 @tab 25
+@item 120 @tab 16 @tab 52 @tab 0 @tab 52 @tab 26
+@item 120 @tab 17 @tab 52 @tab 0 @tab 52 @tab 27
+@item 120 @tab 18 @tab 52 @tab 0 @tab 52 @tab 28
+@item 120 @tab 19 @tab 52 @tab 0 @tab 52 @tab 29
+@item 80 @tab 20 @tab 53 @tab 0 @tab 53 @tab 30
+@item 80 @tab 21 @tab 53 @tab 0 @tab 53 @tab 31
+@item 80 @tab 22 @tab 53 @tab 0 @tab 53 @tab 32
+@item 80 @tab 23 @tab 53 @tab 0 @tab 53 @tab 33
+@item 80 @tab 24 @tab 53 @tab 0 @tab 53 @tab 34
+@item 80 @tab 25 @tab 54 @tab 0 @tab 54 @tab 35
+@item 80 @tab 26 @tab 54 @tab 0 @tab 54 @tab 36
+@item 80 @tab 27 @tab 54 @tab 0 @tab 54 @tab 37
+@item 80 @tab 28 @tab 54 @tab 0 @tab 54 @tab 38
+@item 80 @tab 29 @tab 54 @tab 0 @tab 54 @tab 39
+@item 40 @tab 30 @tab 55 @tab 0 @tab 55 @tab 40
+@item 40 @tab 31 @tab 55 @tab 0 @tab 55 @tab 41
+@item 40 @tab 32 @tab 55 @tab 0 @tab 55 @tab 42
+@item 40 @tab 33 @tab 55 @tab 0 @tab 55 @tab 43
+@item 40 @tab 34 @tab 55 @tab 0 @tab 55 @tab 44
+@item 40 @tab 35 @tab 56 @tab 0 @tab 56 @tab 45
+@item 40 @tab 36 @tab 57 @tab 0 @tab 57 @tab 46
+@item 40 @tab 37 @tab 58 @tab 0 @tab 58 @tab 47
+@item 40 @tab 38 @tab 58 @tab 0 @tab 58 @tab 48
+@item 40 @tab 39 @tab 58 @tab 0 @tab 59 @tab 49
+@item 40 @tab 40 @tab 58 @tab 0 @tab 59 @tab 50
+@item 40 @tab 41 @tab 58 @tab 0 @tab 59 @tab 51
+@item 40 @tab 42 @tab 58 @tab 0 @tab 59 @tab 52
+@item 40 @tab 43 @tab 58 @tab 0 @tab 59 @tab 53
+@item 40 @tab 44 @tab 58 @tab 0 @tab 59 @tab 54
+@item 40 @tab 45 @tab 58 @tab 0 @tab 59 @tab 55
+@item 40 @tab 46 @tab 58 @tab 0 @tab 59 @tab 56
+@item 40 @tab 47 @tab 58 @tab 0 @tab 59 @tab 57
+@item 40 @tab 48 @tab 58 @tab 0 @tab 59 @tab 58
+@item 20 @tab 49 @tab 59 @tab 32000 @tab 59 @tab 59
+@end multitable
+
+You will see one row for every priority in the scheduling class, from
+0 to 59.  For each priority, there are five columns:
+
+@table @code
+@item ts_quantum
+Length of the time-slice.  In the actual table, this value is
+specified in 10@dmn{ms} clock ticks, but in the output from running
+@command{dispadmin}, the value is specified in units of 1@dmn{ms}.
+
+@item ts_tqexp
+Priority level of the new queue on which to place a process if it
+exceeds its time quantum.  Normally this field links to a lower
+priority time-sharing level.
+
+@item ts_slpret
+The new, generally increased, priority to adopt when the job returns
+from sleeping (i.e.@: from the blocked queue) if @code{ts_dispwait}
+exceeds @code{ts_maxwait}.
+
+@item ts_maxwait
+Each time a process is placed back on the dispatcher queue after its
+time quantum expires or when it is awakened, but not when it is
+preempted by a higher-priority process, a per-process counter named
+@code{ts_dispwait} is zeroed.  This counter is incremented once per
+second.  If a process's @code{ts_dispwait} exceeds its priority's
+@code{ts_maxwait}, then the process's priority is changed to
+@code{ts_lwait}.  This prevents starvation.
+
+@item ts_lwait
+The new, generally increased, priority to adopt if the starvation
+timer expires before the job consumes its time-slice (i.e.@: if
+@code{ts_dispwait} exceeds @code{ts_maxwait}).
+@end table
+
+In this table, the priority of jobs ranges from a high of 59 down to
+0.  Time-slices begin at 20@dmn{ms} at the highest priority and
+gradually increase in duration up to 200@dmn{ms} at the lowest
+priorities.  Generally, the priority of a process decreases by 10
+levels after it consumes its time-slice.  The priority of a process is
+increased to 50 or above when the starvation timer expires.
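+
+As a concrete reference for a table-driven implementation, the default
+table above can be written down in C as an array indexed by priority.
+This is only a sketch for this handout, not Solaris source (see the
+@command{ts_dptbl} man page for the real layout), and it uses the
+1@dmn{ms} units printed by @command{dispadmin}:
+
+@example
+struct ts_dispatch_entry
+  @{
+    int ts_quantum;    /* time-slice, in 1 ms units */
+    int ts_tqexp;      /* new priority when quantum expires */
+    int ts_slpret;     /* new priority on wakeup after starving */
+    int ts_maxwait;    /* seconds of waiting before a job starves */
+    int ts_lwait;      /* new priority when starvation timer fires */
+  @};
+
+/* Default dispatch table, one row per priority (abbreviated). */
+static const struct ts_dispatch_entry ts_dptbl[60] =
+  @{
+    @{200,  0, 50,     0, 50@},   /* priority 0 */
+    /* ... priorities 1 through 58, as tabulated above ... */
+    @{ 20, 49, 59, 32000, 59@},   /* priority 59 */
+  @};
+@end example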
+
+@node Implementation
+@subsection Implementation
+
+For each job in the TS class, the following data structure is
+maintained (we've removed a few of the fields for simplicity):
+
+@example
+/*
+ * time-sharing class specific thread structure
+ */
+typedef struct tsproc @{
+        long ts_timeleft;       /* time remaining in quantum */
+        long ts_dispwait;       /* number of seconds since */
+                                /* start of quantum (not reset */
+                                /* upon preemption) */
+        pri_t ts_pri;           /* priority (0-59) */
+        kthread_t *ts_tp;       /* pointer to thread */
+        struct tsproc *ts_next; /* link to next tsproc on list */
+        struct tsproc *ts_prev; /* link to previous tsproc */
+@} tsproc_t;
+@end example
+
+The @code{kthread_t} structure tracks the necessary information to
+context-switch to and from this process.  This structure is kept
+separate from the time-sharing class in order to separate the
+mechanisms of the dispatcher from the policies of the scheduler.
+
+There are seven interesting routines in the TS class:
+
+@table @code
+@item ts_enterclass(thread *@var{t})
+Called when a new thread is added to the TS class.  It initializes a
+@code{tsproc} structure for this process and adds it to the list of
+processes.
+
+@item ts_exitclass(thread *@var{t})
+Called when the thread terminates or exits the class.  The
+@code{tsproc} structure is removed from the list of processes.
+
+@item ts_tick(thread *@var{t})
+Called once every 10@dmn{ms} with a pointer to the currently running
+thread.  The @code{ts_timeleft} variable of the running thread is
+decremented by one.  If @code{ts_timeleft} reaches zero, then its new
+priority becomes its old priority's @code{ts_tqexp}, its time-slice is
+reset, and @code{ts_dispwait} is zeroed.  The thread is then added to
+the back of the appropriate priority queue and a new job is scheduled.
+
+@item ts_update()
+Called once a second to check the starvation qualities of each job.
+The routine increments the @code{ts_dispwait} counter of every process
+in the class (even those that are blocked) by one.  If the job is on
+the ready queue (i.e.@: the job is neither running nor blocked) and
+its @code{ts_dispwait} exceeds @code{ts_maxwait}, then its priority
+and @code{ts_dispwait} (but not @code{ts_timeleft}) are reset.  This
+may involve rearranging the priority queues.
+
+@item ts_sleep(thread *@var{t})
+Called when the thread blocks (e.g.@: due to I/O or synchronization).
+The TS routine does not need to do anything in these circumstances,
+but the dispatcher, or class-independent routines, must add the thread
+to the blocked queue and schedule a new thread.
+
+@item ts_wakeup(thread *@var{t})
+Called when the blocked thread becomes ready.  If @code{ts_dispwait}
+for the process is greater than its priority's @code{ts_maxwait}, then
+its priority is set to @code{ts_slpret}, its time-slice
+(@code{ts_timeleft}) is reset, and @code{ts_dispwait} is zeroed.  If
+the priority of this job is higher than that of the running job, it
+preempts the currently running job.  Otherwise the newly awoken job is
+added to the back of its priority queue.
+
+@item ts_preempt(thread *@var{t})
+Called when the thread is preempted by a higher priority thread.  The
+preempted thread is added to the front of its priority queue.
+@end table
+
+@node Fairness
+@subsection Fairness
+
+The Solaris time-sharing scheduler approximates fair allocations by
+decreasing the priority of a job the more it is scheduled.
+Therefore, a job that is runnable relatively infrequently remains at a
+higher priority and is scheduled over lower priority jobs.  However,
+due to the configuration of the default dispatch table (in which the
+starvation interval is set to zero), the priority of every process is
+raised once a second, regardless of whether or not it is actually
+starving.  Thus, the allocation history of each process is erased
+every second and compute-bound processes tend to acquire more than
+their fair share of the resources.
+
+This behavior is illustrated in the graph below for three competing
+jobs that relinquish the processor at different rates while waiting
+for I/O to complete: a coarse job that rarely relinquishes the CPU, a
+medium job that does so more frequently, and a fine job that often
+relinquishes the CPU.  The graph shows a typical snapshot over a
+five-second execution interval.  As described, each second the
+priority of all three jobs is raised to level 50 or higher.  As a job
+executes and consumes its time-slice, its priority is lowered about
+ten levels.  Since the coarse job runs more frequently, it drops in
+priority at a faster rate than the other two jobs.
+
+@image{mlfqs1}
+
+The impact of this policy on the relative execution times of the three
+applications is shown in the next graph below.  Because the coarse
+application acquires more CPU time, it finishes its work earlier than
+the other applications, even though all three jobs require the same
+amount of time in a dedicated environment.
+
+@image{mlfqs2}
+
+@node Project Requirements
+@subsection Project Requirements
+
+For your project, you need to implement code that is similar in
+functionality to the Solaris TS scheduler, but your code does not have
+to be structured in the same way.  Implement your code in whatever
+manner you find easiest for interfacing with the existing Pintos code.
+
+Specifically, you are not required to separate the class-independent
+and class-dependent scheduling functionality.  You do not need to
+support multiple scheduling classes.  You can implement any routines
+that you feel are necessary, not just the seven functions we
+specifically listed.  You can pass any parameters that you find
+helpful.
+
+However, it is important that your approach be table-driven, with a
+user-configurable dispatch table.  Your table should be initialized to
+the default values in Solaris 2.6, but you may also want to experiment
+with different configurations.  To demonstrate the functionality of
+your scheduler, you may want to print out the change in priorities of
+several different competing processes, as in the first graph above.
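+
+As an example of what table-driven means in practice, here is a
+sketch in C of a quantum-expiration routine in the style of
+@code{ts_tick}.  It is not the Solaris implementation, and its
+interfaces are not part of Pintos: @code{dispatch_table} is an array
+of entries like the one sketched in the Dispatch Table section (here
+assumed to store quanta in timer ticks), and @code{requeue()} and
+@code{reschedule()} are hypothetical helpers standing in for whatever
+your code provides:
+
+@example
+/* Called by the timer interrupt handler once per 10 ms tick,
+   with T pointing at the state of the running thread. */
+void
+ts_tick (tsproc_t *t)
+@{
+  if (--t->ts_timeleft <= 0)
+    @{
+      /* Quantum expired: move to the old priority's ts_tqexp,
+         grant the new priority's full quantum, and restart the
+         starvation timer. */
+      t->ts_pri = dispatch_table[t->ts_pri].ts_tqexp;
+      t->ts_timeleft = dispatch_table[t->ts_pri].ts_quantum;
+      t->ts_dispwait = 0;
+
+      requeue (t);        /* back of the new priority's queue */
+      reschedule ();      /* dispatch the next thread */
+    @}
+@}
+@end example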
+
diff --git a/doc/mlfqs1-1.pts b/doc/mlfqs1-1.pts
new file mode 100644
index 0000000..67107c7
--- /dev/null
+++ b/doc/mlfqs1-1.pts
@@ -0,0 +1,28 @@
+206 155
+210 265
+213 236
+217 209
+224 182
+237 155
+253 130
+258 260
+261 233
+264 206
+279 180
+292 153
+306 264
+315 235
+316 209
+323 181
+333 155
+347 129
+355 216
+359 233
+364 206
+376 180
+389 152
+402 263
+408 235
+415 209
+434 182
+445 155
diff --git a/doc/mlfqs1-2.pts b/doc/mlfqs1-2.pts
new file mode 100644
index 0000000..d2045bb
--- /dev/null
+++ b/doc/mlfqs1-2.pts
@@ -0,0 +1,24 @@
+206 155
+209 263
+222 236
+228 209
+244 182
+261 265
+267 212
+283 185
+302 158
+306 263
+317 236
+321 209
+336 182
+351 155
+355 263
+361 236
+364 209
+376 181
+397 155
+403 263
+413 236
+422 209
+434 182
+445 182
diff --git a/doc/mlfqs1-3.pts b/doc/mlfqs1-3.pts
new file mode 100644
index 0000000..7584d91
--- /dev/null
+++ b/doc/mlfqs1-3.pts
@@ -0,0 +1,24 @@
+206 182
+210 265
+225 238
+235 212
+259 185
+263 265
+267 238
+279 212
+300 185
+307 159
+310 262
+325 235
+336 210
+352 182
+355 262
+362 237
+366 212
+378 185
+400 158
+402 262
+413 235
+426 209
+442 183
+445 183
diff --git a/doc/mlfqs1.jgr b/doc/mlfqs1.jgr
new file mode 100644
index 0000000..5ae28e7
--- /dev/null
+++ b/doc/mlfqs1.jgr
@@ -0,0 +1,22 @@
+newgraph
+title : Priority of 3 Competing Jobs over Time (Default Scheduler)
+xaxis
+  label : Elapsed Time (s)
+  min 23
+  max 28.3
+yaxis
+  label : Process Priority
+  min 0
+  max 60
+newline
+  linetype solid
+  label : Coarse
+  pts shell : ./mlfqs1i < mlfqs1-1.pts
+newline
+  linetype dashed
+  label : Medium
+  pts shell : ./mlfqs1i < mlfqs1-2.pts
+newline
+  linetype dotted
+  label : Fine
+  pts shell : ./mlfqs1i < mlfqs1-3.pts
diff --git a/doc/mlfqs1i b/doc/mlfqs1i
new file mode 100755
index 0000000..83b9af2
--- /dev/null
+++ b/doc/mlfqs1i
@@ -0,0 +1,10 @@
+#! /usr/bin/perl
+my ($px, $py);
+while (<>) {
+    my ($ix, $iy) = /^(\d+) (\d+)$/ or die;
+    my ($x) = ($ix - 206) / (436 - 206) * (28 - 23) + 23;
+    my ($y) = ($iy - 126) / (287 - 126) * (60 - 0) + 0;
+    printf "%.2f %.2f\n", $x, $py if defined $py;
+    printf "%.2f %.2f\n", $x, $y;
+    ($px, $py) = ($x, $y);
+}
diff --git a/doc/mlfqs2-1.pts b/doc/mlfqs2-1.pts
new file mode 100644
index 0000000..e42d0ae
--- /dev/null
+++ b/doc/mlfqs2-1.pts
@@ -0,0 +1,3 @@
+267 511
+334 562
+395 611
diff --git a/doc/mlfqs2-2.pts b/doc/mlfqs2-2.pts
new file mode 100644
index 0000000..a744da2
--- /dev/null
+++ b/doc/mlfqs2-2.pts
@@ -0,0 +1,8 @@
+268 492
+324 522
+388 555
+397 559
+411 573
+422 585
+440 601
+451 612
diff --git a/doc/mlfqs2-3.pts b/doc/mlfqs2-3.pts
new file mode 100644
index 0000000..c10cb62
--- /dev/null
+++ b/doc/mlfqs2-3.pts
@@ -0,0 +1,7 @@
+290 500
+359 534
+398 554
+416 573
+443 594
+453 601
+460 613
diff --git a/doc/mlfqs2.jgr b/doc/mlfqs2.jgr
new file mode 100644
index 0000000..d84309f
--- /dev/null
+++ b/doc/mlfqs2.jgr
@@ -0,0 +1,22 @@
+newgraph
+title : Fairness with Default Time-sharing Scheduler
+xaxis
+  label : Elapsed Time (s)
+  min 0
+  max 65
+yaxis
+  label : Accumulated CPU Time (s)
+  min 0
+  max 25
+newline
+  linetype solid
+  label : Coarse
+  pts shell : ./mlfqs2i < mlfqs2-1.pts
+newline
+  linetype dashed
+  label : Medium
+  pts shell : ./mlfqs2i < mlfqs2-2.pts
+newline
+  linetype dotted
+  label : Fine
+  pts shell : ./mlfqs2i < mlfqs2-3.pts
diff --git a/doc/mlfqs2i b/doc/mlfqs2i
new file mode 100755
index 0000000..33ef9b1
--- /dev/null
+++ b/doc/mlfqs2i
@@ -0,0 +1,8 @@
+#! /usr/bin/perl
+print "0 0\n";
+while (<>) {
+    my ($ix, $iy) = /^(\d+) (\d+)$/ or die;
+    my ($x) = ($ix - 201) / (443 - 201) * 60;
+    my ($y) = ($iy - 456) / (645 - 456) * 25;
+    printf "%.2f %.2f\n", $x, $y;
+}
diff --git a/doc/threads.texi b/doc/threads.texi
index 4341cac..61396b8 100644
--- a/doc/threads.texi
+++ b/doc/threads.texi
@@ -19,7 +19,8 @@ side. Compilation should be done in the @file{threads} directory.
 * Problem 1-2 Join::
 * Problem 1-3 Priority Scheduling::
 * Problem 1-4 Advanced Scheduler::
-* Threads FAQ::
+* Threads FAQ::
+* Multilevel Feedback Scheduling::
 @end menu
 
 @node Understanding Threads, Debugging versus Testing, Project 1--Threads, Project 1--Threads
@@ -149,7 +150,7 @@ implementation of Join works correctly, since if it's broken, you will
 need to fix it for future assignments.  The other parts can be turned
 off in the future if you find you can't make them work quite right.
 
-Also keep in mind that Problem 4 (the MFQS) builds on the features you
+Also keep in mind that Problem 4 (the MLFQS) builds on the features you
 implement in Problem 3, so to avoid unnecessary code duplication, it
 would be a good idea to divide up the work among your team members
 such that you have Problem 3 fully working before you begin to tackle
@@ -264,10 +265,10 @@ However, you do need to implement priority scheduling in all cases.
 @node Problem 1-4 Advanced Scheduler, Threads FAQ, Problem 1-3 Priority Scheduling, Project 1--Threads
 @section Problem 1-4 Advanced Scheduler
 
-Implement Solaris's multilevel feedback queue scheduler (MFQS), as
-explained in this @uref{mlfqs.pdf, , PDF} or @uref{mlfqs.ps, ,
-PostScript} file, to reduce the average response time for running jobs
-on your system.
+Implement Solaris's multilevel feedback queue scheduler (MLFQS) to
+reduce the average response time for running jobs on your system.
+@xref{Multilevel Feedback Scheduling}, for a detailed description of
+the MLFQS requirements.
 
 Demonstrate that your scheduling algorithm reduces response time
 relative to the original Pintos scheduling algorithm (round robin) for
@@ -573,3 +574,5 @@ However, you are free to do so.
 No.  Hard-coding the dispatch table values is fine.
 @end enumerate
 @end enumerate
+
+@include mlfqs.texi
-- 
2.30.2