Sunday, February 14, 2010



Process Scheduling [52]



summary
• only one process at a time is running on the CPU
• process gives up CPU:
• if it starts waiting for an event
• otherwise: other processes need fair access
• OS schedules which ready process to run next
• time slice or quantum for each process
• scheduling algorithms:
• different goals
• affect performance


Scheduling: Definitions [53]


long-term scheduler
• job scheduler
• which process on disk should be given memory?
• result: new process in ready queue
• important in batch systems
• many processes in memory IMPLIES high degree of multiprogramming


short-term scheduler
• CPU scheduler
• which process in ready queue should be given CPU
• result: new process on CPU


Scheduling: Definitions [54]


CPU-bound
• most of its time doing computation - little I/O


I/O-bound
• most of its time doing I/O - little computation


multilevel scheduling
• classified into different groups
• foreground (interactive) vs. background (batch)
• each group has its own ready queue


Performance: Definitions [55]


utilization
• percentage of time that the CPU is busy.
• if not busy, ready queue must be empty
• CPU actually executes NULL process
• goal: keep the CPU busy


throughput
• if busy, then work is being done
• number of processes completed per second


turnaround
• total time to complete a process
• includes waiting in the ready queue
• executing on the CPU
• waiting for I/O
• goal: fast turnaround


Performance: Definitions [56]


response
• time waiting in the ready queue and
• executing on CPU until some output produced
• average is across all output events
• goal: fast response time


waiting
• sum of periods spent waiting in ready queue
• average is across all visits to ready queue
• goal: short waiting time


important
• scheduler has a direct effect on waiting time
• decides which process in queue gets to run next
• remaining processes must then wait longer
• OS cannot control code, amount of I/O, etc.


Performance: Summary [57]
• UTILIZATION: CPU %busy
• THROUGHPUT: jobs/sec
• WAITING: sec/job
• RESPONSE: sec/job (usually in time-share systems)
• TURNAROUND: sec/job (usually in batch systems)





CPU Burst [58]


CPU burst
• cycle of CPU burst, I/O wait, CPU burst, ...
• program and data determine length of burst
• scheduler may interrupt a burst
• but does not affect the full length



int i, j, n, a, b, x = 0;
scanf("%d %d %d", &n, &a, &b);      /* I/O wait */
for (i = 1; i <= n; i++)            /* CPU burst */
    x = x + a*b;
printf("%d\n", x);                  /* I/O wait */
for (i = 1; i <= n; i++)            /* longer CPU burst: n*n additions */
    for (j = 1; j <= n; j++)
        x = x + a*b;
printf("%d\n", x);                  /* I/O wait */


Scheduling: FCFS [59]
• First-Come, First-Served is simplest scheduling algorithm
• ready queue is a FIFO queue: First-In, First-Out
• longest waiting process at the front (head) of queue
• new ready processes join the rear (tail)
• nonpreemptive: executes until voluntarily gives up CPU
• finished or waits for some event
• problem:
• CPU-bound process may require a long CPU burst
• other processes, with very short CPU bursts, wait in queue
• reduces CPU and I/O device utilization
• it would be better if the shorter processes went first


Scheduling: FCFS [60]

• assume processes arrive in this order: P1, P2, P3
• nonpreemptive scheduling
• average waiting time: (0+24+27)/3=17 ms
PID Burst
P1 24
P2 3
P3 3
Gantt chart


| P1: 0-24 | P2: 24-27 | P3: 27-30 |

• assume processes arrive in this order: P2, P3, P1
• average waiting time: (6+0+3)/3=3 ms

PID Burst
P2 3
P3 3
P1 24
Gantt chart


| P2: 0-3 | P3: 3-6 | P1: 6-30 |

• in general, FCFS average waiting time is not minimal
• in general, better to process shortest jobs first
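
A minimal sketch in C (my own illustration, not from the slides) of the FCFS arithmetic above: because the ready queue is a plain FIFO, each process waits exactly for the sum of the bursts queued ahead of it. The function name fcfs_avg_wait and the arrays are illustrative; the burst values are the ones from the two arrival orders shown.

#include <stdio.h>

/* FCFS: waiting time of a process = sum of the bursts queued before it */
static double fcfs_avg_wait(const int burst[], int n)
{
    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;   /* this process waits for everyone ahead of it */
        wait += burst[i];     /* later processes also wait for this burst */
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
    printf("P1,P2,P3: %.1f ms\n", fcfs_avg_wait(order1, 3));   /* 17.0 */
    printf("P2,P3,P1: %.1f ms\n", fcfs_avg_wait(order2, 3));   /* 3.0 */
    return 0;
}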


Scheduling: Round Robin (RR) [61]
• similar to FCFS, but preemption to switch between processes
• time quantum (time slice) is a small unit of time (10 to 100 ms)
• process is executed on the CPU for at most one time quantum
• implemented by using the ready queue as a circular queue
• head process gets the CPU
• uses less than a time quantum IMPLIES gives up the CPU voluntarily
• uses full time quantum IMPLIES timer will cause an interrupt
• context switch will be executed
• process will be put at the tail of queue
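
The circular-queue behaviour can be sketched in a few lines of C (an illustration, not VOS code). It replays the quantum-4 example from the next slide, assuming all three processes are in the ready queue at time 0; the names and the queue size are arbitrary.

#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void)
{
    int burst[N]     = {24, 3, 3};   /* CPU bursts of P1, P2, P3 */
    int remaining[N] = {24, 3, 3};
    int finish[N]    = {0, 0, 0};
    int queue[64], head = 0, tail = 0, time = 0;

    for (int i = 0; i < N; i++)
        queue[tail++] = i;                        /* initial FIFO order */

    while (head < tail) {
        int p = queue[head++];                    /* head process gets the CPU */
        int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += run;                              /* runs at most one quantum */
        remaining[p] -= run;
        if (remaining[p] > 0)
            queue[tail++] = p;                    /* preempted: back to the tail */
        else
            finish[p] = time;                     /* finished: record the time */
    }

    int total = 0;                                /* wait = turnaround - burst */
    for (int i = 0; i < N; i++)
        total += finish[i] - burst[i];
    printf("average wait: %.2f ms\n", (double)total / N);   /* 5.67 (slide: 5.66) */
    return 0;
}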


Scheduling: RR [62]

• assume processes arrive in this order: P1, P2, P3
• preemptive scheduling
• time quantum: 4 ms
• P1 uses a full time quantum; P2, P3 use only a part of a quantum
• P1 waits 0+6=6; P2 waits 4; P3 waits 7
• average waiting time: (6+4+7)/3=5.66 ms

PID Burst
P1 24
P2 3
P3 3
Gantt chart


| P1: 0-4 | P2: 4-7 | P3: 7-10 | P1: 10-14 | P1: 14-18 | P1: 18-22 | P1: 22-26 | P1: 26-30 |

• very large time quantum IMPLIES RR = FCFS
• very small time quantum IMPLIES context switch is too much overhead
• quantum approximately CPU burst IMPLIES better turnaround
• rule of thumb: about 80% of CPU bursts should be shorter than one time quantum


Scheduling: Shortest-Job-First (SJF) [63]
• assume the next burst time of each process is known
• SJF selects process which has the shortest burst time
• optimal algorithm because it has the shortest average waiting time
• impossible to know in advance
• OS knows the past burst times - make a prediction using an average
• nonpreemptive
• or preemptive:
• shortest-remaining-time-first
• interrupts running process if a new process enters the queue
• new process must have shorter burst than remaining time
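
A minimal sketch of the nonpreemptive case, assuming every process is already in the ready queue: selecting the shortest job first amounts to sorting by (predicted) burst and then summing the bursts ahead, just as in FCFS. The burst values are taken from the example on the next slide; the code is illustrative, not from the slides.

#include <stdio.h>
#include <stdlib.h>

static int by_burst(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;     /* ascending burst length */
}

int main(void)
{
    int burst[] = {6, 8, 7, 3};                   /* P1..P4 */
    int n = 4, wait = 0, total = 0;

    qsort(burst, n, sizeof burst[0], by_burst);   /* shortest job first */
    for (int i = 0; i < n; i++) {
        total += wait;                            /* wait = bursts already run */
        wait += burst[i];
    }
    printf("average waiting time: %.2f ms\n", (double)total / n);   /* 7.00 */
    return 0;
}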


Scheduling: SJF [64]

• assume all processes arrive at the same time: P1, P2, P3, P4
• nonpreemptive scheduling
• average waiting time: (3+16+9+0)/4=7 ms

PID Burst
P1 6
P2 8
P3 7
P4 3
Gantt chart


| P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24 |

• SJF is optimal: shortest average waiting time
• but burst times are not known in advance
• next predicted burst time: (weighted) average of past burst times
• next_predict = alpha * last_observed + (1 - alpha) * last_predict
• alpha = 0 IMPLIES next_predict = last_predict: the initialized value (usually 0)
• alpha = 1 IMPLIES next_predict = last_observed


SJF: Weighted Average Burst [65]

alpha = 1/2: recent and past history weighted the same

time        0    1    2    3    4    5    6    7
Burst (t)   6    4    6    4    13   13   13
Guess (tau) 10   8    6    6    5    9    11   12
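
A small sketch that reproduces the guesses in the table, assuming alpha = 1/2 and an initial guess of 10 (my reading of the slide); the observed bursts are the ones listed above.

#include <stdio.h>

/* exponential averaging: next_predict = alpha*last_observed + (1-alpha)*last_predict */
int main(void)
{
    double alpha = 0.5, guess = 10.0;             /* initial guess */
    int burst[] = {6, 4, 6, 4, 13, 13, 13};       /* observed CPU bursts */

    printf("guess 0: %.0f\n", guess);
    for (int i = 0; i < 7; i++) {
        guess = alpha * burst[i] + (1.0 - alpha) * guess;
        printf("guess %d: %.0f\n", i + 1, guess); /* 8 6 6 5 9 11 12 */
    }
    return 0;
}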



Scheduling: SJF [66]

• assume processes arrive at 1 ms intervals: P1, P2, P3, P4
• preemptive scheduling: shortest-remaining-time-first
• P1 waits 0+(10-1)=9; P2 waits 1-1=0
• P3 waits 17-2=15; P4 waits 5-3=2
• average waiting time: (9+0+15+2)/4=6.5 ms

PID Burst Arrival
P1 8 0
P2 4 1
P3 9 2
P4 5 3
Gantt chart


| P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26 |

• nonpreemptive SJF: average waiting time 7.75 ms
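
A sketch of shortest-remaining-time-first as a simple 1 ms step simulation (illustrative only): at every tick the arrived process with the least remaining time runs, so a newly arrived shorter job preempts the current one. The arrival times and bursts are the ones from the table above, and the result matches the 6.5 ms average.

#include <stdio.h>

#define N 4

int main(void)
{
    int arrival[N]   = {0, 1, 2, 3};   /* P1..P4 */
    int burst[N]     = {8, 4, 9, 5};
    int remaining[N] = {8, 4, 9, 5};
    int finish[N]    = {0, 0, 0, 0};
    int done = 0, time = 0;

    while (done < N) {
        int p = -1;
        for (int i = 0; i < N; i++)        /* arrived process, least remaining */
            if (arrival[i] <= time && remaining[i] > 0 &&
                (p < 0 || remaining[i] < remaining[p]))
                p = i;
        if (p < 0) { time++; continue; }   /* CPU idle: nothing has arrived */
        remaining[p]--;                    /* run the chosen process for 1 ms */
        time++;
        if (remaining[p] == 0) { finish[p] = time; done++; }
    }

    int total = 0;                         /* wait = turnaround - burst */
    for (int i = 0; i < N; i++)
        total += finish[i] - arrival[i] - burst[i];
    printf("average waiting time: %.1f ms\n", (double)total / N);   /* 6.5 */
    return 0;
}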


Scheduling: Priority (PRIO) [67]
• assume a priority is associated with each process
• select highest priority process from the ready queue
• let t be the (predicted) next CPU burst of a process
• SJF is a special case of priority scheduling
• assume: high numbers IMPLY high priority
• then priority is 1/t
• assume: low numbers IMPLY high priority
• then priority is t
• equal-priority processes are scheduled in FCFS order
• PRIO can be preemptive or nonpreemptive
• priorities can be defined internally
• memory requirements, number of open files, burst times
• priorities can be defined externally
• user, department, company


Scheduling: PRIO [68]

• assume all processes arrive at the same time: P1, P2, P3, P4, P5
• nonpreemptive scheduling
• high priority: low number
• some OS use a high number!!! See VOS.
• average waiting time is: (6+0+16+18+1)/5=8.2 ms

PID Burst Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
Gantt chart


| P2: 0-1 | P5: 1-6 | P1: 6-16 | P3: 16-18 | P4: 18-19 |

• indefinite blocking (starvation): low priority process never runs
• aging: low priorities increase with waiting time, will eventually run
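
A small sketch of aging, assuming low number = high priority as above: the effective priority of a waiting process improves by 1 for every scheduling decision it sits out, so P3, which starts with a very poor base priority, eventually reaches the CPU instead of starving. The base priorities, bursts, and 1-per-decision aging rate are arbitrary illustrative choices, not from the slides.

#include <stdio.h>

#define N 3

int main(void)
{
    int base[N]      = {1, 1, 9};    /* P3 starts with a very poor priority */
    int remaining[N] = {6, 6, 6};
    int waited[N]    = {0, 0, 0};

    for (int left = 6 + 6 + 6; left > 0; left--) {
        int p = -1;
        for (int i = 0; i < N; i++)  /* smallest effective number wins */
            if (remaining[i] > 0 &&
                (p < 0 || base[i] - waited[i] < base[p] - waited[p]))
                p = i;
        printf("run P%d (effective priority %d)\n", p + 1, base[p] - waited[p]);
        remaining[p]--;              /* one time unit on the CPU */
        waited[p] = 0;               /* dispatched: its waiting time restarts */
        for (int i = 0; i < N; i++)  /* everyone still waiting keeps aging */
            if (i != p && remaining[i] > 0)
                waited[i]++;
    }
    return 0;
}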


VOS Scheduling: PRIO, FCFS, SJF [69]

for (i=1; i<=10; i++){ /* 10 CPU BURSTS */
for (j=1;j<=HOWLONG;j++) /* 1 CPU BURST */
pm_busywait(); /* PID1:long PID2:medium PID 3:short*/
pm_yield(); /* GO BACK TO READY QUEUE */
}
                 PRIO             FCFS             SJF
PID   Burst      priority=fixed   priority=equal   priority=1/burst
 1    long       2                1                low
 2    medium     3 (high)         1                medium
 3    short      1 (low)          1                high


• schedulers favor different PIDs
• SUMMARY shows CPU burst (running) time for each PID
• SUMMARY shows waiting time for each PID in ready queue
• Gantt chart shows how long each PID is on the CPU
• schedulers have different performance


VOS Scheduling: PRIO [70]

===================================== SUMMARY =================================
FREE SUSPENDED READY RUNNING WAITING RECEIVING SLEEPING WRITING READ
PID time cnt time cnt time cnt time cnt time cnt time cnt time cnt time cnt ...
--- ---- --- ---- --- ---- --- ---- --- ---- --- ---- --- ---- --- ---- ---
0 0 1 0 0 72 2 17 3 0 0 0 0 0 0 0 0
1 29 2 1 1 25 11 34 11 0 0 0 0 0 0 0 0
2 64 2 0 1 1 11 24 11 0 0 0 0 0 0 0 0
3 16 2 1 1 58 11 14 11 0 0 0 0 0 0 0 0
4 89 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 89 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 89 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 89 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 89 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 89 2 0 1 0 1 0 1 0 0 0 0 0 0 0 0
---- --- ---- --- ---- --- ---- --- ---- --- ---- --- ---- --- ---- ---
TOT 643 14 2 4 156 36 89 37 0 0 0 0 0 0 0 0

Utilization: 80.9 %Busy
Throughput : 2.0 Jobs/Min
Wait Time : 28.0 Sec/Job
Burst Time : 24.0 Sec/Job


VOS Scheduling: PRIO [71]

Scheduling Algorithm: PRIO
>>> SUMMARY (READY) <<< >>>>>>>>>>> SUMMARY (RUNNING) <<<<<<<<<<
PID TOT Wait Time TOT Burst Time / Cnt = Single Burst
=== ============= ============== === ============
1 25 34 11 3.1
2 1 24 11 2.2
3 58 14 11 1.3
Sum Wait Time: 84 /3 Jobs Sum Burst Time: 72 /3 Jobs
Avg Wait Time: 28 Sec/Job Avg Burst Time: 24 Sec/Job
Longest Wait: 58(PID: 3) Longest Single Burst: 3.1(PID: 1)
Shortest Wait: 1(PID: 2) Shortest Single Burst: 1.3(PID: 3)
• other algorithms will have different average wait time


VOS Scheduling: PRIO [72]

Gantt chart of CPU Usage (Last Scheduler: PRIO)

------------------+-----------+---+-----+---+---+-----+---+--
PID |0 |2 |2 |2 |2 |2 |2 |2
------------------+-----------+---+-----+---+---+-----+---+--
time 9 15 17 20 22 24 27 29


--+-----+---+---+-+-----+-------+-----+-----+-------+-----+--
PID |2 |2 |2 |2|1 |1 |1 |1 |1 |1 |1
--+-----+---+---+-+-----+-------+-----+-----+-------+-----+--
time 31 34 36 3839 42 46 49 52 56 59


------+-----+-----+-------+-+---+-+---+-+-+---+-+---+-+------
PID |1 |1 |1 |1|3 |3|3 |3|3|3 |3|3 |3|3
------+-----+-----+-------+-+---+-+---+-+-+---+-+---+-+------
time 63 66 69 7374 7677 798081 8384 8687
• other algorithms will favor different PIDs


Mechanism (how) vs. Policy (what) [73]


mechanism
• how to do something
• implementation or function with parameters
• used in many ways (by policies)
• OS may be a microkernel - only basic mechanisms
• policies are decided at the user level


policy
• what or when to do something
• set of rules
• use mechanisms by setting parameters
• important choices in the design of the OS
• mechanisms should be separate from policies


Mechanism vs. Policy: Examples [74]


Timer(x sec)
• Policy 1: if LOW_PRIORITY Timer(0.1) else Timer(1.0)
• Policy 2: if LOW_PRIORITY Timer(0.1) else Timer(0.2)


Schedule(job)
• Policy 1: Schedule(I/O Job A); Schedule(CPU Job B)
• Policy 2: Schedule(CPU Job A); Schedule(I/O Job B)


Preempt(job)
• Policy 1: if A.running greater than 0.1 sec then Preempt(Job A)
• Policy 2: if A.running greater than 0.2 sec then Preempt(Job A)


Remove_From_Ready_Q(job)
• Policy 1: Remove_Ready_Q(oldest job): FCFS
• Policy 2: Remove_Ready_Q(highest priority job): PRIO
• Policy 3: Remove_Ready_Q(shortest job): SJF
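
A sketch of the separation in C (not VOS code): the mechanism is a single generic remove-from-ready-queue routine, and each policy is just a different comparison function handed to it. The struct fields and function names below are hypothetical.

#include <stdio.h>

struct pcb { int pid, arrival, priority, burst; };

/* mechanism: return the index of the "best" ready process under some ordering */
static int remove_ready_q(struct pcb q[], int n,
                          int (*better)(const struct pcb *, const struct pcb *))
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (better(&q[i], &q[best]))
            best = i;
    return best;
}

/* policies: FCFS (oldest), PRIO (high number = high priority), SJF (shortest) */
static int fcfs(const struct pcb *a, const struct pcb *b) { return a->arrival  < b->arrival;  }
static int prio(const struct pcb *a, const struct pcb *b) { return a->priority > b->priority; }
static int sjf (const struct pcb *a, const struct pcb *b) { return a->burst    < b->burst;    }

int main(void)
{
    struct pcb q[] = { {1, 0, 2, 30}, {2, 1, 3, 20}, {3, 2, 1, 10} };
    printf("FCFS picks PID %d\n", q[remove_ready_q(q, 3, fcfs)].pid);   /* 1 */
    printf("PRIO picks PID %d\n", q[remove_ready_q(q, 3, prio)].pid);   /* 2 */
    printf("SJF  picks PID %d\n", q[remove_ready_q(q, 3, sjf)].pid);    /* 3 */
    return 0;
}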
