Tuesday, 27 September 2011

SHORTEST JOB FIRST (SJF)

  •         Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
Two schemes:
  1.       Non-preemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
  2.       Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).



Example of Non-Preemptive SJF

Process   Arrival Time   Burst Time
  P1           0.0            7
  P2           2.0            4
  P3           4.0            1
  P4           5.0            4

  •        SJF (non-preemptive) Gantt chart: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)

  •         Average waiting time = (0 + 6 + 3 + 7)/4 = 4
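The non-preemptive schedule above can be checked with a short simulation (a sketch, not part of the original notes; ties on burst length fall back to arrival order):

```python
# Non-preemptive SJF: at each completion, pick the arrived process
# with the shortest burst. Waiting time = start - arrival, since a
# non-preempted process runs in one piece.
def sjf_nonpreemptive(procs):
    # procs: list of (name, arrival, burst)
    pending = sorted(procs, key=lambda p: p[1])  # by arrival time
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = time - arrival
        time += burst
        pending.remove((name, arrival, burst))
    return waits

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
w = sjf_nonpreemptive(procs)
print(w, sum(w.values()) / len(w))   # average waiting time = 4.0
```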

Example of Preemptive SJF

Process   Arrival Time   Burst Time
  P1           0.0            7
  P2           2.0            4
  P3           4.0            1
  P4           5.0            4

SJF (preemptive) Gantt chart: P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
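The preemptive (SRTF) schedule can be checked the same way; this sketch advances time one unit at a time, which is sufficient here because all arrivals and bursts are integers:

```python
# Preemptive SJF (SRTF): at every time unit, run the arrived process
# with the least remaining time. Waiting time = finish - arrival - burst.
def srtf(procs):
    # procs: list of (name, arrival, burst)
    remaining = {name: b for name, _, b in procs}
    arrival = {name: a for name, a, _ in procs}
    bursts = {name: b for name, _, b in procs}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle for this time unit
            time += 1
            continue
        n = min(ready, key=lambda p: remaining[p])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            finish[n] = time
            del remaining[n]
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

w = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / len(w))   # average waiting time = 3.0
```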

More SJF Examples

  • SJF non-preemptive     Proc   Arrives   Burst
                            P1       0        8
                            P2       1        4
                            P3       2        9
                            P4       3        5
And then preemptive

SJF non-preemptive     Proc   Arrives   Burst
                        P1       1        2
                        P2       0        7
                        P3       2        7
                        P4       5        3
                        P5       6        1
And then preemptive

Determining Length of Next CPU Burst
SJF is optimal – gives minimum average waiting time for a given set of processes.
But we can only estimate the length of a CPU burst.
The estimate can be based on the lengths of previous CPU bursts, using exponential averaging:

    tn   = actual length of the nth CPU burst
    τn+1 = predicted value for the next CPU burst
    α, a constant with 0 ≤ α ≤ 1
    Define: τn+1 = α tn + (1 − α) τn

Prediction of the Length of the Next CPU Burst





Examples of Exponential Averaging

α = 0
τn+1 = τn
Recent history does not count.

α = 1
τn+1 = tn
Only the actual last CPU burst counts.

If we expand the formula, we get:
τn+1 = α tn + (1 − α) α tn−1 + …
            + (1 − α)^j α tn−j + …
            + (1 − α)^(n+1) τ0

Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
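As a quick sketch of the exponential-averaging rule τn+1 = α tn + (1 − α) τn (the initial guess τ0 = 10 and the measured burst lengths below are made-up values, not from the notes):

```python
# Exponential averaging: tau_next = alpha*t + (1-alpha)*tau, where t is
# the measured length of the latest CPU burst and tau the old prediction.
def predict_next(tau, bursts, alpha=0.5):
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# tau0 = 10, measured bursts 6, 4, 6, 4, alpha = 1/2:
print(predict_next(10, [6, 4, 6, 4]))   # -> 5.0
```

With α = 1/2, each step simply averages the old prediction with the newest burst, so older bursts fade out geometrically.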

Priority Scheduling
A priority number (integer) is associated with each process.
Internal, e.g., by resource needs
External, e.g., by user priority

The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority).
Non-preemptive or Preemptive

Example     Proc   Arrival   Burst   Priority
             P1       0        10       3
             P2       1         1       1
             P3       2         2       3
             P4       4         1       4
             P5       8         5       2
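The schedule for this table can be checked with a small non-preemptive priority simulator (a sketch; a smaller number means higher priority, and ties fall back to list order):

```python
# Non-preemptive priority scheduling: at each completion, pick the
# arrived process with the smallest priority number.
def priority_nonpreemptive(procs):
    # procs: list of (name, arrival, burst, priority)
    pending = list(procs)
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        p = min(ready, key=lambda p: p[3])   # highest priority first
        waits[p[0]] = time - p[1]
        time += p[2]
        pending.remove(p)
    return waits

procs = [("P1", 0, 10, 3), ("P2", 1, 1, 1), ("P3", 2, 2, 3),
         ("P4", 4, 1, 4), ("P5", 8, 5, 2)]
w = priority_nonpreemptive(procs)
print(w)
```

Note that P1 runs to completion first even though higher-priority processes arrive during its burst, because this variant is non-preemptive.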


  • SJF is priority scheduling where the priority is the predicted next CPU burst time.

  • Problem ≡ Starvation – low-priority processes may never execute.

  • Solution ≡ Aging – as time progresses, increase the priority of the process.




More Priority Examples
Proc   Arrival   Burst   Priority
 P1       0        6        5
 P2       2        2        3
 P3       3        3        4
 P4       9        3        2
 P5      10        1        1









EDITED BY:
SITI NOORAINA MOHD JAMAL (F1075)

Round Robin Scheduling (RR)

  •         Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.  After this time has elapsed, the process is preempted and added to the end of the ready queue.
  •         If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once.  No process waits more than (n-1)q time units.

Example of RR with Time Quantum = 20

Process   Burst Time
  P1          53
  P2          17
  P3          68
  P4          24



The Gantt chart is: P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97), P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162)

More RR Examples:
Proc   Arrival   Burst
 P1       0        53
 P2      25        17
 P3      50        68
 P4      75        24
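Both RR examples can be traced with a small simulator; the sketch below runs the quantum-20 example above (newly arrived processes are queued ahead of a just-preempted one, which matters once arrival times differ):

```python
from collections import deque

# Round robin with time quantum q: run the head of the ready queue for
# at most q units, then move it to the back if it still needs the CPU.
def round_robin(procs, q):
    # procs: list of (name, arrival, burst), sorted by arrival time
    remaining = {n: b for n, _, b in procs}
    arrivals = deque(procs)
    ready, time, gantt = deque(), 0, []
    while arrivals or ready:
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.popleft()[0])
        if not ready:                     # idle until the next arrival
            time = arrivals[0][1]
            continue
        n = ready.popleft()
        run = min(q, remaining[n])
        gantt.append((n, time, time + run))
        time += run
        remaining[n] -= run
        while arrivals and arrivals[0][1] <= time:
            ready.append(arrivals.popleft()[0])   # arrivals first...
        if remaining[n] > 0:
            ready.append(n)               # ...then the preempted process
    return gantt

g = round_robin([("P1", 0, 53), ("P2", 0, 17), ("P3", 0, 68), ("P4", 0, 24)], 20)
print(g)
```

The same function applied to the arrival-time table above traces the second exercise.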



POSTED BY :-
NURUL AISYAH YAHAYA (F1072)






Shortest remaining time

Shortest remaining time is a method of CPU scheduling that is a preemptive version of shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Since the currently executing process is by definition the one with the shortest amount of time remaining, and since that time can only decrease as execution progresses, processes will always run until they complete or a new process is added that requires a smaller amount of time.

Shortest remaining time is advantageous because short processes are handled very quickly. The system also requires very little overhead since it only makes a decision when a process completes or a new process is added, and when a new process is added the algorithm only needs to compare the currently executing process with the new process, ignoring all other processes currently waiting to execute. However, it has the potential for process starvation for processes which will require a long time to complete if short processes are continually added, though this threat can be minimal when process times follow a heavy-tailed distribution.

Like shortest job next scheduling, shortest remaining time scheduling is rarely used outside of specialized environments because it requires accurate estimations of the runtime of all processes that are waiting to execute.

by : Abdul Aziz

Monday, 26 September 2011

Medium-term Scheduling 


Medium-term scheduling is part of the swapping function. It relates to processes that are in a blocked or suspended state. They are swapped out of main memory until they are ready to
execute. The swapping-in decision is based on memory-management criteria:
  • Determines when processes are to be suspended and resumed.
  • The medium-term scheduler runs more frequently than the long-term scheduler, deciding
    which processes' pages to swap to and from the swapping device: typically once a second.
  • The success of the medium-term scheduler is based on the degree of 
    multiprogramming that it can maintain, by keeping as many processes 
    “runnable” as possible.
  •  More processes can remain executable if we reduce the resident set size of all 
    processes.
  • Makes decisions as to:
- which pages of which processes need to stay resident.
- which pages must be swapped out to make room for other processes.

In many systems today (those that support mapping virtual address space to secondary storage
other than the swap file), the medium-term scheduler may actually perform the role of the
long-term scheduler, by treating binaries as "swapped-out processes" upon their execution. In this
way, when a segment of the binary is required it can be swapped in on demand, or "lazy-loaded".


Sunday, 25 September 2011

SHORT-TERM SCHEDULING

-Short-term scheduling concerns the allocation of CPU time to processes in order to meet some pre-defined system performance objectives. The definition of these objectives (the scheduling policy) is an overall system design issue, and determines the ``character'' of the operating system from the user's (i.e. the buyer's) point of view, giving rise to the traditional distinctions among ``multi-purpose, time-shared'', ``batch production'', ``real-time'' systems, and so on.
-From a user's point of view, the performance criteria can be stated as follows:
  • Response time
  • Turnaround time
  • Meeting deadlines
  • Predictability
-When the overall system performance is considered, additional scheduling criteria must be taken into account:
  • Throughput
  • User processor utilisation
  • Overall processor utilisation
  • Resource utilisation balance
-When the interaction between users' needs and overall system performance is considered, additional criteria to be accounted for arise:
  • Fairness
  • Enforcing priorities

-The design of the short term scheduler is one of the critical areas in the overall system design, because of the immediate effects on system performance from the user's point of view. It's usually one of the trickiest as well: since most processor architectures support their own task switching facilities, the implementation of the process switch mechanism is generally machine-dependent. The result is that the actual process switch software is usually written in the assembly language of a particular machine, whether the operating system is meant to be portable across different machines or not.
-Independently of the adopted scheduling policy, three basic issues in short-term scheduler design can be identified:
  • The process switch must be done efficiently, otherwise the users will suffer from poor system responsiveness. This is a platitude, yet it has an immediate translation in terms of performance specification: the time spent to switch one process out and the next one in must be in the order of a few microseconds (with current technologies). Since most of the time taken to carry out the process switch is sacrificed in saving and restoring the context of processes, it is important that the amount of information to be saved and restored be reduced to the absolute minimum (in other words, it must be a minimal or quasi-minimal representation of the process's state).
    An effective way to achieve fast process (and context) switches is to remap the (virtual) addresses of the new running process's PCB into an address that the kernel knows about. This requires simply the remapping of one address, as opposed to copying the whole PCB data into some storage area.
  • Since many data structures are shared among processes (e.g. device tables, timing information, relative priorities, etc.), there must be a consistent method for allowing data access between them. The only obvious answer is to store these data structures in memory.
  • Since the system manages multiple processes, each of which assumes it is the only process running on the time-shared machine, the system must choose a scheme in which processes have a fair share of time, where the ``fairness'' is conditioned by the overall system policy. This is discussed in the following sections.


                                          By:Nor Iliyana Alias

Multilevel Queue Scheduling



A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, for instance
Fig 5.6 - pp. 138 in Sinha
In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as

Memory size
Process priority
Process type

The algorithm chooses the process from the highest-priority occupied queue, and runs that process either

Preemptively
Non-preemptively


Each queue has its own scheduling algorithm or policy.

Possibility I 
    If each queue has absolute priority over lower-priority queues, then no process in a given queue can run unless the queues for all higher-priority processes are empty.
For example, in the above figure no process in the batch queue could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.

Possibility II 
    If there is a time slice between the queues, then each queue gets a certain amount of CPU time, which it can then schedule among the processes in its queue. For instance:

80% of the CPU time to foreground queue using RR.
20% of the CPU time to background queue using FCFS.

Since processes do not move between queues, this policy has the advantage of low scheduling overhead, but it is inflexible.
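The 80%/20% time-slicing idea can be sketched as follows (a toy model, not from the notes: the process names and service times are made-up, the foreground queue is served round-robin and the background queue first-come-first-served, and a queue's slices fall through to the other queue when it is empty):

```python
from collections import deque

# Time-sliced multilevel queue: 4 of every 5 CPU slices (80%) go to the
# foreground (RR) queue, 1 of every 5 (20%) to the background (FCFS)
# queue. Each entry is [name, remaining_time]; one slice = one time unit.
def schedule(foreground, background, total_slices):
    trace = []
    for s in range(total_slices):
        use_fg = (s % 5 < 4)              # slices 0-3 of each group of 5
        if use_fg and foreground:
            q = foreground
        elif background:                  # background slice, or fg empty
            q = background
        elif foreground:                  # bg slice but bg empty
            q = foreground
        else:
            trace.append("idle")
            continue
        proc = q.popleft()
        proc[1] -= 1
        trace.append(proc[0])
        if proc[1] > 0:
            if q is foreground:
                q.append(proc)            # RR: rotate to the back
            else:
                q.appendleft(proc)        # FCFS: keep running until done
    return trace

fg = deque([["I1", 2], ["I2", 2]])        # interactive (foreground)
bg = deque([["B1", 3]])                   # batch (background)
trace = schedule(fg, bg, 8)
print(trace)
```

The trace shows the interactive processes alternating under RR while the batch process only gets the CPU on its 20% slice or when the foreground queue drains.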


Prepared by: Che Nurlailatul Ashikin