
Top 10 List of Week 08

  1. CPU-I/O Burst Cycle
    The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.

  2. CPU Scheduler
    Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the CPU scheduler, which selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.

  3. Preemptive and Nonpreemptive Scheduling
    CPU-scheduling decisions may take place under the following four circumstances:
    1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait() for the termination of a child process)
    2. When a process switches from the running state to the ready state (for example, when an interrupt occurs)
    3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
    4. When a process terminates
    When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is called nonpreemptive; otherwise, it is preemptive.
  4. Dispatcher
    The dispatcher is the module that gives control of the CPU’s core to the process selected by the CPU scheduler. This function involves the following:
    1. Switching context from one process to another
    2. Switching to user mode
    3. Jumping to the proper location in the user program to resume that program
  5. Shortest-Job-First Scheduling Algorithm
    This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS (first-come, first-served) scheduling is used to break the tie.
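
As a concrete illustration, here is a minimal sketch of non-preemptive SJF in Python. The process names and burst lengths are made-up examples, and the sketch assumes every process is ready at time 0 and that each next CPU burst length is known in advance.

```python
# A minimal non-preemptive SJF sketch, assuming all processes are ready at
# time 0 and that each process's next CPU burst length is known in advance.
# The process names and burst values below are illustrative only.

def sjf(processes):
    """processes: list of (name, burst). Returns {name: waiting_time}."""
    # Sort by burst length; Python's stable sort keeps the original
    # (FCFS) order for processes with equal bursts, which breaks ties.
    order = sorted(processes, key=lambda p: p[1])
    clock, waiting = 0, {}
    for name, burst in order:
        waiting[name] = clock      # time spent waiting before this burst starts
        clock += burst             # run the burst to completion (no preemption)
    return waiting

if __name__ == "__main__":
    procs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
    print(sjf(procs))   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
```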

  6. Round-Robin Scheduling
    The round-robin (RR) scheduling algorithm is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
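
The sketch below simulates round-robin scheduling with a deque acting as the circular ready queue. The quantum value and the example bursts are illustrative assumptions, and all processes are assumed to arrive at time 0.

```python
from collections import deque

# A minimal round-robin sketch: run each process for at most one quantum,
# then move it to the back of the ready queue if it still has work left.

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns {name: completion_time}."""
    ready = deque(processes)          # ready queue treated as a circular queue
    clock, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)  # run for at most one time quantum
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))  # preempted: back of the queue
        else:
            completion[name] = clock               # burst finished
    return completion

if __name__ == "__main__":
    print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
    # {'P2': 7, 'P3': 10, 'P1': 30}
```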

  7. Priority Scheduling
    The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
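
A minimal sketch of non-preemptive priority scheduling, assuming the common textbook convention that a lower number means a higher priority; the example processes, bursts, and priority values are made up. The second call shows how SJF falls out as the special case where the predicted next CPU burst itself is used as the priority value.

```python
# Non-preemptive priority scheduling sketch. Here a *lower* number means
# *higher* priority (a common convention, but not universal).

def priority_schedule(processes):
    """processes: list of (name, burst, priority). Returns the run order."""
    # Stable sort: equal-priority processes keep their FCFS order.
    return [name for name, _, _ in sorted(processes, key=lambda p: p[2])]

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_schedule(procs))          # ['P2', 'P5', 'P1', 'P3', 'P4']

# SJF as a special case: use the predicted next CPU burst as the priority,
# so shorter bursts get higher (numerically smaller) priority.
sjf_view = [(name, burst, burst) for name, burst, _ in procs]
print(priority_schedule(sjf_view))       # ['P2', 'P4', 'P3', 'P5', 'P1']
```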

  8. Multilevel Queue Scheduling
    With both priority and round-robin scheduling, all processes may be placed in a single queue, and the scheduler then selects the process with the highest priority to run. Depending on how the queues are managed, an O(n) search may be necessary to determine the highest-priority process. In practice, it is often easier to have a separate queue for each distinct priority, with priority scheduling simply selecting a process from the highest-priority queue. This approach, known as a multilevel queue, also works well when priority scheduling is combined with round-robin: if there are multiple processes in the highest-priority queue, they are executed in round-robin order.
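
A minimal multilevel-queue sketch: one queue per priority level, the scheduler always serves the highest-priority non-empty queue, and processes within a queue share the CPU round-robin. The two levels, the quantum, and the example processes are illustrative assumptions, not taken from the text.

```python
from collections import deque

# Multilevel queue sketch: index 0 is the highest-priority queue, and a
# preempted process goes back to the *same* queue (no movement between queues).

def multilevel_queue(queues, quantum):
    """queues: list of deques of (name, remaining_burst). Returns completion order."""
    finished = []
    while any(queues):
        level = next(q for q in queues if q)       # highest-priority non-empty queue
        name, remaining = level.popleft()
        run = min(quantum, remaining)
        if remaining > run:
            level.append((name, remaining - run))  # round-robin within this queue
        else:
            finished.append(name)
    return finished

q0 = deque([("fg1", 5), ("fg2", 3)])   # e.g. foreground / interactive processes
q1 = deque([("bg1", 4)])               # e.g. background / batch processes
print(multilevel_queue([q0, q1], quantum=2))   # ['fg2', 'fg1', 'bg1']
```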

  9. Multilevel Feedback Queue Scheduling
    The multilevel feedback queue scheduling algorithm, in contrast to the standard multilevel queue, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes, which are typically characterized by short CPU bursts, in the higher-priority queues.
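
A minimal two-level feedback-queue sketch: a process that exhausts its quantum at the top level is demoted, while a short burst finishes before demotion and so stays at high priority. The number of levels, the quanta, and the example bursts are illustrative assumptions.

```python
from collections import deque

# Multilevel feedback queue sketch: a process that uses its whole quantum
# at level i is demoted to level i+1 (capped at the lowest level).

def mlfq(processes, quanta=(2, 4)):
    """processes: list of (name, burst). Returns list of (name, finish_level)."""
    levels = [deque(processes)] + [deque() for _ in quanta[1:]]
    finished = []
    while any(levels):
        i = next(i for i, q in enumerate(levels) if q)   # highest non-empty level
        name, remaining = levels[i].popleft()
        run = min(quanta[i], remaining)
        remaining -= run
        if remaining == 0:
            finished.append((name, i))                   # short burst: never demoted
        else:
            demote_to = min(i + 1, len(levels) - 1)      # used the full quantum
            levels[demote_to].append((name, remaining))
    return finished

# A short, interactive-style burst stays at level 0; a CPU hog drops to level 1.
print(mlfq([("interactive", 2), ("cpu_hog", 10)]))
# [('interactive', 0), ('cpu_hog', 1)]
```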

  10. Thread Scheduling
    On most modern operating systems it is kernel-level threads—not processes—that are being scheduled by the operating system. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP).