Tutorial Week 13



  1. What are the advantages and disadvantages of using a global scheduling queue over per-CPU queues? Under which circumstances would you use the one or the other? What features of a system would influence this decision?

  2. When does spinning on a lock (busy waiting, as opposed to blocking on the lock, and being woken up when it's free) make sense in a multiprocessor environment?

  3. Why is preemption an issue with spinlocks?

  4. How does a read-before-test-and-set lock work and why does it improve scalability?
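    The lock in question 4 is also known as a test-and-test-and-set lock. Below is a minimal Python sketch of the algorithm's logic; the atomic test-and-set instruction is emulated with a `threading.Lock`, since real implementations use a hardware instruction (e.g. x86 `xchg`) and the scalability benefit comes from cache behaviour that Python cannot reproduce:

    ```python
    import threading
    import time

    class EmulatedTAS:
        """Emulates an atomic test-and-set word; real hardware provides
        this as a single instruction (e.g. x86 xchg)."""
        def __init__(self):
            self._guard = threading.Lock()   # stands in for instruction atomicity
            self.value = 0

        def test_and_set(self):
            with self._guard:
                old, self.value = self.value, 1
                return old

    class TTASLock:
        """Spin on an ordinary read first; only attempt the expensive atomic
        operation when the lock looks free. On real hardware the read-only
        spin hits the local cache, so waiters generate no bus traffic."""
        def __init__(self):
            self._flag = EmulatedTAS()

        def acquire(self):
            while True:
                while self._flag.value == 1:     # read-only spin
                    time.sleep(0)                # yield the GIL; real code just re-reads
                if self._flag.test_and_set() == 0:
                    return                       # previous value was 0: we hold the lock

        def release(self):
            self._flag.value = 0

    # Usage: two threads incrementing a shared counter under the lock.
    counter = 0
    lock = TTASLock()

    def worker():
        global counter
        for _ in range(300):
            lock.acquire()
            counter += 1
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)   # 600 if the lock provides mutual exclusion
    ```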

  5. Describe how an MCS lock further improves scalability compared to a read-before-test-and-set lock. What is a disadvantage of an MCS lock?
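    For question 5, a sketch of the MCS queue lock in the same emulated style (a `threading.Lock` stands in for the atomic swap and compare-and-swap on the tail pointer). Each waiter spins on a flag in its own queue node, so on real hardware each CPU spins on a cache line it owns; the disadvantage asked about is visible in the interface, since callers must supply a queue node to both acquire and release:

    ```python
    import threading
    import time

    class MCSNode:
        """Per-acquisition queue node; on a real system this lives in
        memory local to the spinning CPU."""
        def __init__(self):
            self.next = None
            self.locked = False

    class MCSLock:
        def __init__(self):
            self._tail = None
            self._guard = threading.Lock()   # emulates atomic swap/CAS on tail

        def _swap_tail(self, node):
            with self._guard:
                prev, self._tail = self._tail, node
                return prev

        def _cas_tail(self, expected, new):
            with self._guard:
                if self._tail is expected:
                    self._tail = new
                    return True
                return False

        def acquire(self, node):
            node.next = None
            node.locked = True
            prev = self._swap_tail(node)       # atomically join the queue tail
            if prev is not None:               # queue non-empty: lock is held
                prev.next = node               # link in behind our predecessor
                while node.locked:             # spin on OUR OWN node only
                    time.sleep(0)              # yield the GIL; real code just re-reads

        def release(self, node):
            if node.next is None:
                # No visible successor: try to swing the tail back to empty.
                if self._cas_tail(node, None):
                    return
                # A successor has swapped the tail but not linked in yet.
                while node.next is None:
                    time.sleep(0)
            node.next.locked = False           # hand the lock straight to the successor

    # Usage: note the extra node argument -- the disadvantage in the question.
    counter = 0
    lock = MCSLock()

    def worker():
        global counter
        for _ in range(300):
            node = MCSNode()
            lock.acquire(node)
            counter += 1
            lock.release(node)

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    ```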

Scheduling

  7. What do the terms I/O bound and CPU bound mean when used to describe a process (or thread)?

  8. What is the difference between cooperative and pre-emptive multitasking?

  9. Consider the multilevel feedback queue scheduling algorithm used in traditional Unix systems. It is designed to favour I/O-bound processes over CPU-bound ones. How is this achieved? How does it make sure that low-priority, CPU-bound background jobs do not suffer starvation?

    Note: Unix uses low values to denote high priority and vice versa; 'high' and 'low' in the text above do not refer to the Unix priority value.
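    Both properties asked about in question 9 can be seen in the usage-decay calculation the traditional scheduler performs each second. The sketch below is loosely based on the 4.3BSD formulas; the constants (PUSER = 50, tick counts, load = 1) are illustrative assumptions, not the real kernel's values:

    ```python
    PUSER = 50   # base user-mode priority (illustrative)

    def priority(p_cpu, nice=0):
        # Higher value = lower priority (the Unix convention in the note).
        return PUSER + p_cpu // 4 + 2 * nice

    def decay(p_cpu, load=1):
        # Applied once per second to every process: recent CPU usage fades,
        # so even a starved CPU-bound job's priority improves over time.
        return (2 * load * p_cpu) // (2 * load + 1)

    # CPU-bound: runs the whole second, accruing ~100 ticks of usage.
    # I/O-bound: runs briefly (say 5 ticks) and then blocks.
    cpu_usage, io_usage = 0, 0
    for second in range(10):
        cpu_usage = decay(cpu_usage + 100)
        io_usage = decay(io_usage + 5)

    # The I/O-bound process ends up with the (numerically) lower, i.e.
    # better, priority; decay bounds how far the CPU-bound job can sink.
    print(priority(cpu_usage), priority(io_usage))
    ```

    The decay is what prevents starvation: if the CPU-bound job stops getting the CPU, its usage decays towards zero and its priority climbs back towards PUSER.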

  10. Why would a hypothetical OS always schedule a thread in the same address space over a thread in a different address space? Is this a good idea?

  11. Why would a round-robin scheduler NOT use a very short time slice to provide good, responsive application behaviour?
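    A back-of-envelope calculation for question 11. Each preemption costs real time: the context switch itself plus the indirect cost of refilling caches and the TLB. With timeslice q and total switch cost c, only q / (q + c) of the CPU does useful work; the 10 µs cost below is an assumed, illustrative figure:

    ```python
    def useful_fraction(q_us, c_us=10):
        # Fraction of CPU time spent on application work rather than
        # switching, for timeslice q_us and per-switch cost c_us (both in us).
        return q_us / (q_us + c_us)

    for q in (100_000, 10_000, 1_000, 100, 10):
        print(f"{q:>7} us timeslice -> {useful_fraction(q):6.1%} useful work")
    ```

    With a 100 ms slice the overhead is negligible, but a 10 µs slice would spend half the machine on switching, which is why the slice is kept well above the switch cost.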

  12. Consider 3 periodic tasks with the execution profiles shown below. Draw a scheduling diagram for a rate-monotonic and an earliest-deadline-first schedule.

    Process   Arrival Time   Execution Time   Deadline
    A (1)          0              10              20
    A (2)         20              10              40
    ...
    B (1)          0              10              50
    B (2)         50              10             100
    ...
    C (1)          0              15              50
    C (2)         50              15             100
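    The diagrams for question 12 can be sanity-checked with a small discrete-time simulation. Total utilisation is 10/20 + 10/50 + 15/50 = 1.0, so EDF (schedulable up to 1.0) meets every deadline over the hyperperiod of 100, while rate-monotonic (whose bound for three tasks is about 0.78) misses one. Note an assumption of this sketch: the priority tie between B and C (equal periods) is broken in favour of B:

    ```python
    TASKS = {              # name: (period, execution time); deadline = next release
        "A": (20, 10),
        "B": (50, 10),
        "C": (50, 15),
    }
    HYPERPERIOD = 100      # lcm of the periods

    def simulate(policy):
        jobs = []          # each job: [name, remaining time, absolute deadline]
        missed = []
        for t in range(HYPERPERIOD):
            # Release a new job of every task whose period divides t.
            for name, (period, exec_time) in TASKS.items():
                if t % period == 0:
                    jobs.append([name, exec_time, t + period])
            # Record and drop any unfinished job past its deadline.
            for job in [j for j in jobs if j[2] <= t]:
                missed.append((job[0], job[2]))
                jobs.remove(job)
            if jobs:
                if policy == "RM":      # fixed priority: shortest period first
                    run = min(jobs, key=lambda j: TASKS[j[0]][0])
                else:                   # EDF: earliest absolute deadline first
                    run = min(jobs, key=lambda j: j[2])
                run[1] -= 1             # run the chosen job for one time unit
                if run[1] == 0:
                    jobs.remove(run)
        return missed

    print("RM  misses:", simulate("RM"))    # C's first job misses its deadline at t=50
    print("EDF misses:", simulate("EDF"))   # no misses
    ```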