


an operating system may have more or fewer priorities than that (but conceptually, each would still have its own linked list).

For simplicity, we think of these threads as being held on an ordered list; in reality, they may be held in simple pools. Keeping the threads in a linked list implies that threads will be selected to run in a particular order. While that is a useful way of thinking about the process, it is not necessarily how an implementation works. Let us see how this scheduling occurs with the example shown at the beginning of the chapter. That example has a total of four threads: the initial thread (which executes the main() method) and the three task threads we started. In fact, as we have mentioned, there are more threads than this, because the virtual machine starts various background threads (such as the garbage collection thread), but for our discussion we will consider only the four threads that execute our code.
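The chapter's example program is not reproduced on this page, so the following is a minimal sketch of a program with that thread structure: the initial (main) thread starts three task threads that each compute a Fibonacci number and then calls join() on each of them. The class name FibonacciTask and the recursive fib() helper are illustrative assumptions, not the book's original listing.

    public class FibonacciExample {

        // A task that computes one Fibonacci number; it never blocks once started.
        static class FibonacciTask extends Thread {
            private final int n;

            FibonacciTask(int n) {
                this.n = n;
            }

            @Override
            public void run() {
                System.out.println("fib(" + n + ") = " + fib(n));
            }

            private static long fib(int n) {
                return (n < 2) ? n : fib(n - 1) + fib(n - 2);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // The initial (main) thread starts three task threads ...
            FibonacciTask t1 = new FibonacciTask(30);
            FibonacciTask t2 = new FibonacciTask(31);
            FibonacciTask t3 = new FibonacciTask(32);
            t1.start();
            t2.start();
            t3.start();

            // ... and then blocks in join() until every task thread has exited.
            t1.join();
            t2.join();
            t3.join();
            System.out.println("All task threads have completed");
        }
    }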

The threads that calculate a Fibonacci number never block: they move from the initial state to the runnable state to the executing state. The main thread is in the runnable state and then enters the blocking state when it executes the join() method to wait for the other threads. The second time we run the program, the state of the threads follows the transition path shown in Figure 3.9. The main thread is the currently running thread until it blocks at time T1. At that point, one of the task threads becomes the currently running thread; it remains the currently running thread until time T2, when it finishes and transitions to the exiting state. Another task thread then becomes the currently running thread, and the cycle continues until all the threads have completed.
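These transitions can be observed directly with the Thread.getState() method, which reports the JVM-level states (NEW, RUNNABLE, WAITING, TERMINATED) corresponding to the initial, runnable/executing, blocking, and exiting states discussed above. The sketch below is an illustrative assumption; the book's example does not include this instrumentation.

    public class ThreadStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread task = new Thread(() -> {
                long x = 0;
                for (int i = 0; i < 100_000_000; i++) x += i;  // pure computation, never blocks
            });

            System.out.println(task.getState());   // NEW: created but not yet started

            task.start();
            System.out.println(task.getState());   // RUNNABLE: eligible to run (or running)

            // While the main thread waits inside join(), it is itself in the WAITING state.
            task.join();
            System.out.println(task.getState());   // TERMINATED: the task thread has exited
        }
    }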

Figure 3.9: A Simple Thread-state Diagram

That explains the output we see when we run the program the second time: everything (including the output) proceeds sequentially. So why is the output different the first time we run the example?

The first time we run the example, we do so on a typical operating system. The thread scheduler on that OS, in addition to being priority-based and preemptive, is also time-slicing. That means that when threads are waiting for the CPU, the operating system allows one of them


