Operating System: 0 To Hero -Master Class

Nitty Gritty Aspects Of Operating System

TL;DR

This article summarizes what an OS is, its different variants, and how some of the functional parts work together to give you a system worth using in daily life.

What is An Operating System

The OS lets you communicate with the computer without knowing how to speak the computer's language. It is not possible for a user to operate any computer or mobile device without an operating system.

An Operating System (OS) is software that acts as an interface between the computer hardware components and the user.

Applications like browsers, MS Office, IDEs, games, etc., need an environment in which to run and perform their tasks, and the operating system provides it.

History of OS

  • Operating systems were first developed in the late 1950s to manage tape storage.
  • The "IBM 701" became the first system to get an implemented OS, built by General Motors Research Lab in the mid-1950s.
  • The first version of the Unix OS was developed during the late 1960s.
  • Microsoft developed DOS (Disk Operating System) in 1981.
  • The popular 'Windows OS' first came into existence in 1985, when a GUI (Graphical User Interface) was created and paired with MS-DOS.

Types of Operating System (OS)

Types of OS used nowadays:

  • Batch Operating System
  • Multitasking/Time Sharing OS
  • Multiprocessing OS
  • Real Time OS
  • Distributed OS
  • Network OS
  • Mobile OS

Functions

An OS performs the following tasks:

  1. Process management: The process management module helps the OS create and delete processes. It also provides mechanisms for synchronization and communication among processes.

  2. Memory management: The memory management module allocates and de-allocates memory space to programs that need it.

  3. File management: It manages all file-related activities such as the organization, storage, retrieval, naming, sharing, and protection of files.

  4. Device management: Device management keeps track of all devices. The module responsible for this task is known as the I/O controller. It also allocates and de-allocates devices.

  5. I/O system management: One of the main objectives of any OS is to hide the peculiarities of hardware devices from the user.

  6. Secondary-storage management: Systems have several levels of storage, including primary storage, secondary storage, and cache storage. Instructions and data must be stored in primary storage or cache so that a running program can reference them.

  7. Security: The security module protects the data and information of a computer system against malware threats and unauthorized access.

  8. Command interpretation: This module interprets commands given by the user and allocates system resources to process them.

  9. Networking: A distributed system is a group of processors which do not share memory, hardware devices, or a clock. The processors communicate with one another through the network.

  10. Job accounting: Keeping track of the time and resources used by various jobs and users.

  11. Communication management: Coordination and assignment of compilers, interpreters, and other software resources among the various users of the computer system.

Features

List of important features of OS:

  • Protected and supervisor mode
  • Disk access and file systems
  • Device drivers
  • Networking
  • Security
  • Program execution
  • Memory management
  • Virtual memory
  • Multitasking
  • Handling I/O operations
  • Manipulation of the file system
  • Error detection and handling
  • Resource allocation
  • Information and resource protection

Advantages of using an Operating System

  • Easy to use with a GUI
  • Offers an environment in which a user may execute programs/applications
  • Presents the computer system's resources in an easy-to-use format
  • Acts as an intermediary between the hardware and software of the system

Disadvantages of using an Operating System

  • If any issue occurs in the OS, you may lose all the content stored on your system
  • Operating system software can be quite expensive for small organizations, adding to their costs

What is a Kernel?

The kernel is the central component of a computer operating system. Its primary job is to manage the communication between the software and the hardware.

A Kernel is at the nucleus of a computer. It makes the communication between the hardware and software possible. While the Kernel is the innermost part of an operating system, a shell is the outermost one.

Features of Kernel

  • Low-level scheduling of processes
  • Inter-process communication
  • Process synchronization
  • Context switching

Types of Kernels

There are many types of kernels, but among them the two most popular are:

  1. Monolithic

    A monolithic kernel is a single block of code that provides all the required services offered by the operating system. It is a simple design in which the entire OS runs in kernel space, forming a single communication layer between the hardware and software.

  2. Micro-kernels

    A micro-kernel keeps the kernel itself minimal. In this design, services are implemented in separate address spaces: user services live in user address space, while kernel services live in kernel address space. This helps reduce the size of both the kernel and the operating system.

Difference between Firmware and Operating System

  • Firmware is a kind of program embedded on a chip in a device, controlling that specific device; an OS provides functionality over and above what the firmware provides.
  • Firmware is encoded by the manufacturer of the IC and cannot be changed; an OS is a program that can be installed by the user and can be changed.
  • Firmware is stored in non-volatile memory; an OS is stored on the hard drive.

Difference between 32-Bit and 64-Bit Operating Systems

  • Architecture and software: a 32-bit OS processes 32 bits of data at a time, while a 64-bit OS processes 64 bits at a time.
  • Compatibility: 32-bit applications require a 32-bit OS and CPU; 64-bit applications require a 64-bit OS and CPU.
  • Systems available: 32-bit versions include all editions of Windows 8, Windows 7, Windows Vista, Windows XP, Linux, etc.; 64-bit versions include Windows XP Professional, Vista, 7, Mac OS X, and Linux.
  • Memory limits: 32-bit systems are limited to 4 GB of address space (roughly 3.2 GB of RAM usable in practice); 64-bit systems allow a theoretical maximum of around 17 billion GB of RAM.

CPU Scheduling

  • A scheduling system allows one process to use the CPU while another is waiting for I/O, thereby making full use of otherwise lost CPU cycles.
  • Almost all programs have some alternating cycle of CPU number crunching and waiting for I/O of some kind. Even a simple fetch from memory takes a long time relative to CPU speeds.
  • The challenge is to make the overall system as "efficient" and "fair" as possible, subject to varying and often dynamic conditions, and where "efficient" and "fair" are somewhat subjective terms, often subject to shifting priority policies.

    CPU- I/O Burst Cycle

    Almost all processes alternate between two states in a continuing cycle:
  • A CPU burst of performing calculations, and
  • An I/O burst, waiting for data transfer in or out of the system.
  • CPU bursts vary from process to process and from program to program, but extensive studies show a common frequency pattern: a large number of short CPU bursts and a small number of long ones.

CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU Scheduler ( a.k.a. the short-term scheduler ) to select another process from the ready queue to run next.

  • The storage structure for the ready queue and the algorithm used to select the next process are not necessarily a FIFO queue.

There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm.

Preemptive Scheduling

  • CPU scheduling decisions take place under one of the following four conditions:

  1. When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call.
  2. When a process switches from the running state to the ready state, for example in response to an interrupt.
  3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ).
  4. When a process terminates.

If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running, until it either voluntarily blocks or until it finishes. Otherwise the system is said to be preemptive.

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler.

This function involves:

  • Switching context.
  • Switching to user mode.
  • Jumping to the proper location in the newly loaded program.
  • The dispatcher needs to be as fast as possible, as it is run on every context switch.

The time consumed by the dispatcher is known as dispatch latency .

Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:

  1. CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to 90% ( heavily loaded. )
  2. Throughput - Number of processes completed per unit time. May range from 10 / second to 1 / hour depending on the specific processes.
  3. Turnaround time - Time required for a particular process to complete, from submission time to completion. ( Wall clock time. )
  4. Waiting time - How much time processes spend in the ready queue waiting their turn to get on the CPU.
    • ( Load average - The average number of processes sitting in the ready queue waiting their turn to get into the CPU. Reported in 1-minute, 5-minute, and 15-minute averages by "uptime" and "who". )
  5. Response time - The time taken in an interactive program from the issuance of a command to the beginning of a response to that command.
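
A tiny worked example makes these criteria concrete. The process names, arrival times, and burst lengths below are invented for illustration; turnaround is completion minus arrival, and waiting time is turnaround minus the CPU burst:

```python
# Hypothetical completion times for three processes under some schedule.
processes = [
    # (name, arrival_time, burst_time, completion_time), all in ms
    ("P1", 0, 8, 8),
    ("P2", 1, 4, 12),
    ("P3", 2, 9, 21),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # total time from submission to completion
    waiting = turnaround - burst        # time spent sitting in the ready queue
    print(f"{name}: turnaround = {turnaround} ms, waiting = {waiting} ms")
```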

Scheduling Algorithms

We will look at only a single CPU burst each for a small number of processes. Obviously, real systems have to deal with many more processes simultaneously executing their CPU - I/O burst cycles. Some very commonly used scheduling algorithms are:

  1. First-Come First-Serve Scheduling (FCFS)
  2. Shortest-Job-First Scheduling( SJF )
  3. Priority Scheduling
  4. Round Robin Scheduling
  5. Multilevel Queue Scheduling
  6. Multilevel Feedback-Queue Scheduling
  7. Thread Scheduling

First-Come First-Serve Scheduling (FCFS)

  • FCFS is very simple: just a FIFO queue, like customers waiting in line at the post office or at a copying machine.
  • Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time.
  • FCFS can also block the system in a busy dynamic system in another way, known as the Convoy Effect.

    Convoy Effect

    When one CPU intensive process blocks the CPU, a number of I/O intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU hog finally relinquishes the CPU, then the I/O processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats itself when the CPU intensive process gets back to the ready queue.
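
The long-average-wait problem can be sketched in a few lines of Python. This is a toy model, assuming a single CPU burst per job and all jobs arriving at time 0, not a real scheduler:

```python
def fcfs_avg_wait(bursts):
    """Average waiting time when CPU bursts run in their arrival order."""
    wait, clock = 0, 0
    for burst in bursts:
        wait += clock        # each job waits for every job ahead of it
        clock += burst
    return wait / len(bursts)

# Same three jobs, two arrival orders: a long job arriving first hurts everyone.
print(fcfs_avg_wait([24, 3, 3]))   # 17.0
print(fcfs_avg_wait([3, 3, 24]))   # 3.0
```

Moving the 24 ms job to the back drops the average wait from 17 ms to 3 ms, which is precisely the weakness that SJF exploits.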

Shortest-Job-First Scheduling( SJF )

  • Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time.
  • The SJF algorithm picks the quickest, smallest job that needs to be done, gets it out of the way first, and then picks the next smallest job to do.
  • SJF can be either preemptive or non-preemptive.
    • Preemption occurs when a new process arrives in the ready queue that has a predicted burst time shorter than the time remaining in the process whose burst is currently on the CPU.

Preemptive SJF is sometimes referred to as Shortest Remaining Time First (SRTF) scheduling.
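
For the non-preemptive case with every job ready at time 0, SJF amounts to "sort by predicted burst length, then run FCFS". A minimal sketch, with made-up burst predictions:

```python
def sjf_avg_wait(bursts):
    """Non-preemptive SJF with every job ready at time 0: shortest burst first."""
    wait, clock = 0, 0
    for burst in sorted(bursts):    # schedule in order of predicted burst length
        wait += clock               # this job waited for all shorter jobs
        clock += burst
    return wait / len(bursts)

bursts = [6, 8, 7, 3]               # predicted next-CPU-burst lengths, in ms
print(sjf_avg_wait(bursts))         # 7.0 (FCFS on the same arrival order gives 10.25)
```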

Priority Scheduling

  • Priority scheduling is a more general case of SJF, in which each job is assigned a priority and the job with the highest priority gets scheduled first.

( SJF uses the inverse of the next expected burst time as its priority - The smaller the expected burst, the higher the priority. )

  • Priority scheduling can be either preemptive or non-preemptive.
  • Note that in practice, priorities are implemented using integers within a fixed range, but there is no agreed-upon convention as to whether "high" priorities use large numbers or small numbers.
  • Priorities can be assigned either internally or externally.

    Internal priorities are assigned by the OS using criteria such as average burst time, ratio of CPU to I/O activity, system resource use, and other factors available to the kernel. External priorities are assigned by users, based on the importance of the job, fees paid, politics, etc.

  • Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation.

    Starvation

    Indefinite blocking is when a low-priority task can wait forever because there are always some other jobs around that have higher priority.
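
A common defense against starvation is aging: gradually raising the priority of processes that have waited a long time. The sketch below assumes the lower-number-means-higher-priority convention; AGING_STEP and the process priorities are invented values for illustration:

```python
# Sketch of priority selection with aging, assuming lower number = higher priority.
AGING_STEP = 1   # how much a waiting process's priority improves per round (assumed)

def pick_next(ready):
    """Pick the highest-priority process; age every process left waiting."""
    chosen = min(ready, key=ready.get)
    for name in ready:
        if name != chosen:
            ready[name] = max(0, ready[name] - AGING_STEP)  # waiters creep upward
    return chosen

ready = {"P1": 3, "P2": 1, "P3": 4}   # process name -> current priority
print(pick_next(ready))               # P2 runs; P1 and P3 move toward priority 0
```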

Round Robin Scheduling (RR)

  • Round Robin scheduling is similar to FCFS scheduling, except that each CPU burst is assigned a time limit called the time quantum.

When a process is given the CPU, a timer is set for whatever value has been set for a time quantum.

  • If the process finishes its burst before the time quantum timer expires, then it is swapped out of the CPU just like the normal FCFS algorithm.
  • If the timer goes off first, then the process is swapped out of the CPU and moved to the back end of the ready queue.

The ready queue is maintained as a circular queue, so when all processes have had a turn, then the scheduler gives the first process another turn, and so on.

  • RR scheduling can give the effect of all processes sharing the CPU equally, although the average wait time can be longer than with other scheduling algorithms
  • The performance of RR is sensitive to the time quantum selected. If the quantum is large enough, then RR reduces to the FCFS algorithm; if it is very small, then each process gets 1/nth of the processor time and they share the CPU equally.
  • A real system invokes overhead for every context switch, and the smaller the time quantum the more context switches there are.
  • Most modern systems use time quantum between 10 and 100 milliseconds, and context switch times on the order of 10 microseconds, so the overhead is small relative to the time quantum.
  • Turnaround time also varies with the time quantum, in a non-obvious manner
  • In general, turnaround time is minimized if most processes finish their next cpu burst within one time quantum.

    For example, with three processes of 10 ms bursts each, the average turnaround time for 1 ms quantum is 29, and for 10 ms quantum it reduces to 20. However, if it is made too large, then RR just degenerates to FCFS.

  • A rule of thumb is that 80% of CPU bursts should be smaller than the time quantum.
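
The quantum trade-off, including the 29 ms vs. 20 ms turnaround example above, can be reproduced with a short simulation. This toy model assumes all jobs arrive at time 0 and ignores context-switch overhead:

```python
from collections import deque

def rr_avg_turnaround(bursts, quantum):
    """Round Robin: average turnaround time, all jobs arriving at time 0."""
    queue = deque(enumerate(bursts))    # (process id, remaining burst)
    clock, total_turnaround = 0, 0
    while queue:
        pid, left = queue.popleft()
        run = min(quantum, left)        # run until the burst or the quantum ends
        clock += run
        if left > run:
            queue.append((pid, left - run))   # timer expired: back of the queue
        else:
            total_turnaround += clock   # job finishes at the current time
    return total_turnaround / len(bursts)

# The example above: three processes with 10 ms bursts each.
print(rr_avg_turnaround([10, 10, 10], quantum=1))   # 29.0
print(rr_avg_turnaround([10, 10, 10], quantum=10))  # 20.0 (degenerates to FCFS)
```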

Multilevel Queue Scheduling

  • When processes can be readily categorized, then multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.
  • Scheduling must also be done between queues, that is scheduling one queue to get time relative to other queues.

    Two common options are strict priority ( no job in a lower priority queue runs until all higher priority queues are empty ) and round-robin ( each queue gets a time slice in turn, possibly of different sizes. )

Note that under this algorithm jobs cannot switch from queue to queue - Once they are assigned a queue, that is their queue until they finish.

Multilevel Feedback-Queue Scheduling

  • Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, except jobs may be moved from one queue to another for a variety of reasons:
  • If the characteristics of a job change between CPU-intensive and I/O intensive, then it may be appropriate to switch a job from one queue to another.
  • Aging can also be incorporated, so that a job that has waited for a long time can get bumped up into a higher priority queue for a while.
  • Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any situation. But it is also the most complex to implement because of all the adjustable parameters.
  • Some of the parameters which define one of these systems include:
    • The number of queues.
    • The scheduling algorithm for each queue.
    • The methods used to upgrade or demote processes from one queue to another. ( Which may be different. )
    • The method used to determine which queue a process enters initially.


Thread Scheduling

  • The process scheduler schedules only the kernel threads.
  • User threads are mapped to kernel threads by the thread library
  • The OS ( and in particular the scheduler ) is unaware of them.

Contention Scope

Contention scope refers to the scope in which threads compete for the use of physical CPUs. There are multiple types of contention; the following two are the most commonly discussed:

  1. System Contention Scope

    • System Contention Scope (SCS) involves the system scheduler scheduling kernel threads to run on one or more CPUs. Systems implementing one-to-one threads use only system contention scope.
  2. Process Contention Scope

    • Process Contention Scope (PCS) occurs on systems implementing many-to-one and many-to-many threads, because competition occurs between threads that are part of the same process.

      Process Contention Scope scheduling is typically done with priority, where the programmer can set and/or change the priority of threads created by his or her programs. Even time slicing is not guaranteed among threads of equal priority.

The management and scheduling of multiple user threads on a single kernel thread is handled by the thread library, not by the OS.

A Small Note:

This is my "First Ever Blog" leave your feedback & let me know if you like it!