Initial Overleaf Import
This commit is contained in:

BIN  os/images/Monolythic.png  (new file, 307 KiB; binary file not shown)
BIN  os/images/memtable.png  (new file, 554 KiB; binary file not shown)

os/main.tex  (new file, 363 lines)
@@ -0,0 +1,363 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% writeLaTeX Example: A quick guide to LaTeX
%
% Source: Dave Richeson (divisbyzero.com), Dickinson College
%
% A one-size-fits-all LaTeX cheat sheet. Kept to two pages, so it
% can be printed (double-sided) on one piece of paper
%
% Feel free to distribute this example, but please keep the referral
% to divisbyzero.com
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% How to use writeLaTeX:
%
% You edit the source code here on the left, and the preview on the
% right shows you the result within a few seconds.
%
% Bookmark this page and share the URL with your co-authors. They can
% edit at the same time!
%
% You can upload figures, bibliographies, custom classes and
% styles using the files menu.
%
% If you're new to LaTeX, the wikibook is a great place to start:
% http://en.wikibooks.org/wiki/LaTeX
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[10pt,landscape]{article}
\usepackage{amssymb,amsmath,amsthm,amsfonts}
\usepackage{multicol,multirow}
\usepackage{calc}
\usepackage{ifthen}
\usepackage{graphicx}
%\usepackage[fontsize=6pt]{fontsize}

\usepackage{helvet}
\renewcommand{\familydefault}{\sfdefault}
\usepackage[landscape]{geometry}

\geometry{a4paper, landscape, margin=0.5in}
\usepackage[colorlinks=true,citecolor=blue,linkcolor=blue]{hyperref}
\usepackage[
protrusion=true,
activate={true,nocompatibility},
final,
tracking=true,
kerning=true,
spacing=true,
factor=1100]{microtype}
\SetTracking{encoding={*}, shape=sc}{40}
%%Packages added by Sebastian Lenzlinger:
\usepackage{enumerate} %% Used to change the style of enumerations (see below).

\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{axiom}{Axiom}
\newtheorem{lem}{Lemma}
\newtheorem{corr}{Corollary}

\usepackage{tikz} %% Package to create graphics (graphs, automata, etc.)
\usetikzlibrary{automata} %% Tikz library to draw automata
\usetikzlibrary{arrows} %% Tikz library for nicer arrow heads
%%End
\microtypecontext{spacing=nonfrench}

\ifthenelse{\lengthtest { \paperwidth = 11in}}
{ \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} }
{\ifthenelse{ \lengthtest{ \paperwidth = 297mm}}
{\geometry{top=0.5cm,left=0.5cm,right=0.5cm,bottom=0.5cm} }
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
}
\pagestyle{empty}
\makeatletter
\renewcommand{\section}{\@startsection{section}{1}{0mm}%
{0.1mm}%
{0.0mm}%
{\normalfont\normalsize\bfseries}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}%
{0.01mm}%
{0.0001mm}%
{\normalfont\small\bfseries}}
\renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{1ex plus .2ex}%
{\normalfont\small\bfseries}}
\makeatother
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt plus 0.5ex}
% -----------------------------------------------------------------------

\title{Operating Systems FS23}

\begin{document}
\tiny
\raggedright
\footnotesize

\begin{multicols*}{3}
\setlength{\premulticols}{1pt}
\setlength{\postmulticols}{1pt}
\setlength{\multicolsep}{1pt}
\setlength{\columnsep}{1pt}
\section{Intro}
\textbf{Batch Systems:} Multiprogramming, job queue, job scheduling, no interactivity, job switch possible (e.g. wait on I/O). Reduce setup time by batching.
\textbf{Time-sharing:} multitasking, frequent change of currently running job, CPU scheduler, interactivity, `concurrent processes', virtual memory.\\ \textbf{Summary}
• OS efficiently manages various resources (processes, memory, I/O, files, security) but also `extends' machines with user abstractions: processes, address spaces, files.
• OS can only execute programs available in main memory (the only large storage area that the processor can access directly).
• Storage hierarchy ranges from small and fast but expensive (registers, caches), to larger and slower but less expensive (main memory; volatile), to even larger, much slower, and much less expensive (disk, magnetic tapes; nonvolatile).
• Caching brings programs and data closer to the CPU for execution.
• Switch to kernel mode involves: I/O control, timer management, and interrupt management.
• Ontogeny Recapitulates Phylogeny: changes in technology may bring obsolete concepts back.

\section{OS Structures}
\textbf{Syscalls } •\emph{Implementation}: Typically, a number is associated with each system call.
– The system call interface maintains a table indexed by these numbers.
– The system call interface invokes the intended system call in the OS kernel and returns the status of the system call and any return values.
– The caller need know nothing about how the system call is implemented: just needs to `obey' the API and understand what the OS will do as a result of the call.
– Most details of the OS interface are hidden from the programmer by the API; managed by the runtime support library (set of functions built into libraries included with the compiler).
•\emph{Parameter Passing}: Often, more information is required than simply the identity of the desired system call; the exact type and amount of information vary according to OS and call. General methods used to pass parameters to the OS (see the sketch after this list):
1. Simplest: pass the parameters in registers. In some cases there may be more system call parameters than registers.
2. Parameters stored in a block, or table, in memory; the address of the block is passed as a parameter in a register (approach taken by Linux and Solaris). Does not limit the number or length of parameters being passed.
3. Parameters placed, or pushed, onto the stack by the program and popped off the stack by the operating system. Does not limit the number or length of parameters being passed.
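A minimal sketch of method 1 (illustrative, not from the lecture; assumes Linux with glibc, whose \texttt{syscall()} wrapper passes the call number and its arguments in registers):
\begin{verbatim}
#include <sys/syscall.h> /* SYS_write         */
#include <unistd.h>      /* syscall() wrapper */

int main(void) {
    const char msg[] = "hi\n";
    /* number + args reach the kernel in registers */
    long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
    return n < 0 ? 1 : 0;
}
\end{verbatim}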

\textbf{Monolithic Systems} • Advantages
- Interaction between OS modules is easy and efficient
- Very little overhead in the system call interface
- Communication within the kernel is fast
• Disadvantages
- Difficult to oversee and follow through
- Difficult to maintain and perform local changes
- Stability (single point of failure)
- Failure in a single module can threaten the entire system
- Complexity increases with kernel scale

\textbf{Summary: }
•OS: environment to execute programs; provides services to users and programs.
•User--OS interfaces: (1) command interpreters, (2) GUI, and (3) touch-screen.
•System calls at the heart of the OS: interface to the services offered by the OS.
•Process control (creation and termination), file management (reading and writing), device management, information management, communication, protection.
•System programs: provide utilities to users.
•A linker combines several relocatable object modules into a single binary executable file. A loader loads the executable file into memory, where it becomes eligible to run on an available CPU.
•Applications are OS-specific due to differences in binary formats, instruction sets, system calls.
•OS structures are monolithic, layered, microkernel, modular, and hybrid.

\section{Processes}PCB is an internal struct for process management. Contains all relevant information of the process (process state, PID, CPU registers, etc.). This is not the same as the process memory layout; executable code is not part of the PCB. Multiple PCs $\Rightarrow$ multithreading. A sketch of a PCB follows.
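A minimal PCB sketch (field names illustrative, not from the lecture):
\begin{verbatim}
struct pcb {
    int   pid;            /* process ID            */
    int   state;          /* NEW/READY/RUNNING/... */
    void *saved_regs;     /* CPU context           */
    void *page_table;     /* memory-mgmt info      */
    int   open_files[16]; /* I/O status info       */
    struct pcb *next;     /* ready-queue link      */
};
\end{verbatim}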
\section{Scheduling}
\emph{Short-term} (CPU) Schedules from ready queue and allocates CPU.
\emph{Mid-term} swapping: decrease \# processes in memory if necessary.
\emph{Long-term} Selects process to bring into ready queue. Controls degree of multiprogramming.
\textbf{Summary: }CPU scheduling: selecting a waiting process from the ready queue and allocating the CPU to it (by the dispatcher).
• Scheduling algorithms: either preemptive (where the CPU can be taken away from a process) or nonpreemptive (where a process must voluntarily relinquish control of the CPU).
• Almost all modern operating systems are preemptive.
• Scheduling algorithm evaluation criteria: (1) CPU utilization, (2) throughput, (3) turnaround time, (4) waiting time, (5) response time.
• First-come, first-served (FCFS) scheduling: simplest scheduling algorithm; can cause short processes to wait for very long processes (worked example below).
• Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementation is difficult because predicting the length of the next CPU burst is difficult.
• Round-robin (RR) scheduling allocates the CPU to each process for a time quantum (q). If a process does not relinquish the CPU before its q expires, it is preempted, and another process is scheduled to run for a q.
• Priority scheduling (Prio) assigns each process a priority, and the CPU is allocated to the process with the highest priority. Same-priority processes can be scheduled in FCFS order or using RR scheduling.
• Multilevel queue (MLQ) scheduling partitions processes into several separate queues arranged by priority, and the scheduler executes processes in the highest-priority queue. Different scheduling algorithms may be used in each queue.
• Multilevel feedback queues (MLFQ) are similar to multilevel queues, except that a process may migrate between different queues.
• Multicore processors place one/more CPUs on the same physical chip; each CPU may have more than one hardware thread. The OS sees each hardware thread as a logical CPU.
• Load balancing on multicore systems equalizes loads between CPU cores; thread migration between cores to balance loads may invalidate cache contents and increase memory access times.
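A quick FCFS worked example (burst times invented for illustration): $P_1, P_2, P_3$ with CPU bursts 24, 3, 3, arriving in that order:
\[ \text{waits } 0, 24, 27 \Rightarrow \text{avg} = \tfrac{0+24+27}{3} = 17; \qquad \text{order } P_2, P_3, P_1 \Rightarrow \text{avg} = \tfrac{0+3+6}{3} = 3. \]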
\textbf{Shared memory} is under the control of user space, not the OS. User processes have to synchronize when accessing shared memory.

\textbf{Prod. Consumer Problem} Bounded buffer requires sync (see the sketch below).
\textbf{Message Passing} Comm. link implemented 1. Physical: shared mem, HW bus, network. 2. Logical: direct (send, recv via process ID) / indirect (via mailbox (aka ports)), sync/async, auto/explicit. • \emph{Syncing} Blocking or non-blocking. Rendezvous: both send and recv blocking: correctness guaranteed, lower performance.
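A minimal bounded-buffer sketch (classic semaphore pattern; buffer size and names illustrative; assumes POSIX semaphores):
\begin{verbatim}
#include <semaphore.h>

#define N 8
int buf[N]; int in = 0, out = 0;
sem_t empty, full, mutex;

void init(void) {           /* call once before use */
    sem_init(&empty, 0, N); /* N free slots         */
    sem_init(&full,  0, 0); /* no items yet         */
    sem_init(&mutex, 0, 1); /* binary lock          */
}

void produce(int item) {
    sem_wait(&empty); sem_wait(&mutex);
    buf[in] = item; in = (in + 1) % N;
    sem_post(&mutex); sem_post(&full);
}

int consume(void) {
    sem_wait(&full); sem_wait(&mutex);
    int item = buf[out]; out = (out + 1) % N;
    sem_post(&mutex); sem_post(&empty);
    return item;
}
\end{verbatim}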
\section{Sync}
• Race condition: processes have concurrent access to shared data and the final result depends on the particular order in which concurrent accesses occur. Can result in corrupted values of shared data.
• Critical section: section of code where shared data may be manipulated and a possible race condition may occur. The critical-section problem (CSP) is to design a protocol whereby processes can synchronize their activity to cooperatively share data.
• Solutions to the CSP must fulfil: (1) mutual exclusion, (2) progress, and (3) bounded waiting.
• Software solutions to the critical-section problem, such as Peterson's solution, do not work well on modern computer architectures.
• A mutex lock provides mutual exclusion by requiring that a process acquire a lock before entering a critical section and release the lock on exiting the critical section (sketch below).
• Semaphores, like mutex locks, can be used to provide mutual exclusion. However, whereas a mutex lock has a binary value that indicates if the lock is available or not, a semaphore has an integer value and can therefore be used to solve a variety of other synchronization problems.
• A monitor is an abstract data type that provides a high-level form of process synchronization. A monitor uses condition variables that allow processes to wait for certain conditions to become true and to signal one another when conditions have been set to true.
• Solutions to the critical-section problem may suffer from deadlocks.
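A minimal mutex sketch (assumes POSIX threads; names illustrative):
\begin{verbatim}
#include <pthread.h>

long counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    pthread_mutex_lock(&m);   /* acquire before entering */
    counter++;                /* critical section        */
    pthread_mutex_unlock(&m); /* release on exit         */
    return NULL;
}
\end{verbatim}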
\textbf{Deadlock Characterization: } A deadlock can arise if all these four (non-independent) conditions hold simultaneously:
\textbf{D1}. Mutual exclusion: only one process at a time can use a resource.
\textbf{D2}. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
\textbf{D3}. No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.
\textbf{D4}. Circular wait: there exists a set $\{P_0, P_1, \dots, P_n\}$ of waiting processes such that $P_0$ is waiting for a resource held by $P_1$, $P_1$ is waiting for a resource held by $P_2$, \dots, $P_{n-1}$ is waiting for a resource held by $P_n$, and $P_n$ is waiting for a resource held by $P_0$. •\emph{Deadlock Prevention: } Avoid deadlock by having a strict lock hierarchy, i.e. addressing D4 (only practical option). A lock-hierarchy sketch follows.
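A minimal lock-hierarchy sketch (names illustrative): every thread takes locks in the same global order, so a circular wait (D4) can never form.
\begin{verbatim}
#include <pthread.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void use_both(void) {       /* rule: lock a BEFORE b    */
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b); /* never b-then-a elsewhere */
    /* ... work on both resources ... */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
}
\end{verbatim}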
\section{Main Memory}
\textbf{Address Binding}: Important: addresses are represented in different ways at different stages of a program's life.
– Source code addresses are usually symbolic (not physical). Compile time: if the memory location is known, absolute code can be generated; recompile on location changes.
– Compiled code addresses bind to relocatable memory addresses, i.e., ``14 bytes from beginning of this memory module''.
– Linker or loader will bind relocatable addresses to absolute addresses, if not known at compile time.
– Each binding maps one address space to another: code addresses to relocatable addresses to absolute addresses.
- Execution time: binding happens here if the process can be relocated during execution (needs HW support). Most OSs bind at exec time.
\textbf{MMU:} HW device maps virt. to phys. addr. \emph{during execution}. Logical addr. plus relocation reg. $=$ relocatable mem. addr. \textbf{Exec-time addr. binding: }logical addr. bound during exec. by MMU (see the sketch below).
\textbf{Dynamic Loading }($\neq$ d. linking) routine loaded only when called.
\textbf{Two Partitions:} User proc (in low physical mem) plus resident OS (in high physical mem with interrupt vec.). \emph{Contiguous alloc:} each proc in a single section of mem.
\textbf{Mem protect:} Reloc. plus limit reg. used in context switch to protect user procs from each other. MMU \emph{dynamically} maps log. to phys. addr.
\textbf{Multiple Part.:} Deg. of multiprog. limited by no. of partitions in mem. for procs. \emph{Variable part. size} for efficiency.
\emph{First-fit:} allocate first hole; equiv. to best-fit for storage, but faster. \emph{Best-fit:} smallest hole large enough; must search entire list, unless ordered by size $\Rightarrow$ smallest leftover area.
\textbf{External frag.} many small, non-contiguous holes. Results in unusable memory. \emph{Sol:} Compaction: shuffle mem content to have all free mem in 1 large block. Only possible if dyn. addr. reloc., aka done at exec time. Also: non-contiguous alloc, see paging. \textbf{Internal frag:} alloc mem block may be larger than requested $\Rightarrow$ not whole block used.
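A minimal sketch of the MMU's relocation-plus-limit check (illustrative; the real check happens in hardware on every access):
\begin{verbatim}
extern void trap_addressing_error(void); /* assumed: traps to kernel */

unsigned translate(unsigned logical, unsigned base, unsigned limit) {
    if (logical >= limit)
        trap_addressing_error(); /* protection violation */
    return base + logical;       /* physical address     */
}
\end{verbatim}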
\textbf{Paging:} alloc proc mem whenever and wherever available; a proc's phys. addr. may be non-contiguous. Physical mem. div. into fixed-size blocks aka \emph{frames}. Log. mem. div. into \emph{pages}.
Use \emph{page table} to map N pages to N frames. Storage also split into same-size blocks. •Addr. trans. scheme: translate page number p to frame number f, use same offset d; frame ID times page size plus offset $=$ phys. mem. addr. (see the sketch below). Like using a table of base/relocation regs, one per mem frame. $\Rightarrow$ NO external fragmentation. IMPORTANT: Page table is \emph{per proc}.
\textbf{Paging implementation:} HW w/o TLB: every data/instr. access needs two mem accesses: page table, then frame. With TLB: HW cache aka TLB with key,val pairs; on TLB miss get from page table in mem.
\textbf{Protection:} bit indicating RO or RW (can add more, e.g. for exec). Valid--invalid bit: additional bit per PT entry set by OS; indicates if page is in LAS of proc. Violation results in trap to kernel.
\textbf{Demand Paging Page Fault Handling:} 1. Ref. LA with PT invalid bit. 2. Trap to OS. 3. Page in storage. 4. Bring missing page into free frame. 5. Reset PT. 6. Restart instruction.
\textbf{Page Replacement Algorithms: } FIFO, Least Recently Used (LRU), Optimal Algo (OPT) as reference.
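A minimal sketch of the translation arithmetic, assuming a 4 KiB page size (constants illustrative):
\begin{verbatim}
#define PAGE_SIZE 4096u /* 2^12 */

unsigned phys_addr(const unsigned page_table[], unsigned logical) {
    unsigned p = logical / PAGE_SIZE;     /* page number  */
    unsigned d = logical % PAGE_SIZE;     /* offset       */
    return page_table[p] * PAGE_SIZE + d; /* frame*size+d */
}
\end{verbatim}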
\textbf{Frame Alloc. Algo:} 1. Fixed: 1.1 Equal: m frames, n procs, so m/n frames per proc. 1.2 Proportional: dep. on size of program. 2. Priority based: proportional scheme with priority instead of size (or size and prio mixed). \textbf{Global Repl:} Pro: greater throughput (more commonly used). Con: proc exec time can greatly vary. \textbf{Local Repl:} Pro: more consistent per-proc perf. Con: underutilized memory.

\textbf{Thrashing} Proc needs frames, high page-fault rate: get desired page, replace existing frame, quickly need replaced frame back, repeat. CPU seems unused $\Rightarrow$ OS gives CPU more work, thrashing repeats even more.
\textbf{Locality:} Set of pages actively used together by a proc. Model: proc faults for a locality until all pages of the locality are in mem; does not fault again until locality changes. Localities may overlap. A \textbf{working set} is based on locality and is defined as the set of pages currently in use by a process.

\section{FileSys}
• File: an abstract data type defined and implemented by the OS. It is a sequence of logical records. A logical record may be a byte, a line (of fixed or variable length), or a more complex data item.
• File operations: create, open, write, read, reposition, delete, truncate.
• File access methods: sequential and direct.
• Directories: within a file system, it is useful to create directories to allow files to be organized.
• Tree-structured directory allows a user to create subdirectories to organize files.
• Acyclic-graph directory: enables users to share subdirectories and files but complicates searching and deletion.
• A general graph: allows complete flexibility in the sharing of files and directories but sometimes requires garbage collection to recover unused disk space.
• Most file systems reside on secondary storage, which is designed to hold a large amount of data permanently.
• The most common secondary-storage medium is the disk, but the use of NVM devices is increasing.
• File systems are mounted onto a logical file system architecture to make them available for use.
• File systems are often implemented in a layered or modular structure.
• Files within a file system can be allocated space on the storage device in three ways: through contiguous, linked, or indexed allocation.
• Contiguous allocation can suffer from external fragmentation.
• Direct access is very inefficient with linked allocation.
• Indexed allocation may require substantial overhead for its index block.
• Free-space allocation methods also influence the efficiency of disk-space use, the performance of the file system, and the reliability of secondary storage.
• Free-space allocation methods include bit vectors and linked lists (see the sketch after this list).
• A general-purpose computer system can have multiple storage devices, and those devices can be sliced up into partitions, which hold volumes, which in turn hold file systems.
• File systems must also be mounted before they are used.
• A mount point is a location within the directory structure where the file system will be attached.
• A boot loader is a set of blocks that contain enough code to know how to load the kernel from the file system.
• The root partition selected by the boot loader must be mounted at boot time.
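A minimal free-space bit-vector sketch (block count illustrative, not from the lecture):
\begin{verbatim}
#include <stdint.h>

#define NBLOCKS 1024u
static uint8_t freemap[NBLOCKS / 8]; /* 1 bit per block; 1 = allocated */

int  block_is_free(unsigned b) { return !(freemap[b/8] & (1u << (b%8))); }
void mark_allocated(unsigned b) { freemap[b/8] |=  (1u << (b%8)); }
void mark_free(unsigned b)      { freemap[b/8] &= ~(1u << (b%8)); }
\end{verbatim}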
\section{Security}
• System protection features are guided by the principle of need-to-know and implement mechanisms to enforce the principle of least privilege.
• Computer systems contain objects that must be protected from misuse.
• Objects may be hardware (such as memory, CPU time, and I/O devices) or software (such as files, programs, and semaphores).
• An access right is permission to perform an operation on an object.
• A domain is a set of access rights.
• Processes execute in domains and may use any of the access rights in the domain to access and manipulate objects.
• The access matrix is a general model of protection that provides a mechanism for protection without imposing a particular protection policy on the system or its users.
• The access matrix is sparse. It is normally implemented as access lists associated with each object.
\section{Virtualization}
\textbf{Trap-and-emulate:}
• A virtual machine guest can execute only in user mode.
• The kernel, of course, runs in kernel mode, and it is not safe to allow user-level code to run in kernel mode.
• We must have a virtual user mode and a virtual kernel mode, both of which run in physical user mode.
• A transfer from virtual user mode to virtual kernel mode in the virtual machine happens when privileged instructions are executed.

\textbf{Binary Translation:}
• Some CPUs do not have a clean separation of privileged and nonprivileged instructions.
• Binary translation is a technique that makes the VMM inspect all instructions initiated by the guest OS.
• Instructions that would behave differently in real kernel mode are translated into a set of new instructions.
\textbf{Hypervisors:}
• Type 0 hypervisor: has existed for many years under many names, including ``partitions'' and ``domains''.
• The VMM itself is encoded in the firmware and loaded at boot time.
• The VMM loads the guest images to run in each partition.
• The feature set of a type 0 hypervisor tends to be smaller than those of the other types because it is implemented in hardware.
• Type 1 hypervisor (bare metal): commonly found in company data centers; a special-purpose operating system that runs natively on the hardware.
• Type 2 hypervisor (hosted): simply another process run and managed by the host OS; the host OS does not know that virtualization is happening.
• There is very little operating-system involvement in a type 2 hypervisor.
• Both type 1 and type 2 hypervisors work with unmodified guest operating systems and have to jump through hoops to get good performance (recall the trap-and-emulate approach).
• Paravirtualization takes another approach: it avoids causing traps by modifying the source code of the guest operating system.
\section{OS \& Performance}
• The OS decides the mapping of processes/threads in time and space.
• Preemptive scheduling enables fairness among all threads.
• Too long a time quantum $\Rightarrow$ less responsiveness of the applications.
• Too short a time quantum $\Rightarrow$ frequent context switches.
• Load balancing (e.g. migrating a thread to a different CPU core) aims to evenly distribute the workload across available CPU cores to maximize overall system performance.
• Cache affinity refers to the preference for a thread to execute on the same CPU core where its associated cache data resides.
• OS memory management ensures that applications receive sufficient and contiguous memory blocks, minimizing fragmentation and optimizing memory usage.
• Memory paging: page size trade-off:
– Too large: leads to internal fragmentation.
– Too small: leads to external fragmentation, where free memory is divided into small, non-contiguous blocks, making it challenging to allocate larger memory chunks.
• Efficient and optimized file systems can improve file access times and overall I/O performance.
• Well-designed I/O scheduling algorithms can minimize seek times, reduce I/O bottlenecks, and enhance application responsiveness.
• Buffering and caching mechanisms reduce physical disk accesses, improving I/O performance by providing faster access times.
• Asynchronous I/O allows for concurrent execution of I/O operations, minimizing I/O-related delays and optimizing resource utilization.

%\includegraphics[width=\linewidth]{images/memtable.png}
\end{multicols*}
\end{document}

os/quiz.tex  (new file, 286 lines)
@@ -0,0 +1,286 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% writeLaTeX Example: A quick guide to LaTeX
%
% Source: Dave Richeson (divisbyzero.com), Dickinson College
%
% A one-size-fits-all LaTeX cheat sheet. Kept to two pages, so it
% can be printed (double-sided) on one piece of paper
%
% Feel free to distribute this example, but please keep the referral
% to divisbyzero.com
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% How to use writeLaTeX:
%
% You edit the source code here on the left, and the preview on the
% right shows you the result within a few seconds.
%
% Bookmark this page and share the URL with your co-authors. They can
% edit at the same time!
%
% You can upload figures, bibliographies, custom classes and
% styles using the files menu.
%
% If you're new to LaTeX, the wikibook is a great place to start:
% http://en.wikibooks.org/wiki/LaTeX
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[10pt,landscape]{article}
\usepackage{amssymb,amsmath,amsthm,amsfonts}
\usepackage{multicol,multirow}
\usepackage{calc}
\usepackage{ifthen}
\usepackage{helvet}
\renewcommand{\familydefault}{\sfdefault}
%\usepackage[fontsize=6pt]{fontsize}

\usepackage[landscape]{geometry}

\geometry{a4paper, landscape, margin=0.5in}
\usepackage[colorlinks=true,citecolor=blue,linkcolor=blue]{hyperref}
\usepackage[
protrusion=true,
activate={true,nocompatibility},
final,
tracking=true,
kerning=true,
spacing=true,
factor=1100]{microtype}
\SetTracking{encoding={*}, shape=sc}{40}
%%Packages added by Sebastian Lenzlinger:
\usepackage{enumerate} %% Used to change the style of enumerations (see below).

\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{axiom}{Axiom}
\newtheorem{lem}{Lemma}
\newtheorem{corr}{Corollary}

\usepackage{tikz} %% Package to create graphics (graphs, automata, etc.)
\usetikzlibrary{automata} %% Tikz library to draw automata
\usetikzlibrary{arrows} %% Tikz library for nicer arrow heads
%%End
\microtypecontext{spacing=nonfrench}

\ifthenelse{\lengthtest { \paperwidth = 11in}}
{ \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} }
{\ifthenelse{ \lengthtest{ \paperwidth = 297mm}}
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
}
\pagestyle{empty}
\makeatletter
\renewcommand{\section}{\@startsection{section}{1}{0mm}%
{0.1mm}%
{0.0001mm}%
{\normalfont\normalsize\bfseries}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}%
{0mm}%
{0mm}%
{\normalfont\small\bfseries}}
\renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{1ex plus .2ex}%
{\normalfont\small\bfseries}}
\makeatother
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt plus 0.5ex}
% -----------------------------------------------------------------------

\title{Operating Systems FS23}

\begin{document}
\tiny
\raggedright
\footnotesize

\begin{multicols*}{3}
\setlength{\premulticols}{1pt}
\setlength{\postmulticols}{1pt}
\setlength{\multicolsep}{1pt}
\setlength{\columnsep}{1pt}

\section{Intro}
\textbf{Asym. Mult. Proc. (AMP):} Special task per proc. Single OS code in mem, runs only on CPU1; syscalls and cache buffering all go to CPU1. Mem shared across all, scheduling across all CPUs. \textbf{SMP:} All procs perform all tasks. Sync issues. Syscalls processed on the CPU where they occur.

\rule{\linewidth}{.1pt}
\section{OS Structures}

\section{Processes-Threads-Concurrency}
\textbf{What is a process?} In the context of operating systems, a process is an instance of a program that is being executed by one or many threads. It is a unit of work within the operating system that can be scheduled and allocated resources such as CPU time, memory, and I/O devices. Each process has its own address space, which contains the code being executed, data structures, and other resources required by the program. Processes are managed by the operating system's process management subsystem, which provides mechanisms for creating, terminating, and synchronizing processes.\\
\textbf{For a single-processor system, there will never be more than one process in the Running state.} True (the QUIZ also marks this as true): only one process can be executing on the CPU at any given time, so at most one process is in the Running state; all other runnable processes wait in the Ready state. The scheduler may switch between processes frequently, giving each process a small amount of CPU time before switching to another. This gives the illusion that multiple processes are running simultaneously, even though only one process is executing on the CPU at any given instant.\\
\textbf{An I/O-bound process spends more time doing computations than I/O.} This statement is not correct. An I/O-bound process is a process that spends most of its time waiting for I/O operations to complete, such as reading from or writing to a disk or network. In contrast, a CPU-bound process is a process that spends most of its time performing computations and using the CPU. Therefore, an I/O-bound process spends more time waiting for I/O operations to complete than performing computations.\\
\textbf{What / who selects which process should be executed next (from the ready queue) and allocates CPU?} The responsibility of selecting which process should be executed next and allocating CPU time to that process lies with the operating system's scheduler. The scheduler is a component of the operating system's process management subsystem and is responsible for managing the execution of processes on the CPU. When a process is ready to run, it is placed in a queue called the ready queue. The scheduler selects a process from the ready queue based on its scheduling algorithm and allocates CPU time to that process. The scheduling algorithm can be preemptive or non-preemptive, depending on whether the scheduler can interrupt a running process to allocate CPU time to another process.\\
\textbf{An I/O-bound process spends more time doing I/O than computations.} Yes, this statement is correct. An I/O-bound process is a process that spends most of its time waiting for I/O operations to complete, such as reading from or writing to a disk or network. Therefore, an I/O-bound process spends more time doing I/O operations than computations. In contrast, a CPU-bound process is a process that spends most of its time performing computations and using the CPU.\\
\textbf{Which benefits go to multithreaded programming?} Multithreaded programming offers several benefits, including:

1. Responsiveness: Multithreading allows a program to continue executing even if part of it is blocked or waiting for I/O operations to complete. This is especially important for user interfaces and time-sharing systems.

2. Resource sharing: Threads share the resources of their parent process, making it easier to share data between threads than between separate processes.

3. Faster execution: Creating a thread is generally faster and less resource-intensive than creating a new process, and switching between threads has lower overhead than switching between processes.

4. Scalability: Even a single process can take advantage of multiprocessor architectures by using multiple threads to execute tasks in parallel.

5. Simplified code: Multithreading can simplify the code by allowing different parts of the program to execute concurrently without requiring complex synchronization mechanisms.

These benefits make multithreaded programming an attractive option for developing efficient and responsive applications that can take advantage of modern hardware architectures.\\
\textbf{What is the difference between parallelism and concurrency?} Concurrency and parallelism are related concepts in computer science, but they have different meanings.

Concurrency refers to the ability of a program to make progress on multiple tasks simultaneously. In a concurrent program, different parts of the program can execute independently and make progress at the same time. Concurrency is often achieved through the use of threads or other mechanisms that allow different parts of the program to execute concurrently.

Parallelism, on the other hand, refers to the ability of a system to perform multiple tasks simultaneously. In a parallel system, multiple processors or cores work together to execute different tasks simultaneously. Parallelism is often used in high-performance computing applications that require large amounts of computation.

In summary, concurrency is a property of a program that allows it to make progress on multiple tasks simultaneously, while parallelism is a property of a system that allows it to perform multiple tasks simultaneously by using multiple processors or cores.\\
\textbf{Amdahl's Law describes performance gains for applications with both a serial and parallel component.} Yes, that is correct. Amdahl's Law is a formula that describes the theoretical maximum speedup that can be achieved by parallelizing a program. It takes into account the proportion of the program that can be parallelized and the number of processors used to execute the program.

Amdahl's Law states that the speedup of a program is limited by its serial portion, which cannot be parallelized. With serial fraction $s$, parallel fraction $p = 1 - s$, and $N$ processors:
\[ \text{Speedup} = \frac{1}{s + p/N} \]

This formula shows that as the number of processors increases, the speedup of a program approaches a limit determined by its serial portion. Therefore, Amdahl's Law highlights the importance of identifying and optimizing the serial portion of a program to achieve maximum performance gains from parallelization.\\
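A quick worked example (numbers invented for illustration), using the formula above:
\[ s = 0.1,\ N = 8:\ \text{Speedup} = \tfrac{1}{0.1 + 0.9/8} \approx 4.7; \qquad N \to \infty:\ \text{Speedup} \to \tfrac{1}{0.1} = 10. \]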
\textbf{What concept refers to the capability to allow multiple tasks to make progress on a single processor?} The concept that refers to the capability to allow multiple tasks to make progress on a single processor is concurrency. Concurrency allows different parts of a program to execute independently and make progress at the same time, even if they are running on a single processor.

Concurrency can be achieved through various mechanisms, such as time slicing or cooperative multitasking, which allow different tasks to take turns executing on the processor. However, it's important to note that while concurrency can improve the responsiveness and performance of a program, it does not provide true parallelism since only one task is executing at any given moment.\\
\textbf{The dispatcher module gives control of the CPU (core) to the process selected by the short-term scheduler, this means:} Yes, that is correct. The dispatcher module is responsible for giving control of the CPU (core) to the process selected by the short-term scheduler. Once the short-term scheduler selects a process to run, the dispatcher performs a context switch to save the state of the currently running process and restore the state of the selected process.

The dispatcher then transfers control of the CPU to the selected process, allowing it to execute on the processor. This continues until either the selected process completes its execution or is preempted by another process due to a higher-priority task or time-slice expiration.

Overall, the dispatcher plays a critical role in managing and scheduling processes on a system's CPU, ensuring that each process gets its fair share of processing time and that system resources are used efficiently.

The CPU switches to user mode, switches context from one proc to another, and jumps to the proper location in the user program to resume that program.
\\
\textbf{Turnaround time is the amount of time to execute a particular process.} I'm sorry, but that statement is not entirely accurate. Turnaround time is actually the amount of time it takes for a process to complete execution from the moment it was submitted for execution. It includes both the time spent waiting in the ready queue and the actual execution time on the CPU. Therefore, it is a measure of how long a process takes to run from start to finish. This information can be found on page 16 of this PDF file under "Process Scheduling". In Quiz the statement was marked as correct!\\
\textbf{First-Come, First-Served (FCFS) Scheduling gives minimum average waiting time for a given set of processes.} I'm sorry, but that statement is not entirely accurate. First-Come, First-Served (FCFS) Scheduling is a non-preemptive scheduling algorithm in which the process that arrives first is executed first. While it may seem like FCFS gives minimum average waiting time for a given set of processes, this is not always the case. In fact, FCFS can lead to long waiting times for processes that arrive later if the first process has a long CPU burst time. Therefore, it may not be the most efficient scheduling algorithm in terms of minimizing average waiting time. This information can be found on page 18 of this PDF file under "Scheduler Types".\\
\textbf{Multilevel feedback queues are similar to multilevel queues, except that a process may migrate between different queues.} Yes, that statement is correct. Multilevel feedback queues are similar to multilevel queues in that they have multiple priority levels, but they differ in that a process can move between different queues based on its behavior. In a multilevel feedback queue, a process starts at the highest priority queue and moves down the hierarchy if it uses too much CPU time or has too many input/output operations. Conversely, if a process uses very little CPU time or has few input/output operations, it may move up the hierarchy to a higher priority queue. This allows for more efficient scheduling of processes based on their behavior and resource needs. This information can be found on page 20 of this PDF file under "Multilevel Feedback Queues".\\
\rule{\linewidth}{.1pt}
\section{Synchronization}
\textbf{Race Conditions:} A race condition is a situation that occurs when two or more processes have concurrent access to shared data and the final result depends on the particular order in which concurrent accesses occur. This can lead to inconsistent or corrupted values of shared data. You can find more information about race conditions and the critical section problem on page 53 of this PDF file (Sync.a). Outcome depends on order of access; a sketch follows.\\
\textbf{Ch. Sync a}\\
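A minimal race sketch (assumes POSIX threads): \texttt{counter++} is a non-atomic read-modify-write, so updates can be lost depending on the interleaving.
\begin{verbatim}
#include <pthread.h>
#include <stdio.h>

long counter = 0;

void *inc(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;            /* unsynchronized! */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, inc, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter); /* often < 2000000 */
}
\end{verbatim}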
\textbf{A mutex lock is released immediately after entering a critical section?} A mutex lock is released on exiting the critical section, not immediately after entering it. A mutex lock provides mutual exclusion by requiring that a process acquire a lock before entering a critical section and release the lock on exiting the critical section. You can find more information about mutex locks and their usage in solving the critical section problem on pages 21 and 53 of this PDF file (sync a).
\\
\textbf{A semaphore has an integer value.} Yes, a semaphore has an integer value. There are two types of semaphores: counting semaphores and binary semaphores. A counting semaphore is initialized to a positive integer value and can range over an unrestricted domain, while a binary semaphore is initialized to either 0 or 1 and can range only between 0 and 1. You can find more information about semaphores and their usage on page 25 of this PDF file (sync a).\\
\textbf{Semaphores and mutex locks both provide mutual exclusion.} Yes, both semaphores and mutex locks can be used to provide mutual exclusion. A mutex lock has a binary value that indicates if the lock is available or not, while a semaphore has an integer value and can therefore be used to solve a variety of other synchronization problems. You can find more information about semaphores and mutex locks on page 54 of this PDF file.\\
\textbf{The value of a counting semaphore can range only between 0 and 1.} No, the statement is incorrect. A counting semaphore can have an integer value that ranges over an unrestricted domain, while a binary semaphore can have a value that ranges only between 0 and 1. You can find more information about semaphores and their types on page 25 of this PDF file. A binary semaphore has value 0 or 1, nothing in between; semaphores are integers. --> false in quiz??\\
\textbf{Which of the following may cause a liveness failure?
1. an infinite loop
2. a deadlock
3. a busy waiting loop
4. all of the other answers} Option 4 is correct. All of the given options may cause a liveness failure. An infinite loop can cause a process to never complete, leading to a liveness failure. A deadlock occurs when two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes, leading to a liveness failure. A busy waiting loop can also lead to a liveness failure, as it consumes CPU cycles without performing any useful work. You can find more information about liveness failures and their causes on page 28 of this PDF file. A liveness failure is a type of concurrency bug that occurs when a process or system fails to make progress towards its intended goal. In other words, the system becomes stuck and cannot proceed to the next state or complete its task.\\
\textbf{Solutions to the critical section problem may suffer from liveness failures.} Yes, that is correct. Solutions to the critical section problem may suffer from liveness failures such as deadlocks, livelocks, and starvation. Deadlocks occur when two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Livelocks occur when two or more processes change their states in response to the state changes of other processes without making progress towards completing their tasks. Starvation occurs when a process is perpetually denied access to a resource it needs to complete its task. You can find more information about liveness failures and their causes on page 28 of this PDF file.\\
\textbf{When are atomic integers in Linux useful? When not?} Atomic integers in Linux are useful when multiple threads or processes need to access and modify a shared integer variable concurrently. Atomic integers provide a way to perform read-modify-write operations on the shared variable atomically, without the need for locks or other synchronization mechanisms. This ensures that the shared variable is always in a consistent state, even when accessed by multiple threads or processes simultaneously. Atomic integers are commonly used in Linux kernel programming for implementing synchronization primitives such as semaphores and spinlocks. You can find more information about atomic operations and their usage in Linux on page 47 of this PDF file. However, they are not useful when several variables contribute to possible race conditions (page 21, Examples). In Quiz: NOT useful when several variables are involved in a race condition, or when a single process accesses several variables involved in a race condition; useful when a single shared int needs to be updated (sketch below).\\
\textbf{Emergence of multicore systems has put greater emphasis on developing novel concurrency techniques, because} the emergence of multicore systems has made it possible to execute multiple threads or processes simultaneously on a single machine. However, this also introduces new challenges related to concurrency and synchronization. Traditional concurrency techniques such as locks and semaphores may not be efficient or scalable enough to handle the increased number of threads or processes running on a multicore system. This is because these techniques often rely on serialization, which can lead to contention and overhead when multiple threads or processes try to access the same shared resource. In addition, traditional techniques may not take full advantage of the parallelism offered by multicore systems, leading to suboptimal performance. Therefore, there is a need for novel concurrency techniques that can take advantage of the parallelism offered by multicore systems while ensuring correctness and avoiding issues such as deadlocks and race conditions. You can find more information about the challenges posed by multicore systems and the need for novel concurrency techniques on page 21 of this PDF file. QUIZ Answer: Increasing number of processing cores causes more risk of race conditions. NOT fundamentally new problems: race conditions are not more difficult to solve, nor are deadlocks more difficult to prevent or avoid in a multicore system, but the risk of running into a race condition rises.\\
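A minimal sketch of the idea in user space, using C11 atomics (the kernel's \texttt{atomic\_t} API is analogous; names illustrative):
\begin{verbatim}
#include <stdatomic.h>

atomic_long hits = 0; /* ONE shared counter */

void record_hit(void) {
    /* atomic read-modify-write: safe without a lock,
       but only because a single variable is involved */
    atomic_fetch_add(&hits, 1);
}
\end{verbatim}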
\textbf{Critical Region:} Part of the program where shared memory is accessed.\\
\textbf{Ch. Sync Deadlocks}\\
\textbf{Which of the following is not a factor in choosing which process to terminate during recovery from a deadlock?
1. How many more resources the process needs in order to complete.
2. How many processes will need to be terminated.
3. Who the parent of the process is.
4. What the priority of the process is.} Option 3 is not a factor in choosing which process to terminate during recovery from a deadlock. The other options are all factors that may be considered when deciding which process to terminate in order to resolve a deadlock. Specifically, the number of resources needed by the process, the number of processes that need to be terminated, and the priority of the process are all important factors that can help determine which process should be terminated to minimize disruption and maximize system performance. However, who the parent of the process is, is typically not relevant in this context.\\
\textbf{What are the conditions for a deadlock?} There are four necessary conditions for a deadlock to occur in a system. These conditions are:

1. Mutual exclusion: At least one resource must be held in a non-shareable mode, meaning that only one process at a time can use the resource.

2. Hold and wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.

3. No preemption: A resource cannot be forcibly removed from a process holding it; it can only be released voluntarily by the process after it has completed its task.

4. Circular wait: There exists a set of waiting processes such that each process in the set is waiting for a resource held by another process in the set, creating a circular chain of dependencies.

If all four of these conditions hold simultaneously, then a deadlock can occur in the system.\\
\textbf{The circular-wait condition for a deadlock implies the hold-and-wait condition.} Yes, the circular-wait condition for a deadlock implies the hold-and-wait condition. The hold-and-wait condition states that a process holding at least one resource is waiting to acquire additional resources held by other processes. The circular-wait condition, on the other hand, states that there exists a set of waiting processes such that each process in the set is waiting for a resource held by another process in the set. If this condition holds, then it implies that at least one process in the set must be holding a resource while waiting for another resource to be released by another process in the set. Therefore, the circular-wait condition implies the hold-and-wait condition.\\
\textbf{A system will never enter a deadlocked state if
1. None of the other answers.
2. The system uses the detection and recovery technique.
3. The system uses the deadlock avoidance technique.
4. The system chooses to ignore the problem altogether.} Option 3 is correct. A system will never enter a deadlocked state if it employs a protocol to ensure that the system will never enter a deadlock state. This can be achieved through either of the following techniques:

1. Deadlock prevention: This technique ensures that at least one of the four necessary conditions for deadlock cannot occur in the system.

2. Deadlock avoidance: This technique involves dynamically allocating resources to processes in a way that avoids the possibility of a deadlock.

Option 2 is incorrect because detection and recovery is not a prevention technique, but rather a way to handle deadlocks after they have already occurred. Option 4 is also incorrect because ignoring the problem altogether does not prevent deadlocks from occurring; it simply means that the system will not attempt to handle them proactively. The information can be found on page 20 of the PDF file. QUIZ says that using a deadlock avoidance technique will make a system not deadlock.\\
\textbf{Deadlock prevention and deadlock avoidance are essentially the same approaches for handling a deadlock.} No, deadlock prevention and deadlock avoidance are not the same approaches for handling a deadlock. Deadlock prevention involves designing the system in such a way that at least one of the four necessary conditions for deadlock cannot occur. This can be achieved by using techniques such as resource allocation, ordering of resources, or disabling interrupts. On the other hand, deadlock avoidance involves dynamically allocating resources to processes in a way that avoids the possibility of a deadlock. This is done by using algorithms that predict whether a particular resource allocation will lead to a deadlock or not. If it is predicted that a particular allocation will lead to a deadlock, then the system will avoid making that allocation.

Therefore, while both techniques aim to prevent deadlocks from occurring, they use different approaches to achieve this goal. The information can be found on page 20 of the PDF file.\\
\textbf{A system in an unsafe state will ultimately deadlock.} A system in an unsafe state can ultimately deadlock, but not necessarily. An unsafe state is a state in which a set of processes is holding resources and waiting for others that are also being held by other processes. If the system enters an unsafe state, it may still be possible to allocate resources in such a way that a deadlock does not occur; however, if no safe sequence of resource allocations exists, then the system will eventually deadlock. Therefore, it is important to ensure that the system does not enter an unsafe state in the first place by using techniques such as deadlock prevention or avoidance. The information can be found on page 21 of the PDF file. QUIZ says: False. Entering a deadlock from an unsafe state is not deterministic; an unsafe state only means a deadlock is possible, not certain.\\

\rule{\linewidth}{.1pt}
\section{Main Memory}
\textbf{Ch. Main Memory}\\
\textbf{Why do Computing Systems need cache? }Computing systems need cache to reduce memory stalls. Main memory access can take many clock cycles, causing a stall. Cache (fast memory) is added between main memory and CPU registers to reduce memory stalls. This helps improve the overall performance of the system by reducing the time it takes for the CPU to access data that is frequently used. (Page 6)\\
\textbf{An address generated by a CPU} is referred to as a logical address. The memory management unit (MMU) then translates this logical address to a physical address in memory. (Page 13)\\
\textbf{A large page size results in} more internal fragmentation and more total allocated memory, but reduced I/O time (see the related questions below).
\\
\textbf{Hashed page tables are particularly useful for processes with sparse address space}
p.59 says clustered page tables (variation of hashed page tables) are good for sparse address spaces, does not mention processes though...\\
\textbf{An advantage of virtual memory is that
1. you do not have to worry about the amount of physical memory available
2. provides a way to execute a program that is only partially loaded in memory
3. all of the other answers
4. a program can be much larger than the size of physical memory} The correct answer is 3. All of the other answers are advantages of virtual memory. Virtual memory allows for efficient and safe sharing of memory among multiple processes, enables larger programs to run on a system with limited physical memory, and provides a way to execute a program that is only partially loaded in memory. Additionally, virtual memory eliminates the need for programmers to worry about the amount of physical memory available. (Page 5)\\
\textbf{Anonymous memory of a process refers to
1. the pages not associated with the binary executable file of the process.
2. the pages that cannot be swapped out of the physical memory.
3. the pages associated with the binary executable file of the process.
4. the pages associated with the static data of the process.} The correct answer is 1. Anonymous memory of a process refers to the pages not associated with the binary executable file of the process. (Page 19)\\
\textbf{A sign of thrashing is
1. CPU utilization decreases as the degree of multiprogramming is increased
2. CPU utilization decreases as the number of allocated pages is increased
3. CPU utilization increases as the degree of multiprogramming is increased
4. CPU utilization increases as the number of allocated pages is increased} The correct answer is 1: when too many processes compete for too few frames, they spend their time servicing page faults instead of executing, so adding processes \emph{lowers} CPU utilization. (Page 19)\\
\textbf{Current best practice to avoid thrashing is to include enough physical memory.} True according to the lecture notes (VM part 2, p.~30). Enough physical memory is the main remedy, though not the only one: efficient memory-allocation policies, reducing the degree of multiprogramming, and demand paging help as well.\\
\textbf{Larger page size results in less total allocated memory?} No, it increases allocated memory, since each process has a higher minimum amount of memory (internal fragmentation grows with the page size).\\
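A rough back-of-the-envelope view (assuming, as is conventional, that on average half of each process's last page is wasted as internal fragmentation):
\[
\text{expected waste per process} \approx \tfrac{p}{2}, \qquad
p = 4\,\text{KiB} \Rightarrow 2\,\text{KiB}, \qquad
p = 2\,\text{MiB} \Rightarrow 1\,\text{MiB}.
\]
So growing the page size grows the per-process minimum overhead proportionally, which is why total allocated memory increases.\\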
\textbf{Does a larger page size reduce I/O time?} Yes: each transfer moves a bigger block of data from disk into main memory, so fewer seeks and transfers are needed.\\
\textbf{How to improve I/O?} Caching.\\
\section{File System}
\textbf{Ch File Systems a.}\\
\textbf{How is a file identified within a file system?}A file is uniquely identified within a file system by its identifier, which is usually a unique number. This identifier is one of the basic attributes of a file, along with its name, type (if the OS supports different types of files), location, size, timestamps (creation, modification, and latest use), protection information on who has access (read, write, and modify), and other attributes such as type of encoding. You can find more information about this on page 11 of this PDF file.\\
\textbf{What steps are necessary to create a file?} Two steps: first, the operating system finds space to store the new file; second, it makes an entry for the new file in a directory. (p~12)\\
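On POSIX systems both steps happen behind a single call; a minimal sketch (the file name and permissions are illustrative):
\begin{verbatim}
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* O_CREAT makes the OS (1) allocate space for the new file's  */
    /* metadata and (2) add an entry "notes.txt" to the directory. */
    int fd = open("notes.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);
    return 0;
}
\end{verbatim}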
\textbf{The FAT method does not store information about free blocks.} QUIZ says this is correct $\to$ ask in discussion! The lecture notes (p.~17/18) present FAT as keeping no separate free-block structure. Strictly speaking, though, the File Allocation Table itself does encode free-space information: each table entry corresponds to one block (cluster) on disk and indicates either that the block is free or, if allocated, which block of the file comes next. When a file is created or grows, the file system scans the table for free entries and links them into the file's chain; when the file shrinks or is deleted, the entries are marked free again. So there is no separate free list, but free blocks can be identified directly from the table, which lets the file system track available space and allocate storage.\\
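A minimal C sketch of walking a FAT chain (the tiny table and the constants are illustrative; real FAT32 marks free clusters with 0 and ends chains with a reserved sentinel value):
\begin{verbatim}
#include <stdio.h>

#define NCLUSTERS 8
#define FREE 0u            /* free-cluster marker   */
#define EOC  0xFFFFFFFu    /* end-of-chain sentinel */

/* Entry i holds the next cluster of the file occupying     */
/* cluster i, EOC at the end of a file, or FREE -- so free- */
/* space information lives in the same table as the chains. */
unsigned fat[NCLUSTERS] = { FREE, 3, FREE, 7, FREE, FREE, FREE, EOC };

int main(void) {
    /* Follow the file that starts at cluster 1: 1 -> 3 -> 7. */
    for (unsigned c = 1; c != EOC; c = fat[c])
        printf("cluster %u\n", c);
    return 0;
}
\end{verbatim}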
\textbf{Using x-bit pointers limits the size of a file to 4 GB.} True for 32-bit pointers: a 32-bit pointer can address at most $2^{32}$ bytes, so a file system that uses 32-bit pointers cannot address file offsets beyond 4~GB, and the maximum file size is 4~GB.\\
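The arithmetic behind the limit:
\[
2^{32}\ \text{bytes} = 4\,294\,967\,296\ \text{bytes} = 4~\text{GiB}.
\]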
\textbf{Mounting a file system means that:} the file system becomes available within the file system namespace. CHATGPT: mounting makes a file system accessible by integrating it into the operating system's directory hierarchy at a chosen directory, the mount point. The steps: (1) the OS identifies the device or partition containing the file system (hard drive, SSD, network storage, etc.); (2) it reads the file system's metadata (file allocation table, inode table, or similar structures) to understand its organization; (3) an existing directory is selected as the mount point, the reference point for accessing the mounted contents; (4) the OS links the mount point to the file system, so the files and directories within it become reachable through the mount point; (5) from then on, users and applications read and write those files like any others, and all changes are reflected on the underlying storage device.\\
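On Linux this corresponds to the mount(2) system call; a minimal sketch (the device, mount point, and file system type are illustrative assumptions, and the call requires privilege):
\begin{verbatim}
#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    /* Attach the ext4 file system on /dev/sdb1 at the existing */
    /* directory /mnt/data, the mount point.                    */
    if (mount("/dev/sdb1", "/mnt/data", "ext4", 0, NULL) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}
\end{verbatim}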
\textbf{A file system may be mounted during boot time only.} False; file systems can also be mounted (and unmounted) at run time, see File System Internals.\\
\section{Security}
\textbf{Phishing is an attack in which a legitimate-looking e-mail misleads a user into entering confidential information}: true, see p.~8 Security.\\
\textbf{A protection domain is a collection of access rights, each of which is} a pair $\langle$object-name, rights-set$\rangle$, see p.~13 Protection\\
\textbf{UNIX operating system associates a protection domain with threads}: false; in UNIX a protection domain is associated with the user, see p.~10 Protection.\\
\section{Virtualization}
\textbf{Hypervisors of Type 1 show higher latency than hypervisors of Type 2}: false, see p.~18 Virtualization\\
\textbf{Hypervisors of Type 2 are designed for end users rather than service providers}: true, see p.~10 Virtualization.\\
\textbf{Paravirtualization refers to modifying the OS kernel to avoid traps and calling APIs from the hypervisor (hypercalls).} Correct: the PDF says paravirtualization modifies the source code of the guest operating system to avoid causing traps, calling hypervisor APIs (hypercalls) instead. See p.~19.\\
\textbf{Nothing can provide an abstraction to the operating system itself.} False: containers are an abstraction of the OS itself. See p.~23.\\
\end{multicols*}
\end{document}