Cheatsheets/os/quiz.tex
2024-01-12 12:45:56 +01:00

286 lines
35 KiB
TeX

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% writeLaTeX Example: A quick guide to LaTeX
%
% Source: Dave Richeson (divisbyzero.com), Dickinson College
%
% A one-size-fits-all LaTeX cheat sheet. Kept to two pages, so it
% can be printed (double-sided) on one piece of paper
%
% Feel free to distribute this example, but please keep the referral
% to divisbyzero.com
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% How to use writeLaTeX:
%
% You edit the source code here on the left, and the preview on the
% right shows you the result within a few seconds.
%
% Bookmark this page and share the URL with your co-authors. They can
% edit at the same time!
%
% You can upload figures, bibliographies, custom classes and
% styles using the files menu.
%
% If you're new to LaTeX, the wikibook is a great place to start:
% http://en.wikibooks.org/wiki/LaTeX
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[10pt,landscape]{article}
\usepackage{amssymb,amsmath,amsthm,amsfonts}
\usepackage{multicol,multirow}
\usepackage{calc}
\usepackage{ifthen}
\usepackage{helvet}
\renewcommand{\familydefault}{\sfdefault}
%\usepackage[fontsize=6pt]{fontsize}
\usepackage[landscape]{geometry}
\geometry{a4paper, landscape, margin=0.5in}
\usepackage[colorlinks=true,citecolor=blue,linkcolor=blue]{hyperref}
\usepackage[
protrusion=true,
activate={true,nocompatibility},
final,
tracking=true,
kerning=true,
spacing=true,
factor=1100]{microtype}
\SetTracking{encoding={*}, shape=sc}{40}
%%Packages added by Sebastian Lenzlinger:
\usepackage{enumerate} %% Used to change the style of enumerations (see below).
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\newtheorem{axiom}{Axiom}
\newtheorem{lem}{Lemma}
\newtheorem{corr}{Corollary}
\usepackage{tikz} %% Pagacke to create graphics (graphs, automata, etc.)
\usetikzlibrary{automata} %% Tikz library to draw automata
\usetikzlibrary{arrows} %% Tikz library for nicer arrow heads
%%End
\microtypecontext{spacing=nonfrench}
\ifthenelse{\lengthtest { \paperwidth = 11in}}
{ \geometry{top=.5in,left=.5in,right=.5in,bottom=.5in} }
{\ifthenelse{ \lengthtest{ \paperwidth = 297mm}}
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
{\geometry{top=1cm,left=1cm,right=1cm,bottom=1cm} }
}
\pagestyle{empty}
\makeatletter
\renewcommand{\section}{\@startsection{section}{1}{0mm}%
{0.1mm}%
{0.0001mm}%x
{\normalfont\normalsize\bfseries}}
\renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}%
{0mm}%
{0mm}%
{\normalfont\small\bfseries}}
\renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}%
{-1ex plus -.5ex minus -.2ex}%
{1ex plus .2ex}%
{\normalfont\small\bfseries}}
\makeatother
\setcounter{secnumdepth}{0}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0pt plus 0.5ex}
% -----------------------------------------------------------------------
\title{Operating Systems FS23}
\begin{document}
\tiny
\raggedright
\footnotesize
\begin{multicols*}{3}
\setlength{\premulticols}{1pt}
\setlength{\postmulticols}{1pt}
\setlength{\multicolsep}{1pt}
\setlength{\columnsep}{1pt}
\section{Intro}
\textbf{Asym. Mult. Proc. (AMP):} Special task per proc. Single OS code in mem, only run on CPU1. syscalls and cache buffer all to CPU1. Mem shared across all, scheduling accross all CPUs \textbf{SMP:} All procs perform all tasks. Sync issues. Syscalls proc on CPU where occurs.
\rule{\linewidth}{.1pt}
\section{OS Structures}
\section{Processes-Threads-Concurrency}
\textbf{What is a process?} In the context of operating systems, a process is an instance of a program that is being executed by one or many threads. It is a unit of work within the operating system that can be scheduled and allocated resources such as CPU time, memory, and I/O devices. Each process has its own address space, which contains the code being executed, data structures, and other resources required by the program. Processes are managed by the operating system's process management subsystem, which provides mechanisms for creating, terminating, and synchronizing processes.\\
\textbf{For a single-processor system, there will never be more than one process in the Running state.} This statement is not entirely true. In a single-processor system, there can be multiple processes in the Running state, but only one process can be executing on the CPU at any given time. The operating system uses a scheduling algorithm to determine which process should be given access to the CPU next. The scheduler may switch between processes frequently, giving each process a small amount of CPU time before switching to another process. This gives the illusion that multiple processes are running simultaneously, even though only one process is executing on the CPU at any given time. QUIZ says this statement is true!\\
\textbf{An I/O-bound process spends more time doing computations than I/O.} This statement is not correct. An I/O-bound process is a process that spends most of its time waiting for I/O operations to complete, such as reading from or writing to a disk or network. In contrast, a CPU-bound process is a process that spends most of its time performing computations and using the CPU. Therefore, an I/O-bound process spends more time waiting for I/O operations to complete than performing computations.\\
\textbf{What / who selects which process should be executed next (from the ready queue) and allocates CPU?} The responsibility of selecting which process should be executed next and allocating CPU time to that process lies with the operating system's scheduler. The scheduler is a component of the operating system's process management subsystem and is responsible for managing the execution of processes on the CPU. When a process is ready to run, it is placed in a queue called the ready queue. The scheduler selects a process from the ready queue based on its scheduling algorithm and allocates CPU time to that process. The scheduling algorithm can be preemptive or non-preemptive, depending on whether the scheduler can interrupt a running process to allocate CPU time to another process.\\
\textbf{An I/O-bound process spends more time doing I/O than computations.} Yes, this statement is correct. An I/O-bound process is a process that spends most of its time waiting for I/O operations to complete, such as reading from or writing to a disk or network. Therefore, an I/O-bound process spends more time doing I/O operations than computations. In contrast, a CPU-bound process is a process that spends most of its time performing computations and using the CPU.\\
\textbf{Which benefits go to multithreaded programming?} Multithreaded programming offers several benefits, including:
1. Responsiveness: Multithreading allows a program to continue executing even if part of it is blocked or waiting for I/O operations to complete. This is especially important for user interfaces and time-sharing systems.
2. Resource sharing: Threads share the resources of their parent process, making it easier to share data between threads than between separate processes.
3. Faster execution: Creating a thread is generally faster and less resource-intensive than creating a new process, and switching between threads has lower overhead than switching between processes.
4. Scalability: Even a single process can take advantage of multiprocessor architectures by using multiple threads to execute tasks in parallel.
5. Simplified code: Multithreading can simplify the code by allowing different parts of the program to execute concurrently without requiring complex synchronization mechanisms.
These benefits make multithreaded programming an attractive option for developing efficient and responsive applications that can take advantage of modern hardware architectures.\\
\textbf{What is the difference between parallelism and concurrency?} Concurrency and parallelism are related concepts in computer science, but they have different meanings.
Concurrency refers to the ability of a program to make progress on multiple tasks simultaneously. In a concurrent program, different parts of the program can execute independently and make progress at the same time. Concurrency is often achieved through the use of threads or other mechanisms that allow different parts of the program to execute concurrently.
Parallelism, on the other hand, refers to the ability of a system to perform multiple tasks simultaneously. In a parallel system, multiple processors or cores work together to execute different tasks simultaneously. Parallelism is often used in high-performance computing applications that require large amounts of computation.
In summary, concurrency is a property of a program that allows it to make progress on multiple tasks simultaneously, while parallelism is a property of a system that allows it to perform multiple tasks simultaneously by using multiple processors or cores.\\
\textbf{Amdahl's Law describes performance gains for applications with both a serial and parallel component.} Yes, that is correct. Amdahl's Law is a formula that describes the theoretical maximum speedup that can be achieved by parallelizing a program. It takes into account the proportion of the program that can be parallelized and the number of processors used to execute the program.
Amdahl's Law states that the speedup of a program is limited by its serial portion, which cannot be parallelized. The formula for Amdahl's Law is:
Speedup = 1 / (Serial fraction + Parallel fraction / Number of processors)
This formula shows that as the number of processors increases, the speedup of a program approaches a limit determined by its serial portion. Therefore, Amdahl's Law highlights the importance of identifying and optimizing the serial portion of a program to achieve maximum performance gains from parallelization.\\
\textbf{What concepts refers to the capability to allow multiple tasks to make progress on a single processor?} The concept that refers to the capability to allow multiple tasks to make progress on a single processor is concurrency. Concurrency allows different parts of a program to execute independently and make progress at the same time, even if they are running on a single processor.
Concurrency can be achieved through various mechanisms, such as time slicing or cooperative multitasking, which allow different tasks to take turns executing on the processor. However, it's important to note that while concurrency can improve the responsiveness and performance of a program, it does not provide true parallelism since only one task is executing at any given moment.\\
\textbf{The dispatcher module gives control of the CPU (core) to the process selected by the short-term scheduler, this means:} Yes, that is correct. The dispatcher module is responsible for giving control of the CPU (core) to the process selected by the short-term scheduler. Once the short-term scheduler selects a process to run, the dispatcher is responsible for performing a context switch to save the state of the currently running process and restore the state of the selected process.
The dispatcher then transfers control of the CPU to the selected process, allowing it to execute on the processor. This process continues until either the selected process completes its execution or is preempted by another process due to a higher-priority task or time slice expiration.
Overall, the dispatcher plays a critical role in managing and scheduling processes on a system's CPU, ensuring that each process gets its fair share of processing time and that system resources are used efficiently.
CPU switches to user mode. Switches context from one proc to another, jumps to proper location in the userprogram to resume that program.
\\
\textbf{Turnaround time is the amount of time to execute a particular process.} I'm sorry, but that statement is not entirely accurate. Turnaround time is actually the amount of time it takes for a process to complete execution from the moment it was submitted for execution. It includes both the time spent waiting in the ready queue and the actual execution time on the CPU. Therefore, it is a measure of how long a process takes to run from start to finish. This information can be found on page 16 of this PDF file under "Process Scheduling". In Quiz statement eas marked as correct!\\
\textbf{First-Come, First-Served (FCFS) Scheduling gives minimum average waiting time for a given set of processes.} I'm sorry, but that statement is not entirely accurate. First-Come, First-Served (FCFS) Scheduling is a non-preemptive scheduling algorithm in which the process that arrives first is executed first. While it may seem like FCFS gives minimum average waiting time for a given set of processes, this is not always the case. In fact, FCFS can lead to long waiting times for processes that arrive later if the first process has a long CPU burst time. Therefore, it may not be the most efficient scheduling algorithm in terms of minimizing average waiting time. This information can be found on page 18 of this PDF file under "Scheduler Types".\\
\textbf{Multilevel feedback queues are similar to multilevel queues, except that a process may migrate between different queues.} Yes, that statement is correct. Multilevel feedback queues are similar to multilevel queues in that they have multiple priority levels, but they differ in that a process can move between different queues based on its behavior. In a multilevel feedback queue, a process starts at the highest priority queue and moves down the hierarchy if it uses too much CPU time or has too many input/output operations. Conversely, if a process uses very little CPU time or has few input/output operations, it may move up the hierarchy to a higher priority queue. This allows for more efficient scheduling of processes based on their behavior and resource needs. This information can be found on page 20 of this PDF file under "Multilevel Feedback Queues".\\
\rule{\linewidth}{.1pt}
\section{Synchronization}
\textbf{Race Conditons:} A race condition is a situation that occurs when two or more processes have concurrent access to shared data and the final result depends on the particular order in which concurrent accesses occur. This can lead to inconsistent or corrupted values of shared data. You can find more information about race conditions and the critical section problem on page 53 of this PDF file (Sync.a). Outcome depends on order of access.\\
\textbf{Ch. Sync a}\\
\textbf{A mutex lock is released immediately after entering a critical section?} A mutex lock is released on exiting the critical section, not immediately after entering it. A mutex lock provides mutual exclusion by requiring that a process acquire a lock before entering a critical section and release the lock on exiting the critical section. You can find more information about mutex locks and their usage in solving the critical section problem on page 21 and 53 of this PDF file (sync a).
\\
\textbf{A semaphore has an integer value.} Yes, a semaphore has an integer value. There are two types of semaphores: counting semaphore and binary semaphore. A counting semaphore is initialized to a positive integer value and can range over an unrestricted domain, while a binary semaphore is initialized to either 0 or 1 and can range only between 0 and 1. You can find more information about semaphores and their usage on page 25 of this PDF file. (sync a).\\
\textbf{Semaphores and mutex locks both provide mutual exclusion.} Yes, both semaphores and mutex locks can be used to provide mutual exclusion. A mutex lock has a binary value that indicates if the lock is available or not, while a semaphore has an integer value and can therefore be used to solve a variety of other synchronization problems. You can find more information about semaphores and mutex locks on page 54 of this PDF file.\\
\textbf{The value of a counting semaphore can range only between 0 and 1.} No, the statement is incorrect. A counting semaphore can have an integer value that ranges over an unrestricted domain, while a binary semaphore can have a value that ranges only between 0 and 1. You can find more information about semaphores and their types on page 25 of this PDF file. A binary semaphore has value 0 or 1, but not inbetween. Semaphores are integers. --> false in quiz??\\
\textbf{Which of the following may cause a liveness failure?
1. an infinite loop
2. a deadlock
3. a busy waiting loop
4. all of the other answers} Option 4 is correct. All of the given options may cause a liveness failure. An infinite loop can cause a process to never complete, leading to a liveness failure. A deadlock occurs when two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes, leading to a liveness failure. Busy waiting loop can also lead to a liveness failure as it consumes CPU cycles without performing any useful work. You can find more information about liveness failures and their causes on page 28 of this PDF file. A liveness failure is a type of concurrency bug that occurs when a process or system fails to make progress towards its intended goal. In other words, the system becomes stuck and cannot proceed to the next state or complete its task.\\
\textbf{Solutions to the critical section problem may suffer from liveness failures.} Yes, that is correct. Solutions to the critical section problem may suffer from liveness failures such as deadlocks, livelocks, and starvation. Deadlocks occur when two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Livelocks occur when two or more processes change their states in response to the state changes of other processes without making progress towards completing their tasks. Starvation occurs when a process is perpetually denied access to a resource it needs to complete its task. You can find more information about liveness failures and their causes on page 28 of this PDF file.\\
\textbf{When are atomic integers in Linux useful? When not?} Atomic integers in Linux are useful when multiple threads or processes need to access and modify a shared integer variable concurrently. Atomic integers provide a way to perform read-modify-write operations on the shared variable atomically, without the need for locks or other synchronization mechanisms. This ensures that the shared variable is always in a consistent state, even when accessed by multiple threads or processes simultaneously. Atomic integers are commonly used in Linux kernel programming for implementing synchronization primitives such as semaphores and spinlocks. You can find more information about atomic operations and their usage in Linux on page 47 of this PDF file. However, they are not useful when several variables contribute to possible race conditions. (Page 21 Examples). In Quiz: Useful when, Several variables are invonvled in rc, single proc access several lvars involved in rc, int needs to be updated.\\
\textbf{Emergence of multicore systems has put greater emphasis on developing novel concurrency techniques, because} the emergence of multicore systems has made it possible to execute multiple threads or processes simultaneously on a single machine. However, this also introduces new challenges related to concurrency and synchronization. Traditional concurrency techniques such as locks and semaphores may not be efficient or scalable enough to handle the increased number of threads or processes running on a multicore system. This is because these techniques often rely on serialization, which can lead to contention and overhead when multiple threads or processes try to access the same shared resource. In addition, traditional techniques may not take full advantage of the parallelism offered by multicore systems, leading to suboptimal performance. Therefore, there is a need for novel concurrency techniques that can take advantage of the parallelism offered by multicore systems while ensuring correctness and avoiding issues such as deadlocks and race conditions. You can find more information about the challenges posed by multicore systems and the need for novel concurrency techniques on page 21 of this PDF file. QUIZ Answer: Increasing number of processing cores causes more risk of race conditions. NOT Fundamentally new problems, race conditions are not more difficult to solve nor are deadlocks more difficult to prevent or avoid in multicore system. But the risk of running into a risk conditions rises.\\
\
\textbf{Critical Region:} Part of programm where shared memory is accessed.\\
\textbf{Ch Sync Deadlocks}\\
\textbf{Which of the following is not a factor in choosing which process to terminate during recovery from a deadlock?
1. How many more resources the process needs in order to complete.
2. How many processes will need to be terminated.
3. Who the parent of the process is.
4. What the priority of the process is.} Option 3 is not a factor in choosing which process to terminate during recovery from a deadlock. The other options are all factors that may be considered when deciding which process to terminate in order to resolve a deadlock. Specifically, the number of resources needed by the process, the number of processes that need to be terminated, and the priority of the process are all important factors that can help determine which process should be terminated to minimize disruption and maximize system performance. However, who the parent of the process is typically not relevant in this context.\\
\textbf{What are the conditions for a deadlock?} There are four necessary conditions for a deadlock to occur in a system. These conditions are:
1. Mutual exclusion: At least one resource must be held in a non-shareable mode, meaning that only one process at a time can use the resource.
2. Hold and wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.
3. No preemption: A resource cannot be forcibly removed from a process holding it; it can only be released voluntarily by the process after it has completed its task.
4. Circular wait: There exists a set of waiting processes such that each process in the set is waiting for a resource held by another process in the set, creating a circular chain of dependencies.
If all four of these conditions hold simultaneously, then a deadlock can occur in the system.\\
\textbf{The circular-wait condition for a deadlock implies the hold-and-wait condition.} Yes, the circular-wait condition for a deadlock implies the hold-and-wait condition. The hold-and-wait condition states that a process holding at least one resource is waiting to acquire additional resources held by other processes. The circular-wait condition, on the other hand, states that there exists a set of waiting processes such that each process in the set is waiting for a resource held by another process in the set. If this condition holds, then it implies that at least one process in the set must be holding a resource while waiting for another resource to be released by another process in the set. Therefore, the circular-wait condition implies the hold-and-wait condition.\\
\textbf{A system will never enter a deadlocked state if
1. None of the other answers.
2. The system uses the detection and recovery technique.
3. The system uses the deadlock avoidance technique.
4. The system chooses to ignore the problem altogether.} Option 3 is not correct. A system will never enter a deadlocked state if it employs a protocol to ensure that the system will never enter a deadlock state. This can be achieved through either of the following techniques:
1. Deadlock prevention: This technique ensures that at least one of the four necessary conditions for deadlock cannot occur in the system.
2. Deadlock avoidance: This technique involves dynamically allocating resources to processes in a way that avoids the possibility of a deadlock.
Option 2 is incorrect because detection and recovery is not a prevention technique, but rather a way to handle deadlocks after they have already occurred. Option 4 is also incorrect because ignoring the problem altogether does not prevent deadlocks from occurring; it simply means that the system will not attempt to handle them proactively. The information can be found on page 20 of the PDF file. QUIZ says that using a deadlock avoidance tequnique will make a system not deadlock...\\
\textbf{Deadlock prevention and deadlock avoidance are essentially the same approaches for handling a deadlock.} No, deadlock prevention and deadlock avoidance are not the same approaches for handling a deadlock. Deadlock prevention involves designing the system in such a way that at least one of the four necessary conditions for deadlock cannot occur. This can be achieved by using techniques such as resource allocation, ordering of resources, or disabling interrupts. On the other hand, deadlock avoidance involves dynamically allocating resources to processes in a way that avoids the possibility of a deadlock. This is done by using algorithms that predict whether a particular resource allocation will lead to a deadlock or not. If it is predicted that a particular allocation will lead to a deadlock, then the system will avoid making that allocation.
Therefore, while both techniques aim to prevent deadlocks from occurring, they use different approaches to achieve this goal. The information can be found on page 20 of the PDF file.\\
\textbf{A system in an unsafe state will ultimately deadlock.}Yes, a system in an unsafe state can ultimately deadlock. An unsafe state is a state in which a set of processes is holding resources and waiting for others that are also being held by other processes. If the system enters an unsafe state, it may be possible to allocate resources in such a way that a deadlock will not occur. However, if no safe sequence of resource allocations exists, then the system will eventually deadlock. Therefore, it is important to ensure that the system does not enter an unsafe state in the first place by using techniques such as deadlock prevention or avoidance. The information can be found on page 21 of the PDF file. QUIZ says: False. I think that it is not deterministic to get into a deadlock\\
\rule{\linewidth}{.1pt}
\section{Main Memory}
\textbf{Ch Main Memory}\\
\textbf{Why do Computing Systems need cache? }Computing systems need cache to reduce memory stalls. Main memory access can take many clock cycles, causing a stall. Cache (fast memory) is added between main memory and CPU registers to reduce memory stalls. This helps improve the overall performance of the system by reducing the time it takes for the CPU to access data that is frequently used. (Page 6)\\
\textbf{An address generated by a CPU} is referred to as a logical address. The memory management unit (MMU) then translates this logical address to a physical address in memory. (Page 13)\\
\textbf{A large page size results in}
\\
\textbf{Hashed page tables are particularly usefilffor processes with sparse address space }
p.59 says clustered page tables (veriation of hashed p t) are good for sparse address space, does not mention processes though...\\
\textbf{An advantage of virtual memory is that
1. you do not have to worry about the amount of physical memory available
2. provides a way to execute a program that is only partially loaded in memory
3. all of the other answers
4. a program can be much larger than the size of physical memory} The correct answer is 3. All of the other answers are advantages of virtual memory. Virtual memory allows for efficient and safe sharing of memory among multiple processes, enables larger programs to run on a system with limited physical memory, and provides a way to execute a program that is only partially loaded in memory. Additionally, virtual memory eliminates the need for programmers to worry about the amount of physical memory available. (Page 5)\\
\textbf{Anonymous memory of a process refers to
1. the pages not associated with the binary executable file of the process.
2. the pages that cannot be swapped out of the physical memory.
3. the pages associated with the binary executable file of the process.
4. the pages associated with the static data of the process.} The correct answer is 1. Anonymous memory of a process refers to the pages not associated with the binary executable file of the process. (Page 19) \\
\textbf{A sign of thrashing is
1. CPU utilization decreases as the degree of multiprogramming is increased
2. CPU utilization decreases as the number of allocated pages is increased
3. CPU utilization increases as the degree of multiprogramming is increased
4. CPU utilization increases as the number of allocated pages is increased} The correct answer is 1. A sign of thrashing is that CPU utilization decreases as the degree of multiprogramming is increased. (Page 19)\\
\textbf{Current best practice to avoid thrashing is to inlucde enoughh physical memory} While having enough physical memory is important to avoid thrashing, it is not the only best practice. Other best practices include using efficient memory allocation algorithms, reducing the degree of multiprogramming, and using demand paging. True according to lecture notes page 30 VM part 2.\\
\textbf{Larger page size results in less total allocated memory?} No, it increases allocated memory since each process has a higher minimum amount of momory.
\textbf{Does a larger page size reduce I/O time?} Yes, since the CPU can get bigger blocks of data off disk into main memory.
\textbf{How to imporve I/O?} Caching.
\section{File System}
\textbf{Chapter file Systems a.}
\textbf{How is a file identified within a file system?}A file is uniquely identified within a file system by its identifier, which is usually a unique number. This identifier is one of the basic attributes of a file, along with its name, type (if the OS supports different types of files), location, size, timestamps (creation, modification, and latest use), protection information on who has access (read, write, and modify), and other attributes such as type of encoding. You can find more information about this on page 11 of this PDF file.\\
\textbf{What steps are necessary to create a file?} To create a file, there are two steps that need to be taken. First, the operating system needs to find a space to store the new file. Second, the operating system needs to make an entry for the new file in a directory. (p 12).
\\
\textbf{The FAT method does not store information about free blocks.} QUIZ says is correct--> ask in discussion!!!!!!
That is correct. The File Allocation Table (FAT) method does not store information about free blocks. Instead, it uses a linked list to keep track of free blocks. This means that the FAT method can be less efficient than other methods when it comes to managing free space on a disk. p17/18 BUT ACTUALL: Actually, the File Allocation Table (FAT) file system does store information about free blocks. The FAT file system is a simple and commonly used file system originally introduced by Microsoft for MS-DOS and later adopted by other operating systems.
In the FAT file system, the file allocation table is a table that keeps track of the allocation status of each block or cluster on the disk. Each entry in the table corresponds to a specific block or cluster, and it indicates whether the block is free or allocated.
When a file is stored on a disk using the FAT file system, the file system looks for a sequence of free blocks in the file allocation table to allocate to that file. As the file grows or shrinks, the file system updates the corresponding entries in the table to reflect the changes in the allocation status of the blocks.
So, the FAT file system does store information about free blocks in the file allocation table, allowing the file system to keep track of available space on the disk and efficiently allocate and manage file storage.\\
\textbf{Using x-bit pointers limits the size of a file to 4 GB} The use of 32-bit pointers limits the size of a file to 4 GB. This is because a 32-bit pointer can address up to $2^32$ bytes of memory, which is equal to 4 GB. Therefore, if a file system uses 32-bit pointers, the maximum size of a file that can be stored on that file system is limited to 4 GB. \\
\textbf{Mounting a file system means that: }The filesystem is becoming available within the file system namespace. CHATGPT: Mounting a file system means that it is being made accessible and integrated into the operating system's directory hierarchy. When a file system is mounted, the operating system establishes a connection between the file system and a specific directory known as the mount point.
Here's what happens when a file system is mounted:
The operating system identifies the device or partition that contains the file system. This can be a hard drive, a solid-state drive, a network storage device, or any other storage medium.
The operating system reads the file system's metadata, such as the file allocation table, inode table, or other data structures, to understand the organization and structure of the file system.
A mount point is selected, which is an existing directory in the operating system's file hierarchy. The mount point acts as a reference point for accessing the contents of the mounted file system.
The operating system establishes a link between the mount point and the file system, enabling access to the files and directories within the file system. The files and directories within the mounted file system become accessible through the mount point.
Once the file system is mounted, users and applications can read from and write to the files and directories within it, just as they would with any other files on the system. Any changes made to the mounted file system are reflected in the underlying storage device.
Mounting a file system allows users and programs to interact with the files and directories stored within it, incorporating the contents of the file system seamlessly into the overall file hierarchy of the operating system.\\
\textbf{A file system may be mounted during the boot time only.} False, see File System Internals.
\section{Security}
\textbf{Phishing is an attack, in which a legitimate-looking e-mail misleads a user into entering confidential information}, true see p8 Security.
\\
\textbf{A protection domain is a collection of access rights, each of which is} a pair <object-name, rights-set> see p 13 Protections\\
\textbf{UNIX operating system associates a protection domain with threads}, false, see p 10 protection.
\section{Virtualization}
\textbf{Hypervisors of Type1 show higher latency than hypervisors of Type2}, false, see p18 Virtualization\\
\textbf{Hypervisors of Type2 are designed for endusers rather than service providers} true, see p10 virtualization.\\
\textbf{Paravirtualization refers to modifying the OS kernel to avoid traps and calling APIs from the hypervisor (hypercalls)} The PDF mentions that paravirtualization takes an approach of modifying the source code of the guest operating system to avoid causing traps. This is done by calling APIs from the hypervisor, which are known as hypercalls. Therefore, your statement is correct. see p19\\
\textbf{Nothing can provide an abstraction to the operating system itself} False, Containers are an abstractino of the OS itself. see p. 23\\
\end{multicols*}
\end{document}