Operating Systems

Popularity (by total correct streak): 84
Popularity (by number of users): 7

Cards

Memory Management Requirements Relocation, Protection, Sharing, Logical Organization, Physical Organization  
If the memory management system effectively deals with modules, a number of advantages can be realized: Modules can be written and compiled independently.....Modules can be given different degrees of protection (read only, execute only, etc.).....Modules may be shared among processes.  
Logical Organization Logically, memory is organized as a linear address space, consisting of a sequence of bytes or words  
Physical Organization Physically, memory is organized as main memory and secondary storage (disk).  
Memory Partitioning An easy way to deal with memory is to partition it into regions with fixed boundaries. Partitioning is an old technique that isn’t used today but is important to our understanding of modern memory management techniques.  
Partitioning: Equal Sized Partitions: Two problems: A program may be too large for a partition. In this case the programmer must use “overlays”, wherein the program loads pieces of itself into memory to execute, overlaying other pieces.......Memory utilization is poor, since a small program occupies a large partition (called “internal fragmentation”).  
Partitioning: Unequal Sized Partitions Both of the problems of equal-sized partitions can be lessened by using unequal-sized partitions.......Small programs can be loaded into small partitions improving memory utilization, and large programs can be loaded into large partitions helping to avoid the need for overlays.  
Placement Algorithm for equal partitions trivial, all partitions are the same  
Placement Algorithm for unequal partitions use a single queue for all partitions and assign a process to the smallest open partition that holds it.  
Advantages of Fixed Partitions Fixed partition systems are simple and require minimal OS support and overhead.  
Disadvantages of Fixed Partitions The number of partitions limits the number of active (not suspended) processes......Utilization is not as good as it could be since it is likely that part of a partition is unused.  
Dynamic Partitioning The partition size is not set at system initialization as in fixed partitioning, but is set dynamically as each program is loaded.  
Problem with Dynamic Partitioning external fragmentation  
Compaction Compaction shifts the processes around in memory so that there are no holes between them.  
Drawback of compaction very time consuming  
3 Placement Algorithms for Dynamic Partitioning Best, First, and Next Fit  
Analysis of Best, First, and Next Fit First-fit is usually simplest, fastest, and best..... Next-fit tends to fragment the end block of memory more than first-fit......Best-fit is usually worst. Causes smallest fragments which are unusable by other processes.  
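The placement fits above can be sketched in a few lines. A minimal first-fit, assuming free memory is tracked as a list of (start, length) holes (the representation is illustrative, not from the cards):

```python
def first_fit(holes, size):
    """Scan the hole list in order and return the index of the
    first hole large enough for the request, or None if none fits."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            return i
    return None

holes = [(0, 100), (300, 50), (600, 400)]
print(first_fit(holes, 200))  # holes 0 and 1 are too small -> index 2
print(first_fit(holes, 500))  # no hole fits -> None
```

Best-fit would instead scan the whole list for the smallest adequate hole, and next-fit would resume scanning from where the previous search stopped.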
Buddy System The buddy system divides memory blocks in half until a block size that satisfies the request is found......When a block has been divided and is later free again, it can be combined to form the original size block again.  
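A sketch of the two buddy-system operations described above: halving until the block fits, and locating a block's buddy for coalescing. The total size (1024) and the XOR buddy-address trick are standard for power-of-two buddy allocators but are assumptions here, not from the card:

```python
def buddy_block_size(request, total=1024):
    """Halve the block until the next split would be too small
    for the request; return the resulting block size."""
    size = total
    while size // 2 >= request:
        size //= 2
    return size

def buddy_of(addr, size):
    """A block's buddy is the other half of its parent block:
    flip the address bit corresponding to the block size."""
    return addr ^ size

print(buddy_block_size(100))   # 1024 -> 512 -> 256 -> 128 (64 is too small)
print(buddy_of(0, 128))        # buddies 0 and 128 can coalesce into a 256 block
```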
Why is Relocation necessary In a fixed equal-sized partition system, a process may be loaded into any open partition. When swapped out and back in, it may be loaded into a different partition. In a fixed unequal-sized partition system, if a single queue is used, a swapped process may load into a different partition than where it was originally loaded. In dynamic partitioning, compaction may move processes around in memory.  
Physical address This is the actual memory address. Also called the absolute address.  
Logical address Address from program’s viewpoint, independent of current loaded position. Relative address: A logical address relative to a known point (beginning of program, for example).  
Address Translation If relative addressing is used relative to the beginning of the program, then these addresses need to be translated to physical addresses depending on where it is loaded. This can be done at runtime with special hardware. For example, a “base register” may contain the physical address of the program origin. This is then added to each relative address to get the physical address. A “bounds register” ensures the new address is inside this process’s memory space.  
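The base/bounds hardware described above can be mimicked in a few lines. A minimal sketch (the function name and error type are illustrative):

```python
def translate(relative_addr, base, bounds):
    """Mimic base/bounds relocation hardware: reject addresses past
    the bounds register, otherwise add the base register."""
    if relative_addr >= bounds:
        raise MemoryError("address outside process memory space")
    return base + relative_addr

print(translate(100, 4000, 1000))  # program loaded at 4000 -> physical 4100
```

If the same process is later reloaded at a different origin, only the base register changes; the program's relative addresses stay the same.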
Paging Paging is a more modern memory management approach. In paging, the memory is partitioned into small equal-size chunks. Likewise a program is divided into the same equal-size chunks. The chunks of a program are called pages and chunks of memory are called frames.  
The OS maintains a page table for_________ each process.  
Each entry of a page table consists of _____________________ the frame number where the corresponding page is physically located in memory.  
The page table is indexed by __________ to obtain the frame number. the page number  
A __________, available for pages, is also maintained. free frame list  
In a paging system, within each program, each logical address consists of _______________________ a page number and an offset within the page.  
A CPU register holds _________________ of the process which is running. the starting physical address of the page table  
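Putting the paging cards above together, a toy translation, assuming a 4096-byte page size and a page table represented as a Python dict (real hardware indexes the table via the register just mentioned):

```python
PAGE_SIZE = 4096  # assumed power-of-two page size

def paged_address(logical_addr, page_table):
    """Split a logical address into page number and offset, look up
    the frame number, and rebuild the physical address."""
    page_number = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page_number]  # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}
print(paged_address(4100, page_table))  # page 1, offset 4 -> frame 2 -> 8196
```

Because the page size is a power of two, hardware does the split with a bit shift and mask rather than division.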
Segmentation Similar to paging but uses a segment number and offset.......Segments are not equally-sized....... Similar to dynamic partitioning except a program can span multiple segments that need not be contiguous........Suffers external fragmentation.  
Two characteristics leading to Paging and Segmentation breakthrough: All memory references are logical addresses that are translated at run time into physical addresses.......A process may be broken up into many pieces, and these pieces need not be contiguous.  
What is the fundamental breakthrough in memory management? It is not necessary that all pages or segments of a program be in main memory at the same time; it is only necessary to have the pieces in memory that are currently needed.  
Resident Set The portion of a process in main memory at a given time.  
If a program tries to access memory outside its resident set, a __________ occurs and _____________ a page fault occurs, interrupting the process, and the OS blocks the process and loads the requested piece of the program. Once the piece is loaded, the process can be put back into a ready state to resume execution.  
Implications of loading only part of a program into memory: More processes may be loaded at the same time.......A process can actually be larger than all of main memory and still be run. This lifts a major barrier for the programmer.  
Virtual memory is... the program’s logical view of memory. It can be very large, much larger than main memory.  
Virtual memory gives the appearance to the program that the main memory is actually _________________ much larger than it really is.  
Principle of Locality It says that program and data references tend to cluster. Hence, if a cluster of needed pages is in memory, then the program can run for a while without needing more pages loaded.  
Thrashing When thrashing, the OS is spending too much of its time loading and unloading parts of processes rather than running them.  
Paging flags Each entry in the page table can have a flag to keep track of whether the page is loaded, and if so, what frame the page is loaded into. Another flag may be used to indicate whether a page has been modified since it was loaded. Other flags may also be used for protection and sharing purposes.  
Inverted Page Table This approach uses one page table entry per memory frame rather than one per virtual page. (Hence, it is inverted). This reduces the size of the page table, since the table size does not depend on the number of pages in the program, but rather on the number of frames in main memory.  
Inverted Page Table drawback Since the page number of the address can no longer be used as an index into the page table, finding an entry in the table becomes a problem.  
Solution to inverted page table problem use a hash function to map the virtual page number to a hash table. The hash table entry has a pointer to the frame table entry.  
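A toy version of that chained-hash lookup, assuming 8 frames, a made-up hash function, and frame-table entries of the form (pid, vpn, next_frame) — all illustrative details, not from the card:

```python
NUM_FRAMES = 8
hash_table = {}                     # hash bucket -> first frame in chain
frame_table = [None] * NUM_FRAMES   # frame -> (pid, vpn, next_frame)

def bucket(pid, vpn):
    return (pid * 31 + vpn) % NUM_FRAMES  # toy hash of (process, page)

def insert(pid, vpn, frame):
    """Record that (pid, vpn) occupies this frame, chaining on collision."""
    b = bucket(pid, vpn)
    frame_table[frame] = (pid, vpn, hash_table.get(b))
    hash_table[b] = frame

def lookup(pid, vpn):
    """Hash to a bucket, then walk the chain of frame entries."""
    frame = hash_table.get(bucket(pid, vpn))
    while frame is not None:
        p, v, nxt = frame_table[frame]
        if (p, v) == (pid, vpn):
            return frame
        frame = nxt
    return None  # page not resident: page fault

insert(1, 0, 3)
insert(1, 8, 5)       # hashes to the same bucket as (1, 0): chained
print(lookup(1, 0))   # -> 3
print(lookup(2, 0))   # -> None (fault)
```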
Translation Lookaside Buffer The TLB does for the page table what a memory cache does for main memory. It contains the page table entries that have been most recently used. Note that it does not cache the pages themselves, but only the page table entries.  
Reason for Translation Lookaside Buffer In principle, each virtual address requires two physical memory accesses, one for the page table to get the frame number, and one to access that frame. This would double the number of accesses made. The TLB avoids the page-table access whenever the entry is cached.  
associative mapping Because the TLB only has a subset of page table entries, the page number cannot be used as an index into the TLB as it can for a page table. Instead, the processor is equipped with hardware that can interrogate a number of TLB entries simultaneously, so all entries are examined at once for a match.  
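A sketch of the TLB's role in translation, modeling both the TLB and the page table as dicts (a software stand-in for the associative hardware lookup described above):

```python
tlb = {}                       # small cache: page number -> frame number
page_table = {0: 7, 1: 3, 2: 9}

def get_frame(page):
    """Check the TLB first; on a miss, pay the extra page-table
    access and cache the entry for next time."""
    if page in tlb:
        return tlb[page]       # TLB hit: no page-table access needed
    frame = page_table[page]   # TLB miss: extra memory access
    tlb[page] = frame
    return frame

print(get_frame(1))  # -> 3 (miss: loaded into the TLB)
print(get_frame(1))  # -> 3 (hit)
```

Real hardware compares the page number against all TLB entries in parallel; the dict lookup here just stands in for that match.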
The cache blocks are identified by ________, which is usually just _________ The cache blocks are identified by a “tag”. The tag is usually just the leftmost bits of the address.  
A small page size ________ which is good for _______. A small page size reduces internal fragmentation which is good for memory utilization.  
a small page size also leads to __________, increasing the page table size. a small page size also leads to a larger number of pages per process, increasing the page table size.  
A large page size may be more efficient for data transfers to and from the disk, since __________ A large page size may be more efficient for data transfers to and from the disk, since the disk is designed to efficiently transfer large blocks of data.  
What is the relationship between page size and page fault rate? As the page size grows, locality is weakened and more faults will occur (until the page size begins to approach the size of the entire process).  
Page sizes typically range from _________ Page sizes typically range from 512 bytes to 16 megabytes.  
The page fault rate is ____________________ The page fault rate is also determined by the number of frames allocated to a process.  
If the whole process is in memory, then ______ If the whole process is in memory, then page faults are zero.  
If only one page is in memory, then ___________ If only one page is in memory, then each new page request results in a page fault.  
3 Benefits of Segmentation Ease of handling growing data structures (simply increase segment size)...... Modularity by assigning each module to a segment..... Sharing and protection can be set per segment  
In regards to the memory management software in the OS, there are three basic areas of choice: Whether to use virtual memory ....Whether to use paging, segmentation, or both...... . What algorithms to use for various aspects of the memory management.  
Fetch Policy The fetch policy determines when a page should be loaded into memory.  
Two approaches to Fetch Policy Demand paging, and Prepaging  
Demand paging: Only bring in a page when it is referenced. Results in high page faults at first, but low once most needed pages are loaded (due to locality).  
Prepaging: Bring in more pages than the actual page that was referenced (due to locality). This makes efficient use of the disk drive since if the disk is already reading a page, it can continue to read pages in the same disk access.  
Placement Policy If there are free frames, it doesn’t matter which one is chosen, since in a paging system all frames are equivalent.  
Replacement Policy Try to replace the page that is least likely to be referenced in the near future. Due to locality, recent references often imply near-future references. Simplicity is desirable to keep the overhead of the replacement decision low.  
4 Replacement Algorithms Optimal......Least Recently Used (LRU)......First-In-First-Out (FIFO)......Clock  
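Of the four, the clock algorithm is the easiest to sketch: each frame carries a use bit, and a circulating hand clears bits until it finds a page whose bit is already 0. A minimal version (the list-based frame layout is illustrative):

```python
def clock_replace(use_bits, hand):
    """Advance the hand around the frames, clearing use bits,
    until a frame with use bit 0 is found; return (victim, new hand)."""
    while True:
        if use_bits[hand] == 0:
            victim = hand
            return victim, (hand + 1) % len(use_bits)
        use_bits[hand] = 0          # give the page a second chance
        hand = (hand + 1) % len(use_bits)

use_bits = [1, 1, 0, 1]
victim, hand = clock_replace(use_bits, 0)
print(victim)  # -> 2: frames 0 and 1 get second chances, frame 2 is evicted
```

Clock approximates LRU at far lower cost, which is why it (and its variants) are the common choice in practice.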
Page Buffering FIFO can be improved through a technique called page buffering. A replaced page is not lost but rather reassigned to one of two lists: the free page list or the modified page list. The page is not physically removed from memory. If the process references this page again, it can be added back quickly.  
How many pages of a process should be loaded? Three considerations: The fewer the pages, the more processes can be loaded...... Too few pages results in high page faults......Too many pages reduces the number of processes and doesn’t reduce page faults noticeably.  
Two approaches to resident set size Fixed-allocation: process is given a fixed set of pages within which to operate......Variable-allocation: process is given a variable number of pages as it runs. Can be allocated according to page fault rates. (Better, but higher overhead).  
Local replacement Choose among a process’s own resident pages to replace.  
Global replacement Choose among any process’s resident pages to replace.  
When should a modified page be written back to secondary storage? Two approaches: Demand Cleaning: only write page out when it is chosen for replacement. Delays the writing until necessary, but slows the processing of a page fault since the page must first be written before a new one is loaded...... Precleaning: write page out before it is chosen for replacement so that pages may be written out in batches for efficient use of the disk drive. May result in unnecessary writes since the page may be modified again.  
To reduce degree of multiprogramming, some processes must be suspended, but which ones? Lowest priority..... Faulting process: indicates lack of working set..... Last process activated: probably doesn’t have working set yet..... Smallest resident set: least effort to reload..... Largest process: frees the most frames..... Process with the largest remaining execution window  

Quisition is a browser-based flashcard system that repeats old cards and introduces new ones at optimal time intervals. You can create your own card packs or use those developed by others.