What is Memory Management in OS? An In-Depth Guide


Memory can be defined as a collection of data stored in a particular format. It is used to store instructions and to hold data during processing. Memory consists of a large array of words or bytes, each with its own address. A computer system's main job is to execute programs, and during execution those programs, together with the data they access, must reside in main memory. The CPU fetches instructions from memory according to the value of the program counter.

Memory management is essential both for achieving a useful degree of multiprogramming and for using memory efficiently. Many memory management techniques exist, reflecting different strategies, and the effectiveness of each algorithm depends on the situation.

What is Main Memory?


Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging from hundreds of thousands to billions of them, and it acts as a shared store of quickly accessible data for the CPU and I/O devices. Programs and data are kept in main memory while the processor is actively using them. Because the CPU is directly connected to main memory, it can fetch and process data very quickly. Main memory is also known as RAM (Random Access Memory). It is volatile: a power outage causes RAM to lose its contents.


What is Memory Management?


In a multiprogramming computer, memory is shared between the operating system and several processes. Memory management is the task of dividing memory among these processes. The operating system uses memory management to control how data moves between main memory and the disk while a process runs. The primary goal of memory management is the efficient use of memory.

Why is Memory Management Required?


  • Memory must be allocated before a process runs and released after it finishes.
  • To track how much memory each process is using.
  • To reduce fragmentation problems.
  • To make proper use of main memory.
  • To protect the integrity of data while a process is running.


Logical and Physical Address Space

Logical Address Space: An address generated by the CPU is called a "logical address," also known as a virtual address. The set of all logical addresses generated by a process defines its logical address space. A logical address can change at run time.
Physical Address Space: A "physical address" is the address seen by the memory unit, that is, the address loaded into the memory address register. It is also called a real address. The set of all physical addresses corresponding to these logical addresses is the physical address space. Physical addresses are computed by the MMU: a hardware device called the Memory Management Unit performs the run-time translation from virtual to physical addresses.
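The translation performed by a simple MMU can be sketched with a relocation (base) register and a limit register; the register values below are invented for illustration:

```python
# Sketch of a simple MMU: a base (relocation) register and a limit
# register map a CPU-generated logical address to a physical address.
# The register values here are made up for illustration.

BASE = 14000   # start of the process's partition in physical memory
LIMIT = 3000   # size of the process's logical address space

def translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("addressing error: trap to the operating system")
    return BASE + logical_address

print(translate(120))  # logical address 120 maps to physical address 14120
```

Every logical address within the limit is simply shifted by the base register; an address outside the limit causes a trap to the operating system.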


Static and Dynamic Loading


A loader is responsible for loading a process into the main memory. Two distinct kinds of loading exist:

Static Loading: The complete program is loaded into memory at a fixed address before execution begins. This requires more memory.

Dynamic Loading: For a process to run, its entire program and data would normally need to be in physical memory, which limits a process's size to the amount of physical memory available. Dynamic loading is used to achieve better memory utilization: a routine is not loaded until it is invoked. All routines are kept on disk in a relocatable load format. One benefit of dynamic loading is that a routine that is never used is never loaded.


Static and Dynamic Linking


Linking is carried out by a linker, a tool that combines one or more object files produced by a compiler into a single executable file.

Static Linking: The linker merges all required program modules into a single executable file, so no linking is needed at run time. Some operating systems support only static linking, treating system language libraries like any other object module.

Dynamic Linking: Dynamic linking is similar in concept to dynamic loading. With dynamic linking, a "stub" is included for each reference to a library routine. A stub is a small piece of code that, when executed, checks whether the required routine is already in memory; if it is not, the routine is loaded into memory.


Swapping


A process must be in memory before it can be executed. Swapping is the act of temporarily moving a process from main memory to a backing store (secondary memory) and later bringing it back for continued execution. Swapping makes it possible to fit and run more processes in memory at once. Transfer time makes up most of the cost of swapping, and the total time is directly proportional to the amount of memory swapped. Swapping is also called roll out, roll in: when a higher-priority process arrives and requests service, the memory manager can swap out a lower-priority process, then load and run the higher-priority one. When the higher-priority process finishes, the lower-priority process is swapped back into memory and continues.


Contiguous Memory Allocation


Main memory must accommodate both the operating system and the various user processes, so allocating memory efficiently is crucial. Memory is typically split into two partitions: one for the resident operating system and one for user processes. Since we usually want several user processes in memory at the same time, we must decide how to allocate available memory to the processes waiting in the input queue. With contiguous memory allocation, each process is contained in a single, contiguous section of memory.


Memory Allocation


Effective memory allocation is necessary for optimal memory utilization. One of the simplest methods is to divide memory into several fixed-sized partitions, with one process in each partition. The number of partitions therefore determines the degree of multiprogramming.

Fixed partition allocation: In this scheme, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for another process.

Variable partition allocation: In this scheme, the operating system keeps a table showing which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block, called a hole. When a process arrives and needs memory, we search for a hole large enough for it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.
This leads to the dynamic storage allocation problem: how to satisfy a request of size n from a list of free holes.

First Fit

First Fit allocates the first hole that is big enough to satisfy the process's request.

For example, suppose the first two free blocks are too small for process A (size 25 KB). Process A is then placed in the first hole that is large enough, say a 40 KB block.

Best Fit

Best Fit allocates the smallest hole that is big enough. Unless the list of holes is ordered by size, we must search the entire list.

In this example, the last hole, 25 KB, is the best fit for process A (size 25 KB); we find it by searching the whole list. Compared with the other allocation strategies, best fit gives the highest memory utilization.

Worst Fit

Worst Fit allocates the largest available hole. This technique leaves the largest possible leftover hole.


In this case, process A (size 25 KB) is placed in the largest available block, 60 KB. A major drawback of worst fit is inefficient memory utilization.
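The three strategies can be compared on one list of holes; the hole sizes below are assumed from the examples above (the first two blocks too small, plus 40 KB, 60 KB, and 25 KB holes):

```python
# Simulation of the three hole-selection strategies on a list of free
# blocks. Hole sizes are assumed from the examples in the text.

holes = [10, 20, 40, 60, 25]   # free blocks in KB, in memory order
request = 25                   # process A needs 25 KB

def first_fit(holes, n):
    # Scan in order; take the first hole that is large enough.
    return next((h for h in holes if h >= n), None)

def best_fit(holes, n):
    # Search every hole; take the smallest one that still fits.
    return min((h for h in holes if h >= n), default=None)

def worst_fit(holes, n):
    # Search every hole; take the largest one.
    return max((h for h in holes if h >= n), default=None)

print(first_fit(holes, request))  # 40: first hole big enough
print(best_fit(holes, request))   # 25: smallest adequate hole
print(worst_fit(holes, request))  # 60: largest hole
```

On the same input, first fit picks the 40 KB block, best fit the exact-size 25 KB block (no leftover), and worst fit the 60 KB block (largest leftover), matching the three examples above.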

Fragmentation


Fragmentation occurs when processes are loaded into memory and later removed, leaving small gaps behind. These holes cannot be assigned to new processes, either because they are too small to meet a process's memory requirement or because they are not merged together. To sustain a reasonable degree of multiprogramming, we must reduce this memory waste.

There are two kinds of fragmentation in operating systems:

1. Internal fragmentation: Internal fragmentation occurs when a process is allotted a memory block larger than it requested, leaving unused space inside the block.
For instance, suppose memory is allocated using fixed partitioning with block sizes of 3 MB, 6 MB, and 7 MB. A new process P4 of size 2 MB now needs memory. It receives the 3 MB block, but 1 MB of that block is wasted and cannot be used by any other process. This is internal fragmentation.

2. External fragmentation: External fragmentation occurs when enough total free memory exists to satisfy a request, but the free blocks are not contiguous, so the process cannot be allocated.
For instance, continuing the example above, suppose three processes p1, p2, and p3 of sizes 2 MB, 4 MB, and 7 MB are allocated blocks of 3 MB, 6 MB, and 7 MB respectively. After allocation, P1 and P2 leave 1 MB and 2 MB unused. Now a new process P4 requests a 3 MB block. The total free memory equals 3 MB, yet we cannot satisfy the request because the free space is not contiguous. This is external fragmentation.

External fragmentation arises with both first-fit and best-fit allocation. Compaction is one solution to the external fragmentation problem: all free memory is shuffled together into one large block, which other processes can then use.
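Compaction can be sketched as sliding all allocated blocks together so the scattered holes merge into one; the block sizes are taken from the fragmentation example above:

```python
# Sketch of compaction: slide all allocated blocks to one end so the
# scattered free holes merge into a single large block. Sizes (in MB)
# follow the external-fragmentation example in the text.

memory = [("P1", 2), ("free", 1), ("P2", 4), ("free", 2), ("P3", 7)]

def compact(blocks):
    """Keep allocated blocks in order, then merge all free space at the end."""
    used = [b for b in blocks if b[0] != "free"]
    free_total = sum(size for name, size in blocks if name == "free")
    return used + [("free", free_total)]

print(compact(memory))
# The two holes (1 MB + 2 MB) merge into one 3 MB hole, which is now
# large enough for the 3 MB request that previously failed.
```

The cost of compaction in a real system is the copying of every live block to its new location, which is why it is done sparingly.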

Another possible remedy for external fragmentation is to allow a process's logical address space to be noncontiguous, so that physical memory can be allocated to the process wherever it is available.

Paging


Paging is a memory management technique that removes the need for a contiguous physical memory allocation: it allows a process's physical address space to be non-contiguous.

  • Logical Address (Virtual Address): An address generated by the CPU (represented in bits).
  • Logical Address Space (Virtual Address Space): The set of all logical addresses generated by a program (expressed in words or bytes).
  • Physical Address: An address actually available on the memory unit (represented in bits).
  • Physical Address Space: The set of all physical addresses corresponding to the logical addresses (expressed in words or bytes).
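Under paging, a logical address splits into a page number and an offset, and a page table maps page numbers to frame numbers. The page size and table contents below are invented for illustration:

```python
# Sketch of paging address translation: the logical address is split into
# a page number (high-order bits) and an offset (low-order bits), and the
# page table maps pages to frames. Sizes and table contents are invented.

PAGE_SIZE = 1024                 # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def to_physical(logical_address):
    page = logical_address // PAGE_SIZE   # which page the address is on
    offset = logical_address % PAGE_SIZE  # position within that page
    frame = page_table[page]              # page-table lookup (the MMU's job)
    return frame * PAGE_SIZE + offset     # same offset within the frame

print(to_physical(1050))  # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
```

Because any free frame can hold any page, the process's pages can be scattered across physical memory, which is what eliminates the need for contiguous allocation.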

Advantages and Disadvantages of Paging

Below is a summary of the benefits and drawbacks of paging.
  • Paging minimizes external fragmentation but does not eliminate internal fragmentation.
  • Paging is a widely accepted and easy-to-implement method for managing memory effectively.
  • The pages and frames are the same size, which makes switching them out quite simple.
  • Page tables may not be suitable for a system with limited RAM since they demand more memory space.

Segmentation


Segmentation is a memory management technique in which a job is divided into several segments of varying sizes, one for each module containing parts that perform related functions. Each segment is, in effect, a separate logical address space of the program.

When a process is to be executed, its segments are loaded into non-contiguous memory, although each individual segment is loaded into a contiguous block of available memory.

Segmentation works much like paging, except that segments are of variable length whereas pages are of fixed size.

A program segment might contain the program's main function, utility functions, data structures, and so on. For every process, the operating system maintains a segment map table and a list of free memory blocks, along with segment numbers, segment sizes, and the corresponding locations in main memory. For each segment, the table records the segment's starting address and its length. A reference to a memory location includes a segment identifier and an offset.
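Translation through a segment table can be sketched as a base-and-limit lookup per segment; the table values below are invented for illustration:

```python
# Sketch of segmentation address translation: each segment-table entry
# stores a base (starting address) and limit (length), and a logical
# address is a (segment number, offset) pair. Table values are made up.

segment_table = {       # segment number -> (base, limit)
    0: (1400, 1000),    # e.g. the code segment
    1: (6300, 400),     # e.g. the data segment
}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                       # offset beyond segment length
        raise MemoryError("trap: segmentation violation")
    return base + offset                      # physical address

print(seg_translate(1, 53))  # offset 53 within segment 1 -> 6300 + 53 = 6353
```

An offset past the segment's recorded length causes a trap, which is how segmentation enforces the boundaries between a program's logical units.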


What is Fragmentation and How Does it Affect Memory Allocation?

Memory fragmentation happens when memory is broken up into small, non-contiguous chunks, which can lead to inefficient use of memory. As described above, it occurs in two forms: internal and external.


Can memory fragmentation be prevented in an operating system?

Memory fragmentation is difficult to eliminate entirely, but several methods can lessen its effects. One strategy is to use allocation schemes such as buddy systems or slab allocation, which focus on minimizing fragmentation. Another approach is compaction, which reduces external fragmentation by merging free memory into a single block.

What are the advantages and disadvantages of using virtual memory?

Virtual memory offers several advantages: the ability to run programs larger than the available physical memory, memory isolation for improved security, and simpler memory management for applications. It also carries costs: the overhead of page table lookups, and possible performance degradation if swapping happens frequently.

What is Thrashing in Memory Management?

Thrashing occurs when a computer's performance degrades drastically because data is swapped frequently between RAM and the disk. It is typically caused by having too little physical memory for the workload being run.