Here was part one from Mike Lee on Kernel Memory and Exchange...
Kernel memory resource bottlenecks can drastically limit Exchange 2003 scalability. Kernel resource usage may vary greatly from one Exchange server to another. A hardware platform that can support 4000 heavy users in one organization may be limited to half that number in a different organization because of kernel memory exhaustion.
This flash is the first in a series of three. These flashes are important reading for everyone who supports or administers large scale Exchange servers.
Large increases in kernel memory consumption can be triggered by changes that few would anticipate as problematic, and can cause sudden and widespread Exchange server outages throughout an organization.
The purpose of this initial flash is to introduce the issue and provide technical background. The second and third articles in this series will address common factors that either limit or consume kernel memory, and provide specific advice about optimizations for better management of kernel memory. There may be additional articles in the series, as needed.
This article applies specifically to Exchange Server 2003 running on Windows Server 2003. However, much of the information presented here applies generally to application scalability on a 32-bit computing architecture.
The second flash in this series will discuss some new hardware features available on recent servers. Hot-add RAM and the installation of more than 4 gigabytes of RAM can consume large additional amounts of Windows kernel memory. The second flash will explain how these features work and how to optimize Exchange for them. This flash will be released shortly.
The third flash will explain how large security tokens presented by clients can quickly exhaust kernel memory, and provide recommendations for reducing average token size. This flash will be available near the 14th of December.
Personal computer hardware continues to improve rapidly and dramatically in speed and storage capacity. But one thing that hasn't changed is the 32-bit processor and operating system architecture in the majority of Intel and AMD based computers used today.
Hardware performance is no longer the most important computing bottleneck. Instead, the theoretical limits of a 32-bit architecture define the ceiling on application speed and scalability.
The problem with a 32-bit architecture is that an application can juggle a maximum of only four billion bytes of information at once. For complex applications that service thousands of simultaneous users, four billion is not very much.
It has taken 20 years for general computing needs to outgrow the 32-bit architecture. The last quantum jump from 16-bit to 32-bit computing was a necessary precondition for enabling the sophisticated applications we depend on every day. Going from 16-bit to 32-bit allowed programs to go from handling about 64,000 pieces of information at once to handling four billion--a multiplier of about 65,000. The next jump to 64-bit computing will allow applications to handle four billion times as much information as they do today.
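The arithmetic behind these jumps is just powers of two, and is easy to verify. A quick sketch (the address_space helper here is for illustration only):

```python
# Number of distinct addresses a pointer of the given width can express.
def address_space(bits):
    return 2 ** bits

print(address_space(16))                       # 65,536 ("about 64,000")
print(address_space(32))                       # 4,294,967,296 (the 4 gigabyte limit)
print(address_space(32) // address_space(16))  # 65,536x jump from 16-bit to 32-bit
print(address_space(64) // address_space(32))  # 4,294,967,296x jump from 32-bit to 64-bit
```

Note that each doubling of pointer width does not double the address space; it squares it.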
Understanding the theoretical limitations of 32-bit architectures has not been very important to most people. Until recently, the ceiling on application scalability has been set by the performance limitations of processors, disks and networks. Theoretical 32-bit limits have not had a chance to come into play. But state of the art hardware can now process information so rapidly that everyone who works with large applications today needs a basic working knowledge of how memory works in a 32-bit world.
FREQUENTLY ASKED QUESTIONS (FAQ)
Why is a 32-bit architecture limited to 4 gigabytes of memory?
Before answering that, it is important to distinguish between memory address space and physical memory.
Each byte of memory in a computer must have a unique address so that applications can keep track of and identify the memory. In a 32-bit computer, the memory addresses are 32 bits long and stored as binary (base 2) numbers. There are approximately 4 billion possible different 32-bit binary numbers (2 raised to the 32nd power is 4,294,967,296). This accounts for the 4 gigabyte limit for addressable memory in a 32-bit computer.
The amount of physical memory on the computer is not related to the amount of memory address space. If a computer has 256 megabytes of physical memory, there is still a 4 gigabyte memory address space. If a computer has 8 gigabytes of physical memory, there is still a 4 gigabyte memory address space.
What happens when you run out of physical memory?
When all physical RAM in a computer is in use, Windows starts using the hard disk as if it were additional RAM. This is the purpose of the pagefile (also called the swap file). This means that the actual limit on the memory used by all applications is the amount of RAM installed plus the maximum size of the pagefile.
Generally, RAM memory is hundreds of times faster than the hard disk. Therefore, using the pagefile to relieve memory pressure incurs a significant performance penalty. One of the most effective things you can do to improve performance is ensure that there is enough RAM available to avoid frequent paging (swapping) of memory contents between disk and RAM.
How do Windows applications cooperate to share the 4 gigabytes of memory address space?
They do not have to share a single address space. Instead, each process is isolated from the rest and has its own 4 gigabyte address space. This means that the 4 gigabyte addressability limit applies on a per-application basis, not across all applications taken together.
Each process is assigned an address space of 4 gigabytes of virtual memory, regardless of the amount of available physical memory. Applications are not allowed direct access to physical memory.
How does the 4 gigabyte address space map to a computer's physical memory?
Windows controls physical memory resources (RAM and the paging file) and carefully allocates these resources. Applications are granted access to physical memory resources only as needed, not in advance.
When an application requests more memory, Windows maps some physical memory (as long as some is available) into the process's address space. In essence, the virtual address is linked to a physical memory address. Windows maintains several tables that keep track of all of this, and the application knows only about the virtual memory address.
If both RAM and the paging file are completely full when an application needs more memory, an error will occur because of memory exhaustion.
In theory, it is possible for multiple applications each to request enough memory to fill their entire address spaces. In practice, no server would be able to satisfy all those simultaneous requests.
How much memory does Exchange need?
Exchange is a very scalable application. It can be used to serve a few dozen clients or thousands. Its memory requirements increase in proportion to the work you want Exchange to do.
With current disk and server hardware, you can keep scaling Exchange up to the limits of its 32-bit maximum address space.
Memory usage for all Windows applications can be divided into two fundamental categories: kernel memory and user (application) memory.
Kernel memory is owned by the Windows operating system, and is used to provide system services to applications. All applications need to make use of kernel resources. Therefore, kernel memory is mapped into each application's address space so that the application can see and call on system resources.
By default, a full half of the virtual address space (2 gigabytes) for each application is dedicated to the Windows kernel. The other half of the address space is user memory. This is where the application loads all of its own code and data.
It is possible to run out of kernel memory well before running out of user memory, or vice versa. There are trade-offs between kernel and user memory that have to be carefully balanced on a large scale Exchange server.
A large scale Exchange server is defined here as one that is handling so much traffic that it is in danger of exhausting either user mode memory addresses or kernel mode resources.
What happens when Exchange gets close to running out of user address space?
It becomes more and more difficult to allocate additional memory. Allocations have to be made in smaller, less efficient blocks. Shortly before the address space is completely exhausted, virtual memory fragmentation will cause new memory allocations to fail entirely. Exchange must then be re-started. But this is only a temporary solution. After a period of time, the load on the server will cause the same problem to happen again.
To permanently solve the problem you must reduce the load on the server or you must obtain additional address space. You can get additional address space by borrowing it from the kernel.
Windows 2000 Advanced Server and Datacenter editions, and all editions of Windows Server 2003 (Standard, Enterprise and Datacenter), support 4GT (4 gigabyte tuning) through the /3GB startup switch in the server's boot.ini file.
Instead of giving half of the address space to the kernel and half to the application, the /3GB switch allocates 1 gigabyte to the kernel and 3 gigabytes to each application. By increasing the user address space by 50%, you can continue to scale an Exchange server well beyond the limits of the default memory configuration. But there is a trade-off: you have now reduced available kernel resources.
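As an illustration, a boot.ini entry with the switch applied might look like the following (the ARC path and description string are placeholders; your server's entries will differ):

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB
```

On Windows Server 2003, the /USERVA switch can be combined with /3GB to fine-tune the split between user and kernel address space, giving back some of the kernel space that /3GB alone takes away.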
How does the /3GB switch affect kernel resources?
Several of the most critical memory resources or pools in the kernel are pre-allocated as Windows starts. The size of these pools is set based on the address space allocated for the kernel. You cannot change the size of these pools without reconfiguring and rebooting the server.
If you set the /3GB switch, the initial size of these kernel memory pools will be reduced. At the same time, the amount of kernel resources applications demand will increase. This happens for two reasons: first, some additional kernel resources are required to support additional the additional user space memory, and, second, applications will be able to do more work and accept more connections than before.
For Exchange, setting the /3GB switch means that you will typically exhaust kernel resources before Exchange runs out of user address space.
Which kernel resources are most affected by use of the /3GB switch?
The resources listed here do not only affect Exchange. They are critical and are used to some extent by any application.
- System Page Table Entries (PTEs) and Page Frame Numbers (PFNs). These map installed physical RAM to the virtual addresses that "own" the RAM. Adding physical RAM to a computer increases the demand for these resources, as does allocating the majority of a computer's memory to running applications.
- Paged pool. Miscellaneous kernel resources are allocated from paged pool. This is called paged pool because this memory can be swapped to the pagefile on disk if necessary. Adding additional workload to the computer generally increases the demand for paged pool memory.
- Non-paged pool. The most critical kernel resources are allocated from non-paged pool. This memory is never allowed to be swapped out to the pagefile.
It is possible to manually tune the allocation of these resources. There are tradeoffs to be made if you do this. For example, if you increase available PTEs, this will proportionally reduce paged pool memory.
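One such tuning point is the SystemPages value under the Memory Management registry key, which influences how many system PTEs Windows reserves at startup. The fragment below is a sketch only (the value shown is illustrative; consult current Microsoft guidance before changing memory management settings, and note that a reboot is required for the change to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"SystemPages"=dword:00000000
```

A value of 0 lets Windows size the PTE pool itself; a nonzero value requests a specific number of PTEs, at the cost of paged pool memory as described above.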
What happens when kernel memory resources are exhausted?
Symptoms of kernel memory exhaustion include:
- Slow performance
- Server crashes or cluster failovers
- Errors that report complete exhaustion of system page table entries (PTEs) or kernel pool memory
A server may keep running, but may run so slowly that it appears to be completely unresponsive.
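Kernel memory pressure can be watched before it becomes an outage by sampling the standard Memory object counters in Performance Monitor, or from the command line with the typeperf tool included with Windows Server 2003. For example:

```
typeperf "\Memory\Free System Page Table Entries" "\Memory\Pool Paged Bytes" "\Memory\Pool Nonpaged Bytes" -si 60
```

The -si 60 option samples once a minute. A steady decline in Free System Page Table Entries toward a few thousand, or continuously growing pool byte counts, is an early warning that the server is approaching the exhaustion symptoms listed above.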