SQL has detected a fall in page life expectancy, producing slower responses.
Impact: Medium
Slow SQL responses will degrade the user experience, resulting in poor efficiency of your organization’s operations.
Expected behavior:
The longer a page can stay in the buffer pool and be read from memory, the better the system performance. We recommend setting the threshold value at 300 seconds for every 4 GB of buffer pool (BP) memory.
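The current PLE value can be read directly from SQL Server's Buffer Manager performance counters. A minimal check (the `LIKE` pattern accommodates both default and named instances):

```sql
-- Current Page Life Expectancy, in seconds.
SELECT [object_name],
       counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

On servers with multiple NUMA nodes, the `Buffer Node` object exposes a per-node PLE as well; the `Buffer Manager` value shown here is the overall figure.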
Possible causes of persistent low Page Life Expectancy (PLE)
Continually high CPU usage associated with low PLE may indicate the need for a CPU upgrade or the addition of more processors. See our explanation of CPU usage here.
Alternatively, a high CPU usage rate may indicate a poorly tuned application or SQL code. Optimizing the application can lower CPU utilization.
Review the amount of total RAM allocated to SQL (while still keeping sufficient RAM for operating system needs). Compare the ratio of SQL memory to database size – based on our experience, total RAM greater than 20% of the size of the largest database is optimal.
If possible, allocate more memory to SQL, or alternatively add physical RAM to the server and increase the SQL memory allocation.
Check what else is running that might be slowing disk access speed. The buffer pool swaps pages out to disk when memory space is needed. This process is highly dependent on the efficiency of data read/write actions to the hard drive, and on the level of demand.
To speed up disk I/O and lower latency, locate the page file on the fastest available drive (or upgrade to more efficient hardware). As well, reduce competition for this resource from other operations. AimBetter resource monitor gives comprehensive statistics on total disk space, free space and page file activity.
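One way to spot a slow drive is to look at SQL Server's own per-file latency statistics. A sketch using the virtual file stats DMV (figures are cumulative since the last restart):

```sql
-- Average read/write latency (ms) per database file, highest first,
-- to identify which files sit on slow or contended storage.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;
```

Files showing sustained averages well above the single-digit-millisecond range are candidates for faster storage or reduced contention.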
In SQL Server, data is stored in 8 KB pages. When data is required for input/output, the page is first read from disk into memory. The area allocated is called the Buffer Pool. Once a page is loaded into the pool, all subsequent access is very fast and efficient, whereas disk I/O is relatively slow. As much memory as possible should be allocated to the buffer pool, without starving the operating system.
Allocate the optimal proportion of physical RAM to the BP, but leave sufficient RAM for OS operations. Set a value in the Maximum server memory option on the ‘Server memory options’ page that leaves enough memory for the OS itself. We recommend around 4 GB less than the total amount of RAM installed.
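The same setting can be applied from T-SQL via `sp_configure`. A sketch for a hypothetical server with 32 GB of RAM, following the "total RAM minus ~4 GB" guideline above (adjust the value to your own hardware):

```sql
-- Cap SQL Server at 28 GB on a 32 GB server, reserving ~4 GB for the OS.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;  -- 28 * 1024 MB
RECONFIGURE;
```

The change takes effect immediately; no restart is required.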
Possible causes of transient low Page Life Expectancy (PLE)
A missing index means SQL Server has determined that a query could run faster with an index that does not yet exist. Index corruption, by contrast, is disk or disk driver/firmware related 99% of the time.
In case of a missing index, refer back to your system design team.
In case of corrupted indexes, refer to your storage management team, or our support experts, for help. One possible remedy in cases where the corruption is repeating is to clone the database onto a new hardware configuration.
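SQL Server records the indexes it believes are missing in a set of dynamic management views. A sketch that ranks them by estimated benefit, as a starting point for the review with your design team (treat these as suggestions, not indexes to create blindly):

```sql
-- Missing-index suggestions recorded since the last restart,
-- ranked by estimated impact weighted by how often they were wanted.
SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;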
Transient low PLE may also result from problems such as inefficient query plans or improper coding standards.
Inspect the plan that SQL was using to determine whether it was optimal.
Investigate the database queries to see whether they are malformed or returning incorrect data.
Investigate the coding to see whether efficiency can be improved.
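To find the statements and plans worth inspecting first, the plan cache can be ranked by logical reads, since queries with heavy reads are the ones churning the buffer pool. A sketch:

```sql
-- Top cached statements by logical reads, with their query plans.
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;
```

Clicking the `query_plan` XML in SSMS opens the graphical plan, where scans, missing-index hints, and bad cardinality estimates are easiest to spot.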
These causes do not persist over long periods. The main causes are:
- getting large memory grants
- displacing large numbers of pages in memory with new ones
- modifying many pages, forcing them to be flushed to disk
- many concurrent (or some very large) queries
In cases where they recur, or show a tendency to increase in magnitude (lower PLE), investigate the other possible causes below.
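Large memory grants, the first cause listed above, can be observed directly while they are happening. A minimal sketch:

```sql
-- Queries currently holding or waiting for memory grants,
-- largest requests first. Big grants squeeze the buffer pool
-- and are a common cause of transient PLE drops.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```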
CHECKDB is very resource intensive – probably one of the most resource-intensive operations SQL Server performs. Introducing this process as an additional I/O load on top of the normal SQL Server load means that disk heads will be moving constantly.
We suggest running any essential CHECKDB processes at a time of minimal SQL Server load, or alternatively against offline backup copies of the databases.
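When a check must run during busier hours, a lighter-weight compromise is the `PHYSICAL_ONLY` option, which validates page integrity and checksums while skipping the expensive logical checks. A sketch (`YourDatabase` is a placeholder):

```sql
-- Reduced-cost integrity check for busy periods; schedule a full
-- DBCC CHECKDB during a maintenance window or against a restored backup.
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;
```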
An index rebuild operation will always build a new index, which means extra space is required to build the new index before dropping the old one; a lot of CPU and I/O resources are required, which can overload the system.
If it occurs frequently, it can be corrected by changes in the indexing procedure, for example by switching to a strategy in which fragmentation is analyzed every night and only the fragmented indexes are processed. Our Expert Support Team consultants are available to analyze and assist.
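A nightly analysis of this kind can be driven from the index physical stats DMV. A sketch using the commonly cited 5% / 30% fragmentation cutoffs (tune these to your workload):

```sql
-- Fragmentation per index in the current database, with a suggested
-- action: REORGANIZE moderate fragmentation, REBUILD heavy fragmentation.
SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
       i.[name] AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'OK'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.[object_id] = ips.[object_id]
 AND i.index_id = ips.index_id
WHERE ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Processing only the indexes this query flags, rather than rebuilding everything on a schedule, keeps the extra CPU and I/O load to a minimum.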
The Page Life Expectancy metric measures the average number of seconds pages stay in the buffer pool. It is closely related to page file usage – see here.
For optimal performance, data should be read from memory rather than disk, which is much slower. SQL achieves this by utilizing memory pages that store the most recently accessed data. A reserved area of memory (the buffer pool) is available for storage of these pages. When the buffer pool is full and SQL requires new space, the server swaps the oldest page out to the page file on disk and reads in the new data. Increases in the number of swaps between memory and disk directly affect performance.
A sustained low level of PLE is a reliable indicator of SQL Server memory pressure. This pressure may be a result of high demand, or of inefficient data organization, both physical and logical.
We recommend setting the threshold value at 300 seconds for every 4 GB of Buffer Pool (BP) memory on your server, which means that after a process references a page, the server should retain it in the buffer pool for a minimum of 5 minutes before it is flushed to disk. If the buffer pool is flushing pages in less than 300 seconds, there is probably a problem.
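This rule of thumb can be checked in one query by reading the buffer pool size alongside the current PLE. A sketch (the `Database pages` counter is the number of 8 KB pages currently in the buffer pool):

```sql
-- Compare current PLE against 300 seconds per 4 GB of buffer pool.
DECLARE @ple bigint, @bp_gb decimal(10, 2);

SELECT @ple = cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';

SELECT @bp_gb = cntr_value * 8.0 / 1024 / 1024  -- 8 KB pages -> GB
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name = 'Database pages';

SELECT @ple AS ple_seconds,
       @bp_gb AS buffer_pool_gb,
       CAST(@bp_gb / 4 * 300 AS int) AS recommended_threshold_seconds;
```

A `ple_seconds` value persistently below `recommended_threshold_seconds` is the condition this alert reports.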
The more memory your server has, the more cached disk reads and writes it can perform. Lack of system memory can cause high non-cached disk reads and writes. Adding memory to your server can help lower physical disk access.