And now, for a further understanding of what the pagefile actually does, have a read...it's a little different than what most people suspect:
XP is a virtual memory operating system, which means it can use more memory than the computer actually has installed. Most users commit well over two gigs of virtual memory in total just to compute normally.
Using more memory than installed is a cute little dance, accomplished by providing a storage area somewhere on the disk for every bit of data that is in memory, or might go into memory...this storage area is the place the OS will retrieve information from if that becomes necessary.
Data that's in memory but hasn't been referenced in the longest time will be unloaded from physical memory if memory otherwise comes under pressure from usage...the physical memory that has been released can then be used somewhere more current...however, if the unloaded data is referenced again later, the OS will obviously need that information to be reloaded once more...this is what paging is.
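The unload-and-reload dance above can be sketched as a toy least-recently-used simulation (the names and the LRU policy here are illustrative only, not the actual XP memory manager):

```python
from collections import OrderedDict

# Toy sketch of demand paging: physical memory holds only a few page
# frames; touching a page that isn't resident is a "page fault" and the
# least-recently-used resident page is unloaded to make room.
class PhysicalMemory:
    def __init__(self, frames):
        self.frames = frames              # number of physical page frames
        self.resident = OrderedDict()     # page -> contents, in LRU order
        self.faults = 0

    def reference(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)    # mark as recently used
            return "hit"
        self.faults += 1                       # must (re)load from disk
        if len(self.resident) >= self.frames:
            self.resident.popitem(last=False)  # unload least-recently-used
        self.resident[page] = f"data-{page}"
        return "fault"

mem = PhysicalMemory(frames=3)
for p in ["A", "B", "C", "A", "D", "B"]:   # D evicts B; B then faults back
    mem.reference(p)
print(mem.faults)   # 5: first touches of A, B, C, then D, then B again
```

Referencing "B" again at the end is exactly the reload case described above: it was the least-recently-used page when "D" arrived, so it was unloaded and had to be paged back in.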
There are different types of data, and different types of data get paged to different files. You can't page "private writable committed" memory back into .exe or .dll files, and you don't page code to the paging file...these types have different locations for their respective paging activity.
Data that's not meant to be modified during use does not need, nor does it get, a new area on the disk other than the area it came from...such data uses its own original file when a backing store is required...in other words, data that never gets modified never uses the pagefile if or when that data needs to page in and out of physical memory.
To summarize: data that doesn't get modified doesn't use the pagefile...the OS simply retrieves those pages directly from the original source on the disk. Very efficient!
Here's an example: suppose a dll happens to be the best candidate for release...then, when memory is needed somewhere, this dll (portions of it) will be unloaded...unloaded data like this is not written to the pagefile or anywhere else...it's simply unloaded...if that data is later referenced once again, the OS retrieves those pages from the original location from which they were loaded in the first place.
This is true of all pages that don't get modified...nothing in this category ever gets written to the pagefile, since those pages use their own file location as their own private little pagefile.
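The clean-versus-modified distinction above can be made concrete with a small sketch (class and field names are hypothetical, chosen for illustration):

```python
# Sketch: a clean, file-backed page (say, a piece of a .dll) needs no
# pagefile. When evicted it is simply discarded, because a later reference
# can re-read the original file. Only a dirty (modified) page would need
# to be saved to the pagefile before its frame is reused.
class Page:
    def __init__(self, source_file, offset):
        self.source_file = source_file   # e.g. the .dll it was loaded from
        self.offset = offset
        self.dirty = False               # never modified -> clean

def evict(page, pagefile_writes):
    if page.dirty:
        pagefile_writes.append(page)     # modified data must be preserved
    # clean page: nothing is written anywhere --
    # the original file IS the backing store

writes = []
dll_page = Page("shell32.dll", 0x1000)
evict(dll_page, writes)
print(len(writes))   # 0 -- unloading a clean page costs no pagefile I/O
```

This is why releasing code pages is so cheap: the eviction itself involves no disk write at all.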
Obviously, that can't be the method used to load and release everything in memory, since work does get modified while computing.
Data that's been modified (known as "process private writable committed" memory) couldn't possibly be retrieved from the original area on the disk it came from, because it has changed since it was loaded. (Obvious once told, isn't it?)
Data like that goes to a file that is specifically provided for modified pages...
thus, the "pagefile".
And so we see the pagefile is the storage area for these modified pages...in addition, to speed the process and make it as seamless as possible, the OS writes the most likely of these pages to the pagefile in the background, long before they need to be released...other, less likely candidates simply reserve space without even being written...this primes the process. However, even though the pagefile has pages written to it, MODIFIED PAGES ONLY GET UNLOADED IF THEY ARE THE CANDIDATE THAT WILL BE LEAST NOTICED WHEN RELEASED...they don't get unloaded simply because they've been written to the pagefile.
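The key point above, that writing a modified page to the pagefile and actually unloading it are two separate events, can be sketched like so (the names `background_write` and `release_best_candidate` are purely illustrative, not Windows APIs):

```python
# Sketch: dirty pages can be written to the pagefile ahead of time (the
# background writing described above), but writing a page does NOT unload
# it -- it stays resident until it is actually the best eviction candidate.
class DirtyPage:
    def __init__(self, name):
        self.name = name
        self.resident = True
        self.written_to_pagefile = False

def background_write(pages, pagefile):
    for p in pages:                      # prime likely candidates early
        pagefile.append(p.name)
        p.written_to_pagefile = True     # written, but NOT unloaded

def release_best_candidate(pages):
    victim = pages[0]                    # pretend this is the least-needed page
    assert victim.written_to_pagefile    # its copy is already on disk,
    victim.resident = False              # so release is instant: no write now
    return victim

pagefile = []
pages = [DirtyPage("doc-buffer"), DirtyPage("undo-stack")]
background_write(pages, pagefile)
print([p.resident for p in pages])       # [True, True]: written, still loaded
victim = release_best_candidate(pages)
print(victim.resident)                   # False only after the real release
```

Because the disk write already happened in the background, the eventual release is just a bookkeeping change, which is exactly what makes the process seamless.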
So you see, the pagefile only presents better options to the memory management model...if the pagefile isn't there, or if it's too small, the process private writable data will not be available for memory release even though it would have been the best candidate...the OS will simply go to the next candidate and release that one instead of the best candidate...not a good thing.
Most users won't even notice a performance hit of this nature, or they won't attribute the hit to incorrect pagefile settings...a performance hit like this will manifest as a little hitch of some sort when a feature has to be reloaded that shouldn't have been unloaded, and wouldn't have been, had the best candidate been available to memory management.
That's it...pretty simple: the pagefile is the area on the disk for data that's been modified, or will become modified.
Once the pagefile is optimum size, having a setting "bigger than necessary" does not impede performance...it can't encourage any paging activity, nor affect computing on the negative side...in fact, a bigger-than-necessary pagefile can actually facilitate performance, since "bigger than necessary" provides more contiguous area for data to be written...while the pagefile itself never gets fragmented at a proper size, the data written inside the pagefile could possibly become fragmented...though that's not likely on a properly sized pagefile...obviously, the contents of the pagefile are less likely to be fragmented if there are bigger areas of contiguous free space where they're written.
In addition, Mark Russinovich (author of Inside Windows 2000) told me in a conversation, (quote) some applications want to reserve a large block of its address space for a particular purposes (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of that space. (unquote)
Now, while having a pagefile "bigger than necessary" doesn't encourage paging and presents no performance liability whatsoever, the reverse is not true...having a pagefile too small actually DOES encourage paging and DOES present a performance liability...this is true simply because the OS is forced to unload data that is not the best candidate, and the likelihood increases proportionately that the very data unloaded will be needed again...thus more paging, not less, when the pagefile is too small.
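That last claim can be shown with a deliberately tiny simulation (page names and the eviction rule are invented for illustration): when the pagefile has room, the cold modified page, the best candidate, is unloaded once and never missed; when the pagefile is full, the OS is forced to evict a hot clean page instead, which promptly faults back in.

```python
# Sketch: two resident pages, one hot and clean, one cold and dirty.
# A new page arrives and something must be unloaded. If the pagefile has
# room, the cold dirty page (the best candidate) goes; if it is full, the
# OS must evict a clean page instead -- here the hot one, which is then
# needed again almost immediately.
def faults_for(pagefile_has_room, frames=2):
    resident = {"hot_clean", "cold_dirty"}
    faults = 0
    for page in ["new_page", "hot_clean", "hot_clean", "hot_clean"]:
        if page in resident:
            continue                            # hit: no work needed
        faults += 1                             # miss: load, maybe evict
        if len(resident) >= frames:
            if pagefile_has_room and "cold_dirty" in resident:
                resident.discard("cold_dirty")  # best candidate released
            else:
                # dirty page can't be unloaded without pagefile space,
                # so a clean (and possibly hot) page is forced out
                clean = [p for p in resident if p != "cold_dirty"]
                resident.discard(clean[0])
        resident.add(page)
    return faults

print(faults_for(True))    # 1: only the new page faults
print(faults_for(False))   # 2: new page, plus the hot page faulting back
```

Same workload, different pagefile headroom, and the cramped configuration does strictly more paging, which is the whole argument in miniature.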