Page File Question

This might sound like a stupid question, but is there any way to set up Windows XP Pro so that it has a Windows-managed pagefile, but I won't have to watch how much of my C: partition (where the pagefile is at the moment) is used up? I tend to use pretty much any available space on my HDDs at all times! :rolleyes:
 
Interesting conversation. Came in late. I've disabled the pagefile on the C:\ drive as well as three others. I haven't noticed any improvement in boot time. I have noticed substantial improvement in gameplay. For some reason my Nvidia and F-Prot icons are not showing up in my toolbar.

I'm running a P4 2.4 GHz with 1024 MB of RAM and a 120 GB HD. So if I used the 1.5 rule, would I then set my pagefile min/max at 1536 MB? Should this be on my primary C:\ drive as well as my other three non-primary drives where I keep games and P2P?

It's really weird though, you guys, because I'm not seeing any glitches, hitching, or slowness whatsoever within programs. There has got to be some paging taking place with the pagefile, or I wouldn't be able to bring up programs with saved information... correct?
 
Windows Will Page. Period.

If you want to stop Windows paging, steal the source and modify the VMM system so it doesn't.

However, PCs have been paging since Intel released the 80286 with almost-protected mode and the 80386 with full protected mode.

Memory is sectioned off in 64 KB segments in DOS, and I believe WinNT uses 4 KB pages.

The hardware pages, Windows pages. Set your pagefile to either Windows-managed or ~1.5 x RAM size and be happy.
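
To put numbers on that rule of thumb, here's a rough sketch (plain Python arithmetic, nothing Windows-specific; the 1.5 multiplier is just the rule of thumb from this thread, not anything Windows enforces):

```python
# Rough sketch of the 1.5 x RAM rule of thumb discussed in this thread.
# Nothing here queries Windows; it's plain arithmetic for illustration.

def pagefile_size_mb(ram_mb: int, multiplier: float = 1.5) -> int:
    """Return a suggested fixed pagefile size (min = max) in megabytes."""
    return int(ram_mb * multiplier)

if __name__ == "__main__":
    for ram in (512, 1024, 1536):
        print(f"{ram} MB RAM -> {pagefile_size_mb(ram)} MB pagefile")
    # 1024 MB RAM -> 1536 MB, which matches the min/max figure asked about above.
```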
 
he he... love a good page file thread... always seem to get them coming up w/ the advent of grossly over-populated RAM slots. Try this: follow the rule of matching or doubling the page file size to the amount of RAM you have... as the MAX value... and set the lower value at 50 MB. Then keep an eye on the size of the thing over time. I have a machine w/ 1.5 gig of RAM in it (yeah yeah... it was a want, lol) that I've manually set a custom paging file on... minimum 50 MB, maximum 500 MB. I've looked at the size of it several times and it never [seems to] go over the 50 MB mark.

As far as I'm concerned the proof is in how the OS reacts... if it wants the higher value it's there, so answer me why it never gets past the lower value.

I'm now an advocate for 'reasonable' amounts of RAM. Unless you're into real hardcore video editing or have a massive RAM disk running... I don't think anything over 512 MB is really necessary.
 
Read it. So I'll set the max at 1.5 * available ram. All that should mean is that at the initial bootup creation it should be the minimum I set and will expand, dynamically, as the system requires, if it requires it. Correct?
 
Lonman said:
Read it. So I'll set the max at 1.5 * available ram. All that should mean is that at the initial bootup creation it should be the minimum I set and will expand, dynamically, as the system requires, if it requires it. Correct?
ya, that's it

for just about every user with 512 MB or above, your system will never have to expand the initial minimum of 1.5 x RAM

video or other memory-intensive users might need to raise the initial minimum above that figure, but only a very select few.

you should use the free Sysinternals PageDefrag program to make the pagefile contiguous, and then, once it's contiguous, it will never have to be defragged again... unless you manually change the settings again

also, no matter what the setting, if the os ever tells you it's expanding the pagefile, or that you are running low on virtual memory, then your initial minimum needs to be increased from wherever you had it set.

if you do change the initial minimum, it will need to be defragged once again
 
I'm still not sure we're on the same page[file]. ;)

The attached is a pic of the GUI window where the page file settings can be set. I've set a custom initial size of 50 MB and a custom maximum size of 2250 MB (for a system w/ 1500 MB of RAM: 1500 * 1.5 = 2250).
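
Side note: the same numbers that GUI dialog writes end up in the registry under the PagingFiles value, so here's a minimal read-only Python sketch (standard winreg module, Windows only) that just prints whatever is stored there. The key path and value name are the well-known Memory Management location; how the size fields are interpreted for a system-managed file is an assumption worth verifying on your own box:

```python
# Minimal read-only sketch: print the pagefile entries the GUI writes.
# Windows only; uses only the standard-library winreg module.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    entries, value_type = winreg.QueryValueEx(key, "PagingFiles")

# The REG_MULTI_SZ value comes back as a list of strings such as
# "C:\pagefile.sys 50 2250" (path, initial size in MB, maximum size in MB).
# How a system-managed pagefile is encoded here is an assumption; check yours.
for entry in entries:
    print(entry)
```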

I'm assuming that if it needs to expand [past the initial creation size of 50 MB] it will continue to do so, without warning, until the maximum of 2250 MB is reached. I'm also assuming that if pagefile.sys hardly ever gets past the initial creation size of 50 MB [during normal uptime operations], it can be surmised that the paging operations, as outlined in your paper, don't require more physical disk space. And, if pagefile.sys does go beyond the initial creation size of 50 MB, I also surmise that on the next reboot it will revert to the initial creation size, because that's how I have it set up. I guess this will raise the question of whether or not there's a performance hit from requiring XP to expand [if necessary] the initial creation size of pagefile.sys during normal operation... my guess is not much, if at all.

I'm by no means arguing against the necessity of a paging file. I do however question the assumption that to gain maximum paging benefits the initial size of the thing should be 1.5 * RAM just because XP 'might' need it.
 

Attachments

  • page_file_settings.jpg (69.5 KB)
lonman, if area isn't available for modified pages to be released to, then they'll remain in memory no matter how long ago they were accessed; therefore, unmodified pages will get released instead whenever windows does want to reclaim memory.

the pagefile is only for modified pages...code simply gets released and reclaimed from the original .dll or .exe.

you'll be impeding the memory manager model, you won't be impeding paging...

the dlls or the exes or even the kernel are pages that will get released instead if you force windows to keep modified pages in physical memory... windows will do its best to keep you running smooth, even if you don't have ideal settings

your settings would be fine if you were short on hard drive space, as a necessary sacrifice caused by your hard drive shortfall

if you want as small a pagefile as you can get away with without taking a hit (a hit that is almost never noticed... it's just a dll reloading when it should have been in physical memory), just do this... load as much as you can conceivably load during a heavy work day... look at Task Manager, look at pagefile usage, double the peak usage for good measure, keep expansion enabled, and you should be fine with that setting
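
if you'd rather script the watching than stare at Task Manager, here's a rough Python sketch that shells out to the built-in typeperf tool and samples the standard "Paging File" perfmon counters... the sample interval and count below are arbitrary picks of mine, not anything special:

```python
# Rough sketch: sample the pagefile-usage counters that Task Manager / perfmon
# expose, using the built-in typeperf command (Windows only). The counter names
# are the standard "Paging File" performance object; interval/count are arbitrary.
import subprocess

counters = [
    r"\Paging File(_Total)\% Usage",
    r"\Paging File(_Total)\% Usage Peak",
]

# -si 5 = sample every 5 seconds, -sc 12 = take 12 samples (about a minute).
result = subprocess.run(
    ["typeperf", *counters, "-si", "5", "-sc", "12"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # CSV lines: timestamp, % usage, % usage peak
```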

here's a great example though, even with doing what I just gave directions to do

I just got a gps program for my laptop which I leave running while I work in my car... this thing is a bear, and even if I had followed the advice I just gave before I got the gps program, I would have had to increase my initial minimum to accommodate this program

that's if I remembered to even do it.

this is the very reason the default is so high... there is nothing lost in keeping it high, and you are prepared for the future if or when you become more of a power user.
 
So you're saying the custom maximum level setting means nothing? That XP won't dynamically increase the size of the paging file when it needs to for optimal paging operation? That unless I set the initial minimum to the [preferred] maximum size I'm impeding paging performance?
 
Alright. As a result of this conversation I went looking for another source of information. First stop was the Windows Knowledge base... basically all I got from there was a bunch of hoo-haw about the 1.5 recommended size, bleh. :p

I found a very informative set of articles written in enough layman's language that I could easily understand it. It also supports many of my own current theories, which is also helpful. ;)

I'd encourage you to read through the entirety of what this guy has to say, and do additional research and reading such as can be found in perris' article posted here at OSNN.net before you jump off on a tangent regarding manipulating your page file size/location, etc.

The beginning is here: http://www.theeldergeek.com/paging_file.htm ... be sure to read the remaining pages that are linked at the bottom of the page regarding this topic.

Basically, what I found out was that I can set a low initial minimum size (say 2 MB) and a reasonable maximum size limit (say the total of RAM installed if it's 768 MB or higher), and XP WILL dynamically increase it at its own behest as needed, and I won't be stuck with a HUGE file that takes up unnecessary hard disk space.

My recommendations are dependent on the amount of RAM installed. If you have 64 MB (the XP minimum) to 256 MB installed then I suggest letting XP have complete control... no manual changes except maybe putting it on a dedicated hard drive. With 512 MB installed and/or sharing memory w/ onboard anything, study the articles and learn how to determine for yourself your typical system load (through Task Manager) and decide for yourself how to set up your paging file. If you have 768 MB and above, I'd recommend setting a low minimum (I'm personally going to change mine to 2 MB) and a reasonable maximum of at least the amount of RAM you have installed... go as high as you want with this one because it won't get created unless it's needed.
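
Just to spell those tiers out in one place (these are purely my own rule of thumb from this post, not a Microsoft guideline), a quick sketch:

```python
# Sketch of the rule of thumb from the post above. The RAM tiers and the
# "2 MB initial / at least RAM installed as maximum" figures are the poster's
# opinion only, not a Microsoft recommendation.

def pagefile_advice(ram_mb: int) -> str:
    if ram_mb <= 256:
        return "let XP manage the pagefile (maybe move it to a dedicated drive)"
    if ram_mb < 768:
        return "measure your typical load in Task Manager and size it yourself"
    return f"custom: low initial size, maximum of at least {ram_mb} MB"

for ram in (256, 512, 1024):
    print(ram, "MB ->", pagefile_advice(ram))
```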

With large amounts of physical memory available XP will do its best to keep paging operations there... only swapping out to your virtual memory (pagefile.sys) if demands on available physical memory require it.

Update: I tried creating the initial size at 2 MB and XP automatically created the pagefile at the max I specified because it couldn't dynamically expand the page file on startup (silly XP). I set the initial minimum back to 50 MB, which boots up just fine.
 
as for the elder geek article, lonman, it's uninformed... for instance, it says:

When the load imposed by applications and services running on the computer nears the amount of installed RAM it calls out for more. Since there isn't any additional RAM to be found, it looks for a substitute; in this case virtual memory which is also known as the page file.

contrary to that author's understanding, ALL memory seen under the NT family of OSes is virtual memory (processes access memory through their virtual memory address space); there is no way to address RAM directly!!

lowering the size of the pagefile serves no purpose in restricting the amount of paging operations, all it restricts is paging operations to the pagefile... if modified pages have no area to be backed, the os simply goes to the next available candidate... simple... lowering the pagefile changes not one bit when the os releases a page... it only changes which page gets released


with the pagefile, even modified pages are able to be considered in the memory management model; without the pagefile, those pages are forced to remain in physical memory, since there's no backing store for the modified pages

as far as running with a small pagefile and expecting xp to expand it if it needs to... it expands it when it has to, it waits until long after it would have needed to in order to follow the memory management model

the pagefile only expands when the commit charge reaches the commit limit... nothing else will make it expand... if you have 256 MB of physical memory, without a pagefile, that's your commit limit... if your commit charge doesn't reach the commit limit, even with that small amount of memory, the pagefile will not expand
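
for anyone who wants to watch the commit charge and commit limit from a script instead of the Task Manager Performance tab, here's a hedged ctypes sketch against the Win32 GetPerformanceInfo call... the structure layout is the documented PERFORMANCE_INFORMATION; loading it from psapi.dll is an assumption, since the export has moved around between Windows versions:

```python
# Sketch: read the commit charge / commit peak / commit limit that Task Manager
# shows, via the Win32 GetPerformanceInfo API (Windows only). The structure
# layout follows the documented PERFORMANCE_INFORMATION; which DLL exports the
# call has varied between Windows versions, so psapi.dll here is an assumption.
import ctypes
from ctypes import wintypes

class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
psapi = ctypes.WinDLL("psapi")
if not psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb):
    raise ctypes.WinError()

def pages_to_mb(n_pages: int) -> int:
    # Counters are reported in pages; convert to megabytes.
    return n_pages * pi.PageSize // (1024 * 1024)

print("commit charge:", pages_to_mb(pi.CommitTotal), "MB")
print("commit peak:  ", pages_to_mb(pi.CommitPeak), "MB")
print("commit limit: ", pages_to_mb(pi.CommitLimit), "MB")
# The pagefile only grows once the commit charge approaches the commit limit.
```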

as far as the reasoning that with a small pagefile, you won't be stuck with a huge pagefile.

well, that's a self-evident statement, but if you have a hard drive that is big enough to accommodate a huge pagefile, lowering the size of it becomes counterproductive.
 
lowering the size of the pagefile serves no purpose in restricting the amount of paging operations, all it restricts is paging operations to the pagefile
Exactly!!

the pagefile only expands when the commit charge reaches the commit limit... nothing else will make it expand... if you have 256 MB of physical memory, without a pagefile, that's your commit limit... if your commit charge doesn't reach the commit limit, even with that small amount of memory, the pagefile will not expand
I've never once said a paging file is not necessary... I believe it is. But to just accept the 'fact' that I need to commit over 2 gig of disk space to the thing on the chance it 'may' be needed just isn't good enough for me.

as far as running with a small pagefile and expecting xp to expand it if it needs to... it expands it when it has to, it waits until long after it would have needed to in order to follow the memory management model
You say this like it's 'counterproductive.' Counterproductive to what? Paging from physical memory to disk? You've actually made my point for me... the system will perform much faster reading its page 'charges' directly from physical memory than it will by having to get them from the disk. Forcing XP to write paging operations to disk only once it's reached the 'commit limit' is sorta the whole purpose behind having a large amount of RAM in the first place.

as far as the reasoning that with a small pagefile, you won't be stuck with a huge pagefile. well, that's a self-evident statement, but if you have a hard drive that is big enough to accommodate a huge pagefile, lowering the size of it becomes counterproductive.
I'm offering you this argument in reverse... I have a TON of physical memory I want to see my machine actually put to use... forcing XP to hold its paging charges in it as long as possible is PRODUCTIVE from that standpoint... and productive in that I've reclaimed over 2 gig of usable hard drive space.

I'm not sure exactly how much of that 'uninformed' article you read, but I'd like to see you put together an article like the one he has on the very last linked page, about creating a custom Microsoft Management Console for monitoring pagefile use... I thought that was particularly informative and I now have one on both of my machines. With this running for several days of typical operation I'll be able to adjust my high and low ends quite effectively. I also found the information on the third page, 'Sizing the Page File,' to be of particular interest in learning what all that stuff means under the Performance tab in Task Manager.

I always get a good chuckle out of folks that will hack their protected files layer but argue tooth and nail about tinkering with the pagefile, lol. Well, I'm a tinkerer... and as a result of this discussion, and reading the various articles, I feel very comfortable with manually manipulating my page file. So let XP expand it ONLY when it has to... that's my whole argument in a nutshell.
 
for your last paragraph, I never hack my protected files; as far as I'm concerned, the only thing that really speeds us up is user interface customization... but you can and should tinker and enjoy your box if that's your enjoyment... that's what it's there for. as far as asking me to put together an article like that elder geek article, his is uninformed... why would I want to do that? ...and accepting his article as particularly informative, when in fact, as already noted for you, it's uninformed (as I pointed out in my previous post), is not anything I'll aspire to... once he makes it clear he doesn't know that ALL memory in an NT os is virtual, anything he says based on that misinformation takes his incorrect information into progressive misunderstanding

as far as what you say about having a ton of memory, and wanting to hold data as long as possible in memory: you change not one iota for the good how long data is held in memory by making the pagefile smaller, you change it for the bad... you actually FORCE the os to release data that HAS BEEN referenced BEFORE data that HAS NOT been referenced... not good... in your effort, all you change are the pages that the os WILL release WHEN it releases and reclaims memory; making the pagefile smaller doesn't change WHEN the os releases and reclaims, it just changes WHAT the os releases and reclaims... you haven't changed how much data is held in memory, you've only changed WHICH data is held in memory

when I say counterproductive, I mean that what you are attempting to accomplish by lowering the pagefile is not accomplished, it's impeded... for instance, if the least used or longest-ago-used page during your work episode would have been backed to the paging file, and your paging file can't back that page because of your restrictive settings, the OS WILL go to the next candidate instead of the best candidate... that's what the os does.

on the other hand, if what you are trying to accomplish is the saving of that amount of hard drive space... ya, you've accomplished that, and if you need those MBs, you should do it... then what you're trying to do is accomplished
 
well, it's obvious I've managed to piss you off to a certain degree, so I'll stop using your own arguments in favor of my stance on dealing with this issue. You misunderstood my 'article' statement... I was specifically referring to the MMC page... creating a pagefile monitoring plug-in... I guess his progressive misunderstanding didn't get you past the first page... too bad.

I don't see my settings as 'restrictive' btw; the fact that the initial creation size of pagefile.sys is small compared to the standard 1.5 x RAM means little... the maximum value is still generously set for XP to expand it on the fly when it's needed (I'm sure I'll never see a 'low page file' warning). I take that 'range' between the initial and maximum size to be dynamic and not as set in stone or counterproductive as you imply it to be... maybe I'm wrong on that point, but then why would there be the ability to set a maximum limit?

I think what would offer optimum pagefile performance is to have it on its own hard drive and let Windows do with it whatever it wanted, but I'm basing that on information found on more than just the first page of his pagefile dissertation, as well as other sources I've read in the past.

What are some really pagefile intensive operations/programs? I'd like to put my machine through some paces and keep an eye on the MMC pagefile plug-in to fine-tune my practical understanding of this process.
 
lonman, I'm not pissed off... it's an unfortunate style of mine when I'm in discussion... I do have special regard for your opinion in all matters concerning this operating system... though it does peeve me a little bit when someone posts an article as if it's from the Microsoft kernel team, when instead it's uninformed... I do get annoyed at that

and I agree with your point, though it's hard to tell from my writing... that for just about every user, the default pagefile setting is much more abundant than it needs to be.

my only real point is that this abundance is at no price whatsoever.

if it's too abundant for a user's workload, they lose nothing... however, if it's too small for a user's workload, they will take hits they don't even realize

for instance, and as I said before, a dll will be unloaded instead of a much more benign candidate

as far as you not seeing your settings as being restrictive: if the settings are lower than the peak commit charge, they are surely restrictive... that's the barometer

to put it better though, if you have more modified pages loaded than pagefile area, you will be taking hits.


if you want to try to put your memory under pressure, pick up the trial version of Maya, and of course Photoshop... keep a couple of projects going at the same time from these programs

also, load all four user profiles with some intensive programs running on each profile

if you're like me, you'll have some things running in the background as well

I never shut down whatever I open if I think I might go back to it again... Word, Excel, mail client, and now GPS with voice recognition... I just minimize

minimizing releases the memory in a working set, but it leaves the data in memory on the SPL (standby page list), so that if anything is referenced, the os incurs a soft fault instead of a hard fault

anyway, great conversation
 
lonman, I'm not pissed off... it's an unfortunate style of mine when I'm in discussion... I do have special regard for your opinion in all matters concerning this operating system... though it does peeve me a little bit when someone posts an article as if it's from the Microsoft kernel team, when instead it's uninformed... I do get annoyed at that
I really don't think this guy is trying to come off as some kind of expert... he posts a clear disclaimer on the third page of his dissertation, "Sizing the Page File." I'd like your take on that particular page in respect to the Task Manager Performance page... is that an accurate description of Commit Charge, Physical Memory, and Kernel Memory?

I'm going to do a load test on my machines using some of the programs you suggest and monitor the page file usage closely... if nothing else it will be an interesting experiment :)

I don't know about you, but I actually feel more confident now in manually setting the page file's initial creation size and maximum value [in a system that boasts a LOT of available physical memory]. I do recognize the need for a high maximum of at least the amount of RAM installed when there's 768 MB or more installed, 'just in case.'

I'd like to read more on how setting a low initial size hampers XP's ability to page effectively... from my thinking, as I stated before, I've drawn the conclusion that XP is free to expand beyond the initial size up to the maximum value as if the whole thing were there to start with, the only real performance issue with that being, of course, the devil's own fragmentation of the pagefile going on as it expands. If you have any other articles on the subject that would help clarify this point I'd appreciate a link to them :)

It has been a good discussion and I appreciate your taking the time to express your understanding of the subject. Killer guide btw... lays the groundwork well. I guess I'm just a little tenacious when it comes to hard drive space. I have a pretty good understanding of the MFT and all that 'reserved system space' now, and I think when this thread dies out I'll have a pretty good understanding of all things page file too, lol.

Question for you. Do you think leaving the page file on the system drive, an ATA100 7200 rpm, would be faster than throwing in an older ATA66 5400 rpm as a dedicated page file drive?
 
I'm at work... I'll get to what I can as the day goes on. as far as your last question, a dedicated drive that's as fast as your os drive or faster, not too small either (so that the heads don't need to cross more cylinders), and also not set to spin down, is supposedly the best setup... a slower drive is probably counterproductive.

I also don't think there'll be a difference even with the ideal setup, with your amount of memory anyway... a waste of a hard drive imo

also, when you load multiple users, use fast user switching from PowerToys, that'll really help to put memory under pressure... you can also launch multiple instances of Maya if you want

as far as the pagefile expanding when it needs to...as I said, it only expands when the commit charge reaches the commit limit...working sets are trimmed when there's not enough memory....they are permitted to be hogs when there's an abundance of memory.

send me your email address, I'll send you some MS papers that you'll really enjoy...

also, for a better understanding of the multi-threading nature of the NT os in general, take a look at my paper on CPU scheduling, which I titled "system idle process explained"

also, if you're reluctant to put this much hard drive space to your pagefile, just look at the peak commit charge and go a little higher than that... you should be fine till your commit charge increases... you need to have at least as big a pagefile as you have memory in use... which then quite naturally to me means at least as much pagefile as memory installed... otherwise you are quite simply admitting you don't plan on putting all your memory to use
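
a tiny sketch of that sizing rule, if it helps... take the peak commit charge, pad it a little, and never go below the amount of RAM installed (the 25 percent padding is an arbitrary figure of mine, not from any documentation):

```python
# Sketch of the sizing advice above: set the initial pagefile a bit above the
# peak commit charge, and never below the amount of RAM installed. The 25%
# headroom is arbitrary padding, not a figure from Microsoft or this thread.

def suggested_initial_pagefile_mb(peak_commit_mb: int, ram_mb: int,
                                  headroom: float = 0.25) -> int:
    padded_peak = int(peak_commit_mb * (1 + headroom))
    return max(padded_peak, ram_mb)

print(suggested_initial_pagefile_mb(peak_commit_mb=900, ram_mb=1536))   # 1536
print(suggested_initial_pagefile_mb(peak_commit_mb=2000, ram_mb=1536))  # 2500
```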

I just hate the idea of constant monitoring when there's nothing to gain in performance by keeping the settings at a level that needs to be monitored

I'll look at what you want me to look at later, on a fast connection.

send me your email address
 
After some initial testing on my system that's not all bloated (448 MB of RAM), I think I've modified my stance on a low initial pagefile creation size. I discovered that when I was able to 'force' its expansion (I set it up w/ a 50 MB initial minimum and a 671 MB max), XP popped up with the low page file warning... it's not the dynamic process I'd thought it would be. Is there some kind of fail-safe that comes into play at this point that restricts certain paging file operations? What I need to find out now is what exactly happens during the process of a forced pagefile expansion. Do whatever restrictions are automatically imposed eventually get lifted after it is expanded, or do they stay in place? If you have any information that could shed some light on this particular issue I'd love to learn more about it.

I ran a similar 'test' on the machine w/ the 1.5 gig and I was so far away from the commit charge limit it was almost scary, lol. But I now believe that forcing a page file expansion could cause system instabilities. Chances are that when a system gets to this point, something important is going on that I wouldn't want my machine crashing on.

Thanks for your time in this... I haven't enjoyed experimenting with my box like this for awhile.
 
for some strange reason, I have a pagefile fetish...ya, I got your address, and will send some pages soon...great reading

as for expanding the pagefile, and restrictions, ya... working sets are trimmed when the os is asking for more memory than your settings are prepared to deliver; the memory manager will try to figure out how to assign what's available to what's necessary... but once it's actually finished expanding, all should be as if the pagefile were the correct size to begin with.

here's a great analogy which once realized makes the whole idea of reducing the size of the pagefile seem silly

according to documents I'll send to you, nt needs hard drive area for everything it keeps in memory... pagefile or the exe it came from... whether or not that data will ever get paged doesn't matter... the os still needs the address translation, and since it doesn't know which pages are least likely to be used at any given moment before a page needs to be released, it creates an area for everything

for instance, you hire 512 workers... you find your business only needs 256... you have workers that never need to come to work

fine, get rid of their work desk if you need the space, and just get some more work desks when you know you'll be putting them to work

but if you have plenty of space, you might as well keep their area ready for them to come to work, in case you get a contract you didn't expect and you need those workers to just show up without you having to rent space for them

as my analogy in the paper also demonstrates, if you look at an apartment building in Manhattan that is at 100 percent capacity... only about 25 percent of the tenants are ever in their apartment space at one time

that doesn't mean I can reduce the number of beds by 75 percent, does it

most people seem to think that any pagefile area is a common area for all memory... that's not correct... each page needs its own area, not a shared area just because someone else's area never gets used

as the Microsoft documentation will demonstrate, if a page cannot be released to its original exe, dll or file, the spot for it is the pagefile.

I'll send that stuff when I gather enough for you to munch on

/me going to bed to prepare for a long July 4th weekend
 
