Discussion in 'Windows Desktop Systems' started by Capricorn, Apr 11, 2003.
Does adding more RAM add to the used portion of the hard drive?
actually, it does.
you will have larger working sets with more ram, and more code (backup files, etc.) written to disk when you have more ram
in addition, but not exclusively by itself:
if you have the correct pagefile setting (the default is correct), then adding ram will increase your initial minimum.
this is something I remind everyone all the time;
if you increase your ram, you must defrag the pagefile.
but once it's defragged, it will remain contiguous forever more, so forget about it once you've done this
How do you defrag the pagefile????
Uh... with a defragger.
with this free pagefile defrag program...
and no matter what anyone tells you, your pagefile will not get fragmented once you defrag it.
and this absolutely includes a dynamic pagefile
the important thing is to have an initial minimum so big it never has to expand, but even when it does expand, the expanded portions are discarded on reboot, and the original condition of the pf is of course identical to before it expanded
Thank you Dealer for your helpful answer....I'll give a go...Thanx...
thnx for the linky dealer.
/runs off to d/l and install
Will the Disk defragger included with XP get the job done?
no, it cannot defrag the pagefile
Yes I used this prog months ago and since then pf has remained in one single (large) fragment.
is this good to do just because, or only when you increase the RAM?
you defrag the pf only one time
that's when you adjust the initial minimum, or you increase ram and the os adjusts the initial minimum, and that is the only thing that will create fragments... also, the pf is usually fragmented on the installation of the os.
of course, also if you develop bad sectors on your hard drive right where the pf is, but I'm not talking about faulty hardware
the pf cannot get fragmented once it's contiguous, and those old "experts?" that said it did become fragmented had then, and continue to have (since most insist on hanging on to this ridiculous notion), no clue...
and, as I always say, this statement unequivocally includes a dynamic pagefile
I took you out of context, to make a point I have never made
even if your pagefile is fragmented, it is only fragmented for each individual I/O.
this is hard to explain.
suppose you have 100 extents.
well, in order for this to mean anything at all, the information the pf is reading or writing has to cross an extent boundary.
if it does not cross an extent boundary, then for this I/O, the pf is contiguous.
the I/O is so small that your pf would really have to be all over the place for you to notice a performance hit.
but, for the sake of fine-tuning your box, and for those with a hard drive that is pretty full, it's important to defrag the thing.
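The extent argument above can be sketched in code (a toy model, not a real pagefile API: extents here are just byte lengths in file order, and a single I/O only "sees" fragmentation if its byte range spans more than one extent):

```python
def crosses_extent_boundary(extents, io_offset, io_size):
    """extents: list of extent lengths in bytes, in file order.
    Returns True if the I/O [io_offset, io_offset + io_size)
    spans more than one extent."""
    pos = 0
    for length in extents:
        end = pos + length
        if io_offset < end:                   # the I/O starts in this extent
            return io_offset + io_size > end  # ...does it end past it?
        pos = end
    return False  # offset beyond the file: no boundary crossed

# a pagefile split into 100 extents of 4 MB each
extents = [4 * 1024 * 1024] * 100

# a typical 64 KB paging I/O sitting inside one extent: contiguous for this I/O
print(crosses_extent_boundary(extents, io_offset=1024 * 1024, io_size=64 * 1024))   # False

# the same-size I/O straddling the first boundary: fragmented for this I/O
print(crosses_extent_boundary(extents,
                              io_offset=4 * 1024 * 1024 - 32 * 1024,
                              io_size=64 * 1024))                                   # True
```

With 100 extents of 4 MB and 64 KB paging I/Os, only a small fraction of random I/Os land on a boundary, which is the point being made: most individual I/Os see a contiguous file.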
what I am saying is the performance gain will not be noticed on most comps
it won't damage anything if I just did it for no reason? I added ram about a year ago, but never in this computer's life has its pagefile been defragged. I'm not looking for a performance gain, I was just curious if it was ok to do.
and you answered my question, so thank you.
The page file size will only change when you add more RAM if you have not set the PF yourself and have let Windows manage it. Windows uses the very old and not necessarily correct setting of 1.5x RAM, but I made a promise to myself a couple of months ago that I would never discuss pagefile settings in public again.
and here you are doing it
as I said, the default settings are absolutely correct
and anyone that has a pf initial minimum lower than 1.5x is definitely not performing as well as they should be
less than 1.5x (with expansion enabled), I don't care if you have two gigs of ram, is incorrect
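The 1.5x default both sides are arguing about works out like this (just the arithmetic behind the numbers in the thread, not an endorsement of either position):

```python
def default_initial_minimum_mb(ram_mb):
    """Windows-managed pagefile default being discussed: initial minimum = 1.5 x RAM."""
    return int(ram_mb * 1.5)

print(default_initial_minimum_mb(128))   # 192   (the low-RAM figure mentioned below)
print(default_initial_minimum_mb(512))   # 768
print(default_initial_minimum_mb(1024))  # 1536  (the disputed size for a 1 GB system)
```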
dealer, as RAM goes up the necessity for the pagefile decreases. I am not saying that there is no need, but the size needed is definitely lower. Windows will try to use all the RAM if possible; this is a good thing: why have it if you don't use it?
New systems with a gig of RAM DO NOT need a pagefile sized 1536MB; that is absolutely ridiculous.
Older (or newer) systems with low RAM (128MB) will regularly need more than 192MB, so it should be set higher, or else it fragments. I know that the PF will defragment when it gets smaller, but that is not the point. The point is that when the pagefile is being used it should be defragmented. If the OS is trying to page and has to scour the disk to access the PF, it will slow down. When the PF is not in use it doesn't matter how fragmented it is.
Here I go again (slaps self): 512MB is a good setting for all usual systems regardless of RAM. If you use AutoCAD, Photoshop, PSPro, anything that is super memory intensive, then you might need more.
I'm just trying to explain how the M$ setting is flawed. Just because M$ says something doesn't necessarily make it so.
as ram goes up, you need more pagefile, not less...
the pagefile is not a substitute for ram, and it's this flawed reasoning that makes for the flawed belief that with more ram you need a smaller pagefile
and no, the ms documentation is far from flawed, j, and far from old; it's current, written specifically for xp, and it's correct.
ms thinks (heaven forbid) that if you have a lot of ram, you will eventually use a lot of ram... do you think this is incorrect reasoning on their behalf, j?... it's not.
j, I love these conversations, so, if you feel the need to make your points, feel free to do it
the pagefile has been party to so much misinformation, and people like yourself are so used to old notions concerning what the pf actually does (it's not a replacement for ram, so try not to think of it that way, as this is what you are having trouble with). it makes for great reading.
the pagefile is a place for ram information to go, a holding bin so to speak, in the event that you put code into use that is not normally in use, or has not been in use for some time during your work... if you have even 2 gigs of ram, the os wants a place to put it, for the event that you use two gigs of ram, or use more of the code than currently... this is the concept, and a very well done concept it is indeed
the following is correct;
the more ram you have, the more pf you need, not the less, and this is so easy to prove it's hard to believe people still think that if you have more ram you need a smaller pagefile... this is a vm os, j, and this is what gives it speed and stability
here's the simple, practical, in-everybody's-face proof.
just open your taskmanager.
now, open something huge...photoshop, anything.
look at your pf usage... at the same time, look at your ram usage... (use coolmon if you need to)
now, very simply , close the program.
you will notice, the exact amount of ram released is the exact amount that pf usage goes down... it has an address area for the entire working set of the programs you have launched
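The Task Manager experiment described above can be mimicked with a toy commit-charge model (the numbers are made up, and note that what XP's Task Manager labels "PF Usage" is really total commit charge, i.e. memory the system has promised backing store for, not bytes actually written to the pagefile):

```python
class CommitCharge:
    """Toy model: launching a program commits backing store for its
    working set; closing it releases exactly that amount."""

    def __init__(self):
        self.committed_mb = 0

    def launch(self, working_set_mb):
        self.committed_mb += working_set_mb
        return working_set_mb  # remember how much this program committed

    def close(self, working_set_mb):
        self.committed_mb -= working_set_mb

vm = CommitCharge()
baseline = vm.committed_mb

photoshop = vm.launch(180)         # open something huge
print(vm.committed_mb - baseline)  # 180: "pf usage" rises with the working set

vm.close(photoshop)                # now, very simply, close the program
print(vm.committed_mb - baseline)  # 0: the drop exactly matches the rise
```

This is the behaviour the poster is pointing at: the rise and fall track each other exactly, because commit charge reserves address space for the whole working set whether or not any of it is ever paged out.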
this is very simple j
xp needs, for smooth running, an address area for whatever ram is in use... using two gigs?... address allocation for two gigs
if it cannot get it, you will not be running as efficiently.
pretty simple proof