RAM Memory

http://www.aumha.org/a/xpvm.php

This is the significant part from there:

How big should the page file be?
There is a great deal of myth surrounding this question. Two big fallacies are:

The file should be a fixed size so that it does not get fragmented, with minimum and maximum set the same
The file should be 2.5 times the size of RAM (or some other multiple)
Both are wrong in a modern, single-user system. (A machine using Fast User Switching is a special case, discussed below.)

Windows will expand a file that starts out too small and may shrink it again if it is larger than necessary, so it pays to set the initial size as large enough to handle the normal needs of your system to avoid constant changes of size. This will give all the benefits claimed for a ‘fixed’ page file. But no restriction should be placed on its further growth. As well as providing for contingencies, like unexpectedly opening a very large file, in XP this potential file space can be used as a place to assign those virtual memory pages that programs have asked for, but never brought into use. Until they get used — probably never — the file need not come into being. There is no downside in having potential space available.

For any given workload, the total need for virtual addresses will not depend on the size of RAM alone. It will be met by the sum of RAM and the page file. Therefore in a machine with small RAM, the extra amount represented by the page file will need to be larger — not smaller — than that needed in a machine with big RAM. Unfortunately the default settings for system management of the file have not caught up with this: it will assign an initial amount that may be quite excessive for a large machine, while at the same time leaving too little for contingencies on a small one.

How big a file will turn out to be needed depends very much on your work-load. Simple word processing and e-mail may need very little — large graphics and movie making may need a great deal. For a general workload, with only small dumps provided for (see note to ‘Should the file be left on Drive C:?’ above), it is suggested that a sensible start point for the initial size would be the greater of (a) 100 MB or (b) enough to bring RAM plus file to about 500 MB. EXAMPLE: Set the Initial page file size to 400 MB on a computer with 128 MB RAM; 250 on a 256 MB computer; or 100 MB for larger sizes.

But have a high Maximum size — 700 or 800 MB or even more if there is plenty of disk space. Having this high will do no harm. Then if you find the actual pagefile.sys gets larger (as seen in Explorer), adjust the initial size up accordingly. Such a need for more than a minimal initial page file is the best indicator of benefit from adding RAM: if an initial size set, for a trial, at 50MB never grows, then more RAM will do nothing for the machine's performance.
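
To make the sizing suggestion above concrete, here is a minimal sketch (my own illustration, not from the article) of the rule: initial size is the greater of 100 MB or enough to bring RAM plus page file to about 500 MB, with a generous maximum left available for expansion. Python is assumed purely for illustration.

# Rough helper for the sizing rule quoted above (illustrative only).
def suggested_pagefile_mb(ram_mb):
    # Initial size: the greater of 100 MB, or enough to bring RAM + page file to ~500 MB.
    initial = max(100, 500 - ram_mb)
    # Maximum: leave generous headroom so expansion is never blocked (700-800 MB or more).
    maximum = max(800, initial)
    return initial, maximum

for ram_mb in (128, 256, 512):
    print(ram_mb, "MB RAM ->", suggested_pagefile_mb(ram_mb))
# 128 MB RAM -> about 372 MB initial (the article rounds this up to 400),
# 256 MB -> about 244 (rounded to 250 in the article), 512 MB and up -> 100.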
 
I just finished using the defrag utility that dealer posted, and I think it says it remains fragmented (one fragment).
I know the image says "Don't defrag" but that's after I ran it. I made sure it was set to "Defrag at next boot".

fragments.png
 
1 fragment does not mean fragmented; the 1 file is the 1 fragment.

Contiguous file looks like (*** file ----free space)

******-----------------------------

2 fragments

*********--------------*********

3 fragments

**------*********----**********
 
actually j, I had a conversation with Alex Nichol on that very subject, and that paper he wrote, and he agreed with me here.

I had to correct him on other aspects of that paper as well, as you will see when you read our conversation.

great thread.


it makes absolutely no sense to lower the initial minimum of the pagefile, and anyone that suggests doing it has no reason to do it.


on the other hand, it can make plenty of sense to raise the pf, and many people do need to do this
 
OK, since we are both right, at the very least in our own minds, I will say that I am not trying to argue that a larger pagefile is a performance decrease; I do think it is a waste of space. But a machine with low RAM should have a larger pagefile than 1.5x RAM. A PC with 128 MB of RAM will probably frequently want/need more than 192 MB of PF, so it should be given a larger initial size, because if that expanded PF fragments it will take a performance hit.
 
I'm going to take advantage of the argument ( :D ) and ask a question, but, to simplify it, I am going to post an image.
At this screen, what should I set it for? (I have 512 MB of RAM)
Image1.png
 
a 1.5x RAM page file is most efficient for page addressing; a smaller page file gets congested with pages easily. It's that simple, it's not rocket science.
 
Originally posted by j79zlr
OK, since we are both right, at the very least in our own minds,

well, you are gonna hate me for this one j, but no, we are not both right

this is not an opinion

this is a performance fact

there is absolutely no gain by lowering your pagefile

yes, many people won't take a hit for doing it

but absolutely nobody will take a gain


and some people will absolutely take a hit...including people with two gigs of ram...ask vid pro, for one

so

this is not an opinion

this is a performance fact;

lowering the pagefile will, in the best case scenario, do no good.

and in quite a few scenarios, cause slowdowns

it is incorrect to go below the default

read my discussion with the author of the paper that you provided...it is excellent reading.

in addition, try this thread, for an identical conversation to the one we are having here
 
Where it says :

Initial size (MB): 384
Maximum size (MB): 768

I changed to:
Initial size (MB): 768
Maximum size (MB): 768
Is that right?

Image1.png
 
Originally posted by blinden
a 1.5x RAM page file is most efficient for page addressing; a smaller page file gets congested with pages easily. It's that simple, it's not rocket science.

very well said blinden...in one sentence, you've stated what I've tried to do in paragraphs...nicely done
 
Originally posted by dealer
very well said blinden...in one sentence, you've stated what I've tried to do in paragraphs...nicely done

So mine is right?
768
768
 
Originally posted by Leo
Where it says :

Initial size (MB): 384
Maximum size (MB): 768

I changed to:
Initial size (MB): 768
Maximum size (MB): 768
Is that right?

no

there is no purpose served by disabling expansion, unless, that is, you don't want the OS to prevent you from freezing or crashing if or when your memory comes under pressure

your initial minimum needs to be no less than 1.5x RAM, and you should never turn off expansion, unless your initial minimum is 3x RAM...then you can safely turn off expansion.

since the OS will only expand when the commit charge reaches the commit limit, and since it never expands more than what's necessary, you can enable expansion to 4096 without any fear of using any of it unless you need to use any of it
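
(An illustrative aside, not from the thread: the commit numbers dealer is describing can be read directly. A minimal Python sketch using the Windows GlobalMemoryStatusEx call is below; Python with ctypes on a Windows machine is assumed.)

import ctypes

# Minimal sketch: read the commit limit and available commit via GlobalMemoryStatusEx.
# The commit limit is roughly RAM + page file, i.e. the "RAM plus page file" total
# discussed earlier; the OS only expands the page file as the charge nears this limit.
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit (RAM + page file)
        ("ullAvailPageFile", ctypes.c_ulonglong),   # commit still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

MB = 1024 * 1024
print("RAM:", status.ullTotalPhys // MB, "MB")
print("Commit limit (RAM + page file):", status.ullTotalPageFile // MB, "MB")
print("Commit still available:", status.ullAvailPageFile // MB, "MB")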
 
So dealer, could you please suggest the right settings?
I'm getting lost here... :(
I apologize for that.

I have 512 MB.
What should I set it for? And what's expansion?
 
this is not an opinion

this is a performance fact

there is absolutely no gain by lowering your pagefile

yes, many people won't take a hit for doing it

but absolutely nobody will take a gain

I said that; I was saying that having a larger than 1.5x RAM pagefile on a low-RAM system is important.

And I didn't claim a performance increase by setting it lower, only that a larger one wastes HDD space.

There is 100% no denying that setting a larger minimum PF for a small-RAM system like 128 MB will be beneficial. The pagefile does not have to expand to accommodate the higher requested usage then. If it has to expand and it gets fragmented there is a performance hit.

What was the original argument? :D

I don't even remember.

BTW Vidpro uses a 512MB PF with between 1 and 2 GB of RAM, and has told me he has never seen the system use more than 256MB of it. That is doing whatever crazy video editing he does. But I don't want to bring him into this.
 
leo

set your PF to 1.5x RAM (with your 512 MB, that's an initial size of 768 MB), expansion to 4096.

then defrag the file

j

on this we will absolutely agree

most people will not use all of their pagefile for some time to come..

so, for most people, there will be no performance hit by lowering it

so, we do agree on this

but my point is, there is absolutely no point in lowering the initial minimum, is there?

you want to say to yourself, "I am now not wasting a gig of hard drive space"

well j

actually, when you take a resource out of use, that is the waste, isn't it.

putting an unused resource to use, any use, is hardly a waste....this is how we get the most efficiency from our resources, and our operating system

now, that we have found some common ground, I will point a few things out, which I pointed out to Alex Nichol, but I might as well repeat myself here;

so, today, you don't access 100 percent of your pagefile.

so what?

soon, you will

experimenting with a trimmed pagefile will fail, as soon as you load a program that is sophisticated enough to take advantage of the ram that is now available.

this time is already here, for as resources grow, programmers write code that can use the resources available

and further

I just moved to n.y to take care of some family issues.

I made a user account for my dad, so I could teach him about computers, and still maintain some privacy.

so, with fast user switching invoked, of course, we are written to the pagefile when we switch users.

DOUBLE THE NORMAL PAGEFILE USE

now, add my sister's AOL, and an account for her, and boom...triple pagefile use.

all at a family crisis, where the last thing I wanted to do was worry about my box

SO, for those people that think they are getting away with "saving" some hard drive space to hang around and do nothing.

these people would take a huge performance hit with fast user switching, if they lowered their pagefile

and what about those people that just got an animation program

your recommendation would have slowed people like this down, and they would not have known it.

however

default would not have slowed them down one bit

excellent
 
by the way j

I keep my posts to a minimum over at tweak xp, as it seems my posts annoy someone over there.

so, I'd like to take the chance here on congratulating you on becoming a moderator at tweak xp

I always click on your links, and I can usually pick up some good information from your posts.

and that great group policy post you made over here is an amazing boon

great job
 
First off, thanks dealer I appreciate it.

I understand the user switching loads everything, and is a reason I don't use it. Yes XP will page no matter how much RAM, but the less it does the better it performs. And each case is different, there is no one-size-fits-all PF size. If you do video editing, adjust accordingly.

I'm gonna stop here, just because this can seriously continue endlessly.
 
Originally posted by j79zlr
First off, thanks dealer I appreciate it.

I understand the user switching loads everything, and is a reason I don't use it. Yes XP will page no matter how much RAM, but the less it does the better it performs. And each case is different, there is no one-size-fits-all PF size. If you do video editing, adjust accordingly.

I'm gonna stop here, just because this can seriously continue endlessly.

I'll stop here too j, however, not without getting the last word:p

there is no reason to adjust a pf accordingly, if you have default, as default will suit fine at no cost in performance, AND THAT IS THE VERY POINT, SO YOU MAKE IT FOR ME...your setting will slow quite a few people down, and speed absolutely no person up

now, for some great information for you, as I try to keep the techno speak to a minimum, but I think you'll appreciate the following ;

Heap allocation algorithms (used to manage space in the pagefile) work better if there's tons of free space...at least 50% of the total. By this metric you want the default pagefile size large enough that the peak "% Usage" for the Paging File object (use Performance Monitor, not Task Manager) stays below 50%, without needing the pagefile to expand.

in this case, a performance hit may not be recognized with a too-small pagefile, but it will definitely be there.
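
(Illustrative sketch, not from the thread: the counter described above can also be read programmatically. This assumes Python with the pywin32 package installed; the counter path is the standard Performance Monitor one.)

import time
import win32pdh  # from the pywin32 package

# Minimal sketch: read the page file "% Usage" counter and compare it with the
# 50% rule of thumb described above.
query = win32pdh.OpenQuery()
counter = win32pdh.AddCounter(query, r"\Paging File(_Total)\% Usage")
win32pdh.CollectQueryData(query)
time.sleep(1)                      # take a second sample so the value is valid
win32pdh.CollectQueryData(query)
_, usage = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_DOUBLE)
win32pdh.CloseQuery(query)

print("Page file usage: %.1f%%" % usage)
if usage >= 50:
    print("Peak usage is at or above 50% - consider a larger initial page file.")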

in addition;

now, to see what it would take if the entire virtual address space were resident in physical RAM:

go to Performance Monitor, Process object, and look at the "Virtual Bytes" counter for a process (this is a dynamic number; it can change as the process loads and unloads DLLs, creates and terminates threads, etc.).

all right, now look at the "Virtual Bytes" counter for the _Total instance, representing the sum of all processes.

and now you see why even two gigs of memory is not enough to keep paging from happening.

ha...this total doesn't even include the paged pool, file system cache, and whatever else is in system address space

virtual memory (paging is an important part of it, but not all of it) allows us to get by with almost none of the several gigabytes of RAM that all that code actually addresses
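
(Again illustrative and not from the thread, assuming pywin32 as in the earlier sketch: the same query pattern reads the total "Virtual Bytes" counter dealer points at.)

import win32pdh  # pywin32, as above

# Minimal sketch: total virtual address space reserved by all processes combined.
query = win32pdh.OpenQuery()
counter = win32pdh.AddCounter(query, r"\Process(_Total)\Virtual Bytes")
win32pdh.CollectQueryData(query)
_, total_virtual_bytes = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_LARGE)
win32pdh.CloseQuery(query)

print("Total Virtual Bytes across all processes: %d MB" % (total_virtual_bytes // (1024 * 1024)))
# This total is typically far larger than installed RAM, which is the point above:
# some paging always happens, even with a couple of gigabytes of RAM.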
 
I have 512 MB of RAM; I set my PF on a data partition at a min and max of 1.5x RAM.

First I set the PF to none, rebooted, then when I saw that there was no PF, I set it to 1.5x RAM on my data partition. (Of course I defragged both my C: and D: partitions first.) Now I have one large green PF on my data partition. It should not be fragmented, or am I wrong?
 
