Creating a custom Page File for Virtual Memory

Umm, you know, 'cause I am not having any issues at all now, why are we having this convo LOL
 
Big is not a performance issue whatsoever; this is benchmarked and documented by Microsoft.


In addition, while the pagefile doesn't fragment as a file, the contents of the pagefile are subject to fragmentation until you reboot, at which point they are ignored.

But until the reboot, the smaller the pagefile, the less likely there will be a contiguous block for new paging info.

So you are also causing more internal fragmentation of the pagefile by having it so small that the OS wants to expand it.
 
But it ain't so small, it's 1.5 GB :s (768 MB on one drive, and 768 MB on a RAID 0 array; yes, different drives as far as Windows is concerned).
 
Gonzo, that statement is like saying, "it ain't so fast, I used to get away with 40 miles an hour".

Just like you needed more RAM, you need an appropriately sized pagefile to accommodate the memory address space.

The more RAM, the bigger the pagefile needs to be... this will always be the case with this kernel.

Memory in NT is a two-stage process... it's addressed first and assigned second.

It's obvious: if your pagefile is expanding, then "it ain't big enough"... every bit of memory assigned needs its own separate address.

Just because the OS only writes a given amount of info from memory doesn't mean the rest of memory doesn't need the address area.
 
Here are the reports for the PF drives.

Volume Paging File (Z:):
Volume size = 2,996 MB
Cluster size = 4 KB
Used space = 785 MB
Free space = 2,210 MB
Percent free space = 73 %

Fragmentation percentage
Volume fragmentation = 0 %
Data fragmentation = 0 %

File fragmentation
Total files = 7
Average file size = 20 KB
Total fragmented files = 0
Total excess fragments = 0
Average fragments per file = 1.00
Paging file fragmentation
Paging/Swap file size = 768 MB
Total fragments = 1

Directory fragmentation
Total directories = 5
Fragmented directories = 0
Excess directory fragments = 0

Master File Table (MFT) fragmentation
Total MFT size = 35 KB
MFT records In Use = 24
Percent MFT in use = 68 %
Total MFT fragments = 2

--------------------------------------------------------------------------------
Fragments File size Most fragmented files
None

defragged, long time ago..

Volume 60gb Striped (K:):
Volume size = 58,642 MB
Cluster size = 4 KB
Used space = 35,473 MB
Free space = 23,169 MB
Percent free space = 39 %

Fragmentation percentage
Volume fragmentation = 0 %
Data fragmentation = 0 %

File fragmentation
Total files = 10,891
Average file size = 4,329 KB
Total fragmented files = 904
Total excess fragments = 1,290
Average fragments per file = 1.11
Paging file fragmentation
Paging/Swap file size = 768 MB
Total fragments = 1

Directory fragmentation
Total directories = 402
Fragmented directories = 25
Excess directory fragments = 414

Master File Table (MFT) fragmentation
Total MFT size = 17,968 KB
MFT records In Use = 11,307
Percent MFT in use = 62 %
Total MFT fragments = 14

defragged a couple of days ago.


sorry mate, this used to be a problem, not any more, and it was back in 512mb days :)
 
it's still not a problem, the only problem is when the pagefile is too small.

There is never any fragmentation of the pagefile itself... fragmentation of the pagefile is an issue only if your pagefile is set too small and the OS is therefore always expanding it. That should never be any user's setting.

As I say, if the OS tries to expand your pagefile and can no longer do it, then the OS starts unloading DLLs and EXEs.

that has to happen
 
Gonzo, the only time a pagefile is too small is if you have it lower than the amount of memory in use.

Or, even then: if the commit charge ever reaches the commit limit, and the OS tries to expand it,

then it's too small

But a static setting does not help; it just causes paging from areas that should not page.

got to go to work...read the paper that's posted on that link up above
 
GoNz0 said:
sorry mate, this used to be a problem, not any more, and it was back in 512mb days :)
before I go, this is the biggest point

Back in the 512 MB days you had the issue, and your pagefile was not 1.5 gigs like it is today when set to the default; it was 712 MB... too small for your gaming, and this is obvious.

Then you fixed it at 1.5 gigs, and poof, no problems.

there you go

If you set the initial minimum at 1.5, with expansion to whatever, 4 gigs, it doesn't matter, since the OS will only take what it needs; it will not take the 4 gigs you allow it.

This would have given you the exact same thing: a static setting of 1.5 gigs, even though expansion was enabled.

This is because the commit charge would never have hit the commit limit; it would remain absolutely static, just the same as if you had not left expansion enabled. However, if you leave expansion enabled, the OS will be prepared to accommodate the next huge program that you don't even know you want yet.

If set to that size, but static, it will therefore not be able to accommodate you when you might need more virtual memory in the future.

that's the point

The way to create a static pagefile is to have the initial minimum so large that the commit charge never reaches the commit limit.

If the commit charge does reach the commit limit, it's obviously too small, and you want it to expand until you get a chance to adjust your settings... then it needs to be increased.

Every time you adjust the settings, it will probably need a defragmentation if you want it contiguous.
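Perris's sizing rule can be sketched in a few lines of Python. This is a toy model only: the function names, the 1.5x headroom, and the 2x maximum factor are illustrative assumptions, not anything Windows actually uses.

```python
# Toy model of the rule: pick an initial minimum large enough that the
# commit charge never reaches the commit limit (roughly RAM + pagefile),
# but leave expansion enabled as a safety net. All numbers are assumptions.

def pagefile_bounds(ram_mb, peak_commit_mb, headroom=1.5, max_factor=2.0):
    """Return (initial_mb, maximum_mb) for the pagefile.

    initial: sized so the commit limit comfortably exceeds the worst
             commit charge observed.
    maximum: larger still, so the OS can expand in an emergency.
    """
    needed = int(peak_commit_mb * headroom) - ram_mb
    initial = max(needed, ram_mb)          # never smaller than RAM here
    maximum = int(initial * max_factor)    # room for the "next huge program"
    return initial, maximum

def will_expand(ram_mb, initial_mb, commit_charge_mb):
    """Expansion kicks in when commit charge reaches the commit limit."""
    return commit_charge_mb >= ram_mb + initial_mb

initial, maximum = pagefile_bounds(ram_mb=512, peak_commit_mb=900)
# With 512 MB RAM and a 900 MB peak commit, the commit limit stays above
# the peak, so the pagefile never expands and remains contiguous.
assert not will_expand(512, initial, commit_charge_mb=900)
```

The point of the model is the same as the post: if `will_expand` ever comes back true in day-to-day use, the initial minimum was too small and should be raised (followed by the one-time defrag).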
 
Hmmm, so from what you're saying, perris, where I have 512 MB of RAM, my fixed pagefile of 1400 MB is not right??? Also, going back to what someone else said: I have two HDDs, with Windows on one drive and the pagefile on the other. I was under the impression this was the best performance situation, not both on the same drive...
 
indyjones said:
Hmmm, so from what you're saying, perris, where I have 512 MB of RAM, my fixed pagefile of 1400 MB is not right???.....
Well, if your fixed pagefile is the same size the expanded one would have been when maxed out, it's the same thing as having the benefit of expansion.

Expansion is for people that want a pagefile as small as they can get away with, without a performance hit.

I'm with you, and so is Microsoft: larger is always the better choice. My pagefile is 2 gigs; expansion will never happen, so leaving it expandable to 4 gigs is just an exercise.

So you are right: fixed at 1400 MB will probably always be plenty for your use, since you've fixed it at a size it would never expand to anyway.

probably
 
Shouldn't this post and the post I linked to be merged, since they are the same thing?
 
Cheers perris, I remember discussing this with you back in the xp-eriance days, when I made the mistake of removing my pagefile altogether :)
 
indyjones said:
Hmmm, so from what you're saying, perris, where I have 512 MB of RAM, my fixed pagefile of 1400 MB is not right??? Also, going back to what someone else said: I have two HDDs, with Windows on one drive and the pagefile on the other. I was under the impression this was the best performance situation, not both on the same drive...

Having it on the second drive could cause a performance hit, if that drive spins down and then has to spin back up to read the pagefile.
 
Not again. No, it doesn't; the best placement is on a second drive on a separate controller.
 
perris said:
the pagefile will fragment in these situations:

if you adjust the minimum size, or if you change the amount of memory and the OS changes the initial size

ALSO

Diskeeper will put your pagefile in a different area on your drive, and can fragment the PF also.

Use the free Sysinternals program to monitor your pagefile extents, and you will see that ONCE IT IS CONTIGUOUS it remains contiguous even after expansion (once you reboot).
All of this is explained in the paper that's posted in the thread Xie cites; you guys should read that.

once it's contiguous, it remains contiguous forever on a healthy drive

If expansion is invoked, the expanded extent is removed on reboot, and the original pagefile HAS to be in the original condition.

The issue is that you've made the initial minimum too small, and your operating system is expanding the pagefile.

That's the point: you must not have a pagefile so small the OS wants to expand it. When it does expand, the fragmentation IS ONLY FOR THE EXPANSION EPISODE, and you have a pagefile whose initial minimum needs to be increased, which of course will have to be defragmented ONE TIME, and then it will always remain defragmented.

Well, I've never touched the PF ever. Also, FYI, Windows' internal defrag program is a cut-down version of Diskeeper (i.e. it uses the same core instructions as Diskeeper) (from what I've read).
 
OY!

This has gotten interesting. First of all, certain things need to be clarified.

What drive should the page file be on??

The page file needs to be on a separate hard drive, on a separate channel from what you are currently using for your OS and for GAMES or applications. I have two controllers (for a total of 4 channels, each having a master and slave) running CDRW/DVD/HDD1/HDD2 all on separate channels, all as the master device on their particular channel.

This is for maximum performance of disk I/O (including the PF), for one important reason: you can load an application and page memory AT THE SAME TIME. (If it's on the same HDD, you go back and forth with write operations at different sectors on the same disk. If it's on the same channel, you lose out on I/O performance. Remember... the rated disk I/O performance is WHEN IT'S BY ITSELF on the channel, not when it's shared with OTHER devices.)
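The overlap argument can be demonstrated in miniature. This is a sketch of the overlapped-I/O pattern only, not a benchmark; the two temp files are made-up stand-ins for "application on drive 1" and "pagefile on drive 2".

```python
# Minimal sketch of why two spindles help: two reads can be issued
# together instead of one head seeking back and forth between them.
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    with open(path, "rb") as f:
        return f.read()

# Stand-ins for "application on drive 1" and "pagefile on drive 2".
paths = []
for label in (b"app-data", b"paged-memory"):
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write(label * 1000)
    tmp.close()
    paths.append(tmp.name)

# Both reads are in flight at once; with separate physical drives on
# separate channels, the hardware can actually service them in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    app_bytes, page_bytes = pool.map(read_file, paths)

assert app_bytes.startswith(b"app-data")
assert page_bytes.startswith(b"paged-memory")
```

On a single drive the same two requests would serialize behind the head; that serialization, not the pagefile itself, is the cost the post is describing.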

Disk spindown is a moot point. Just change your power settings so it doesn't spin down as quickly. (The reason the Deathstar failed was because of spindown after it got all hot...)

If I'm incorrect, let me know. After writing my own OS I'm pretty confident about this. More later since I have to run an errand.
 
Next fun question..

How big should the page file be??

(mostly from perris...great answer there..)
Back in the 512 MB days, if your pagefile wasn't roughly 3 times your RAM (1.5 gigs) you could easily run out of space (and would subsequently have a problem). If you let Windows manage the size, it was set to 712 MB... which is too small for your gaming; this is obvious.

All you had to do was set the max to 1.5 gigs, and poof, no problems.

The problem is now, in the days of the Gig-O-RAM: if you set the initial minimum at 1.5, with expansion to, say, 5 gigs (it doesn't really matter), the OS will only page what is needed - UNLESS you use an OS tweaker and tell Windows NOT to page the kernel (and a couple of other things...). This does NOT mean that the OS will take all 1.5 gigs of your pagefile.

A 1.5 gig minimum just means that there is an initial 1.5 gigs ALLOCATED on the HDD. If your commit limit (the most the OS can commit) is never crossed, the pagefile will never grow; hence, it will be a static size.

HOWEVER, the OS will be prepared to expand the pagefile, since expansion is enabled (remember the 5 gig max setting?), to accommodate the next huge program (you don't even know you want it yet, but it would need more RAM than what's available).

If you set a static size (min and max the same), then the aforementioned program will croak, because you ran out of virtual memory (because of your static setting).

The best initial size is an initial minimum so large that the commit charge never reaches the commit limit. If the commit charge does reach the commit limit, it's obviously too small, and you want it to expand until you get a chance to adjust your settings... then it needs to be increased.

You want your PF to be DYNAMIC (in case of a nasty memory-eating program), but you DON'T want the pagefile constantly changing size.

That causes fragmentation, which is evil.

Once you find a size (initial min.) that works, you will need to defrag, with the ONLY exception being when your PF is by itself on the drive (or at the END of your data, which is unlikely).

A contiguous pagefile is a happy pagefile (and fast).
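The dynamic-but-stable advice above can be sketched as a toy simulation. Everything here is a made-up assumption for illustration: the function name, the workload numbers, and the 256 MB growth chunk are not real Windows behavior.

```python
# Toy model of "generous minimum, expansion enabled": the file stays
# static in normal use, but the enabled maximum saves the day when a
# surprise memory hog shows up. A static file (max == initial) croaks.

def run_workload(ram_mb, initial_mb, max_mb, commit_charges):
    """Return (final_size_mb, expansions, crashed) for a series of
    observed commit charges. A static pagefile has max_mb == initial_mb."""
    size = initial_mb
    expansions = 0
    for commit in commit_charges:
        while commit > ram_mb + size:        # commit limit reached
            if size >= max_mb:               # can't grow any further
                return size, expansions, True
            size = min(max_mb, size + 256)   # OS grows the file a chunk
            expansions += 1
    return size, expansions, False

workload = [600, 900, 1400, 2600]            # surprise memory hog at the end

# Generous minimum + expansion: static until the hog, then it grows.
size, grew, crashed = run_workload(1024, 1536, 4096, workload)
assert not crashed and grew > 0

# Same minimum but static: the hog hits the hard commit limit and croaks.
size, grew, crashed = run_workload(1024, 1536, 1536, workload)
assert crashed
```

Each expansion episode in the model corresponds to a fragmented extent in the real file, which is why the advice is to raise the initial minimum (and defrag once) rather than rely on expansion day-to-day.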

Again, if I'm wrong, let me know. *And let perris know...* I'm confident that we know what we are talking about.
 
