"memory optimizing" programs and the myths they tell

Perris Calderon

dealer
I figured I'd post more on memory management

I'm gonna give you the heads up on programs that claim memory management, memory optimizing, memory defrag...whatever

Most people seem impressed by the claim that these programs "release memory", and the programs try to prove it by putting a larger number in the "available memory" counter and then displaying that number.

here's what these programs do, how they put a large number in that "available memory" counter, and why doing that is NOT a good idea;

First, I'll give you the summary, then follow up with the specifics;

A "memory optimizer" will access as much memory in as short a period as possible. this will cause the native memory manager of xp to shrink all other programs working sets, thus giving physical memory to the "optimizing" program at the expense of the usability of programs you are using

Then the program will give that memory up again to the xp memory manager.

In other words, the "memory optimizing" program deliberately increases its own working set at the expense of other processes.

This will force data out of processes that you are working with so that there can be a larger number in the "available memory" counter.

You've taken memory out of use that was in fact being referenced, and you've made all other programs less responsive by decreasing their working sets.
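here's a minimal sketch, in C against the ordinary win32 calls, of what one of these "optimizers" boils down to...the 256 meg figure is just an example value, and this is my own illustration rather than any particular product's code:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    SIZE_T size = 256 * 1024 * 1024;   /* grab a big chunk; example value only */

    /* commit a large block of virtual memory */
    BYTE *block = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (block == NULL)
        return 1;

    /* touch every page so the memory manager must hand this process real
       physical pages, trimming other processes' working sets to find them */
    for (SIZE_T i = 0; i < size; i += si.dwPageSize)
        block[i] = 0;

    /* now give it all back: the "available memory" counter looks bigger,
       but only because everyone else's working set got shrunk */
    VirtualFree(block, 0, MEM_RELEASE);

    printf("memory \"optimized\"...other programs now have to page their data back in\n");
    return 0;
}
[/code]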



more to follow
 
Now the tech talk;

The "available memory" s number going up will give the illusion to people that are uninformed that available memory is free for programs to use...the reverse is true, and these programs take memory out of use, rather then make the memory available to use.

in a normal configuration, xp gives each process a virtual address space of 4 gigs (2 gigs of which is usable by the program itself in the default configuration)

obviously, with correct configuration, programs can allocate virtual memory that will easily exceed the physical memory installed on the box.

obviously again, the operating system needs a memory manager that will distribute physical memory as virtual need presents itself.

the memory manager starts out by assigning each process a portion of the physical memory...this amount of physical memory is called the "working set" of that process

most programs are written to the "90/10 rule"...that is, they spend 90% of their time accessing 10% of their code...thus a working set doesn't need to be nearly as large as a program's features would imply

The memory manager of xp will actually expand and contract a working set according to the user's need
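if you want to watch these numbers yourself, here's a small C sketch (it assumes you link against psapi.lib for GetProcessMemoryInfo) that reads the system-wide "available memory" counter and this process's own working set:

[code]
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    PROCESS_MEMORY_COUNTERS pmc;

    /* the system-wide "available memory" counter these tools like to inflate */
    GlobalMemoryStatusEx(&ms);
    printf("available physical memory:  %I64u KB\n", ms.ullAvailPhys / 1024);

    /* this process's working set: the slice of physical memory it currently holds */
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("this process's working set: %Iu KB\n", pmc.WorkingSetSize / 1024);

    return 0;
}
[/code]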

For reference purposes, I need to mention that memory is handled in fixed-size chunks known as "pages"...at times a process will reference a "page" that is not currently in its "working set"

when that happens, a "page fault" is generated...if there is enough physical memory available, the memory manager simply assigns a page from the available pool...now the next part is sweet: when there's enough physical memory to allow it, this process's "working set" is simply increased at no cost to any other process...nice.

However, if there isn't enough physical memory available, this new page will have to replace a page that's already somewhere in physical memory...the memory manager picks the page that hasn't been referenced for the longest period of time...in most cases that page is the least likely to be referenced again, so it's the safest candidate to take out of physical memory.
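by the way, you can watch the page fault counter climb as freshly committed memory gets touched for the first time...a rough C sketch (assumes psapi.lib; the 16 meg size is purely for illustration):

[code]
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static DWORD page_faults(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    SIZE_T size = 16 * 1024 * 1024;   /* 16 MB, purely for illustration */
    BYTE *block = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (block == NULL)
        return 1;

    DWORD before = page_faults();

    /* the first touch of each committed page takes a page fault; when free
       physical memory is plentiful, the working set simply grows to hold them */
    for (SIZE_T i = 0; i < size; i += si.dwPageSize)
        block[i] = 1;

    printf("page faults taken: %lu\n", page_faults() - before);

    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}
[/code]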

so once the Memory Manager removes a page from a process's working set, it has to decide what to do with the info that was in that page.

If the data has been modified, the Memory Manager puts the page on the modified page list, a list of pages that will eventually be written back to disk (to the paging file, or to the memory-mapped file those pages correspond to).

Once they've been written, the Memory Manager moves pages from the modified page list to a pool called the "standby list".

Unmodified pages go directly to the standby list (you can view the standby list as a cache of file data).

The "stand by list" is one of the sweetest policies of memory management;...this is a list of physical memory that is available for anything at all, but it still has the data that was at one time being used somewhere!...so if that data does happen to get referanced before this physical memory is claimed, the page comes streight from ram...very very nice

The standby list is memory that's also considered by the memory manager as "available memory"...there are other pools that contribute to available memory...pages that held data that's been deallocated, for instance pages that once belonged to processes you've closed down...also, pages that were freed and then filled with zeros by the Memory Manager's low-priority zero page thread.
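for the curious, the system-wide numbers are easy to read with GetPerformanceInfo (psapi.lib again)...a sketch; note the counts come back in pages, and "available" here includes the standby list, which is exactly the point above:

[code]
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi = { sizeof(pi) };
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    /* all counts come back in pages, so multiply by the page size for bytes */
    printf("physical total:     %Iu MB\n", pi.PhysicalTotal     * pi.PageSize / (1024 * 1024));
    printf("physical available: %Iu MB\n", pi.PhysicalAvailable * pi.PageSize / (1024 * 1024));
    printf("system cache:       %Iu MB\n", pi.SystemCache       * pi.PageSize / (1024 * 1024));
    return 0;
}
[/code]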

All of this goes on dynamically, and the memory manager examines working sets once a second...when memory is under pressure, the memory manager will pro-actively remove pages from those working sets which haven't encountered a page fault in a certain time frame...now, when the memory manager pro-actively removes a page from a working set, that page simply goes onto the standby list! In this fashion, the data isn't lost to the hard drive at all, yet the system has prepared for the next page fault before it happens!

It's important to remember that the pages on the standby list are counted as available memory, and equally important to realize that they still retain the data that was in them.

here's something nice; what this tuning mechanism does for idle processes is take pages from them a little at a time, so the working sets of idle processes gradually shrink away...processes that remain idle for a length of time eventually consume almost no physical memory!
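a little sketch that walks the running processes and prints each one's working set, so you can watch the idle ones shrink over time (psapi.lib again; processes you're not allowed to open are simply skipped):

[code]
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    DWORD pids[1024], bytes;
    if (!EnumProcesses(pids, sizeof(pids), &bytes))
        return 1;

    DWORD count = bytes / sizeof(DWORD);
    for (DWORD i = 0; i < count; i++)
    {
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pids[i]);
        if (h == NULL)
            continue;   /* not allowed to look at this one */

        PROCESS_MEMORY_COUNTERS pmc;
        char name[MAX_PATH] = "<unknown>";
        if (GetProcessMemoryInfo(h, &pmc, sizeof(pmc)))
        {
            GetModuleBaseNameA(h, NULL, name, sizeof(name));
            printf("%-24s working set: %6Iu KB\n", name, pmc.WorkingSetSize / 1024);
        }
        CloseHandle(h);
    }
    return 0;
}
[/code]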

OK, now if a process references a page that's no longer in its working set, the memory manager first looks to see if that page is on the standby or modified page list. It will be in one of those lists if the page was removed from the working set and hasn't yet been claimed for another purpose...bringing it back from there is called a "soft page fault", since it doesn't involve a read from the hard drive.

if a requested page isn't on one of the available memory lists, the memory manager has to read the data back in from somewhere on the hard drive...the paging file, an executable, whatever...that's a hard page fault, and it's the expensive kind.

when available memory runs low, the "balance set manager" kicks in and trims process working sets in order to repopulate the lists that make up available memory.

( Meanwhile, back at the ranch )

And now the summary which I posted up on top;

A "memory optimiser" will access as much memory in as short a period as possible so that memory management in xp will shrink all other programs working sets and give physical memory to the "optimizing" program

Then the program will give that memory back up to the xp memory manager

In other words, the "memory optimizing" program deliberately increases its own working set at the expense of other processes.

This will force data out of processes that you are working with so that there can be a larger number in the "available memory" counter.

You've taken memory out of use that was in fact being referenced, and you've made all other programs less responsive by decreasing their working sets.

more to follow
 
and now, for a further understanding of what the pagefile actually does, have a read...it's a little different than what most people suspect;

Xp is a virtual memory operating system: it uses more memory than the computer has installed...on most boxes, the total memory in use would easily top two gigs if it all had to be physical.

Using more memory than installed is a cute little dance, accomplished by providing a storage area somewhere on the disc for every bit of data that is in memory, or might go into memory...this storage area is the place the os will retrieve the information from if that becomes necessary.

data that's in memory but hasn't been referenced for the longest time will be unloaded from physical memory if memory comes under pressure...the physical memory that has been released can then be used somewhere more current...however, if the unloaded data is referenced again later, the OS will obviously need that information to be reloaded once more...this is what paging is.
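a quick C sketch of the gap between the virtual and the physical...one call, no extra libraries:

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };
    GlobalMemoryStatusEx(&ms);

    printf("virtual address space for this process: %I64u MB\n",
           ms.ullTotalVirtual / (1024 * 1024));
    printf("physical memory installed:              %I64u MB\n",
           ms.ullTotalPhys / (1024 * 1024));
    printf("commit limit (roughly RAM + pagefile):  %I64u MB\n",
           ms.ullTotalPageFile / (1024 * 1024));
    return 0;
}
[/code]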

There are different types of data, and different types of data get paged to different files. "Private writable committed" memory doesn't get paged to .exe or .dll files, and code doesn't get paged to the paging file...each type of data has its own location on disc for its paging activity.

Data that's not meant to be modified during use doesn't need, and doesn't get, an area on the disc other than the area it came from...such data uses its own original file when backing store is required...in other words, data that never gets modified never uses the pagefile, even when that data does need to page in and out of physical memory.

to summarize; data that doesn't get modified doesn't use the pagefile...the os simply retrieves pages directly from the original source on the disc! very efficient!

Here's an example; suppose a dll happens to be the best candidate for release...then, when memory is needed somewhere, portions of this dll will be unloaded...unloaded data like this is not written to the pagefile or anywhere else...it's simply discarded...if that data is later referenced once again, the os retrieves those pages from the original location it loaded them from in the first place.

this is true of all pages that don't get modified...nothing in this category ever gets written to the pagefile, since they use their own location as their own private little pagefile.

Obviously, that can't be the method to load and release all that's in memory, since work does get modified while computing.

Data that's been modified (known as "process private, writable, committed" memory) couldn't possibly be retrieved from the original area on the disc that it came from, because it's been changed since it came from there. (obvious once told, isn't it)

data like that goes to a file that is specifically provided for modified pages

thus, the "pagefile"

and so we see the pagefile is the storage area for these modified pages...in addition, to speed the process and to make it as seamless as possible, the os writes the most likely of these pages to the pagefile in the background, long before they need to be released...other, less likely candidates simply reserve space without even being written...this primes the process; however, though the pagefile has pages written to it, MODIFIED PAGES ONLY GET UNLOADED IF THEY ARE THE CANDIDATE THAT WILL BE LEAST NOTICED WHEN RELEASED...they don't get unloaded simply because they've been written to the pagefile.

so you see, the pagefile only presents better options for the memory management model...if the pagefile isn't there, or if it's too small, process private writable data will not be available for release even when it would have been the best candidate...the os will simply release the next candidate instead of the best candidate...not a good thing.

most users won't even notice a performance hit of this nature, or they won't attribute the hit to incorrect pagefile settings...a performance hit like this will manifest as a little hitch of some sort when a feature has to be reloaded that shouldn't have been unloaded, and wouldn't have, had the best candidate been available for memory management.

that's it...pretty simple; the pagefile is the area on the disc for data that's been modified, or will become modified.
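to make the two kinds of backing store concrete, here's a small C sketch...somefile.dat is just a made-up example name, and the sizes are only for illustration; a view of a real file is paged back to that file, while the anonymous mapping created with INVALID_HANDLE_VALUE has nothing behind it but the pagefile:

[code]
#include <windows.h>

int main(void)
{
    /* 1) file-backed section: unmodified pages mapped from somefile.dat can
          always be re-read from that file, so they never need the pagefile
          (code loaded from .exe and .dll files works the same way) */
    HANDLE file = CreateFileA("somefile.dat", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE fileMap = NULL;
    const BYTE *view = NULL;
    if (file != INVALID_HANDLE_VALUE)
    {
        fileMap = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (fileMap != NULL)
            view = MapViewOfFile(fileMap, FILE_MAP_READ, 0, 0, 0);
    }

    /* 2) pagefile-backed section: private writable data like this has no
          original file behind it, so modified pages can only go to the pagefile */
    HANDLE anonMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                        PAGE_READWRITE, 0, 64 * 1024, NULL);
    BYTE *anon = NULL;
    if (anonMap != NULL)
        anon = MapViewOfFile(anonMap, FILE_MAP_WRITE, 0, 0, 0);
    if (anon != NULL)
        anon[0] = 42;   /* the page is now "dirty": only the pagefile can back it */

    /* clean up */
    if (view)    UnmapViewOfFile(view);
    if (anon)    UnmapViewOfFile(anon);
    if (fileMap) CloseHandle(fileMap);
    if (anonMap) CloseHandle(anonMap);
    if (file != INVALID_HANDLE_VALUE) CloseHandle(file);
    return 0;
}
[/code]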

once the pagefile is of optimum size, having a setting "bigger than necessary" does not impede performance...it can't encourage any paging activity, nor affect computing negatively...in fact, a bigger than necessary pagefile can actually help performance, since "bigger than necessary" provides more contiguous area for data to be written...while the pagefile itself never gets fragmented when it's properly sized, the data written inside the pagefile can become fragmented (though that's not likely on a properly sized pf)...obviously, the contents of the pagefile are less likely to be fragmented if there are bigger areas of contiguous free space where it's written.

in addition, Mark Russinovich (author of Inside Windows 2000) told me in a conversation: "some applications want to reserve a large block of its address space for a particular purpose (keeping data in a contiguous block makes the data easy to manage) but might not want to use all of that space."
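here's a small sketch of what that quote describes...reserve a big contiguous range of address space up front, then commit only the piece you actually use (the sizes are made-up example values):

[code]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T reserveSize = 64 * 1024 * 1024;   /* 64 MB of address space        */
    SIZE_T commitSize  = 1 * 1024 * 1024;    /* but only use 1 MB of it now   */

    /* MEM_RESERVE claims address space only: no physical memory and no
       pagefile space is used yet */
    BYTE *region = VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (region == NULL)
        return 1;

    /* MEM_COMMIT backs just the first chunk with real (pageable) storage */
    if (VirtualAlloc(region, commitSize, MEM_COMMIT, PAGE_READWRITE) == NULL)
        return 1;

    region[0] = 1;   /* safe: this page is committed */
    printf("reserved %Iu MB, committed %Iu MB\n",
           reserveSize / (1024 * 1024), commitSize / (1024 * 1024));

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}
[/code]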


Now, while having a pagefile "bigger than necessary" doesn't encourage paging, and presents no performance liability whatsoever, the reverse is not true...having a pagefile that's too small actually DOES encourage paging and DOES present a performance liability...this is true simply because the os is forced to unload data that is not the best candidate, and the likelihood increases that the very data unloaded will be needed again...so a pagefile that's too small means more paging, not less.
 
Pushing back the frontiers of computer ignorance yet again! My hero.
 
Won't 'freeing up' used memory and then making it available clear up any processes which are consuming memory unnecessarily? i.e. if a program allocates itself memory but fails to release it? i.e. memory leaks.
 
the only time a program fails to release memory is when that code is still being referenced..

code that's being referenced is not freed by memory-releasing programs

whatever is being referenced by a program is just going to be reloaded if you aggressively unload the working set
 
the larger the percentage of your memory that's in use, the larger the working sets...it's a myth that you want as much free memory as possible

what you want is enough free memory to handle new work, everything else should be put to use
 
suppose we both have exactly the same workload on the exact same box

if you've "released memory" and I haven't, then my working sets are bigger than yours are, and my programs will be far more responsive
 
on the other hand, if you are about to play a game that is memory intensive, and the game is to the exclusion of everything else you are doing, and you don't want the memory manager of xp to take its time releasing everything that this game will need, then you can and should go ahead and release the memory proactively.

in that case, these programs will serve a nice purpose...but you might as well just shut everything down that's running anyway

in that case, though, running a memory release will also shrink the working sets of processes that may be necessary for stability and smooth computing
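for what it's worth, if you really do want to hand memory back before a big launch, trimming working sets directly is gentler than bullying the memory manager with a giant allocation...a sketch for the current process only (psapi.lib; trimming other processes needs PROCESS_SET_QUOTA rights), and remember the trimmed pages land on the standby and modified lists, not straight on the disc:

[code]
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("working set before: %Iu KB\n", pmc.WorkingSetSize / 1024);

    /* ask the memory manager to trim this process's working set; the trimmed
       pages go to the standby/modified lists, not straight to the disc */
    EmptyWorkingSet(GetCurrentProcess());

    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    printf("working set after:  %Iu KB\n", pmc.WorkingSetSize / 1024);
    return 0;
}
[/code]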
 
shouldn't matter though perris, as XP will free the memory for the game as it's needed. if this is a performance hit, it should only be until the required memory is freed for the first time. All the dormant memory should be paged out and the active app gets the RAM. I have yet to see a "memory leak" on XP; they were very common on 98, hence the reboot-once-a-day fix for it.
 
I agree j79

but some people don't even want to wait for the memory manager to free the memory that's needed.

but I'm certainly with you on that call
 
here's something interesting j79;

xp aggressively releases working sets to the standby list when you minimize a program

if you want a program to remain responsive even though you aren't using it, don't minimize it...just work with a window over window

on the other hand, if you want a program running but using as little memory as possible, just minimize it

this is cute...you can watch it in task manager...watch memory usage while minimizing a program...you'll be quite surprised how quickly xp releases working sets.

and don't forget...that goes to the standby page list...if a page is referenced before that particular page has been claimed from physical memory, it only incurs a soft fault...very nice
 
Perris, I have about a gig of memory with plenty free, however my p2p program consumes a lot of something, so much so that my computer seems to stutter every 5 seconds. The mouse freezes for a split second...
 
