Virtual Memory Question?

Discussion in 'Windows Desktop Systems' started by lilsnoop40, Nov 1, 2002.

  1. lilsnoop40

    lilsnoop40 Guest

    hi, i have 1.0 gig of DDR 333MHz memory and i am getting a virtual memory error saying it's too low. i know how to adjust it, but i don't know what to set it to to make the error go away.

    thanks
     
  2. Code_Cutter

    Code_Cutter Guest

    control panel > system > advanced > performance settings > advanced > virtual memory

    set min and max to 512MB
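    For reference (this isn't from the thread): that dialog stores its setting in the PagingFiles registry value, one "path min max" string per pagefile. A minimal read-only sketch in C, assuming an XP-era MSVC toolchain:

        /* pagefile.c -- print the current pagefile setting the GUI above edits.
           PagingFiles is a REG_MULTI_SZ of "path min max" strings. Read-only. */
        #include <windows.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            HKEY key;
            char buf[1024];
            DWORD size = sizeof(buf), type;
            char *p;

            if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                    "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\"
                    "Memory Management", 0, KEY_READ, &key) != ERROR_SUCCESS)
                return 1;

            if (RegQueryValueExA(key, "PagingFiles", NULL, &type,
                                 (LPBYTE)buf, &size) == ERROR_SUCCESS
                && type == REG_MULTI_SZ) {
                /* REG_MULTI_SZ: NUL-separated strings, double NUL at the end */
                for (p = buf; *p; p += strlen(p) + 1)
                    printf("%s\n", p);   /* e.g. "C:\pagefile.sys 512 512" */
            }
            RegCloseKey(key);
            return 0;
        }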
     
  3. dreamliner77

    dreamliner77 The Analog Kid

    Messages:
    4,702
    Location:
    Red Sox Nation
    look in the "favourite tweaks" thread; this is one of Dealer's favourite subjects. From what I can figure, it is best to just let Windows manage your virtual mem.
     
  4. gonaads

    gonaads Beware the G-Man Political User Folding Team

    It sounds like the page file may be fragmented, and the O/S cannot allocate a single contiguous chunk large enough for a particular operation.

    Try deleting the page file (or setting both min and max to 0), then defragmenting the hard disk, and afterwards resetting the page file to your preferred settings.

    Here's why:
    After you have defragged, all of your available free space will be a single contiguous chunk. If you create a fixed-size page file (min and max the same), it will sit as a single allocated file in the new defragmented space. If it is larger than your available physical RAM, there will be plenty of space for all operations.

    However (there's always a however), this could be down to resource-hungry applications not releasing resources once they've finished with them. Rebooting clears the swap file, hence the issue goes away. There's really not much that can be done about this except to track down the leaky apps and try alternatives, or else run them only when necessary.

    Also... how much free space have you got? You can safely lower the pagefile setting to about 150MB or so for both Min and Max values. Keep them the same so that Windows doesn't expand it. When the thing bitches about not having enough, look in Task Manager under Processes for the biggest hog... that will most probably be your problem... it might be some app you don't know is running that's hogging everything.
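    If you'd rather script that hunt than eyeball Task Manager, here's a small C sketch (mine, not from this thread) that lists each process's working set via the psapi calls; link with psapi.lib:

        /* hogs.c -- print pid, name and working set for every process,
           i.e. the same numbers as Task Manager's Mem Usage column. */
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        int main(void)
        {
            DWORD pids[1024], bytes, i, n;

            if (!EnumProcesses(pids, sizeof(pids), &bytes))
                return 1;
            n = bytes / sizeof(DWORD);

            for (i = 0; i < n; i++) {
                HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                       FALSE, pids[i]);
                char name[MAX_PATH] = "<unknown>";
                HMODULE mod;
                DWORD needed;
                PROCESS_MEMORY_COUNTERS pmc;

                if (!h)
                    continue;   /* no access (Idle/System); skip it */
                if (EnumProcessModules(h, &mod, sizeof(mod), &needed))
                    GetModuleBaseNameA(h, mod, name, sizeof(name));
                if (GetProcessMemoryInfo(h, &pmc, sizeof(pmc)))
                    printf("%6lu  %-24s %8lu KB\n", pids[i], name,
                           (unsigned long)(pmc.WorkingSetSize / 1024));
                CloseHandle(h);
            }
            return 0;
        }

    The line with the biggest working set is usually your hog.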

    Have you ever tried Cacheman? It works really well for those types of problems. You can download the program from here:
    http://www.outertech.com/downloads.php

    Although I don't have the pagefile problem, I did know someone that had the ZERO pagefile problem... He found out, talking to the boys at MicroSquish, that certain Intel chipsets have problems with XP... He downloaded a patch file and the problem was gone...
    Supposedly, to fix the problem, you download the Intel Application Accelerator at http://support.intel.com/support/chipsets/iaa/

    Straight from the guys at Microsoft.

    I hope this works for you :)
     
  5. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    thanx dreamliner...hehe

    ya, do not set the max and minimum to 512.

    just about every user will suffer a performance hit with this setting, especially a person who's already getting a low memory warning.

    since you're getting this warning, and assuming you haven't already touched the page file, you must increase the initial minimum to at least 2x ram (2048MB for your 1.0 gig), and you should set the maximum to 4096.
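    if you want to sanity-check that arithmetic against the ram the machine actually reports, here's a tiny sketch (mine, not perris's), assuming an XP-era MSVC toolchain:

        /* pfsize.c -- compute the "initial minimum = 2x ram" suggestion. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            MEMORYSTATUSEX ms;
            DWORDLONG ram_mb;

            ms.dwLength = sizeof(ms);
            if (!GlobalMemoryStatusEx(&ms))
                return 1;

            ram_mb = ms.ullTotalPhys / (1024 * 1024);
            printf("physical ram:    %I64u MB\n", ram_mb);
            printf("initial minimum: %I64u MB (2x ram)\n", ram_mb * 2);
            printf("maximum:         4096 MB\n");
            return 0;
        }

    with 1.0 gig installed that prints roughly 1024 MB of ram and a 2048 MB initial minimum.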

    that should solve your problem.

    if you ever get the warning again, then simply increase the initial minimum. do not make the maximum the same size as the minimum...ever...it does not make sense in xp, or any nt kernel
     
  6. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    gonaads, i'm very surprised at your post...there is no user that can safely lower the pagefile to that small a setting. what will happen with a setting that small is that the os will find areas you did not allocate, and it will page there, creating more hard drive activity and a more fragmented environment, not a less fragmented one.

    sometimes we suffer performance hits that we don't even realize, and with a pagefile this small the os is definitely slower. some people might not notice, but the slowdown is there regardless.

    you cannot stop xp from expanding the pagefile by setting a static max and min, even though you try. as soon as the commit charge reaches the commit limit, which is the only time the pagefile even wants to expand, and of course is the very time YOU NEED the pagefile to be bigger (and obviously, THE COMMIT CHARGE REACHES THE COMMIT LIMIT SOONER WITH A SMALL PAGEFILE THAN WITH A BIG PAGEFILE), the os will find other areas on the hard drive and page to those areas. it will do it more, it will do it sooner, and it will do it less efficiently than if you allow the os and the pagefile to do the job they were well designed to do
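    (to make "commit charge" and "commit limit" concrete: here's a minimal sketch, not from the thread, that reads both via GetPerformanceInfo; link with psapi.lib. when CommitTotal closes in on CommitLimit, that's exactly the low-virtual-memory moment being described.)

        /* commit.c -- print the system commit charge and commit limit. */
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        int main(void)
        {
            PERFORMANCE_INFORMATION pi;
            pi.cb = sizeof(pi);

            if (!GetPerformanceInfo(&pi, sizeof(pi)))
                return 1;

            /* the counters are in pages; convert to MB */
            printf("commit charge: %lu MB\n",
                   (unsigned long)(pi.CommitTotal * pi.PageSize / (1024 * 1024)));
            printf("commit limit:  %lu MB\n",
                   (unsigned long)(pi.CommitLimit * pi.PageSize / (1024 * 1024)));
            return 0;
        }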

    you can easily prove this to yourself: lower your pagefile to the setting you suggest and take a look at perfmon...you will see more pagefile activity with this setting, not less.
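    if you'd rather log it than watch the perfmon graph, the same counter is one PDH query away (again a sketch of mine, not from the thread; link with pdh.lib):

        /* pfusage.c -- sample the "\Paging File(_Total)\% Usage" counter,
           the same one perfmon charts. */
        #include <windows.h>
        #include <pdh.h>
        #include <stdio.h>

        int main(void)
        {
            PDH_HQUERY query;
            PDH_HCOUNTER counter;
            PDH_FMT_COUNTERVALUE value;
            int i;

            if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
                return 1;
            if (PdhAddCounterA(query, "\\Paging File(_Total)\\% Usage",
                               0, &counter) != ERROR_SUCCESS)
                return 1;

            for (i = 0; i < 10; i++) {      /* ten one-second samples */
                PdhCollectQueryData(query);
                if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                                NULL, &value) == ERROR_SUCCESS)
                    printf("pagefile in use: %.1f%%\n", value.doubleValue);
                Sleep(1000);
            }
            PdhCloseQuery(query);
            return 0;
        }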

    all of this was already documented by microsoft, way back when nt was first released...here's the paragraph, and the reference:

    "...A pagefile that's set too small can lead to overactive disk swapping, or "disk thrashing." The only real drawback with a relatively large swapfile is that you might not have as much disk space available for other uses as you would if you'd followed the pagefile setup recommendations."...

    for your reference: document number Q102020.

    i'm amazed there are people that still believe there can possibly be a benefit to a small pagefile

    now, since that document, microsoft has substantially increased the minimum recommendation, but the facts of that document remain: there is no slowdown whatsoever with a big pagefile, and quite a performance hit if the pagefile is too small

    and now, about cacheman...no, cacheman will make this problem worse. it will release ram that is in use, and if it's not in use, then it's already released by xp.

    cacheman is for millennium and 9x......hardly for nt...that's exactly what the pagefile is for in the first place
     
  7. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    by the way, the simplest way to defrag the pagefile is with this free pagefile defrag program...

    once your pagefile is contiguous, it will never, ever get fragmented again, until you increase the size of the pagefile yourself manually, or increase your ram
     
  8. gonaads

    gonaads Beware the G-Man Political User Folding Team

    This was only to find a possible memory hog, nothing more than that.

    And as for Cacheman, this is directly from their site:

     
  9. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    ah...I didn't realize you just wanted to track down a memory hog, and that was the purpose of lowering the pagefile.

    there is an easier way to do it;

    For every program running on a computer, the operating system allocates a portion of physical memory. This is called the working set. Even if the program is not generating any activity, the operating system allocates memory for the program's working set.

    if you watch perfmon when closing any program, you will see the corresponding pagefile use decrease as you close each one...you can monitor the working set of any program in that fashion
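    same idea in code, if you want numbers instead of the perfmon graph (a sketch of mine, not from the thread; pass a PID on the command line, link with psapi.lib):

        /* wset.c -- poll one process's working set once a second. */
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            DWORD pid;
            HANDLE h;

            if (argc < 2) {
                fprintf(stderr, "usage: wset <pid>\n");
                return 1;
            }
            pid = (DWORD)atoi(argv[1]);
            h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                            FALSE, pid);
            if (!h)
                return 1;

            for (;;) {
                PROCESS_MEMORY_COUNTERS pmc;
                if (!GetProcessMemoryInfo(h, &pmc, sizeof(pmc)))
                    break;          /* the process has exited */
                printf("working set: %lu KB\n",
                       (unsigned long)(pmc.WorkingSetSize / 1024));
                Sleep(1000);
            }
            CloseHandle(h);
            return 0;
        }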

    now, as far as that quote you posted claiming xp support for cacheman:

    gonaads, that is an intel site, not a microsoft site...microsoft advises against the use of memory programs in xp, and they do not support the use of cacheman in xp