It sounds like the page file may be fragmented, and the O/S cannot allocate a single contiguous chunk large enough for a particular operation.
Try deleting the page file (or setting it to 0 min and max), then defragmenting the hard disk, afterwards resetting the page file to your preferred settings.
After you have defragged, all of your available free space will be a single contiguous chunk. If you create a fixed-size page file (min and max the same), it will sit as a single allocated file in the new defragmented space. If it is larger than your available physical RAM, there will be plenty of space for all operations.
However (there's always a however), this could be down to resource-hungry applications not releasing resources once they've finished with them. Rebooting clears the swap file, hence the issue goes away. There's really not much that can be done about this except to track down which are the leaky apps and try to use alternatives, or else use these apps only when necessary.
Also... how much free space have you got? You can safely lower the pagefile setting to about 150MB or so for both Min and Max values. Keep them the same so that Windows doesn't expand it. When the thing bitches about not having enough, look in Task Manager under Processes for the biggest hog... that will most probably be your problem... it might be some app you don't know is running that's hogging up everything.
Although I don't have the pagefile problem, I did know someone who had the ZERO Pagefile problem... He found, talking to the boys at MicroSquish, that certain Intel chipsets have problems with XP... He downloaded a patch file and the problem was gone...
Supposedly, to fix the problem, you download the Intel Application Accelerator at http://support.intel.com/support/chipsets/iaa/
gonaads, I'm very surprised at your post... there is no user who can safely lower the pagefile to a setting that small. What will happen with a setting that small is that the OS will find areas you did not allocate and page there, creating more hard-drive activity and a more fragmented environment, not a less fragmented one.
Sometimes we suffer performance hits without even realizing it, and with a pagefile this small the OS is definitely slower; some people might not notice, but the slowdown is there regardless.
You cannot stop XP from expanding the pagefile by setting a static max and min, however you try. As soon as the commit charge reaches the commit limit (which is the only time the pagefile even wants to expand, and of course is the very time you NEED the pagefile to be bigger; and obviously, the commit charge reaches the commit limit sooner with a small pagefile than with a big one), the OS will find other areas on the hard drive and page to those areas instead. It will do it more, it will do it sooner, and it will do it less efficiently than if you allow the OS and the pagefile to do the job they were well designed to do.
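The commit-limit arithmetic behind that argument can be sketched with made-up numbers (the RAM, pagefile, and workload sizes below are illustrative assumptions, not measurements from any real machine):

```python
# Simplified model: the commit limit is roughly physical RAM plus the
# pagefile's size, so a small static pagefile lowers the ceiling and the
# workload's commit charge hits it sooner. All figures are hypothetical.

RAM_MB = 512             # hypothetical physical RAM
SMALL_PAGEFILE_MB = 150  # the small static setting being argued against
LARGE_PAGEFILE_MB = 768  # a roomier static setting, e.g. 1.5x RAM

def commit_limit_mb(ram_mb, pagefile_mb):
    """Commit limit ~= RAM + pagefile size (simplified)."""
    return ram_mb + pagefile_mb

def hits_limit(commit_charge_mb, ram_mb, pagefile_mb):
    """True when the total commit charge exceeds the commit limit."""
    return commit_charge_mb > commit_limit_mb(ram_mb, pagefile_mb)

workload_mb = 700  # hypothetical combined commit charge of running apps

print(hits_limit(workload_mb, RAM_MB, SMALL_PAGEFILE_MB))  # True: limit is only 662 MB
print(hits_limit(workload_mb, RAM_MB, LARGE_PAGEFILE_MB))  # False: limit is 1280 MB
```

The same 700 MB workload exhausts the 150 MB pagefile's commit limit but fits comfortably under the larger one, which is the "reaches the commit limit sooner" point in miniature.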
You can easily prove this to yourself: lower your pagefile to the setting you suggest and take a look at perfmon... you will see more pagefile activity with this setting, not less.
All of this was already documented by Microsoft way back when NT was first released... here's the paragraph and reference:
"...A pagefile that's set too small can lead to overactive disk swapping, or "disk thrashing." The only real drawback with a relatively large swapfile is that you might not have as much disk space available for other uses as you would if you'd followed the pagefile setup recommendations."...
For your reference, document number Q102020.
I'm amazed there are people who still believe there can possibly be a benefit to a small pagefile.
Now, since that document, Microsoft has greatly increased the minimum recommendation, but the facts of that document remain: there is no slowdown whatsoever with a big pagefile, and quite a performance hit if the pagefile is too small.
And now, about Cacheman... no, Cacheman will make this problem worse. It will release RAM that is in use, and if it's not in use, then it's already been released by XP.
Cacheman is for Millennium and 9x... hardly for NT. That's exactly what the pagefile is for in the first place.
Ah... I didn't realize you just wanted to track down a memory hog,
and that was the purpose of lowering the pagefile.
There is an easier way to do it.
For every program running on a computer, the operating system allocates a portion of physical memory. This is called the working set. Even if the program is not generating any activity, the operating system allocates memory for the program's working set.
If you watch perfmon when closing any program, you will see the corresponding pagefile use decrease as you shut each one down... you can monitor the working set of any program in that fashion.
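A toy model of what you'd watch in perfmon: each running program holds a working set, and the total pagefile-backed commit drops as you close programs. The process names and sizes here are entirely hypothetical, chosen just to show the bookkeeping:

```python
# Hypothetical working sets (MB) for a few running programs. Closing one
# removes its working set from the total, which is the drop you'd watch
# for in perfmon to identify the hog.

working_sets_mb = {"winword.exe": 40, "iexplore.exe": 65, "leakyapp.exe": 300}

def total_commit_mb(processes):
    """Sum the working sets of everything still running."""
    return sum(processes.values())

before = total_commit_mb(working_sets_mb)  # all three programs running
working_sets_mb.pop("leakyapp.exe")        # close the suspected hog
after = total_commit_mb(working_sets_mb)   # only the small programs remain

print(before - after)  # the size of the drop attributable to the closed app
```

Whichever program's closure produces the biggest drop is your hog, which is the same conclusion Task Manager's process list would give you directly.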
Now, as for that quote you posted claiming XP support for Cacheman:
gonaads, that is an Intel site, not a Microsoft site... Microsoft advises against the use of memory-management programs in XP, and they do not support the use of Cacheman in XP.