"you are posting as fact, some activity that does not take place."
After reading many threads and web pages on the subject over the past 6 years, and experiencing it on all 9x/ME/NT4/2000 systems, I must disagree with you. Important note: XP *may not* suffer from these problems, which would explain your experience. I run Windows 2000 now and have little experience with XP, but I was curious about its VM system; that's why I came here in the first place. XP may be less aggressive about trimming working sets. XP may well have automatic internal pagefile database defragmentation as well as file-level pagefile defragmentation. MS does claim to have improved the VM system, so it may be true:
http://www.microsoft.com/HWDEV/driver/XP_kernel.asp#Memory
Speaking from experience, I can assure you that the older OSes have no pagefile optimization and are much too aggressive at trimming working sets and paging them out.
"if you want to use ram for long term storage purposes, this is obviously circumventing the strategy of where the ram would have otherwise been used."
A file cache of 500+ MB is no benefit to me or the majority of users. Keeping apps in RAM *IS* beneficial. And I've got plenty of RAM, so there's no reason to trim the apps' working sets. They should only be trimmed if free RAM is low (and not so the OS can build up a massive file cache that I'm never going to use...).
"my harddrive is never used before my ram"
What CPU are you running? I've noticed paging is much faster on Athlon/Duron/Pentium 4 machines. Do you do much file I/O? If you aren't doing several GB of file I/O per hour and/or have a fast machine, you likely won't experience the problem. And if you are running XP, as I stated above, the problem may well be fixed.
"you know six, You can get the same what i believe is an illusion of this benefit you are experiencing with a simple memory management program"
These programs *DON'T* work. I tried O&O CleverCache, Sysinternals CacheSet, and Outer Technologies Cacheman. I tried the default and various custom settings with those programs. And I tried the registry tweaks that are listed on this site and others. Nothing had any effect. Slowdowns persisted.
"you are making another mistake, and comparing a 9x OS and philosophy with XP."
I didn't mean to compare it with XP. The VM system in XP may well be fixed.
One question of mine remains:
"How is using a solid state drive different from using a RAMdisk for the page file?"
I noticed you didn't answer this one.
Here's one of the best discussions I've found on the pre-XP Windows VM systems. If you're curious, take a look at the complete thread:
http://groups.google.com/groups?hl=...=34ec79c4.44008984@msnews.microsoft.com#link1
A few postings from the thread:
---------------------------
From: Wayne J. Hyde (wjh@cise.ufl.edu)
Subject: NT's braindead VM subsystem
Newsgroups: microsoft.public.windowsnt.misc
Date: 1998/02/18
Ok, I've done some more testing. This is getting *really* depressing.
NT's memory management subsystem is completely braindead. Come on,
should I really have 87MB paged out when I have 113MB free and 192MB
physical RAM? Every time I minimize an application, NT trims the
working set like mad. This is ludicrous. NT's performance could be
so much better, yet it is crippled by such a poor VM subsystem and
paging method (FIFO).
It is very easy to check for yourself just by using NT's Task Manager.
Load up Task Manager and view the "Processes" information. Change
your update speed to "low" and select the following columns: Mem
Usage, Memory Usage Delta, Page Faults, Page Faults Delta, and Virtual
Memory Size. The update speed should be on "low" so it only updates
every four seconds or so -- that way you have time to see what
happens.
You can also use the program 'pmon' from the Resource Kit. It pretty
much shows the same information except it is a command-line utility.
Task Manager is nice because you can sort columns, etc. You may also
want to load up Performance Monitor and make a graph with "Page
Reads/sec" and "Page Writes/sec" from the "Memory" group. Now, you
will be able to see what NT is doing with memory when you use an
application, minimize it, maximize it, etc.
My machine has plenty of RAM (192MB) and swap (400MB). You would
think that NT wouldn't need to do much paging on my machine until I
started overcommitting RAM. "Not so fast, my friend." NT doesn't wait
around until you really need to page; it pages out long before you run
out of RAM. Not only that, it will page out a process as _soon_ as
you minimize it. No joking here.
Load up a bunch of your applications. Right now, my commit charge is
over 150MB. Netscape, MSVC++ 5.0, Outlook, Word, Backup Exec, What's
Up, a few copies of Forte Agent (for different servers), GateKeeper,
Diskeeper, Exceed, a few hostexplorer's, mIRC, and a bunch of other
apps are all loaded. I've sorted my Task Manager window by "VM Size"
since the largest apps will usually cause the most paging. Now for
the fun...
Right now, Netscape is taking up 2380k of RAM and 14MB VM. It is
minimized with three windows total. When I restore one of the windows,
the following occurs:
Memory usage Delta jumped to 2168k.
A bunch of page faults occurred.
Performance Monitor showed that pages were read from disk (meaning
that they weren't in the standby list).
If I restore the other Netscape windows, more pages are swapped back
into RAM. (It jumped back up to 5504K resident) So far this doesn't
look bad, especially if you don't know how NT is handling RAM. It
becomes apparent that NT is braindead once you minimize an
application: NT __immediately__ trims the working set down to zilch.
After minimizing Netscape, the following occurs:
Memory Usage Delta is -3756K (NT trimmed the working set by 3.7MB)
Memory Usage dropped to 1748K from 5504K
If I restore Netscape once again, it pages 2184K back into the working
set. If I do this immediately after I minimized Netscape, the pages
will usually be in the Standby list and NT won't have to go to disk.
Microsoft claims that this is a good thing since a "soft" page-fault
can be resolved [relatively] quickly compared to a "hard" fault. I
think it is incredibly dumb because NT is wasting cycles on too many
damned page faults. Why trim Netscape's working set immediately after
the window is minimized? *especially* when I have so much RAM
available. (104MB right now)
NT exhibits this behavior for just about every application. I just
minimized MS Outlook 98b and NT trimmed the working set by 4.6MB (down
to 1.3MB). After restoring the window, NT pages the memory back into
the working set.
I just minimized Lview Pro (an image viewer). The Mem Usage dropped
from 9576K to 328K! A drop of 9248K! Yes, that is 9 MEGABYTES
trimmed from the working set. I noticed some Page Write/sec in the
Performance Monitor, so it appears that NT had to write the updated
memory pages from LView to the pagefile. I won't immediately restore
Lview since I want to see how long the pages stay in the standby list.
Now, let's see what happens when a bunch of applications are restored
at once: Netscape, Word97, Outlook98b, HostExplorer, Agent, POV-Ray,
and bookshelf basics.
App:        Mem Usage (before)   Mem Usage (after)
Netscape:   1748K                5260K
Word97:     572K                 1832K
OutLook98:  1220K                2700K
HostExpl:   268K                 980K
Agent:      1568K                1588K
POV:        384K                 1076K
Bookshelf:  684K                 1528K
As you can see, NT paged in quite a bit. The pagefiles were hit also
as reported by PerfMon. I minimized the applications and guess
what... yes, NT immediately trimmed the working sets. NT also paged
out some other pages to make room for the programs being swapped in.
I guess NT has something against trying to use _FREE_ RAM. No sir, it
would make too much sense to use free RAM to swap memory back in. NT
needs to swap more data out to make room. Just in case you run a
program like clearmem.exe from the reskit. Yeah, that happens a
bunch.
It is sickening. I've got 116MB RAM "Available" right now and NT is
paging out programs to disk as soon as I minimize them. And of
course, I restored the previous LView, and it had to page in from the
pagefile -- meaning that the pages were not in the standby list.
Now is it just me, or is this just a _very_ poor design? I've got
*gobs* of RAM -- more than enough to hold all of the programs I am
currently running without paging out a single page, yet NT is swapping
like there is no tomorrow. My Pagefile in use is 100MB right now.
Peak is 170MB.
NT handles CPU-intensive applications pretty well. It just doesn't
know what the hell to do with RAM. Perhaps I would have better
performance on a system with much less RAM. Maybe then NT would stop
paging everything out to disk as soon as it is minimized.
Maybe I just haven't been told by Microsoft how a "workstation" is
supposed to behave and perform. Maybe I'm just too used to working on
my Solaris machines at work where the OS doesn't leave half of my
memory wasted.
-Wayne
---------------------------
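To make the soft/hard fault distinction in Wayne's post concrete, here's a toy model of trimming and the standby list. This is only a sketch of the behavior he describes, not NT's actual internals: the class, the page names, the standby capacity, and the FIFO standby eviction are all my own assumptions for illustration.

```python
# Toy model of working-set trimming. Trimmed pages land on a standby
# list first: touching one there is a cheap "soft" fault, while a page
# pushed off the standby list must be re-read from the pagefile (a
# "hard" fault). Capacities and eviction order are invented.
from collections import deque

class Process:
    def __init__(self, standby_capacity):
        self.working_set = []            # pages resident in RAM
        self.standby = deque()           # trimmed pages, evicted FIFO
        self.capacity = standby_capacity
        self.soft_faults = 0
        self.hard_faults = 0

    def trim(self):
        """Minimize the app: move the whole working set to standby."""
        for page in self.working_set:
            self.standby.append(page)
            if len(self.standby) > self.capacity:
                self.standby.popleft()   # oldest page goes to the pagefile
        self.working_set = []

    def touch(self, page):
        """Reference a page, counting the kind of fault it causes."""
        if page in self.working_set:
            return                       # resident: no fault
        if page in self.standby:
            self.standby.remove(page)
            self.soft_faults += 1        # soft fault: no disk read
        else:
            self.hard_faults += 1        # hard fault: disk read
        self.working_set.append(page)

p = Process(standby_capacity=2)
for page in ("a", "b", "c"):
    p.touch(page)        # three hard faults: first references
p.trim()                 # "minimize": page "a" falls off the standby list
p.touch("c")             # soft fault: still on the standby list
p.touch("a")             # hard fault: must come back from the pagefile
```

Restore a window right after minimizing it and you mostly pay soft faults; wait long enough, or load other apps that push pages off the standby list, and the same pages become hard faults -- which is exactly the slowdown Wayne measures with LView.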
Interesting eh? How about this:
---------------------------
From: Tim Hill/MVP (timhill@pacbell.net)
Subject: Re: NT's braindead VM subsystem
Newsgroups: microsoft.public.windowsnt.misc
Date: 1998/02/18
>Solaris doesn't go crazy trimming working sets and paging out until it
>is necessary. NT pages out immediately, whether it needs to or not.
"until it is necessary"??? And when is that? When the pages are needed,
that's when. But by that time, it's *too late*. So any OS (Unix included)
has some form of background swap-out algorithm which frees pages which are
not seen as needed (the WST, typically). The trick, of course, is choosing
(a) which pages to swap and (b) when to swap them.
One of the choices the NT designers made was to trim an app's working set
when the app is minimized. The rationale is that when you minimize an app, two
things happen: (a) you've finished with it, for the time being, and (b)
you're probably about to run another app. So NT pre-emptively trims the
working set. Sometimes this is correct, sometimes not. It's a statistical
thing, and in this case a UI thing. I'm not sure I agree with this decision,
but I can certainly understand it, and saying it's brain-dead is pretty
silly.
Another thing to understand: NT itself isn't actually responsible for
trimming the working set of an app when minimized. The app actually does it
during processing of the minimize request. The Win32 sub-system tells NT to
trim the set. NT does so. The fault here (IMHO) is that the Win32 code
should be more intelligent, and not request a trim unless it sees (say) >70%
RAM used.
---------------------------
"Sometimes this is correct, sometimes not."
In my case it's never correct. When I minimize a window, I most definitely am *NOT* done with it, and I don't want its working set trimmed and then soon written to the pagefile. When I'm done with it, I'll close it myself, thank you.
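Still, Tim Hill's suggested fix would cover my case: only honor the minimize-time trim when RAM is actually scarce. A sketch of that policy -- the 70% threshold comes from his post, while the function name and the MB figures are made up for illustration:

```python
# Sketch of the threshold policy suggested in the quoted post: honor the
# minimize-time trim request only when physical memory is actually
# scarce. The 70% figure is Tim Hill's; everything else is hypothetical.
def should_trim_on_minimize(ram_used_mb, ram_total_mb, threshold=0.70):
    """True if minimizing an app should trigger a working-set trim."""
    return ram_used_mb / ram_total_mb > threshold

# Wayne's 192 MB machine with ~104 MB free: no trim wanted.
print(should_trim_on_minimize(88, 192))    # False
# The same machine nearly full: trimming now makes sense.
print(should_trim_on_minimize(160, 192))   # True
```

Under a policy like this, a box with gobs of free RAM would leave minimized apps alone, and the aggressive trimming would only kick in when paging is actually imminent.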
My post continues below due to the 15000 char posting limit: