Not included in the paper, but for your purposes: a comparison between Linux swapping and NT paging.
Paging is done at a fine granularity -- a page at a time (there is some effort to group page faults together to cut down the I/O overhead, but in general it's very fine). In NT it's done in response to the page faults that are incurred; it's reactive, not proactive, as far as the affected process is concerned.
Also, this is done to the pages of whatever process incurred the fault, not others.
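To make the "reactive, one page at a time, only for the faulting process" point concrete, here's a toy sketch in Python. It is not NT's actual code -- the class and method names are made up for illustration:

```python
# Toy model of demand paging: a fault brings in exactly one page,
# and only for the process that touched it.

class Process:
    def __init__(self, name, pages):
        self.name = name
        self.on_disk = set(pages)   # pages not yet resident
        self.resident = set()       # pages currently in RAM

    def touch(self, page):
        """Reference a page; fault it in only if it isn't resident."""
        if page not in self.resident:
            # Page fault: reactive and fine-grained -- just this one page.
            self.on_disk.discard(page)
            self.resident.add(page)
            return "fault"
        return "hit"

p = Process("editor", pages=[0, 1, 2, 3])
results = [p.touch(0), p.touch(0), p.touch(2)]
# Two faults and one hit; pages 1 and 3 were never brought in at all.
```

Nothing is brought in proactively: pages the process never references stay on disk.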
Swapping, however, is done to entire working sets at a time; a "working set" in these systems would be all the physical pages currently held by a process.
They take the entire working set, copy it to disk, and then release all its pages for use by other processes. Hopefully it's not done to an active process, and it's done in response to some OTHER process incurring a page fault because the system is low on memory.
As a rule, it's gladly done to a long-idle process.
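The contrast with paging can be sketched the same way -- again a toy model with invented names, not real kernel code: the victim is the longest-idle process, and its whole working set goes at once.

```python
# Toy model of traditional swapping: evict an entire working set in one
# shot, and prefer the longest-idle process, not the one that faulted.

def pick_swap_victim(processes, now):
    """Choose the process that has been idle the longest (simplified)."""
    return max(processes, key=lambda p: now - p["last_ran"])

def swap_out(proc):
    """Write the whole working set to disk and free all its frames."""
    proc["swapped_pages"] = proc["working_set"]
    freed = len(proc["working_set"])
    proc["working_set"] = set()
    return freed

procs = [
    {"name": "shell",  "last_ran": 95, "working_set": {1, 2},    "swapped_pages": set()},
    {"name": "daemon", "last_ran": 10, "working_set": {3, 4, 5}, "swapped_pages": set()},
]
victim = pick_swap_victim(procs, now=100)
freed = swap_out(victim)   # frees all frames of the long-idle daemon
```

The key difference from the paging sketch above: the eviction is wholesale and targets a third party, not the faulting process.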
NT doesn't do anything exactly like traditional swapping in the sense of writing and reading entire working sets;
the paging mechanism serves the same purpose, but it's much more gradual, and obviously with a lot less of a hit to specific programs that might get referenced again.
Then there is no specific "inswap" later; the process will simply page what it needs back in, if and when it needs it.
Also, as in regular paging, this "gradual outswap via paging" doesn't bother to write out unmodified pages, like those containing code -- if needed again they can just be brought back in from the original .exe or .dll... this is missed by most.
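That unmodified-pages point is worth a small sketch too (again invented names, a toy model): on eviction, only dirty pages cost a pagefile write; clean pages are simply dropped, because a later fault can re-read them from the backing image.

```python
# Toy model of eviction cost: modified (dirty) pages must be written to
# the pagefile; clean pages (e.g. code) are just dropped, since they can
# be re-read from the original .exe/.dll image on a later fault.

def evict(pages):
    """Split evicted pages into (written_to_pagefile, simply_dropped)."""
    written = [p["addr"] for p in pages if p["dirty"]]
    dropped = [p["addr"] for p in pages if not p["dirty"]]
    return written, dropped

pages = [
    {"addr": 0x1000, "dirty": False},  # code page, backed by the .exe
    {"addr": 0x2000, "dirty": True},   # heap page, modified
    {"addr": 0x3000, "dirty": False},  # code page, backed by a .dll
]
written, dropped = evict(pages)
# Only the heap page costs a pagefile write; the code pages cost nothing now.
```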
As in Linux swapping, this is all done to the longest-idle processes when the system is short on free RAM.
One mistake I think the XP policy makes when a person has an abundance of RAM is that it will aggressively release the working sets of applications you minimize... this particular policy, I think, needs to be changed in NT memory management.
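The complaint can be sketched as a toy policy model (all names invented, not actual XP behavior or code): under the aggressive policy, the trim ignores how much RAM is free, so restoring the window forces a burst of avoidable faults.

```python
# Toy model of the criticized policy: trimming a working set on minimize,
# regardless of free RAM, forces a burst of page faults on restore.

def minimize(proc, free_ram, aggressive_trim):
    """Trim the whole working set on minimize under the aggressive policy.

    Note that free_ram is ignored here -- which is exactly the complaint:
    the trim happens even when memory is plentiful.
    """
    if aggressive_trim:
        proc["trimmed"] |= proc["working_set"]
        proc["working_set"] = set()

def restore(proc):
    """Every trimmed page the app touches again is a page fault."""
    faults = len(proc["trimmed"])
    proc["working_set"] |= proc["trimmed"]
    proc["trimmed"] = set()
    return faults

app = {"working_set": {1, 2, 3, 4}, "trimmed": set()}
minimize(app, free_ram=2048, aggressive_trim=True)
faults_on_restore = restore(app)   # avoidable faults despite ample RAM
```

A friendlier policy would presumably check free_ram and skip the trim when memory is abundant, deferring eviction until there is actual pressure.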