For the 2 GB limit, take a look at http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-. It clearly explains that a 32-bit application sees a 4 GB address space. That is a Windows limit; if it had been done properly, each application would only see its own memory, and going over the 32-bit boundary would not have been a problem (which is essentially what PAE also accomplishes). Of those 4 GB, 2 GB can be used by the app, and 2 GB can be used by the kernel on behalf of that app.
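That split is just address-space arithmetic; here is a quick sketch, assuming the default 32-bit Windows 2 GB/2 GB split (boot options like /3GB can shift the boundary):

```python
# 32-bit virtual address space arithmetic (default Windows split).
ADDRESS_BITS = 32
total = 2 ** ADDRESS_BITS           # 4 GiB of virtual addresses per process
user_space = total // 2             # 2 GiB usable by the application
kernel_space = total - user_space   # 2 GiB reserved for the kernel

GIB = 2 ** 30
print(total // GIB, user_space // GIB, kernel_space // GIB)  # 4 2 2
```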
Network data that is read into memory is DEFINITELY not swapped back to the network. It is written to the local swap file.
Pseudocode:
Code:
read 1024 MB into memory
Go do other stuff, and sometimes use part of the 1024 we just read
release the 1024 MB of memory
During the "go do other stuff" the OS will most likely swap the data that is in memory to the hard drive, especially if it is not currently being used.
Old data is just mapped? No. From a network to a local disk it is an actual physical copy, like so:
Code:
loopthis:
read 1024 bytes from network
write 1024 bytes to hard drive
goto loopthis
More pseudocode showing how that is implemented. It means that at any one time only about 1024 bytes of RAM are needed, and that buffer is reused on each pass. On most OSes the buffer is going to be a lot larger, just for the extra speed that is gained; reading 1 byte and writing 1 byte at a time is slow.
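That loop can be made runnable; here is a minimal Python sketch (the paths and the 1024-byte chunk size are placeholders; a network source would simply be a path on a mapped share):

```python
def copy_in_chunks(src_path, dst_path, chunk_size=1024):
    """Copy src to dst chunk_size bytes at a time, so only about
    chunk_size bytes of the data are held in memory at once."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)   # "read 1024 bytes from network"
            if not chunk:                  # EOF ends the loop
                break
            dst.write(chunk)               # "write 1024 bytes to hard drive"
```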
But those 1024 bytes COULD be written to swap as well.
If the OS were to try to track where a certain read call came from, and then "map" this in memory so as not to swap it, how would that be implemented on top of the existing memory system? There is no clear way that it would be an improvement. Also, if something has been read from the network and is now mapped to the network, what happens if the file server goes offline and the program resumes doing whatever it was doing, expecting its cache of what it read to still exist? The OS can't just throw it an error that it can't access that part of memory, as clearly it is that program's memory.
As for what resources he is running out of, I don't know. It is Windows Vista, which is already a bad sign. They had a major bug where network traffic was slowed down because media was playing. What other bugs still exist? They have also rewritten most of the TCP/IP stack to be adaptive. Maybe it is unable to allocate more resources for the buffering that requires, because it has run into resource limits Windows set at bootup?
The work he is doing does not require a 64-bit OS. There were no problems handling 2 GB and bigger files in Windows XP, so why should this be any different now? I handle 2 GB files on an old Pentium 1 with 64 MB of RAM with ease.
I didn't read Brian's entire page, and I don't know if he wrote it before or after SP2, but it doesn't matter: it hasn't been updated to include the new information, and he should address the changes to how PAE works.
As I said before, PAE doesn't do what it used to do, and the OS can no longer map to 64 bits using Physical Address Extension.
There were driver and security issues when PAE was being used: some popular drivers couldn't understand the buffers and crashed, and Microsoft couldn't create exceptions for those drivers, I think because they are just too pervasive (on every box). There were also security issues, I think with executing kernel code. In any event, MS also couldn't rewrite the drivers, since they weren't Microsoft's... instead, Microsoft changed the way Physical Address Extension handles memory and registers.
here's the documentation
But he's misinformed, if that's where you're sourcing the opinion that data isn't mapped back to the network file or network DLL. If you want to ask him to contact me, I'll give him the Microsoft sources; most are documented, but some are not. However, if people like yourself are referencing Brian's work and that's where you formed your opinion, I would think he'd like to have the documentation that demonstrates otherwise.
If you're not sourcing him for that point of view on network swapping, then this is just academia for the two of us.
There's lots going on in your post; I don't know if I addressed everything below.
The OS assumes the network won't go offline. If you do go off the network and a page that is no longer available has to be swapped, THEN the write is made, to your local disk. The OS is going to try to map to the original file whenever possible; if it can't, it will create an image locally.
The purpose of mapping to the original file is to avoid hard drive bottlenecks, and of course there is a performance advantage to doing it this way. When an image exists and there is no conflict, the OS will avoid writing a new image; if there is a conflict, the OS has no choice, but that is rare.
Now xistance, it's self-evident: if he's running out of resources, I doubt this will happen on 64-bit... unless it's a network resource issue and not a local one.
Maybe that's it, but I don't think so; it looks like a local OS notice to me.
I think his operation is opening the entire file for the write for some reason, but if his notice is an operating system notice, then obviously he DOES need a 64-bit processor.
Doing some deductive reasoning: the OS says he needs more resources, and I can't think of anything BUT his operation reading the file into memory. I know there are network buffers and resources associated with that, but I cannot believe that's what's preventing him from copying a file; I think the file has to be getting loaded into memory for him to be running out of resources.
I can't think of any other reason he can't do these copies. If there is another reason, then obviously we need to see what's using those resources and address the issue, if possible. But in any event, I think the resources he's running short of are OS-based, and 64-bit looks to me like it solves his problem.
You might be right that there is a simpler solution, but NOT if his operation includes reading the entire file into memory.
Mapping back to the .dll or original file is documented by Microsoft, as you noted in your code quote. Mapping over the network is not officially documented, last time I looked; that's information that was given to me directly by the head of the memory management team at Microsoft. I don't think too many people knew about the network mapping till I published the correspondence. I remember jeh was involved with the correspondence, and while he knew of the theoretical possibility, he didn't know the OS was sophisticated enough to do it until the head of the memory management team at Microsoft said it to me.
xistance, I had a conversation directly with the head of the memory management team at Microsoft, Landy Wang. This was quite a long email conversation which involved numerous correspondence.
He told me that when I get my edits in place, he would try to get my paper published on the Microsoft network.
Landy had been sent my virtual memory paper by Larry Osterman (another Microsoft big wig). Larry had read my paper and liked it, and wanted it proofread by Landy Wang, and Landy got in touch with me to tweak it.
Here's the first correspondence; his corrections are in red, and there were a few tweaks he wanted me to make. The first excerpt is from my paper; Landy put in his own edits in red, and here's one of them:
perris's paper, corrected by Landy Wang, said:
...The OS will retrieve said information directly from the .exe or the
.dll that the information came from if it's referenced again. This is
accomplished by simply "unloading" portions of the .dll or .exe, and
reloading that portion when needed again....
(self evident, isn't it). virtual locked memory will not go to the
hard drive (and if it was demand zero like a stack or a heap it never came
from a hard drive either). same thing with network-backed files.
Swaps are mapped back to the original file, wherever that was; there is never an image duplicated anywhere unless there is some kind of OS conflict accessing that image.
There is no reason to create a second image if no conflict exists, and the OS does not. There are some rare times a DLL has a collision with the application accessing the same file at the same time; when that happens and there's a conflict of this nature, that's when the OS MIGHT voluntarily create another identical image in the pagefile, but that's a fixup for when a conflict occurs.
Swaps are mapped back over the network if that's where the data came from and there's no conflict. Pages are mapped back to the original file, either on the network or the local disk, wherever the OS got the page in the first place; except when there are conflicts, there are no writes for existing images.
As long as the network file is still available, it is NOT written to the local file; it's not written anywhere, since it's already available, and it would be a waste of hard drive time creating more items in the queue than necessary.
The conversation went on between Landy and myself, and he explained there are other times data isn't written to the pagefile even if there's no image anywhere: for instance, if the OS knows that new data will be zeroed out, it won't get imaged to the pagefile or anywhere, since it's going to be zeroed out. But again, there's no reason to write any information that already has an image unless there's a conflict; the OS knows where that image is and maps back and forth without unnecessary hard drive activity.
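The file-backed behavior being described can be seen from user mode with memory-mapped files. A minimal Python sketch (the temp file is a stand-in for any original file, local or on a share; mapping read-only keeps the pages clean, so the OS can discard them under memory pressure and fault them back in from the original file rather than the pagefile):

```python
import mmap
import os
import tempfile

# A small file standing in for the original .exe/.dll/data file.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * 4096)
os.close(fd)

with open(path, "rb") as f:
    # ACCESS_READ means the pages stay clean: under memory pressure the OS
    # can simply drop them and later fault them back in from this file,
    # instead of writing a copy to the pagefile.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        first = view[:4]   # touching the page faults it in from the file
        print(first)       # b'AAAA'

os.remove(path)
```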
That's according to Landy Wang himself. Now, maybe this is undocumented info, I am not sure, but that's it right there.
Old data is just mapped; there is no writing old information again. There's no reason to, and the OS does not do it.
You can ask Brian to contact me if you want, but he's wrong.