insufficient system resources exist to complete the requested service

fermulator

OSNN One Post Wonder
Joined
4 Dec 2007
Messages
1
Good evening!

I'm hoping someone might be able to assist me with this rather annoying Windows bug.

Operating System: Windows Vista Business

Whenever we try to copy semi-large files over the network (say, over 2 GB), we receive the following error:

Code:
Error 0x800705AA: Insufficient system resources exist to complete the requested service.

Options: Try Again, Skip, Cancel

Try again = same result


This occurs with any large file being copied between two network locations (i.e. between two file servers).

Any assistance and/or direction would be greatly appreciated.

Thanks so much.
 
Welcome to OSNN,

Check to see if there are any processes in Task Manager that are resource hogs; maybe even shut a couple of them off to see if that improves things.

You might want to try Microsoft's Robocopy. It is a heavy-duty file transfer tool included with Windows Server, designed to transfer large files across networks.

Download Here as part of a Resource Kit.
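
For what it's worth, here is a hedged example of the kind of Robocopy command that could be tried; the share paths and file name are placeholders, and exact switches vary by Robocopy version:

Code:
rem Restartable copy of one large file between two shares,
rem retrying twice and waiting 5 seconds between attempts.
rem \\server1\share and \\server2\share are placeholder paths.
robocopy \\server1\share \\server2\share bigfile.iso /Z /R:2 /W:5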



Heeter
 
Whenever we try to copy semi-large files over the network (say, over 2 GB), we receive the following error: Error 0x800705AA: Insufficient system resources exist to complete the requested service...

This occurs with any large file being copied between two network locations (i.e. between two file servers)...
I believe you are going to need a 64-bit operating system if you want to open files that big consistently.

32-bit runs out of address space right around the 3 GB mark, so if anything else is running on the box, a 2 GB file might max it out.
 
Sorry Perris. Wrong.

I can copy a 4 GB DVD ISO image around just fine over the network using Windows XP.

The 3 GB "limit" is set by the BIOS and how it handles shadowing, and how Windows XP maps certain cards to memory. Video cards for example.

You can install 4 GB, but only about 3.75 GB or so will be addressable (this also depends heavily on the chipset/BIOS). The exact amount also depends on how much RAM is on your video card (it gets memory-mapped, so that eats a chunk) and on the BIOS itself (also memory-mapped, which eats another chunk).

http://www.pagetable.com/?p=29 (64 bit is a lot)

The current implementations of almost all CPUs allow up to 48 bits of addressable memory space. It is an architectural design limit in Windows that no more than 4 GB can be used/referenced, and unless PAE is used and the system is booted with NOLOWMEM, the OS will keep memory-mapping over real physical RAM, because device drivers will otherwise fail to work (e.g. they expect a 32-bit pointer and suddenly get a whole 64 bits to handle). (http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-)

However, file copying has absolutely nothing to do with this. NTFS has a file size limit (http://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits) of 16 EiB (http://en.wikipedia.org/wiki/Exbibyte).
 
ftp://download.intel.com/design/motherbd/bl/C6859901.pdf

Scroll down to page 49 (in the lower right-hand corner) and look at Figure 14. It explains why you lose memory on 32-bit systems without PAE enabled and without a motherboard that can relocate those memory addresses into the higher range. (BTW, more than 4 GB can be addressed using PAE on 32-bit (48 bits of total addressable space), but the BIOS has to cooperate and move things into higher memory.)

Edit:
http://www.itwriting.com/blog/?postid=152

Another person saying exactly that.
 
Sorry Perris. Wrong...

The 3 GB "limit" is set by the BIOS and how it handles shadowing...

Sorry x-istance, but I'm not wrong. You aren't wrong either; your explanation is only a portion of the issue.

As far as copying files: I am assuming they are opening the files, not just copying them, or that the file is somehow getting opened in order to be copied over the network.

I think he should try the application Heeter posted.

The 3 GB threshold is not only set in the BIOS, it's also set by Microsoft (3.2 GB, I believe).

There are physical limitations due to simple math and to whatever else is on the box requiring address allocation, and there are artificial limitations set by Microsoft to compensate for some badly written drivers that are too popular to ignore... rather than blame the driver every time it had an issue, Microsoft just wrote a workaround into the OS, since they aren't allowed to rewrite those drivers.

I understand why there is a memory hole and everything you're saying about it, but copying your ISO is different from copying files that have to be opened while copying, which I believe is what is happening here.

While it doesn't take two gigs of memory to write a 2 GB file (I can write an entire hard drive with 126 MB of memory on a box), it does take two gigs of memory to actually open a 2 GB file, and maybe that is his issue.

He's also doing it across a network to boot, and there are additional resources associated with that; further, there are a bunch of other things running on his desktop no matter what he's doing.

I don't know if address extension is going to be able to overcome his resource limitation. He might be able to shut some processes down or whatever, but I think if he wants to be able to open these files consistently he probably needs a 64-bit OS... there might be another reason, which I mention in my last paragraph.

As far as the memory hole goes, there are two entirely different "memory hole" issues, and I'll explain them here as simply as possible:

1) Some devices have their own memory, and the BIOS reserves address resources for those external memory requirements. The processor can address 4 GB of memory; it doesn't matter where that memory is coming from, on the box or off the box, operating system or hardware. The OS is limited to 4 GB, so if you have hardware with its own gig of memory, that only leaves three gigs for the user... when you reach that finite plateau for memory, there are addresses already allocated which cannot be claimed for your installed memory.

2) There are some popular drivers that get buggy when addressing large amounts of memory, and Microsoft actually set aside a separate cushion for those drivers. Microsoft supposedly limits a user from addressing more than 3.2 GB in XP, and I assume that limitation is continued in Vista... you can read about Microsoft's workaround for badly written drivers here.

That cushion has been written into the OS since Service Pack 2... it didn't bother anyone at the time because very few people actually had more than a gig of memory.

So when he says "copy files over two gigs", those files might actually be reaching the artificial limit set by Microsoft all by themselves; then add whatever else is running on his box, and he's there on some or most of his files.

Microsoft missed the boat on that one. The limitation should be driver-specific: when the driver is installed, the OS should be flagged at that time; the flag shouldn't be written into every OS. That's kind of ridiculous as far as I'm concerned.

But they didn't ask me; they did it without my consultation... I wouldn't have charged them much, and it would have saved them tons of aggravation.

He can set physical address extension, but his OS is only going to address up to 3.2 GB if Microsoft extended the driver cushion to Vista.

And as you noted, some BIOSes set a cushion for hardware incidentals, and no matter how much memory you install, the OS will not see anything above that cushion.

Setting the BIOS to ignore that cushion doesn't do anything if he has, say, a video card with its own memory.

If the video card shares memory, that is deducted from the operating system's available resources as well, and while you can limit the amount of memory that's shared, it still takes resources from the finite amount available.

This kind of user will typically call for a 64-bit operating system.

He is also using Vista, which has a few more things going on than XP, and I imagine he would run out of resources sooner on Vista.

Where I am lost is the fact that he's running multiple-core processors... I would think he would have plenty of address resources; 4 GB per processor is what's physically available, so I think it's now a 32-bit operating system issue, not a hardware issue.

It might be a licensing issue: Microsoft has allowed multi-core processors to be licensed as a single processor, and I think that's his real problem.

With multiple cores he might be able to get the OS to address each processor separately, but he might need different Microsoft licensing... I don't know about that, though... I'm going to do some research and get back with that info if I can find it.
 
32-bit processors still have the ability to address up to 48 bits, using the aforementioned extensions. The BIOS setting, if available, remaps the memory addresses so that PCI Express and other devices sit above that hole. So in a BIOS that supports it, the memory layout would look like this:

3.x GB of RAM | Memory-mapped hardware | Rest of RAM available here

Any process in Windows XP/Vista can eat up only 2 GB of RAM (including swap space), and that is a limit that has been set in NT for years now.

When you copy a file from the network onto a local machine (drag and drop in Explorer), it will open the file, then read a certain amount of data and write a certain amount of data until it reaches the end of the file. It does not open the file, read the entire file into memory, then write it to disk. That is why copying large files works perfectly fine in Windows XP, especially DVD ISO images, which can be WAY over the 2 GB limit.

There is definitely not 4 GB per core available. Even if he has two cores, the scheduler will run the process on one or the other in a round-robin fashion. There is no difference; they both have the same limits, as they are both talking to the same RAM.

It does not matter how many other things he is running: if Windows Vista is trying to first copy the entire file into memory (which IMHO is absurd and makes no sense whatsoever), it would hit the 2 GB of memory per process limit. Every process on the system has a 2 GB memory limit. If I ran 10 processes and all of them used up 1.5 GB of RAM, and I only had 3 GB of RAM, I would have about 12 GB swapped out onto the hard drive. Hence the reason virtual memory exists: we want to be able to use up more "RAM" than we actually have.
 
32-bit processors still have the ability to address up to 48 bits... any process in Windows XP/Vista can eat up only 2 GB of RAM (including swap space)...

There is definitely not 4 GB per core available...

It does not matter how many other things he is running... If I ran 10 processes and all of them used up 1.5 GB of RAM, and I only had 3 GB of RAM, I would have about 12 GB swapped out onto the hard drive...

First: you're right about the cores, it's not four gigs per core. Though as far as I'm concerned, if the OS were written to use those address resources it would extend memory availability... but it's not, and the number of cores makes no difference.

As far as the 2 GB per process: I'm not sure it's only a per-process barrier, there might be a cumulative barrier also, but I'm not clear on that.

As far as swapping to the hard drive: network data is swapped back to the network, not the hard drive.

Nothing old is written to the hard drive, and nothing old is written to the network; the OS just maps the old data from and to the file it came from in the first place, unless it's been changed. If it's been changed, then that data gets imaged to wherever you put your pagefile for swap purposes.

Data from the network is not swapped to the local disk; it's mapped back and forth to the original file wherever that is, local or network, it doesn't matter. There are no extraneous writes due to swapping, it's just mapping.

The local disk is only going to be used for private addressable info (new data not already on the disk or the network).

I also agree with your other point: it doesn't make sense that the files are getting opened and loaded into memory just to copy over the network, but that's the only thing I can think of as to why he hasn't enough resources... can you think of any other reason for him to not have enough resources?

In any event he should try the app Heeter posted, and I still believe he needs a 64-bit OS for the kind of work he's doing.
 
For the 2 GB limit, take a look at (http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-). It clearly explains that an application sees memory as a 4 GB limit (a Windows limit; if they had done it properly, each application would only see its own memory and going over that 32-bit boundary would not have been a problem, which is essentially what PAE also accomplishes). 2 GB can be used by the app; 2 GB can be used by the kernel for that app.

Network data that is read into memory is DEFINITELY not swapped back to the network. It is written to the local swap.

Pseudocode:

Code:
read 1024 MB into memory
Go do other stuff, and sometimes use part of the 1024 we just read
release the 1024 MB of memory

During the "go do other stuff" the OS will most likely swap the data that is in memory to the hard drive, especially if it is not currently being used.

Old data is just mapped? No. From a network to a local disk it is an actual physical copy, like so:

Code:
loopthis:
read 1024 bytes from network
write 1024 bytes to hard drive
goto loopthis

More pseudocode of how that is implemented. That means at any one time a maximum of about 1024 bytes of RAM are needed and cleared. On most OSes it is going to be a lot more, just for the extra speed that is gained; reading 1 byte and writing 1 byte at a time is slow.

But those 1024 bytes COULD be written to swap as well.
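
For concreteness, here is a minimal C++ sketch of that buffered copy loop (the paths and the 1 MB chunk size are assumptions for illustration; this is not how Explorer's copy engine is actually implemented):

Code:
#include <fstream>
#include <vector>

int main()
{
    // Placeholder source/destination paths, purely for illustration.
    std::ifstream src("\\\\server\\share\\bigfile.iso", std::ios::binary);
    std::ofstream dst("C:\\temp\\bigfile.iso", std::ios::binary);
    if (!src || !dst)
        return 1;

    // Fixed-size buffer: only this much of the file is ever held in memory,
    // no matter how large the file is.
    std::vector<char> buffer(1024 * 1024);

    while (src) {
        src.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));
        std::streamsize got = src.gcount();
        if (got > 0)
            dst.write(&buffer[0], got);
    }
    return 0;
}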

If the OS were to try to track where a certain read call came from, and then "map" this in memory so as not to swap it, how would this be implemented on top of the memory system that exists? There is no clear way that it would cause an improvement. Also, if something has been read from the network and it is now mapped to the network, what happens if the file server goes offline and the program resumes doing whatever it was doing, expecting its cache of what it read in to still exist? The OS can't just throw it an error that it can't access that part of memory, as clearly it is that program's memory.

As for what resources he is running out of, I don't know. It is Windows Vista, which is already a bad sign. They had a major bug where they slowed down network traffic because media was playing; what other bugs still exist? They have also rewritten most of the TCP/IP stack to be adaptive. Maybe it is unable to allocate more resources for the buffering that requires, having hit resource limits Windows set at boot?

The work he is doing does not require a 64-bit OS. There were no problems handling 2 GB and bigger files in Windows XP, so why should this be any different now? I handle 2 GB files on an old Pentium 1 with 64 MB of RAM with ease.
 
For the 2 GB limit, take a look at (http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-)...

Network data that is read into memory is DEFINITELY not swapped back to the network. It is written to the local swap...

The work he is doing does not require a 64-bit OS...

I didn't read Brian's entire page, and I don't know if he wrote that before or after SP2, but it doesn't matter: it hasn't been updated to include the new information, and he should address the new way PAE is handled.

As I said before, PAE doesn't do what it used to do, and the OS can no longer map to 64 bits using physical address extension.

There were driver and security issues when PAE was being used. Some popular drivers couldn't understand the buffers and crashed; Microsoft couldn't create exceptions for those drivers, I think because they are just too pervasive (on every box). There were also security issues, I think with executing kernel code. In any event, MS also couldn't rewrite the drivers, since they weren't Microsoft's... instead, Microsoft changed the way physical address extension handles memory and registers.

Here's the documentation:


But he's misinformed if that's where you're sourcing the opinion that data isn't mapped back into the network file or network DLL. If you want to ask him to contact me, I'll give him the Microsoft sources; most are documented but some are not... however, if people like yourself are referencing Brian's work and that's where you formed your opinion, I would think he'd like to have the documentation that demonstrates otherwise.

If you're not sourcing him for that point of view on network swapping, then this is just academia for the two of us.

There's lots going on in your post; I don't know if I addressed everything below.

The OS assumes the network won't go offline. If you do go off the network and a page that is no longer available has to be swapped, THEN the write is made, to your local disk. The OS is going to try to map to the original file whenever possible; if it can't, then it will create an image locally.

The purpose of mapping to the original file is to avoid hard drive bottlenecks, and of course there is a performance advantage to doing it this way... when an image exists and there is no conflict, the OS will avoid writing a new image... if there is a conflict the OS has no choice, but that is rare.

Now, x-istance, it's self-evident: if he's running out of resources, I doubt this will happen on 64-bit... unless it's a network resource issue and not local.

Maybe that's it, but I don't think so; it looks like a local OS notice to me.

I think his operation is opening the entire file to write for some reason, but if his notice is an operating system notice, then obviously he DOES need a 64-bit processor.


Doing some deductive reasoning: the OS says he needs more resources, and I can't think of anything BUT his operation reading the file into memory. I know there are network buffers and resources associated with that, but I cannot believe that's preventing him from copying a file; I think the file has to be getting loaded into memory for him to be running out of resources.

I can't think of any other reason he can't do these copies. If there is another reason, then obviously we need to see what's using those resources and address the issue... if possible. But in any event, I think the resources he's running short of are OS-based, and 64-bit looks to me like it solves his problem.

You might be right that there is a simpler solution, but NOT if his operation includes reading the entire file into memory.

Mapping back to the .dll or original file is documented by Microsoft, as you noted in your code quote; mapping over the network is not officially documented, last time I looked. That's information that was given to me directly by the head of the memory management team at Microsoft... I don't think too many people knew about the network mapping till I published the correspondence... I remember jeh was involved with the correspondence, and while he knew of the theoretical possibility, he didn't know the OS was sophisticated enough to do it until the head of the memory management team at Microsoft said it to me.

x-istance, I had a conversation directly with the head of the memory management team at Microsoft, Landy Wang... this was quite a long email conversation which involved numerous messages.

He told me that when I get my edits in place he would try to get my paper published on the Microsoft network.

Landy had been sent my virtual memory paper by Larry Osterman (another Microsoft big wig)... Larry had read my paper and liked it, wanted it to be proofread by Landy Wang, and Landy got in touch with me to tweak it.

Here's the first correspondence; his correction is in red. There were a few tweaks he wanted me to make... the first excerpt is from my paper, Landy put in his own edits in red, and here's one of them:

perris's paper corrected by Landy Wang said:
...The OS will retrieve said information directly from the .exe or the
.dll that the information came from if it's referenced again. This is
accomplished by simply "unloading" portions of the .dll or .exe, and
reloading that portion when needed again....

(self evident, isn't it). virtual locked memory will not go to the
hard drive (and if it was demand zero like a stack or a heap it never came
from a hard drive either). same thing with network-backed files.

Swaps are mapped back to the original file, wherever that was; there is never an image duplicated anywhere unless there is some kind of OS conflict accessing that image.

There is no reason to create a second image if no conflict exists, and the OS does not. There are some rare times a DLL has a collision with the application accessing the same file at the same time; when that happens and there's a conflict of this nature, that's when the OS MIGHT voluntarily create another identical image in the pagefile, but that's a fixup for a conflict when it occurs.

Swaps are mapped back over the network if that's where the data came from and there's no conflict. Pages are mapped back to the original file, either on the network or the local disk, wherever the OS got the page in the first place; except when there are conflicts, there are no writes for existing images.

As long as the network file is still available, it is NOT written to the local file; it's not written anywhere, since it's already available and it would be a waste of hard drive time creating more items in the queue than necessary.

The conversation went on between Landy and myself, and he explained there are other times data isn't written to the pagefile even if there's no image anywhere. For instance, if the OS knows that new data will be zeroed out, it won't get imaged to the pagefile or anywhere, since it's going to be zeroed out... but again, there's no reason to write any information that already has an image unless there's a conflict; the OS knows where that image is and maps back and forth without unnecessary hard drive activity.

That's according to Landy Wang himself. Now maybe this is undocumented info, I am not sure, but that's it right there.

Old data is just mapped; there is no writing old information again. There's no reason to, and the OS does not do it.


You can ask Brian to contact me if you want, but he's wrong.
 
In a nutshell, x-istance: if an image already exists, then when a swap needs to occur, memory is just released and claimed by the new request. There is no hard drive image created; the image is already there, and the OS knows where it came from in the first place.

The writes are for information that is new and will not be zeroed out; nothing else is rewritten. There's no reason to do that, and the OS does not.
 
In a nutshell, x-istance: if an image already exists... there is no hard drive image created; the image is already there, and the OS knows where it came from in the first place...

I could absolutely be wrong, but it looks like in that context it is regarding DLLs and code being written to the pagefile, not what is read into memory by the process itself.

If I load 1024 MB worth of data into memory, then change a few bits of it, and have the program do nothing with that data afterwards, it will still have to be kept somewhere. If it were code I could understand mapping it back (DLLs, EXEs and others). I am okay with that.



Now, the Brian article (the link is not dead as you say it is; I just checked it again) explains how and why the 2 GB per-application limit exists. Since you say it is dead, I will post here what he wrote, so it is absolutely academia for both of us.

[Note from Brian Madden on March 24, 2004: Since I originally posted this article, I received some corrections from David Solomon, author of the book "Inside Windows 2000." (Thanks David!) I've since rewritten some portions of this article to incorporate his corrections.]

There seems to be a lot of confusion in the industry about what's commonly called the Windows “4GB memory limit.” When talking about performance tuning and server sizing, people are quick to mention the fact that an application on a 32-bit Windows system can only access 4GB of memory. But what exactly does this mean?

By definition, a 32-bit processor uses 32 bits to refer to the location of each byte of memory. 2^32 = 4.2 billion, which means a memory address that's 32 bits long can only refer to 4.2 billion unique locations (i.e. 4 GB).

In the 32-bit Windows world, each application has its own “virtual” 4GB memory space. (This means that each application functions as if it has a flat 4GB of memory, and the system's memory manager keeps track of memory mapping, which applications are using which memory, page file management, and so on.)

This 4GB space is evenly divided into two parts, with 2GB dedicated for kernel usage, and 2GB left for application usage. Each application gets its own 2GB, but all applications have to share the same 2GB kernel space.

This can cause problems in Terminal Server environments. On Terminal Servers with a lot of users running a lot of applications, quite a bit of information from all the users has to be crammed into the shared 2GB of kernel memory. In fact, this is why no Windows 2000-based Terminal Server can support more than about 200 users—the 2GB of kernel memory gets full—even if the server has 16GB of memory and eight 3GHz processors. This is simply an architectural limitation of 32-bit Windows.

Windows 2003 is a little bit better in that it allows you to more finely tune how the 2GB kernel memory space is used. However, you still can't escape the fact that the thousands of processes from hundreds of users will all have to share the common 2GB kernel space.

Using the /3GB (for Windows 2000) or the /4GT (for Windows 2003) boot.ini switches is even worse in Terminal Server environments because those switches change the partition between the application memory space and kernel memory space. These switches give each application 3GB of memory, which in turn only leaves 1GB for the kernel—a disaster in Terminal Server environments!

People who are unfamiliar with the real meaning behind the 4GB Windows memory limit often point out that certain versions of Windows (such as Enterprise or Datacenter editions) can actually support more than 4GB of physical memory. However, adding more than 4GB of physical memory to a server still doesn't change the fact that it's a 32-bit processor accessing a 32-bit memory space. Even when more than 4GB of memory is present, each process still has the normal 2GB virtual address space, and the kernel address space is still 2GB, just as on a normal non-PAE system.

However, systems booted /PAE can support up to 64GB physical memory. A 32-bit process can "use" large amounts of memory via AWE (address windowing extension) functions. This means that they must map views of the physical memory they allocate into their 2GB virtual address space. Essentially, they can only use 2GB of memory at a time.

Here are more details about what booting /PAE means from Chapter 7 of the book "Inside Windows 2000," by David Solomon and Mark Russinovich.

All of the Intel x86 family processors since the Pentium Pro include a memory-mapping mode called Physical Address Extension (PAE). With the proper chipset, the PAE mode allows access to up to 64 GB of physical memory. When the x86 executes in PAE mode, the memory management unit (MMU) divides virtual addresses into four fields.

The MMU still implements page directories and page tables, but a third level, the page directory pointer table, exists above them. PAE mode can address more memory than the standard translation mode not because of the extra level of translation but because PDEs and PTEs are 64-bits wide rather than 32-bits. The system represents physical addresses internally with 24 bits, which gives the x86 the ability to support a maximum of 2^(24+12) bytes, or 64 GB, of memory.

As explained in Chapter 2 , there is a special version of the core kernel image (Ntoskrnl.exe) with support for PAE called Ntkrnlpa.exe. (The multiprocessor version is called Ntkrpamp.exe.) To select this PAE-enabled kernel, you must boot with the /PAE switch in Boot.ini.

This special version of the kernel image is installed on all Windows 2000 systems, even Windows 2000 Professional systems with small memory. The reason for this is to facilitate testing. Because the PAE kernel presents 64-bit addresses to device drivers and other system code, booting /PAE even on a small memory system allows a device driver developer to test parts of their drivers with large addresses. The other relevant Boot.ini switch is /NOLOWMEM, which discards memory below 4 GB and relocates device drivers above this range, thus guaranteeing that these drivers will be presented with physical addresses greater than 32 bits.

Only Windows 2000 Advanced Server and Windows 2000 Datacenter Server are required to support more than 4 GB of physical memory. (See Table 2-2.) Using the AWE Win32 functions, 32bit user processes can allocate and control large amounts of physical memory on these systems.


If he is running out of resources because of the 2 GB per-application memory limit, a move to a 64-bit OS will not make a difference, as that is a limit set by Microsoft Windows.
 
I could absolutely be wrong, but it looks like in that context it is regarding DLLs and code being written to the pagefile, not what is read into memory by the process itself...

If he is running out of resources because of the 2 GB per-application memory limit, a move to a 64-bit OS will not make a difference, as that is a limit set by Microsoft Windows.

Well, you created a caveat where there is no image: if you change anything in a DLL, that entire DLL is new, unless you save it and load your work from that saved image.

If your change is not saved and you're working directly from that source, it's going to have to get a swappable image, and it won't be the original file; it will go to the pagefile, wherever you have that configured... you could even map your pagefile to the network, I might add.

That's the very point of the pagefile, and it's the only memory management point of the pagefile: existing images don't get written to the pagefile unless there's a conflict and the original file is not accessible.

As far as your point on the 64-bit OS: I don't think the 2 GB per-process limit is the same on 64-bit. I could be wrong; I don't have a 64-bit OS loaded.

And yeah, I got the link to load and edited my post.
 
So I did some simple tests. If what you say is true, and a file loaded into memory by a program will not be written to the pagefile but will instead be mapped back to where it came from, then I should not be seeing the attached screenshot: 900 MB of "memory" used (pagefile included) on a VMware VM limited to 512 MB of RAM, with Windows telling me it needs a bigger swap space to hold it all.

This is my setup:

A file server named keyhole has a share named Movies, in which I created a 1.4 GB base64-encoded file. This share is mapped as Z:. The file is called "file". Simple.

I created a new Visual C++ project, and basically wrote the following pseudocode:

Code:
read entire file into memory

Once read into memory, pause so one may remove network cable

Now let the user examine what is stored in memory and what is not stored in memory by typing in an offset to show what is in memory from, and what length

Now, in that screenshot my code has not even made it to the "once read into memory" part, since Windows XP is trying to resize its pagefile so it can store all the data I just loaded into memory for retrieval later on (instead of "mapping" it back to the file on the network share).

I will post the code as soon as Windows XP in my VMware VM stabilises and is usable again. (Windows is dog slow while paging.)

Am I missing something?
 

Attachments

  • Picture 1.png (102.8 KB)
Well, you created a caveat where there is no image: if you change anything in a DLL, that entire DLL is going to have to get a swappable image, and it won't be the original file...

As far as your point on the 64-bit OS: I don't think the 2 GB per-process limit is the same on 64-bit...

I will let you respond to my other post in this thread with regard to non-modified data and swapping it out to the pagefile. I can understand executable code not being swapped out, but he is transferring data over the network, so if it is being read into memory entirely, that is NOT executable code, and as such it won't get mapped back to the original location, as can be seen from my example.

Windows Vista on 64-bit still has this same problem. (I just ran a test: allocating over 2 GB of RAM causes the app to fail the allocation it requested. Anything just smaller than 2 GB is not a problem.)
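
For reference, a minimal sketch of that kind of allocation test (the exact sizes and the use of operator new are my assumptions, not the original test code):

Code:
#include <iostream>
#include <new>
#include <cstddef>

int main()
{
    // Try one allocation just over 2 GB and one just under it.
    // In a 32-bit process the first is expected to fail, since the
    // process only has about 2 GB of user-mode address space.
    const std::size_t sizes[] = { 2200u * 1024 * 1024, 1800u * 1024 * 1024 };

    for (int i = 0; i < 2; ++i) {
        char* block = new (std::nothrow) char[sizes[i]];
        std::cout << sizes[i] / (1024 * 1024) << " MB allocation "
                  << (block != 0 ? "succeeded" : "failed") << std::endl;
        delete[] block;   // deleting a null pointer is harmless
    }
    return 0;
}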
 
Code:
#include <iostream>
#include <string>
#include <fstream>
#include <limits>
#include <stdexcept> // for std::out_of_range


int main()
{
	std::string pathname;
	std::cout << "Gimme a path: " << std::flush;
	std::getline(std::cin, pathname);
	
	std::ifstream myfile;
	myfile.open(pathname.c_str(), std::ios::in);
	
	std::string * contents = new std::string();
	std::string temp;
	
	// This is going to take a while. Read the entire file contents into a std::string
	
	while (myfile >> temp) {
		contents->append(temp);
		//std::cout << "String length/bytes: " << contents->length() << std::endl;
	}
	// We need a pause here
	std::cout << "Read data into memory. Disconnect network cable." << std::endl;
    std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

	while (1) {
		std::cout << "Enter a starting range, and the amount you want to examine: (1024 512)" << std::endl;
		int start, length;
		std::cin >> start >> length;
	
		try {
		for (int i = 0; i < length; i++)
			std::cout << contents->at(start + i);
		} catch (const std::out_of_range&) {

		}
		
		std::cout << std::endl;
	}
	
	return 0;
}

I have attached the Visual C++ project that I have created.
 

Attachments

  • MemoryPagefile.zip (6.9 KB)
I can understand executable code not being swapped out, but he is transferring data over the network, so if it is being read into memory entirely, that is NOT executable code, and as such it won't get mapped back to the original location, as can be seen from my example...


It's not just executables that get mapped back to the original file; no file gets a new image for swapping unless it's been modified or has an access conflict.

The OS knows where the file came from and maps right back to it.

I might not be making myself clear here, though.

If he is saving a file, it is already creating a local hard drive image, and obviously there's going to be writing for that. Also, copying a file is probably giving the OS the impression that it is new information, and there might be pagefile mapping because the OS doesn't consider the original file accessible.

I'm going to have to do some research on the 64-bit OS. I see what you are saying, but I thought the very purpose of 64-bit was to create more available memory per process.
 
It's not just executables that get mapped back to the original file; no file gets a new image for swapping unless it's been modified or has an access conflict... the OS knows where the file came from and maps right back to it...

If he is saving a file, it is already creating a local hard drive image... there might be pagefile mapping because the OS doesn't consider the original file accessible...

Sure, that is what I have been saying from the start. There is no way he is going to run out of resources unless Windows Vista is first copying the ENTIRE file into memory. Copying a file to the local hard drive does indeed not require it to pass through the pagefile.

As for the OS mapping it back to a file, my example shows clear as day that this does not happen. The OS knows what file the data came from, and it knows what reads were made to get it, yet it still really copies the data from the hard drive into memory and, once memory runs out, into the pagefile. Prove to me that this is not happening; clearly I have written a program that fails to do what you say it would do.

I'm going to have to do some research on the 64-bit OS. I see what you are saying, but I thought the very purpose of 64-bit was to create more available memory per process.

No; 64-bit allows for faster math, bigger programs, and addressing more memory. However, the per-process limit is one that is still set by Windows.
 
As for the OS mapping it back to a file, my example shows clear as day that this does not happen... it still really copies the data from the hard drive into memory and, once memory runs out, into the pagefile. Prove to me that this is not happening...

No; 64-bit allows for faster math, bigger programs, and addressing more memory. However, the per-process limit is one that is still set by Windows.
Memory that runs out does not get mapped into the pagefile; it gets mapped back to where it came from, unless the OS thinks that file is not accessible, there is some kind of conflict accessing the file, or the file is marked as "dirty" due to new code or data.

Now, obviously the file you are experimenting with is getting marked "dirty". Maybe it's generating code, timestamps, something; maybe it's acquiring code from what you are doing to it. But if a file is not marked "dirty", the mapping remains where it came from in the first place... it only gets mapped to the pagefile when the OS sees the dirty flag.

That "dirty" flag might be the case if you're copying a file: the OS might think that file is now fluid, consider it private writable, and mark it dirty... I don't know. What I DO know is that files that are constant and not marked dirty do not get mapped to the pagefile; they get mapped back to where they came from, and this includes the network.

Files are not mapped to the pagefile until the OS sees the dirty flag or there is a file access conflict.

If you want to do a little research for yourself, here is a very nice article describing the principles of how virtual memory is reclaimed. Microsoft didn't invent the idea of mapping back to the original file; they just use principles that were around long before NT.

Read that article, x; someone like you is going to have a great read... here's a snippet, on point:

The fundamental advantage of direct addressability is that information copying is no longer mandatory. Since all instructions and data items in the system are processor-addressable, duplication of procedures and data is unnecessary. This means, for example, that core images of programs need not be prepared by loading and binding together copies of procedures before execution; instead, the original procedures may be used directly in a computation. Also, partial copies of data files need not be read, via requests to an I/O system, into core buffers for subsequent use and then returned, by means of another I/O request, to their original locations; instead the central processor executing a computation can directly address just those required data items in the original version of the file.
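
To make the mapping idea concrete, here is a minimal Win32 sketch of a read-only, file-backed mapping (the UNC path is a placeholder; this only illustrates the mechanism the snippet above describes, not what Explorer's copy engine or standard C++ stream reads do). Pages of such a view are backed by the original file, so clean pages can simply be discarded and re-read from the file rather than written to the pagefile.

Code:
#include <windows.h>
#include <iostream>

int main()
{
    // Placeholder path; a local path or a UNC path to a network share both work.
    const wchar_t* path = L"\\\\server\\share\\bigfile.bin";

    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    // A read-only, file-backed section: its pages are backed by the file itself,
    // so under memory pressure they can be discarded and re-read later,
    // instead of being written to the pagefile.
    HANDLE section = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (section == NULL) { CloseHandle(file); return 1; }

    const char* view = static_cast<const char*>(
        MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0));
    if (view != NULL) {
        std::cout << "First byte of the mapped file: "
                  << static_cast<int>(view[0]) << std::endl;
        UnmapViewOfFile(view);
    }

    CloseHandle(section);
    CloseHandle(file);
    return 0;
}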

I am still doing research on 64-bit, because theoretically each process on x64 can have up to 8 TB of address space (7 TB on Itanium) if the process is written for that.

I would be amazed if Microsoft restricted that to 2 GB per process.
 
http://here/

Is not a valid URL I can visit to read anything.

The file is hosted on a read-only share, so time-stamps don't matter.
 
