insufficient system resources exist to complete the requested service

Discussion in 'Windows Desktop Systems' started by fermulator, Dec 4, 2007.

  1. fermulator

    fermulator OSNN One Post Wonder

    Messages:
    1
    Good evening!

    I'm hoping someone might be able to assist me with this rather annoying Windows bug.

    Operating System: Windows Vista Business

    Whenever we try to copy semi-large files over the network (say, over 2 GB), we receive the following error:

    Code:
    Error 0x800705AA: Insufficient system resources exist to complete the requested service.
    Options: Try Again, Skip, Cancel

    Try again = same result


    This occurs with any large file we attempt to copy between two network locations (i.e. between two file servers).

    Any assistance and/or direction would be greatly appreciated.

    Thanks so much.
     
  2. Heeter

    Heeter Overclocked Like A Mother

    Messages:
    2,732
    Welcome to OSNN,

    Check to see if there are any processes in Task Manager that are resource hogs; maybe even shut off a couple of them to see if that improves things.

    You might want to try Microsoft's Robocopy. It is a heavy-duty file transfer manager built into Windows Server and is designed to transfer large files across networks.

    Download Here as part of a Resource Kit.



    Heeter
     
    Last edited: Dec 4, 2007
  3. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    I believe you are going to need a 64-bit operating system if you want to open files that big consistently.

    32-bit runs out of address space right around the 3 GB mark, so if anything else is running on the box, a 2 GB file might max it out.
     
    Last edited: Dec 4, 2007
  4. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    Sorry Perris. Wrong.

    I can copy a 4 GB DVD ISO image around just fine over the network using Windows XP.

    The 3 GB "limit" is set by the BIOS, by how it handles shadowing, and by how Windows XP maps certain cards into memory (video cards, for example).

    You can install 4 GB, but only about 3.75 GB or so will be addressable (it also depends heavily on the chipset/BIOS). The exact amount also depends on how much RAM is on your video card (it gets memory-mapped, so that eats a chunk) and on the BIOS (which gets memory-mapped too and eats another chunk).

    http://www.pagetable.com/?p=29 (64 bit is a lot)

    The current implementations of almost all CPUs allow up to 48 bits of addressable memory space. It is an architectural design limit in Windows that no more than 4 GB can be used/referenced, and unless PAE is used and the system is booted with NOLOWMEM, the OS will keep memory-mapping over real physical RAM, because device drivers will otherwise fail to work (e.g. they expect a 32-bit pointer and suddenly get a whole 64 bits to handle). (http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-)

    However, file copying has absolutely nothing to do with this. NTFS has a file size limit (http://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits) of 16 EiB (http://en.wikipedia.org/wiki/Exbibyte).
     
  5. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    ftp://download.intel.com/design/motherbd/bl/C6859901.pdf

    Scroll down to page 49 (in the lower right-hand corner) and look at Figure 14. It explains why you lose memory on 32-bit systems without PAE enabled and without a motherboard that can relocate those memory addresses into the 64-bit range. (BTW, more than 4 GB can be addressed using PAE on 32-bit (48 bits total addressable memory), but the BIOS has to cooperate and move stuff into higher memory.)

    Edit:
    http://www.itwriting.com/blog/?postid=152

    Another person saying exactly that.
     
  6. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    Sorry X-Istence, but I'm not wrong. You aren't wrong either; your explanation is only a portion of the issue.

    As far as copying files goes, I am assuming they are opening the files, not just copying them, or that somehow the file is getting opened in order to copy it over the network.

    I think he should try the application Heeter posted.

    The 3 GB threshold is not only set in the BIOS; it's also set by Microsoft (3.2 GB, I believe).

    There are physical limitations due to simple math and whatever else is on the box requiring address allocation, and there are artificial limitations set by Microsoft to compensate for some badly written drivers that are too popular to ignore. Rather than blame the driver every time it had an issue, Microsoft just wrote a workaround into the OS, since they aren't allowed to rewrite those drivers.

    I understand why there is a memory hole and everything you're saying about it, but copying your ISO is different from copying files that have to be opened while copying, which I believe is what is happening here.

    While it doesn't take 2 GB of memory to write a 2 GB file (I can fill an entire hard drive on a box with 126 MB of memory), it does take 2 GB of memory to actually open a 2 GB file, and maybe that is his issue.

    He's also doing it across a network to boot, and there are additional resources associated with that; further, there are a bunch of other things running on his desktop no matter what he's doing.

    I don't know if address extension is going to be able to overcome his resource limitation. He might be able to shut some processes down or whatever, but I think if he wants to be able to open these files consistently he probably needs a 64-bit OS. There might be another reason, which I mention in my last paragraph.

    As far as the memory hole goes, there are two entirely different "memory hole" issues, and I'll explain them here as simply as possible:

    1) Some devices have their own memory, and the BIOS reserves address resources for those external memory requirements. The processor can address 4 GB of memory, and it doesn't matter where that memory is coming from, on the box or off the box, operating system or hardware: the OS is limited to 4 GB, so if you have hardware with its own gigabyte of memory, that only leaves three for the user. When you reach that finite plateau, there are addresses already allocated which cannot be claimed for your installed memory.

    2) There are some popular drivers that get buggy when addressing large amounts of memory, and Microsoft actually set aside a separate cushion for those drivers. Microsoft supposedly limits a user from addressing more than 3.2 GB in XP, and I assume that limitation continues in Vista. You can read about Microsoft's workaround for badly written drivers here.

    That cushion has been written into the OS since Service Pack 2. It didn't bother anyone at the time because very few people actually had more than a gigabyte of memory.

    So when he says "copy files over two gigs", those files might actually be reaching the artificial limit set by Microsoft all by themselves; add to that whatever else is running on his box, and he's over the limit on some or most of his files.

    Microsoft missed the boat on that one. The limitation should be driver-specific: installing the driver should set the flag at that time. The flag shouldn't be written into every OS; that's kind of ridiculous as far as I'm concerned.

    But they didn't ask me; they did it without my consultation. I wouldn't have charged them much, and it would have saved them tons of aggravation.

    He can set Physical Address Extension, but his OS is only going to address up to 3.2 GB if Microsoft extended the driver cushion to Vista.

    And as you noted, some BIOSes set a cushion for hardware incidentals, and no matter how much memory you install, the OS will not see anything above that cushion.

    Setting the BIOS to ignore that cushion doesn't do anything if he has, say, a video card with its own memory.

    If the video card shares memory, that is deducted from the operating system's available resources as well, and while you can limit the amount of memory that's shared, it still takes resources from the finite pool available.

    This kind of user will typically call for a 64-bit operating system.

    He is also using Vista, which has a few more things going on than XP, and I imagine he would run out of resources sooner on Vista.

    Where I am lost is the fact that he's running multiple-core processors. I would think he would have plenty of address resources; 4 GB per processor is what's physically available, so I think it's now a 32-bit operating system issue, not a hardware issue.

    It might be a licensing issue: Microsoft has allowed multi-core processors to be licensed as a single processor, and I think that's his real problem.

    With multiple cores he might be able to get the OS to address each processor separately, but he might need different Microsoft licensing. I don't know about that, though; I'm going to do some research and get back with that info if I can find it.
     
    Last edited: Dec 5, 2007
  7. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    32-bit processors still have the ability to address up to 48 bits using the aforementioned extensions. The BIOS settings, if available, remap the memory addresses so that PCI Express and other devices sit above that hole. So in a BIOS that supports it, the memory layout would look like this:

    3.x GB of RAM | Memory-mapped hardware | Rest of RAM available here

    Any process in Windows XP/Vista can eat up only 2 GB of RAM (including swap space), and that is a limit that has been set for years now in NT.

    When you copy a file from the network onto a local machine (drag and drop in Explorer), it will open the file, then read a certain amount of data and write a certain amount of data until it reaches the end of the file. It does not open the file, read the entire file into memory, and then write it to disk. That is why copying large files works perfectly fine in Windows XP, especially DVD ISO images, which can be WAY over the 2 GB limit.

    There is definitely not 4 GB per core available. Even if he has two cores, the scheduler will run the process on one or the other in a round-robin fashion. There is no difference: they both have the same limits, as they are both talking to the same RAM.

    It does not matter how many other things he is running: if Windows Vista is trying to first copy the entire file into memory (which IMHO is absurd and makes no sense whatsoever), it would hit the 2 GB per-process memory limit. Every process on the system has a 2 GB memory limit. If I ran 10 processes and all of them used up 1.5 GB of RAM, and I only had 3 GB of RAM, I would have about 12 GB swapped out onto the hard drive. Hence the reason virtual memory exists: we want to be able to use more "RAM" than we actually have.
     
    Last edited: Dec 9, 2007
  8. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    First, you're right about the cores: it's not 4 GB per core. As far as I'm concerned, if the OS were written to use those address resources it would extend memory availability, but it's not, and the number of cores makes no difference.

    As far as the 2 GB per process goes, I'm not sure it's only a per-process barrier; there might be a cumulative barrier as well, but I'm not clear on that.

    As far as swapping to the hard drive goes, network data is swapped back to the network, not the hard drive.

    Nothing old is written to the hard drive, and nothing old is written to the network; the OS just maps the old data back and forth to the file it came from in the first place, unless it's been changed. If it's been changed, then that data gets imaged to wherever you put your pagefile, for swap purposes.

    Data from the network is not swapped to the local disk; it's mapped back and forth to the original file wherever that is, local or network, it doesn't matter. There are no extraneous writes due to swapping; it's just mapping.

    The local disk is only going to be used for private addressable info (new data not on the disk or the network).

    I also agree with your other point: it doesn't make sense that the files are getting opened and loaded into memory just to copy them over the network, but that's the only thing I can think of as to why he hasn't enough resources. Can you think of any other reason for him not to have enough resources?

    In any event, he should try the app Heeter posted, and I still believe he needs a 64-bit OS for the kind of work he's doing.
     
    Last edited: Dec 9, 2007
  9. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    For the 2 GB limit, take a look at (http://www.brianmadden.com/content/article/The-4GB-Windows-Memory-Limit-What-does-it-really-mean-). It clearly explains that an application sees memory as a 4 GB limit (a Windows limit; if they had done it properly, each application would only see its own memory, and going over that 32-bit boundary would not have been a problem, which is essentially what PAE also accomplishes). 2 GB can be used by the app, and 2 GB can be used by the kernel for that app.

    Network data that is read into memory is DEFINITELY not swapped back to the network. It is written to the local swap.

    Pseudocode:

    Code:
    read 1024 MB into memory
    Go do other stuff, and sometimes use part of the 1024 we just read
    release the 1024 MB of memory
    
    During the "go do other stuff" the OS will most likely swap the data that is in memory to the hard drive, especially if it is not currently being used.

    Old data is just mapped? No. From a network to a local disk it is an actual physical copy, like so:

    Code:
    loopthis:
    read 1024 bytes from network
    write 1024 bytes to hard drive
    goto loopthis
    
    More pseudocode of how that is implemented. It means that at any one time only a maximum of about 1024 bytes of RAM are needed, and then cleared. On most OSes it is going to be a lot more than that, just for the extra speed that is gained; reading 1 byte and writing 1 byte at a time is slow.

    But those 1024 bytes COULD be written to swap as well.
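
    To make the idea concrete, here is a minimal, runnable C++ sketch of that kind of chunked copy (the file names and the 64 KB buffer size are just illustrative choices, not anything from the thread). However big the source file is, the copy itself only ever holds one buffer's worth of data in memory at a time:

    Code:
    #include <fstream>
    #include <iostream>
    #include <vector>
    
    int main(int argc, char* argv[])
    {
        if (argc != 3) {
            std::cerr << "usage: chunkcopy <source> <destination>" << std::endl;
            return 1;
        }
    
        std::ifstream in(argv[1], std::ios::binary);
        std::ofstream out(argv[2], std::ios::binary);
        if (!in || !out) {
            std::cerr << "could not open the files" << std::endl;
            return 1;
        }
    
        // One fixed-size buffer: this is all the memory the copy ever needs,
        // no matter how large the source file is.
        std::vector<char> buffer(64 * 1024);
    
        while (in) {
            in.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));
            std::streamsize got = in.gcount();
            if (got > 0)
                out.write(&buffer[0], got);
        }
    
        return 0;
    }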

    If the OS were to try to track where a certain read call came from, and then "map" this in memory so as to not swap it, how would that be implemented on top of the memory system that exists? There is no clear way that it would cause an improvement. Also, if something has been read from the network, and it is now mapped to the network, what happens if the file server goes offline and the program resumes doing whatever it was doing, expecting its cache of what it read in to still exist? The OS can't just throw it an error that it can't access that part of memory, as clearly it is that program's memory.

    As for what resources he is running out of, I don't know. It is Windows Vista, which is already a bad sign. They had a major bug where they slowed down network traffic because media was playing. What other bugs still exist? They have also rewritten most of the TCP/IP stack to be adaptive. Maybe it is unable to allocate more resources for the buffering that requires, because it has hit resource limits Windows set at boot?

    The work he is doing does not require a 64-bit OS. There were no problems handling 2 GB and bigger files in Windows XP, so why should this be any different now? I handle 2 GB files on an old Pentium 1 with 64 MB of RAM with ease.
     
  10. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    I didn't read Brian's entire page, and I don't know if he wrote it before or after SP2, but it doesn't matter: it hasn't been updated to include the new information, and he should address the changes in how PAE is handled.

    As I said before, PAE doesn't do what it used to do, and the OS can no longer map to 64 bits using Physical Address Extension.

    There were driver and security issues when PAE was being used: some popular drivers couldn't understand the buffers and crashed, and Microsoft couldn't create exceptions for those drivers, I think, because they are just too pervasive (on every box). There were also security issues, I think, with executing kernel code. In any event, MS also couldn't rewrite the drivers, since they weren't Microsoft's; instead, Microsoft changed the way Physical Address Extension handles memory and registers.

    Here's the documentation:


    But he's misinformed if that's where you're sourcing the opinion that data isn't mapped back to the network file or network DLL. If you want to ask him to contact me, I'll give him the Microsoft sources; most are documented, but some are not. However, if people like yourself are referencing Brian's work and that's where you formed your opinion, I would think he'd like to have the documentation that demonstrates otherwise.

    If you're not sourcing him for that point of view on network swapping, then this is just academia for the two of us.

    There's a lot going on in your post; I don't know if I've addressed everything below.

    The OS assumes the network won't go offline. If you do go off the network and a page that is no longer available has to be swapped, THEN the write is made, to your local disk. The OS is going to try to map to the original file whenever possible; if it can't, then it will create an image locally.

    The purpose of mapping to the original file is to avoid hard drive bottlenecks, and of course there is a performance advantage to doing it this way. When an image exists and there is no conflict, the OS will avoid writing a new image; if there is a conflict, the OS has no choice, but that is rare.

    Now, X-Istence, it's self-evident: if he's running out of resources, I doubt this will happen on 64-bit, unless it's a network resource issue and not a local one.

    Maybe that's it, but I don't think so; it looks like a local OS notice to me.

    I think his operation is opening the entire file in order to write it, for some reason, but if his notice is an operating system notice, then obviously he DOES need a 64-bit processor.

    Doing some deductive reasoning: the OS says he needs more resources, and I can't think of anything except that his operation is reading the file into memory. I know there are network buffers and resources associated with that, but I cannot believe that's what is preventing him from copying a file; I think the file has to be getting loaded into memory for him to be running out of resources.

    I can't think of any other reason he can't do these copies. If there is another reason, then obviously we need to see what's using those resources and address the issue, if possible. In any event, I think the resources he's running short of are OS-based, and 64-bit looks to me like it solves his problem.

    You might be right that there is a simpler solution, but NOT if his operation includes reading the entire file into memory.

    Mapping back to the DLL or original file is documented by Microsoft, as you noted in your code quote. Mapping over the network is not officially documented, last time I looked; that's information that was given to me directly by the head of the memory management team at Microsoft. I don't think too many people knew about the network mapping until I published the correspondence. I remember jeh was involved in the correspondence, and while he knew of the theoretical possibility, he didn't know the OS was sophisticated enough to do it until the head of the memory management team at Microsoft said it to me.

    X-Istence, I had a conversation directly with the head of the memory management team at Microsoft, Landy Wang. This was quite a long email conversation which involved numerous exchanges.

    He told me that when I get my edits in place, he would try to get my paper published on the Microsoft network.

    Landy had been sent my virtual memory paper by Larry Osterman (another Microsoft big wig). Larry had read my paper and liked it, and wanted it to be proofread by Landy Wang, and Landy got in touch with me to tweak it.

    Here's the first correspondence. His correction is in red; there were a few tweaks he wanted me to make. The first excerpt is from my paper; Landy put in his own edits in red, and here's one of them:

    Swaps are mapped back to the original file, wherever that was; there is never an image duplicated anywhere unless there is some kind of OS conflict accessing that image.

    There is no reason to create a second image if no conflict exists, and the OS does not. There are some rare times a DLL has a collision with the application accessing the same file at the same time; when that happens and there's a conflict of this nature, that's when the OS MIGHT voluntarily create another identical image in the pagefile, but that's a fixup for a conflict when it occurs.

    Swaps are mapped back over the network if that's where the data came from and there's no conflict. Pages are mapped back to the original file either on the network or on the local disk, wherever the OS got the page in the first place; except when there are conflicts, there are no writes for existing images.

    As long as the network file is still available, the data is NOT written to a local file; it's not written anywhere, since it's already available, and it would be a waste of hard drive time creating more items in the queue than necessary.

    The conversation went on between Landy and myself, and he explained there are other times data isn't written to the pagefile even if there's no image anywhere. For instance, if the OS knows that new data will be zeroed out, it won't get imaged to the pagefile or anywhere else, since it's going to be zeroed out. But again, there's no reason to write any information that already has an image unless there's a conflict; the OS knows where that image is and maps back and forth without unnecessary hard drive activity.

    That's according to Landy Wang himself. Now, maybe this is undocumented info, I am not sure, but that's it right there.

    Old data is just mapped; there is no writing of old information again. There's no reason to, and the OS does not do it.

    You can ask Brian to contact me if you want, but he's wrong.
     
    Last edited: Dec 9, 2007
  11. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    In a nutshell, X-Istence: if an image already exists, then when a swap needs to occur, the memory is just released and claimed by the new request. There is no hard drive image created; the image is already there, and the OS knows where it came from in the first place.

    The writes are for new information that will not be zeroed out. Nothing else is rewritten; there's no reason to do that, and the OS does not.
     
  12. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    I could absolutely be wrong, but it looks like in that context it is about DLLs and writing code to the pagefile, not about what is read into memory by the process itself.

    If I load 1024 MB worth of data into memory, then change a few bits of it, and after that have the program do nothing with that data, it will still have to be kept somewhere. If it were code, I could understand mapping it back (DLLs, EXEs and others); I am okay with that.
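
    For what it's worth, here is a minimal Win32 sketch of the two cases being argued about (the file path and buffer size are hypothetical, not from anyone's actual setup): a read-only view of a file is backed by the file itself, so its clean pages can simply be discarded and re-read, while an ordinary private buffer filled with data has no backing file, so the pagefile is the only place it can go.

    Code:
    #include <windows.h>
    #include <cstring>
    #include <iostream>
    
    int main()
    {
        // Hypothetical path; any large local or network file would do.
        HANDLE file = CreateFileW(L"\\\\server\\share\\bigfile.dat", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) {
            std::cerr << "CreateFileW failed: " << GetLastError() << std::endl;
            return 1;
        }
    
        // A read-only section backed by the file itself, not by the pagefile.
        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
        const char * view = NULL;
        if (mapping != NULL)
            view = static_cast<const char *>(MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0));
    
        if (view != NULL) {
            // Touching the view faults pages in from the file on demand.  Under
            // memory pressure those clean pages can be thrown away and re-read
            // later; they never need any pagefile space.
            volatile char first = view[0];
            (void)first;
            UnmapViewOfFile(view);
        }
        if (mapping != NULL)
            CloseHandle(mapping);
        CloseHandle(file);
    
        // By contrast, a private buffer filled with data has no file behind it,
        // so if it has to leave RAM it can only be written to the pagefile.
        char * privateBuffer = new char[64 * 1024 * 1024];
        std::memset(privateBuffer, 0xAB, 64 * 1024 * 1024);
        delete[] privateBuffer;
    
        return 0;
    }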



    Now, the Brian article (the link is not dead as you say it is; I just checked it again) explains how and why the 2 GB per-application limit exists. Since you say it is dead, I will post here what he wrote. So it is absolutely academia for both of us.


    If he is running out of resources because of the 2 GB per-application memory limit, a move to a 64-bit OS will not make a difference, as that is a limit set by Microsoft Windows.
     
  13. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    Well, you created a caveat where there is no image: if you change anything in a DLL, that entire DLL is new, unless you save it and load your work from that saved image.

    If your change is not saved and you're working directly from that source, it's going to have to get a swappable image, and that won't be the original file; it will be the pagefile, wherever you have that configured. You could even map your pagefile to the network, I might add.

    That's the very point of the pagefile, and it's the pagefile's only memory-management purpose. Existing images don't get written to the pagefile unless there's a conflict and the original file is not accessible.

    As far as your point on the 64-bit OS goes, I don't think the 2 GB per-process limit is the same on 64-bit. I could be wrong; I don't have a 64-bit OS loaded.

    And yeah, I got the link to load and edited my post.
     
    Last edited: Dec 9, 2007
  14. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    So I did some simple tests. If what you say is true, that a file loaded into memory by a program will not be written to the pagefile and will instead be mapped back to where it came from, then how do you explain the following screenshot, which shows 900 MB of "memory" used (including pagefile) on a VM limited to 512 MB of RAM in VMware, telling me it needs a bigger swap space to hold it all?

    This is my setup:

    A file server named keyhole has a share named Movies, in which I created a 1.4 GB base64-encoded file. This share is mapped as Z:. The file is called "file". Simple.

    I created a new Visual C++ project and basically wrote the following pseudocode:

    Code:
    read entire file into memory
    
    Once read into memory, pause so one may remove network cable
    
    Now let the user examine what is and is not stored in memory by typing in an offset and a length
    
    Now, in that screenshot my code has not even made it to the "once read into memory" part, since Windows XP is trying to resize its pagefile so it can store everything I just loaded into memory for retrieval later on (instead of "mapping" it back to the file on the network share).

    I will post the code as soon as Windows XP in my VMware VM stabilises and is usable again. (Windows is dog slow while paging.)

    Am I missing something?
     

    Attached Files:

  15. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    I will let you respond to my other post in this thread with regard to non-modified data and swapping it out to the pagefile. I can understand executable code not being swapped out, but he is transferring data over the network, so if it is being read into memory entirely, that is NOT executable code, and as such it won't get mapped back to the original drive location, as can be seen from my example.

    Windows Vista on 64-bit still has this same problem. (I just ran a test: allocating over 2 GB of RAM causes the app to fail the allocation it requested. Anything just smaller than 2 GB is not a problem.)
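
    For reference, here is a minimal sketch of that sort of allocation test (the 100 MB chunk size and the messages are illustrative choices; this is not the exact program used): it keeps committing memory until an allocation fails and then reports roughly how far it got.

    Code:
    #include <iostream>
    #include <new>
    #include <vector>
    
    int main()
    {
        const std::size_t chunk = 100 * 1024 * 1024;   // allocate in 100 MB pieces
        std::vector<char *> blocks;
        std::size_t total = 0;
    
        try {
            for (;;) {
                char * p = new char[chunk];
    
                // Touch every page so the memory is actually committed,
                // not just reserved as address space.
                for (std::size_t i = 0; i < chunk; i += 4096)
                    p[i] = 1;
    
                blocks.push_back(p);
                total += chunk;
                std::cout << "Committed " << (total / (1024 * 1024)) << " MB so far" << std::endl;
            }
        } catch (const std::bad_alloc &) {
            std::cout << "Allocation failed after roughly "
                      << (total / (1024 * 1024)) << " MB" << std::endl;
        }
    
        // Clean up before exiting.
        for (std::size_t i = 0; i < blocks.size(); ++i)
            delete[] blocks[i];
    
        return 0;
    }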
     
  16. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    Code:
    #include <iostream>
    #include <string>
    #include <fstream>
    #include <limits>
    #include <stdexcept>   // for std::out_of_range
    
    int main()
    {
        std::string pathname;
        std::cout << "Gimme a path: " << std::flush;
        std::getline(std::cin, pathname);
    
        std::ifstream myfile;
        myfile.open(pathname.c_str(), std::ios::in);
    
        std::string * contents = new std::string();
        std::string temp;
    
        // This is going to take a while. Read the entire file contents into a std::string.
        while (myfile >> temp) {
            contents->append(temp);
            //std::cout << "String length/bytes: " << contents->length() << std::endl;
        }
    
        // We need a pause here so the network cable can be unplugged.
        std::cout << "Read data into memory. Disconnect network cable." << std::endl;
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    
        while (true) {
            std::cout << "Enter a starting offset and the amount you want to examine: (1024 512)" << std::endl;
            int start, length;
            if (!(std::cin >> start >> length))
                break;   // stop on bad input instead of looping forever
    
            try {
                for (int i = 0; i < length; i++)
                    std::cout << contents->at(start + i);
            } catch (const std::out_of_range &) {
                // Offset/length runs past the end of the data; just ignore it.
            }
    
            std::cout << std::endl;
        }
    
        return 0;
    }
    
    I have attached the Visual C++ project that I have created.
     

    Attached Files:

    Last edited: Dec 9, 2007
  17. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york

    It's not just executables that get mapped back to the original file; no file gets a new image for swapping unless it's been modified or has an access conflict.

    The OS knows where the file came from and maps right back to it.

    I might not be making myself clear here, though.

    If he is saving a file, it is already creating a local hard drive image, and obviously there's going to be writing for that. Also, copying a file probably gives the OS the impression that it is new information, and there might be pagefile mapping because the OS doesn't consider the original file accessible.

    I'm going to have to do some research on the 64-bit OS. I see what you are saying, but I thought the very purpose of 64-bit was to make more memory available per process.
     
  18. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    Sure, that is what I have been saying from the start. There is no way he is going to run out of resources unless Windows Vista is first copying the ENTIRE file into memory. Copying a file to the local hard drive does indeed not require it to pass through the pagefile.

    As for the OS mapping it back to a file, my example shows clear as day that this does not happen. The OS knows what file the data came from, and it knows what reads were made to get a certain amount of data, yet it still physically copies the data from the hard drive into memory and, once memory runs out, into the pagefile. Prove to me that this is not happening. Clearly I have written a program that fails to do what you say it would do.

    No; 64-bit allows for faster math, bigger programs, and addressing more memory. However, the per-process limit is one that is still set by Windows.
     
  19. Perris Calderon

    Perris Calderon Moderator Staff Member Political User

    Messages:
    12,332
    Location:
    new york
    Memory that runs out does not get mapped into the pagefile; it gets mapped back to where it came from, unless the OS thinks that file is not accessible, there is some kind of conflict accessing the file, or the file is marked as "dirty" due to new code or data.

    Now, obviously the file you are experimenting with is getting marked "dirty"; maybe it's generating code, timestamps, something, or maybe it's acquiring code through what you are doing to it. But if a file is not marked "dirty", the mapping remains where it came from in the first place; it only gets mapped to the pagefile when the OS sees the dirty flag.

    Now, that "dirty" flag might be the case if you're copying a file: the OS might think that file is now fluid, consider it private and writable, and mark it dirty. I don't know. What I DO know is that files that are constant and not marked dirty do not get mapped to the pagefile; they get mapped back to where they came from, and this includes the network.

    Files are not mapped to the pagefile until the OS sees the dirty flag or there is a file access conflict.
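
    To put the "dirty" idea into concrete terms, here is a minimal Win32 sketch (the path is hypothetical, and this is only an illustration of the idea, not anything from the correspondence): a copy-on-write view of a file stays backed by the original file while its pages are clean, and it is the write that turns a page into a private, dirty copy, the kind that has to go to the pagefile if it is trimmed from memory.

    Code:
    #include <windows.h>
    #include <iostream>
    
    int main()
    {
        // Hypothetical path; any existing file opened for reading will do.
        HANDLE file = CreateFileW(L"C:\\temp\\sample.dat", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;
    
        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_WRITECOPY, 0, 0, NULL);
        char * view = NULL;
        if (mapping != NULL)
            view = static_cast<char *>(MapViewOfFile(mapping, FILE_MAP_COPY, 0, 0, 0));
    
        if (view != NULL) {
            // Reading leaves the page clean: it remains backed by the original
            // file and can simply be discarded and re-read under memory pressure.
            char c = view[0];
            (void)c;
    
            // Writing triggers copy-on-write: this page is now a private, dirty
            // copy, and if it has to be paged out it goes to the pagefile, not
            // back to the original file.
            view[0] = 'X';
    
            UnmapViewOfFile(view);
        }
    
        if (mapping != NULL)
            CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }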

    If you want to do a little research for yourself, here is a very nice article describing the principles of how virtual memory is reclaimed. Microsoft didn't invent the idea of mapping back to the original file; they just use principles that were around long before NT.

    Read that article, X; someone like you is going to have a great read. Here's a snippet, on point:

    I am still doing research on the 64-bit question, because theoretically each process on x64 can have up to 8 TB of address space (7 TB on Itanium) if the process is written for that.

    I would be amazed if Microsoft restricted that to 2 GB per process.
     
    Last edited: Dec 9, 2007
  20. X-Istence

    X-Istence * Political User

    Messages:
    6,498
    Location:
    USA
    http://here/

    Is not a valid URL I can visit to read anything.

    The file is hosted on a read-only share, so time-stamps don't matter.