Those files would have the highest probability of being fragmented unless you clear that space, defrag and compact the rest of the drive, and then let the temp files occupy the freshly freed space.
even that protocol wouldn't prevent the temporary internet files from fragmenting, j79
as far as the folder as a whole goes, being fragmented wouldn't present an issue as far as I can see; the data inside the temp files are the issue. if that data is fragmented across the file or the hard drive, that's gonna give a performance hit, methinks
I remember when I had the temp folder on its own partition for this same purpose, and the files inside it still got fragmented
so how does the Linux file system solve the problem?
I know you know why Windows has a fragmenting problem, j79, but let me restate it for those reading this thread who might not understand what's going on... I can't figure out how a more sophisticated file system would address the issue:
the temp files are written as the OS sees them; they aren't written contiguously in the first place. that's an internet packet thing, not an OS or file system thing;
suppose a GIF arrives. it will hardly ever arrive all at once, but let's say it does for argument's sake; even that doesn't help. that GIF, or a portion of it, gets written whenever it's the packet the OS is looking at right then. if the GIF isn't transferred contiguously over the network, it can't get written contiguously; it gets written as the packets arrive and as they come up in the queue
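Here's a toy sketch of that write-as-the-packets-arrive picture (made-up names, not any real OS code): with a naive "hand each write the next free block" allocator, two downloads whose packets arrive interleaved each end up scattered across the disk.

```python
# Toy model: the "disk" is a list of blocks, each block records which
# file owns it. Every incoming packet is written to the next free block
# immediately, with no lookahead.

disk = []  # block i holds the name of the file that owns it

def write_packet(name):
    """Allocate the next free block for this packet as it arrives."""
    disk.append(name)

# packets for a GIF and a banner arrive interleaved off the wire
for packet in ["gif", "banner", "gif", "banner", "gif"]:
    write_packet(packet)

print(disk)  # ['gif', 'banner', 'gif', 'banner', 'gif'] -- both files fragmented
```

Neither file gets a contiguous run of blocks, even though the disk started out completely empty.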
compound that fragmenting issue with the fact that the GIF might get written and accessed often, while right beside it, directly adjacent on the drive, some data gets written that's only accessed once, say a rotating banner
since that data isn't accessed but the once, its space is gonna get occupied by new data long before the often-accessed GIF gets overwritten
files that were written after the banner but are more active are still going to be there after the banner is replaced
so even if the entire drive starts out empty, the files on it get fragmented no matter what you do
I can't see how a file system can overcome this physical and practical reality
unless the file system were always defragging files as they are written
that to me is counterproductive, creating far too much hard drive activity
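For what it's worth, Linux filesystems like ext4 and XFS get most of the way to "contiguous on write" without constant defragging, using delayed allocation: writes sit in an in-memory buffer per file, and blocks are only allocated, contiguously, when the buffer is flushed. A toy sketch (same made-up block model as a real filesystem would abstract, not real kernel code):

```python
# Toy model of delayed allocation: packets accumulate in RAM per file,
# and disk blocks are only assigned -- as one contiguous extent -- at
# flush time, once the file's full size is known.

disk = []      # block i holds the name of the file that owns it
buffers = {}   # file name -> packets still sitting in RAM

def write_packet(name, data):
    """Buffer the packet in memory; no disk blocks allocated yet."""
    buffers.setdefault(name, []).append(data)

def flush(name):
    """Allocate one contiguous extent (start block, length) and write it out."""
    extent = (len(disk), len(buffers[name]))
    disk.extend(name for _ in buffers.pop(name))
    return extent

# the same interleaved arrival order as before
for name in ["gif", "banner", "gif", "banner", "gif"]:
    write_packet(name, b"...")

gif_extent = flush("gif")
banner_extent = flush("banner")
print(gif_extent, banner_extent)  # (0, 3) (3, 2)
print(disk)  # ['gif', 'gif', 'gif', 'banner', 'banner'] -- each file contiguous
```

Same arrival order as the fragmented case, but each file lands in one unbroken run, because allocation was deferred until the writes were done rather than defragged after the fact.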
file systems and OSes are sophisticated enough that they know where the fragments are long before they seek; the blocks are fetched long before they need to be read, so typically there's not a lot of performance lost to fragmented files
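That "knows where the fragments are before it seeks" part can be pictured like this (hypothetical extent map, invented numbers): the file's metadata lists every fragment up front, so the full read schedule exists before the first seek, and reads can be issued ahead of time.

```python
# Hypothetical extent map for one fragmented file:
# each entry is (start block, length in blocks).
extent_map = [(40, 3), (7, 2), (90, 1)]

def blocks_to_read(extents):
    """Expand the extent map into the complete, ordered list of blocks.

    The whole schedule is known before the first seek, so the drive never
    has to finish one fragment to discover where the next one lives.
    """
    return [block
            for start, length in extents
            for block in range(start, start + length)]

print(blocks_to_read(extent_map))  # [40, 41, 42, 7, 8, 90]
```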
so there's the quick lesson for everyone else, I know you knew all this
but you are saying the issues, however slight, don't even exist on the Linux file system?
are you also saying there isn't even a defrag program for Linux, since files don't get fragmented?
and if so, why wouldn't Microsoft steal that technology?