I think it depends on what it's going to be used for; the cost difference may not be worth it. And, correct me if I'm wrong, doesn't the cache really only impact performance if the transfer is occurring between two physical disks?
Any size disk cache will only help if the data you want is in the cache. That means that large programs and data (like game textures, video editing, etc) will not see a benefit because they are often bigger than the tiny 16 MB cache.
You will see a benefit if you frequently work with many different, smaller programs and files open.
As for Tomshardware, I use them constantly. Of course I ignore their commentary, which usually borders on the moronic, check their test conditions and assumptions closely, and just look at the raw data.
Cache will help any time the file is in cache. Seek time is as important as the RPM. RPM only determines how fast you can read/write contiguous data to disk. Seek time is how long it takes to move the head to the correct track, which is usually significant for small files scattered across the disk and for fragmented files.
Wrong. If the data is not in the cache when the request comes in, the servo has to slew the head to the data location (seek time), and then the data has to be read from the disk surface at the maximum bits per second allowed by the encoding density and the angular head velocity (transfer rate).
A data cache miss means you might as well not have a cache.
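To put rough numbers on what a miss costs, here's a back-of-the-envelope sketch. The seek time, RPM, and transfer rate below are illustrative figures I'm assuming for a 7200 RPM ATA drive of that era, not measurements of any particular model:

```python
def access_time_ms(file_kb, seek_ms=8.5, rpm=7200, transfer_mb_s=60):
    """Rough service time for one cache-miss request:
    seek + average rotational latency + media transfer.
    All drive parameters are assumed round numbers."""
    rotational_ms = 0.5 * 60_000 / rpm            # half a revolution on average
    transfer_ms = file_kb / 1024 / transfer_mb_s * 1000
    return seek_ms + rotational_ms + transfer_ms

# 4 KB file: almost all the time is mechanical (seek + rotation)
print(access_time_ms(4))
# 64 MB file: transfer dominates, so RPM/density matter, not seek
print(access_time_ms(64 * 1024))
```

For the small file the mechanical overhead is over 99% of the total, which is why seek time, not cache size, dominates scattered small-file work.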
A read cache is only of value with large files, where the drive assumes you will want more of the file and preloads it into cache, or with many small files that you use repeatedly and that remain in cache. Heavy disk fragmentation can also cripple the efficiency of a cache by requiring frequent seek operations.
A write cache is more useful. It allows the CPU to dump main memory to the HD cache while the seek is occurring, and at a rate faster than the transfer rate the disk RPM allows. Again, large files will overflow the cache and see no benefit.
Whether a cache is of value depends on the application, disk fragmentation level and file sizes you handle.
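A quick sketch of why write-back caching helps bursts but not sustained writes. The bus and platter speeds here are assumed round numbers, and the model ignores seeks; it only shows how long the host is held up:

```python
def write_time_ms(burst_mb, cache_mb=16, bus_mb_s=100, disk_mb_s=55):
    """Time until the host can move on: a write-back cache absorbs up to
    cache_mb at bus speed, and any overflow drains at platter speed.
    Speeds and sizes are illustrative assumptions."""
    cached = min(burst_mb, cache_mb)
    spilled = burst_mb - cached
    return cached / bus_mb_s * 1000 + spilled / disk_mb_s * 1000

print(write_time_ms(8))     # fits in cache: host only waits on the bus
print(write_time_ms(256))   # overflows: mostly platter-limited anyway
```

The 8 MB burst completes at bus speed, while the 256 MB write spends nearly all its time at the platter rate, exactly the overflow case described above.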
Yes, file size is exactly what I was thinking of as important. If one is working with a small 1 KB text file (let's say in DOS, to get Windows out of the picture), then 8 vs. 16 MB of cache isn't going to make much of a difference. If one is in a situation in which the data set could fit in a 16 MB cache but not an 8 MB cache, then increasing the cache size will help increase cache hits.
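A toy LRU simulation makes that point concrete. The block size, random access pattern, and 12 MB working set are my assumptions for illustration, not real drive behavior:

```python
from collections import OrderedDict
import random

def lru_hit_rate(cache_mb, working_set_mb, block_kb=64, accesses=30_000, seed=2):
    """Hit rate of uniform random block accesses through an LRU cache."""
    rng = random.Random(seed)
    capacity = cache_mb * 1024 // block_kb      # cache slots, in blocks
    blocks = working_set_mb * 1024 // block_kb  # working-set size, in blocks
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        b = rng.randrange(blocks)
        if b in cache:
            hits += 1
            cache.move_to_end(b)                # mark as most recently used
        else:
            cache[b] = True
            if len(cache) > capacity:
                cache.popitem(last=False)       # evict least recently used
    return hits / accesses

# A 12 MB working set spills out of an 8 MB cache but fits in a 16 MB one.
print(round(lru_hit_rate(8, 12), 2), round(lru_hit_rate(16, 12), 2))
```

With the working set fully resident in the 16 MB cache, nearly every access after warm-up is a hit; the 8 MB cache misses roughly a third of the time.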
Generally speaking, and without looking at all the specifics of data access, I would expect there to be a point where a database server could continue to make use of the bigger cache, while many home-user applications might begin to see diminishing returns wrt performance from increasing the size of the thing. Database servers are also looking at a large number of records (themselves a bunch of table entries related through a foreign key and whatever SQL statement was issued for the data retrieval), with each record being much smaller. But with possibly thousands of people hitting an Oracle server and its disk array...
As to Tom's Hardware, I stopped going there for the most part when Tom Pabst had that whole spat with Brian Hooke while Hooke was still working at Id Software, and then the eight or so versions of apologies Pabst made, the first few not sounding very apologetic at all.
To make a long story short, Quake 3 was in beta at the time, and Tom used the timedemo from the beta to compare the TNT2 vs. the Voodoo 3. Tom drew conclusions from the timedemo, at which point Hooke, as an Id Software programmer, said that the timedemo in the beta was broken and didn't yet produce reliable benchmark scores. It was also noticeable: in some parts of the beta, the timedemo just sat in a corner looking at a bare wall (aka a blank screen) and kept taking benchmark readings on it.
Tom pretty much dismissed all Hooke was saying, and came off as arrogant and derogatory, insisting that his benchmark, and therefore his roundup of the cards, was correct. The spat between Tom Pabst and Id Software escalated, and in the end it came out on the authority of John Carmack himself that the beta code for the thing was busted and still needed more work, just as one of Carmack's fellow programmers had been trying to tell him, before Id Software would be willing to consider their own timedemo reliable for use. Up to that point, the timedemo wasn't considered the most important part of their software to bug-fix, I would imagine. At this point Tom Pabst was becoming increasingly apologetic, but was also trying to "save face" rather than just admit straight out that he was wrong.
It ended with Brian Hooke making sure the code got corrected and then running the benchmarks himself, with Id Software releasing their own set of "official" benchmarks for the cards Tom Pabst was testing, based on the then-corrected (to Id's programmers' satisfaction), still-beta code. It also ended with me writing Tom Pabst off as a reliable source of information for the most part, and pretty much ceasing the visits I used to make to his site.
LeeJend - You keep forgetting something about the cache on the drive itself: it's both read-ahead and write-back. It will improve data access speed at all times, and the bigger it is, the more it will improve it. It's a very rare time when the drive doesn't have your desired data in cache.
Take a look at storagereview.com; they have a review of Maxtor's 16 MB cache drive and compare it with an 8 MB and a 2 MB model, as well as a Raptor from WD.
Tell me then that a 16 MB cache does not offer greater sustained read speeds and a greater burst rate.
Cache 101: if the data is not in the cache, there is no improvement. CPU or HD makes no difference. The advantage is there only if the data does not have to be pulled off the drive. CPU cache sizes have stopped growing, and in some cases have been reduced, because added size offers no added value; the hit/refill ratio becomes ineffective.
Even worse with the HD caches is the issue of disk fragmentation. Large files you are working on will tend to get scattered over the disk causing frequent head seeks that deplete the cache.
Benchmarks can be written to make a cache look great or to make it look useless. The Sandra HD benchmark used to show this nicely by having different file size tests in the benchmark. When you used large files, the cache advantage went to almost 0. This may no longer show up in Sandra, because the large file size may now be smaller than the 16 MB cache.
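You can see the same effect with a simple blended-throughput model. The burst and sustained rates are made-up round numbers, and the model optimistically assumes the first 16 MB of the file is already sitting in the drive's read-ahead cache:

```python
def effective_read_mb_s(file_mb, cache_mb=16, burst_mb_s=100, sustained_mb_s=55):
    """Blended read throughput when the first cache_mb comes from the
    drive's cache at burst speed (best case) and the rest comes off
    the platters at the sustained rate. All speeds are assumptions."""
    cached = min(file_mb, cache_mb)
    time_s = cached / burst_mb_s + (file_mb - cached) / sustained_mb_s
    return file_mb / time_s

print(round(effective_read_mb_s(8), 1))     # whole file in cache: burst speed
print(round(effective_read_mb_s(1024), 1))  # big file: ~sustained speed
```

A benchmark using files under 16 MB reports pure burst speed, while a 1 GB test file lands within a fraction of the cacheless sustained rate, which is how the same drive can look great or ordinary depending on the test file size.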
Ep, glad to see you come back and tidy up... I did want to ask a one-day favor. I want to enhance my resume, and was hoping you could make me administrator for a day. If so, take me right off afterwards, since I won't be here to do anything and don't know the slightest about the board, but it would be nice putting "served administrator osnn" on it. If you can do it, THANKS.