
ROFL, Executive Software, then and now...

Son Goku

No lover of dogma
Well, I was going to title the thread "ROFL, so much for PR", but thought the actual contents might get lost in such a subject heading. Basically, I received an email about Diskeeper 10, and some of the great new features... Wouldn't you know, one of those features is listed as:

* NEW I-FAAST™ (Intelligent File Access Acceleration Sequencing Technology) in Diskeeper Professional Premier Edition boosts access speeds for the most commonly accessed files.

Now why do I mention this? Because it wasn't too many years ago that Diskeeper didn't include this (as some other defragmentation packages did, like Norton SpeedDisk), and Executive Software dismissed disk optimization as a waste of time. Ironic, thinking back, that it's now been bundled and touted as a feature :D

This is their old argument on this:


by Lance Jensen, Executive Software Technical Support Representative

Disk optimization has been around for a number of years, on a number of different computer platforms. In theory, it makes a lot of sense: You move certain frequently-accessed files closer to certain positions on the disk, thereby reducing head-seek time. Your performance SHOULD increase.

But how does optimization really work? Why doesn't Diskeeper
"optimize"? And, beyond that, is optimization a real solution with
NTFS? The answer may surprise you.

First, what exactly is disk optimization? It is an attempt to speed
up file access by forcing certain files to be permanently located in
certain positions on the disk. The idea is to accelerate file access
even when all the files are contiguous and all the free space is
grouped together. The theory goes that if you put the most frequently accessed files in the middle of the disk, the disk heads will generally have to travel a shorter distance than if these files were located randomly around the disk.

Believe it or not, there are some major holes in this theory.

Hole number one: Extensive analysis of real-world computer sites
shows that it is not commonplace for entire files to be accessed all
at once. It is far more common for only a few clusters of a file to
be accessed at a time. Consider a database application, for example -
user applications rarely, if ever, search or update the entire
database. They access only the particular records desired. Thus,
placing the entire database in the middle of a disk is wasteful at
best and possibly destructive as far as performance is concerned.
Further, on an NTFS device, the smallest files are stored entirely
within the MFT; instead of a pointer to the file, the file itself is
present. There is no way they can be optimized or even moved.

Hole number two: Consider the typical server environment. Dozens or even hundreds of interactive users might be accessing the served disks at any moment, running who knows what applications, accessing innumerable files willy-nilly in every conceivable part of a disk. How can one even hope to guess where the disk's read-write head might be at any given time? With this extremely random mode of operation, how can one state flatly that positioning such-and-such a file at such-and-such an exact location will reduce disk access times? From examining this, it would seem that file positioning is as likely to worsen system performance as to improve it. Even if the two conditions balance out at zero, the overhead involved gives you a net loss.

Hole number three: Optimization products developed for NT, when and if developed, will most likely force files to a specific position on
the disk by specifying exact logical cluster numbers, as this would
seem to make the most sense (this is how it's been done by other
products for other file systems). But when you force a file to a
specific position on the disk by specifying exact logical cluster
numbers, how do you know where it really is? You have to take into
account the difference between logical cluster numbers (LCNs) and
physical cluster numbers (PCNs). These two are not the same thing. LCNs are assigned to PCNs by the disk's controller. Disks often have more physical clusters than logical clusters. The LCNs are assigned to most of the physical clusters and the remainder are used as spares and for maintenance purposes. Magnetic disks are far from perfect and disk clusters sometimes "go bad." In fact, it is a rarity for a magnetic disk to leave the manufacturer without some bad clusters. When the disk is formatted, the bad clusters are detected and "revectored" to spares. Revectored means that the LCN assigned to that physical cluster is reassigned to some other physical cluster. Windows NT will also do this revectoring on the fly while your disk is in use. The new cluster after revectoring might be on the same track and physically close to the original, but then again it might not. Thus, not all LCNs correspond to the physical cluster of the same number, and two consecutive LCNs may actually be widely separated on the disk.
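The LCN-to-PCN revectoring described above can be pictured with a toy mapping (the cluster counts and spare positions here are made up for illustration; real controllers manage this in firmware, and NT does it on the fly):

```python
# Toy model of LCN-to-PCN revectoring (illustrative numbers only).
SPARE_PCNS = [10, 11]                         # physical clusters held back as spares
lcn_to_pcn = {lcn: lcn for lcn in range(10)}  # initial 1:1 logical-to-physical mapping

def revector(bad_lcn):
    """Remap a logical cluster whose physical cluster has gone bad."""
    lcn_to_pcn[bad_lcn] = SPARE_PCNS.pop(0)   # reassign the LCN to the next spare

revector(3)  # physical cluster 3 goes bad; LCN 3 now lives at PCN 10
# Two consecutive LCNs can end up physically far apart on the platter:
print(lcn_to_pcn[2], lcn_to_pcn[3])  # 2 10
```

This is exactly why "put the file at LCN x" does not guarantee a physical position: the mapping can change underneath the optimizer.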

Hole number four: What if you have more than one partition on a disk? If two partitions of the same size reside on a single disk, one
"middle" will be 1/4 of the way out from the first LCN, and the other
"middle" will be 1/4 of the way in from the last LCN. If users are
accessing BOTH partitions, optimizing them will *guarantee* more head motion.
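The two-middles arithmetic here is easy to check with a quick sketch (the cluster counts are hypothetical, chosen only to make the fractions visible):

```python
# Two equal partitions on one 1000-cluster disk (illustrative numbers).
part1 = (0, 499)    # first partition's LCN range
part2 = (500, 999)  # second partition's LCN range

mid1 = (part1[0] + part1[1]) // 2  # 249: about 1/4 of the way into the disk
mid2 = (part2[0] + part2[1]) // 2  # 749: about 3/4 of the way into the disk

# Optimizing both partitions parks hot files near LCN 249 and LCN 749,
# so serving users on both partitions keeps the head shuttling between them.
print(mid1, mid2)  # 249 749
```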

Also, many disks have multiple platters on a single spindle. If you
have two platters, where is the middle? It occupies the outermost
track of one platter and the innermost track of the next platter.
The two tracks could hardly be farther apart!

Hole number five: The middle of the disk is the point halfway between LCN zero (the "beginning" of the disk) and the highest LCN on that disk volume (the "end" of the disk). Right?

Well, maybe not. We have already seen that LCNs do not necessarily correspond to the physical disk block of the same number. But what about a multi-spindle disk (one with two or more sets of platters rotating on separate spindles)? There are several different types of multi-spindle disks. Besides the common volume sets and stripesets, there are also disks that use multiple spindles for speed and reliability yet appear to the operating system as a single disk drive. Where is the "middle" of such a disk? I think you will agree that, while the location of the apparent middle can be calculated, the point accessed in the shortest average time is certainly not the point halfway between LCN zero and the last LCN. This halfway point could be on the outermost track of one platter or on the innermost track of another - not on the middle track of either one. Such disk volumes actually have several "middles" when speaking in terms of access times.

Hole number six: With regular defragmentation, a defragmenter such as Diskeeper needs to relocate only a tiny percentage of the files on a disk; perhaps even less than one percent. "Optimization" requires
moving virtually all the files on the disk, every time you optimize.
Moving 100 times as many files gives you 100 times the opportunity for error and 100 times the overhead. Is the result worth the risk and the cost?

Hole number seven: What exactly is the cost of optimizing a disk and what do you get for it? The costs of fragmentation are enormous. A file fragmented into two pieces can take twice as long to access as a contiguous file. A three-piece file can take three times as long, and so on. Some files fragment into hundreds of pieces in a few days' use. Imagine the performance cost of 100 disk accesses where only one would do! Defragmentation can return a very substantial portion of your system to productive use.
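Taking the article's own ballpark of roughly one disk access per fragment, the cost of fragmentation scales linearly with fragment count; a minimal sketch:

```python
SEEK_MS = 10  # the article's ballpark figure for one disk access

def access_time_ms(fragments):
    """A file in n pieces costs roughly n disk accesses to read through."""
    return fragments * SEEK_MS

print(access_time_ms(1))    # contiguous file: 10 ms
print(access_time_ms(3))    # three-piece file: 30 ms
print(access_time_ms(100))  # 100-fragment file: 1000 ms
```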

Now consider optimization. Suppose, for the sake of argument, that
disk data cluster sequencing really did correspond to physical cluster
locations and you really could determine which files are accessed most frequently and you really knew the exact sequence of head movement from file to file. By carefully analyzing the entire disk and
rearranging all the files on the disk, you could theoretically reduce
the head travel time. The theoretical maximum reduction in average
travel time is one-quarter the average head movement time, after
subtracting the time it takes to start and stop the head. If the
average access time is 10 milliseconds and 8 milliseconds of this is
head travel time, the best you can hope for is a 2 millisecond
reduction for each file that is optimized. On a faster disk, the
potential for reduction is proportionately less. And taking rotational
latency into account, your savings may be even less than that.

Each defragmented file, on the other hand, saves potentially one disk access (10 milliseconds) per fragment. That's five times the
optimization savings, even with the bare minimum level of fragmentation. With badly fragmented files, the difference is even greater.
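The comparison in the last two paragraphs works out as follows, using the article's own figures (these are illustrative assumptions from the text, not measurements):

```python
avg_access_ms = 10.0   # the article's example average access time
head_travel_ms = 8.0   # portion of that spent on head travel

# Claimed theoretical ceiling for optimization: one quarter of travel time.
max_opt_saving_ms = head_travel_ms / 4  # 2.0 ms per access

# Defragmentation can save a whole access per fragment removed.
defrag_saving_ms = avg_access_ms        # 10.0 ms per fragment

print(defrag_saving_ms / max_opt_saving_ms)  # 5.0 -- "five times the savings"
```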

On top of all that, what do you suppose it costs your system to
analyze and reposition every file on your disk? When you subtract
that from the theoretical optimization savings, it is probably COSTING you performance to "optimize" the files.

Additionally, it takes only a tiny amount of fragmentation, perhaps
only one day's normal use of your system, to undo the theoretical
benefits of optimizing file locations. While "optimization" is an
elegant concept to some, it is no substitute for defragmentation; it is unlikely to improve the performance of your system at all, and it is
more than likely to actually worsen performance in a large number of cases.

In summary, file placement for purposes of optimizing disk performance is a red herring. It is not technologically difficult to do. It is just a waste of time.


This article was excerpted from Chapter 6 of the book "Fragmentation: the Condition, the Cause, the Cure" by Craig Jensen, CEO of Executive Software. It has been modified for application to Windows NT. The complete text of the book is available at

Lance Jensen is one of our ace Tech Support reps, and has great
experience with both Windows NT and Digital's OpenVMS. He can be
reached at dknt_support@executive.com. Please feel free to write to him with questions or comments about this article.
OK, but now look at the change in their product feature set:


Intelligent File Access Acceleration Sequencing Technology (I-FAAST) improves file access and creation by up to 80% (average 10%-20%). Using specially engineered benchmarks, Diskeeper learns disk performance characteristics, then transparently monitors volumes over time for file access frequency, applying proprietary techniques to keep Diskeeper from being "fooled" by files that have only recently been accessed. During special I-FAAST defragmentation jobs, Diskeeper organizes the most commonly used files and applications to increase file access performance. Like other intelligent technologies featured in Diskeeper 10, I-FAAST can automatically adapt to changing usage patterns, so there's no need to reconfigure if the system is transferred to another user.
OK, given that Executive Software is the company that develops Diskeeper, I wonder what gives? Do they still subscribe to their old thinking on this? If so, why include the feature at all? I wonder if their PR on this subject will change, now that their software's feature set has changed? I just find this an ironic change on their part, given their former position on the subject...

Son Goku

No lover of dogma
Yeah, there have been other alternatives. I just find it ironic that for years the developing company argued against drive optimization, putting out articles such as the one I cited to explain why it's a waste of time and why they didn't include it.

But then, in an almost complete 180, they not only include it in Diskeeper 10, but take the very sort of thing they argued was a bad idea for so many years and describe it as a new feature :laugh: I was, tbh, laughing as I looked at the email :D
