Is HFS+ more prone to fragmentation than HFS?
1-Since a file can be split into more parts in HFS+ than in HFS because of the smaller block size, can this lead to more disk fragmentation, even though I have heard that MacOS manages files more efficiently under HFS+?
2-If we compare two disks with the same amount of fragmentation, one formatted HFS+ and the other HFS, which one will be faster?
3-I have heard that fragmentation under HFS+ is irrelevant; is this true?
4-Is it riskier to defragment one's hard drive than to leave it fragmented? (I have heard that disk utilities are not very reliable.)
Thanks for your help.
I don’t see why HFS+ (like FAT32 in the WinTel world) would have a tendency to fragment more than Standard HFS. Fragmentation is a “physical” phenomenon: it refers to the fact that parts of the same file are not stored in contiguous segments on the hard drive but are scattered about. Smaller blocks, in contiguous segments, are not fragmentation.
The advantage of HFS+ is that, instead of limiting a volume to a fixed number of allocation blocks (65,535), it allows a small, fixed block size (typically 4 KB). A block can only hold data from a single file; different files cannot share the same block. Under the older system, this meant lots of wasted disk space as volumes (partitions) got larger. Example: a 2 GB volume would have a block size of approximately 32 KB. If 1 KB of a file is stored in a block, about 31 KB is wasted. You can see how this could accumulate into quite a waste of disk space.
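The wasted-space (“slack”) arithmetic above can be sketched as follows. This is a simplified illustration, not the exact HFS allocation algorithm: it assumes a ~32 KB block on a 2 GB classic HFS volume (2 GB divided among at most 65,535 blocks) versus the typical 4 KB HFS+ block.

```python
def slack(file_bytes: int, block_bytes: int) -> int:
    """Bytes wasted in the last, partially filled allocation block.

    A block can hold data from only one file, so whatever is left
    over in a file's final block is unusable by any other file.
    """
    remainder = file_bytes % block_bytes
    return 0 if remainder == 0 else block_bytes - remainder

# Assumed block sizes for illustration:
HFS_BLOCK = 32 * 1024      # ~32 KB block on a 2 GB classic HFS volume
HFSPLUS_BLOCK = 4 * 1024   # typical 4 KB HFS+ block

# Storing a 1 KB file:
print(slack(1024, HFS_BLOCK))      # 31744 bytes (~31 KB wasted)
print(slack(1024, HFSPLUS_BLOCK))  # 3072 bytes (3 KB wasted)
```

Multiply that per-file slack by thousands of small files and the savings from the smaller HFS+ block size become substantial.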
As to defragmentation and disk optimization: in 16 years of doing computer support for microcomputers, I have yet to see a problem caused by that process. The important things are to verify the media before proceeding and not to interrupt the process (shut down, reboot, etc.; cancelling, if the software allows it, is O.K.) once it has begun. Since defragmentation temporarily writes data to other areas of the hard drive, often without verification (some utilities let you enable verification, but it really slows down the process), it is important that the media can accept those writes without errors. By the way, disk optimization both defragments and arranges files according to frequency of access. For example, frequently accessed system files are placed at the outer tracks, where data transfer is faster. Some utilities ask what you want to optimize for and, supposedly, arrange files to suit that use.
As to defragmentation being irrelevant with HFS+, I don’t know why this would be so, since fragmentation is fragmentation regardless of block size. Remember, old drives of 20 MB and 40 MB benefited from defragmentation, and they had a much smaller block size than 4 KB. On a defragmented drive the R/W head doesn’t have to travel all over the platter to gather bits and pieces of the same file, so data retrieval is faster than on a fragmented one.
Since the block size is smaller in HFS+, a file could become fragmented there that could not have been fragmented under HFS.
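The point above can be made concrete with a quick count. This is a hypothetical illustration using an assumed 64 KB file and the block sizes discussed earlier: each allocation block is a potential split point, so a smaller block size raises the maximum number of fragments a file can be broken into.

```python
import math

def blocks_needed(file_bytes: int, block_bytes: int) -> int:
    """Number of allocation blocks a file occupies (rounded up)."""
    return math.ceil(file_bytes / block_bytes)

# A 64 KB file on a 2 GB volume:
print(blocks_needed(64 * 1024, 32 * 1024))  # 2 blocks under classic HFS
print(blocks_needed(64 * 1024, 4 * 1024))   # 16 blocks under HFS+
```

With only 2 blocks, the file can be split into at most 2 fragments; with 16 blocks, it could in the worst case be scattered into 16 separate extents.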
I had a problem with defrag software once when a disk caching program was running under MS-DOS. The user defragged the hard disk, didn't reboot, and went on running applications. It was a bad scene. Word to the wise: reboot the system after defragging a drive.
I like defragging software and run it more often than necessary and have never had problems that I could blame on defragging.