I don’t see why HFS+ (roughly analogous to FAT32 in the WinTel world) would have any more tendency to fragment than Standard HFS. Fragmentation is a “physical” phenomenon: it refers to the fact that parts of the same file are not stored in contiguous segments on the hard drive but are scattered about a bit. Smaller blocks stored in contiguous segments are not fragmentation.
The advantage of HFS+ is that instead of a fixed maximum number of allocation blocks (65,535), which forced the block size to grow with the volume, the block size can stay small (typically 4 KB). A block can only hold data from a single file; different files cannot share the same block. Under the older system this meant lots of wasted disk space as volumes (partitions) got larger. Example: a 2 GB volume would have roughly a 32 KB block size, so if only 1 KB of a file is stored in a block, 31 KB is wasted. You can see how this could accumulate to quite a waste of disk space.
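To put numbers on that waste, here’s a back-of-the-envelope sketch in Python. The rounding rules are simplified assumptions for illustration, not the exact Apple driver logic:

```python
# Illustrative arithmetic only: HFS Standard capped a volume at 65,535
# allocation blocks, so block size had to scale with volume size.
# Figures here are ballpark, not exact filesystem behavior.

MAX_BLOCKS = 65_535  # HFS Standard's 16-bit allocation block limit

def hfs_block_size(volume_bytes: int) -> int:
    """Smallest multiple of 512 bytes that fits the volume in 65,535 blocks."""
    raw = -(-volume_bytes // MAX_BLOCKS)   # ceiling division
    return -(-raw // 512) * 512            # round up to a 512-byte multiple

def slack(file_bytes: int, block_size: int) -> int:
    """Bytes wasted in the final, partially filled block of a file."""
    remainder = file_bytes % block_size
    return 0 if remainder == 0 else block_size - remainder

two_gb = 2 * 1024**3
bs = hfs_block_size(two_gb)
print(f"block size: {bs // 1024} KB")                          # ~32 KB
print(f"waste for a 1 KB file: {slack(1024, bs) // 1024} KB")  # ~31 KB
```

Multiply that slack across thousands of small files and the lost space becomes substantial, which is exactly what HFS+’s smaller blocks were meant to fix.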
As to defragmentation and disk optimization, in 16 years of experience doing computer support for microcomputers, I have yet to see a problem caused by that process. The important things are to verify the media before proceeding and not to interrupt the process (shut down, reboot, etc.) once it has begun; cancelling, if the software allows it, is O.K. Since defragmenting temporarily writes data to other areas of the hard drive, often without verification (sometimes you can select that option, but it really slows down the process), it is important that the media can accept those writes without errors. BTW, disk optimization defragments and also arranges files according to frequency of access. For example, system files are placed on the outer tracks, where reads are fastest, because they are accessed most frequently as you use other software. Some utilities ask what functionality you wish to optimize for and, supposedly, arrange files for that use.
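As a rough illustration of that “arrange by access frequency” idea, here’s a toy sketch in Python; the file names, access counts, and layout policy are all hypothetical, just to show the principle:

```python
# Toy model of disk optimization: defragment each file into one contiguous
# run, and place the most frequently accessed files at the fast end of the
# disk. File names and access counts below are made up for illustration.

files = [
    ("System",      8192, 950),   # (name, size in blocks, access count)
    ("Finder",      4096, 900),
    ("WordProc",    6000, 120),
    ("OldArchive", 20000,   3),
]

# Hot files first, so they land on the fastest tracks (lowest addresses here)
layout, next_block = [], 0
for name, size, _hits in sorted(files, key=lambda f: f[2], reverse=True):
    layout.append((name, next_block, next_block + size - 1))
    next_block += size  # contiguous runs, back to back: no fragmentation

for name, start, end in layout:
    print(f"{name:>10}: blocks {start}-{end}")
```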
As to defragmentation being irrelevant with HFS+, I don’t know why this would be so, since fragmentation is fragmentation regardless of block size. Remember, old 20 MB and 40 MB drives benefited from defragmentation, and they had much smaller block sizes than 4 KB. Because the R/W head doesn’t have to travel all over the place to collect bits and pieces of the same file, data retrieval from a defragmented drive is faster than from a fragmented one.
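A quick back-of-the-envelope on why that head travel matters; the seek, rotation, and transfer figures below are assumed ballpark numbers for an older drive, not measurements:

```python
# Rough cost model: every non-contiguous fragment costs roughly one extra
# seek plus about half a rotation of latency. All constants are assumed
# ballpark figures for an older hard drive, not measured values.

AVG_SEEK_MS = 12.0          # assumed average seek time
HALF_ROTATION_MS = 5.6      # half a rotation at ~5400 RPM
TRANSFER_MS_PER_MB = 500.0  # assumed ~2 MB/s sustained transfer rate

def read_time_ms(file_mb: float, fragments: int) -> float:
    """Estimated time to read a file split into `fragments` pieces."""
    positioning = fragments * (AVG_SEEK_MS + HALF_ROTATION_MS)
    return positioning + file_mb * TRANSFER_MS_PER_MB

print(f"contiguous 4 MB file:    {read_time_ms(4, 1):.0f} ms")   # ~2018 ms
print(f"same file in 200 pieces: {read_time_ms(4, 200):.0f} ms") # ~5520 ms
```

The transfer time is identical in both cases; all of the extra delay on the fragmented file is the head repositioning between pieces, which is why block size has nothing to do with it.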