
No, I think the developers need to see what we did to the music app...
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 13, 2016, 04:01 PM
 
They don't care about that little thing you have been cooking up in your spare time:

https://developer.apple.com/library/...nkElementID_27
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 16, 2016, 12:20 AM
 
The announcement made me really, really happy. Of course, I was a bit sad to hear it doesn't support checksumming, but at least if the claims about APFS' extensibility are to be believed, this could be added at a later date.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 16, 2016, 03:52 AM
 
Word is that the description is really close to DragonFly BSD's HAMMER and HAMMER2 file systems. It may be based on that code, since macOS (feels weird to write) still uses a BSD userland.

In general, this seems like a solid effort that gets the important things right without being particularly aggressive or ambitious.

Checksums are missing and compression is missing. I can't help but think that those are things you can do in your SSD controller - compression transparently, and checksums by simply adding an interface for the drive to inform the OS of a checksum failure (and trigger a Time Machine restoration, if required).
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 16, 2016, 05:55 AM
 
Originally Posted by P View Post
Word is that the description is really close to DragonFly BSD's HAMMER and HAMMER2 file systems. It may be based on that code, since macOS (feels weird to write) still uses a BSD userland.
Yup, I heard the same thing from a friend. (Finally, all my *BSD geekery paid off )
Given that one of Dillon's fundamental design ideas was to enable distributed computing, HAMMER/HAMMER2 have been designed to sync remotely. That would explain why Apple would look in this direction.
Originally Posted by P View Post
In general, this seems like a solid effort that gets the important things right without being particularly aggressive or ambitious.
Last time I checked, btrfs was still not out of beta … I really hope that they make another LLVM-like bet on a new fundamental technology here: being smart about what is feasible and what isn't, opting for a flexible solution — and shipping it.
Originally Posted by P View Post
Checksums are missing and compression is missing. I can't help but think that those are things you can do in your SSD controller - compression transparently, and checksums by simply adding an interface for the drive to inform the OS of a checksum failure (and trigger a Time Machine restoration, if required).
Yes, in fact, I was expecting that Apple's filesystem would be more closely tied to the “firmware” on the SSD controller. In any case, if the rumors are true, then at least it seems entirely possible to implement checksumming in APFS.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 16, 2016, 08:29 AM
 
Btrfs can be considered stable now - it is the default format in SUSE - but it has been a very long road from the original promise date in 2008 until it finally got stable on disk in 2014. Btrfs is also still missing encryption, still missing de-dup, has rather basic compression features, and apparently still requires a separate partition for swap. It is an excellent base to build on and it is now good enough to use, but it is a definite regression in some areas compared to what ol' HFS+ can handle.

It will be interesting to see how Spotlight is handled throughout all of this. It is currently built on top of HFS+ as a second database with all the info in the directory file plus a lot more. The sane way would of course be to integrate all of this, so you don't have to write twice (and generally APFS seems to move in this direction, witness the way journalling is handled), but I don't see that mentioned in the docs yet.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Reply With Quote
Jun 16, 2016, 10:40 AM
 
Has Apple given any indication of when APFS would be used as the default file system?
From what I understand, Sierra will support it, but not require you to use it.

-t
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 16, 2016, 11:09 AM
 
"Scheduled to ship in 2017" is all we know. Personally, I think that it will be default for machines that ship with 10.13, mirroring the way journalling was introduced, but I am guessing here.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 16, 2016, 08:27 PM
 
Originally Posted by P View Post
Btrfs can be considered stable now - it is the default format in SUSE - but it has been a very long road from the original promise date in 2008 until it finally got stable on disk in 2014. Btrfs is also still missing encryption, still missing de-dup, has rather basic compression features, and apparently still requires a separate partition for swap. It is an excellent base to build on and it is now good enough to use, but it is a definite regression in some areas compared to what ol' HFS+ can handle.
Some NAS (e. g. by Synology) started offering btrfs instead of ext4 as a file system last year, so it seems to be getting closer. But I am surprised it is missing something as basic as encryption and compression. De-dup is more complicated and in many situations less relevant (since you need a sufficient amount of RAM).
Originally Posted by P View Post
It will be interesting to see how Spotlight is handled throughout all of this. It is currently built on top of HFS+ as a second database with all the info in the directory file plus a lot more. The sane way would of course be to integrate all of this, so you don't have to write twice (and generally APFS seems to move in this direction, witness the way journalling is handled), but I don't see that mentioned in the docs yet.
I reckon that this will be implemented as optional functionality so that, say, the Watch, which probably doesn't need Spotlight at this point, is not burdened with it. Apple's file system team has spent the last 15 years bolting things onto HFS+, so informed by that I am sure flexibility and expandability are part of the fundamental file system design. You never know how long APFS will be around.

I've watched the WWDC session on their file system, and while details were conspicuously absent, it seems clear that Apple engineers added a huge dollop of pragmatism into the design (e. g. how they keep track of directory sizes). From what I can gather, the design goals were:
(1) Flexibility
(2) Support for encryption
(3) CoW
(4) Backwards compatibility
(5) Optimized for low latency rather than high throughput
(6) In-place migration

The different ways an APFS file system can be encrypted are quite impressive and show how seriously Apple takes this. It seems that they can even employ different encryption schemes depending on what is supported in hardware. Smart.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 17, 2016, 04:22 AM
 
Originally Posted by OreoCookie View Post
Some NAS (e. g. by Synology) started offering btrfs instead of ext4 as a file system last year, so it seems to be getting closer. But I am surprised it is missing something as basic as encryption and compression. De-dup is more complicated and in many situations less relevant (since you need a sufficient amount of RAM).
There IS compression, just a rather basic algorithm as I understand it. Encryption remains missing and is the real hole. De-dup is indeed less relevant - more of a comment that they still have work to do to achieve what they planned to have done in 2008 (and admittedly, there is off-line de-dup).

Originally Posted by OreoCookie View Post
I've watched the WWDC session on their file system, and while details were conspicuously absent, it seems clear that Apple engineers added a huge dollop of pragmatism into the design (e. g. how they keep track of directory sizes). From what I can gather, the design goals were:
(1) Flexibility
(2) Support for encryption
(3) CoW
(4) Backwards compatibility
(5) Optimized for low latency rather than high throughput
(6) In-place migration

The different ways an APFS file system can be encrypted are quite impressive and show how seriously Apple takes this. It seems that they can even employ different encryption schemes depending on what is supported in hardware. Smart.
This all sounds good, very good. I should watch those WWDC sessions.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 19, 2016, 04:15 AM
 
You can find a nice overview of features, some tidbits from discussions with Apple engineers and educated guesses here.
( Last edited by OreoCookie; Jun 19, 2016 at 07:58 AM. )
I don't suffer from insanity, I enjoy every minute of it.
     
Agent69
Mac Elite
Join Date: Jun 2000
Status: Offline
Reply With Quote
Jun 19, 2016, 08:51 AM
 
Something to remember is that Apple employs Dominic Giampaolo (the creator of BeOS's BeFS), so while the new filesystem might be an adaptation of HAMMER, it might also be an entirely new creation.

Either way, it's good news.
Agent69
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 19, 2016, 09:09 PM
 
Originally Posted by Agent69 View Post
Something to remember is that Apple employs Dominic Giampaolo (the creator of BeOS's BeFS), so while the new filesystem might be an adaptation of HAMMER, it might also be an entirely new creation.
Well, in many cases, certain ideas are just prevalent right now. For example, copy-on-write has been implemented in all recently created, modern file systems, along with all of the things that this idea enables (low-cost snapshots, etc.). HAMMER is one of them, and so far all I have to go on are rumors. But even without them, you can tell that the scope of APFS is different from btrfs and ZFS: the latter are geared more towards larger systems, and one of their main functions is to replace a logical volume manager — something you don't really need if you only have one physical storage device.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 20, 2016, 04:26 AM
 
APFS in detail from someone who knows a bit about file systems (he was at Sun, although working on DTrace rather than ZFS, and has written enough about ZFS in the past that I think he knows more about it than most).

Adam Leventhal's blog » APFS in Detail: Overview

EDIT: So the TL;DR, because this is the Internet and we all have the attention span of a goldfish on speed:

* This is developed by Dominic Giampaolo's team at Apple, without looking at ZFS, btrfs or HAMMER. It is based on code previously used in Apple's own CoreStorage, but is otherwise new.
* Giampaolo made a point of praising the testing team, implying that this has been cooking for some time now.
* There is no compression right now, but the design of the FS clearly supports it, and it seems to imply that this is coming. I'm going to bet something like LZ4.
* Lots of focus on encryption, including the secure erase features from iOS
* APFS has snapshots, but they cannot currently be serialized to be sent over the network. Apple seems a little hesitant about using this for Time Machine, as they can't figure out a good way to have exclusion lists. I can see one way (put excluded folders as separate volumes in the same container), but maybe that is too finicky to do on the fly.
* De-duping is not in, but the copy-on-write semantics effectively work as a primitive de-dup (see the sketch at the end of this post). They also mean that we could have offline de-dup (run a program on an offline drive to de-dupe it), just like btrfs.
* Latency is top priority, including QoS for file system activity. Drool...
* There are checksums for metadata, but no checksums for data. The reasoning is that Apple makes sure to source hardware that is good enough that checksums on data would be unnecessary. Why then checksum the metadata? OK, I'm being facetious - there is much less metadata, so you can store a copy of it to restore from if necessary, but the reasoning is interesting. There seems to be some debate inside the Apple team about this still.

I'm still missing some answers on how backups and data integrity will work, but all in all, it looks very good.
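
As an aside, here is a toy sketch of why clones behave like a primitive de-dup. This is plain Python and purely my own illustration - it has nothing to do with Apple's actual on-disk structures - but it shows the idea: a clone shares every block with the original until one side modifies a chunk.

[CODE]
# Toy model of copy-on-write clones sharing blocks.
# Purely illustrative -- not how APFS actually lays anything out.

class ToyVolume:
    def __init__(self):
        self.blocks = {}      # block id -> data
        self.files = {}       # file name -> list of block ids
        self.next_block = 0

    def _alloc(self, data):
        bid = self.next_block
        self.next_block += 1
        self.blocks[bid] = data
        return bid

    def write_file(self, name, chunks):
        self.files[name] = [self._alloc(c) for c in chunks]

    def clone_file(self, src, dst):
        # A clone copies only the block list, not the blocks themselves.
        self.files[dst] = list(self.files[src])

    def overwrite_chunk(self, name, index, data):
        # Copy-on-write: the modified chunk gets a fresh block,
        # everything else stays shared with the clone.
        self.files[name][index] = self._alloc(data)

    def physical_blocks(self):
        return len({b for ids in self.files.values() for b in ids})


vol = ToyVolume()
vol.write_file("original", [b"chunk0", b"chunk1", b"chunk2"])
vol.clone_file("original", "copy")
print(vol.physical_blocks())   # 3 -- the clone is free

vol.overwrite_chunk("copy", 1, b"edited")
print(vol.physical_blocks())   # 4 -- only the changed chunk was duplicated
[/CODE]

The clone costs nothing until the first write, and an offline de-dup tool would just have to find identical files and rewrite them as clones.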
( Last edited by P; Jun 20, 2016 at 04:54 AM. )
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Agent69
Mac Elite
Join Date: Jun 2000
Status: Offline
Reply With Quote
Jun 22, 2016, 05:11 PM
 
Lots of good info. Thank you P.
Agent69
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Jun 26, 2016, 03:06 PM
 
I'm quite disappointed with APFS so far. I was interested in data checksums and, to a lesser extent, FS-level redundancy, where you can set a value higher than '1' on important files / folders / volumes and the FS will maintain that many copies. This would tie into checksums for background data repair. Both features are missing, along with background data repair.

CoW is an odd feature. My usual reason for a file copy is redundancy, otherwise I'd use a symlink. Of course, I usually make copies to a different disk, so CoW would not come into play anyway. Sounds like it doesn't hurt, but I'm not seeing many use cases.

Triple redundancy on top-level metadata, and checksums on all metadata. ++
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 26, 2016, 03:21 PM
 
I have on occasion pushed the "XFS solution" for the file system problem - that Apple should adopt one of several good-quality file systems developed in the nineties, as they are all much better than HFS and something to build upon. XFS is one of these, and one that I think Apple might buy. APFS is very similar to that. It is 64-bit, it has all those important features (journaling, extended attributes) in the core and not added on, it is faster, it should be thread safe, etc.

Apple's thinking here is that silent corruption on SSDs when the data isn't being accessed is so rare that it does not need to be guarded against. They make efforts to avoid corrupting files while writing them, and then trust in the drives (and their own SSD controllers!) to keep the data safe once on the drive. Personally I think they should make checksums an option to be enabled on the Mac only (not on iOS etc devices), as Apple may not control all drives that those Macs use, and HDDs are still in use for now, but they need to figure out a good reaction plan in case of error.

Copy-on-write is for Time Machine. Remember that if you don't have the external disk connected, Time Machine will still take backups of changed files so you maintain that hour-by-hour backup. That is what CoW will do now.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 26, 2016, 08:19 PM
 
Originally Posted by P View Post
Apple's thinking here is that silent corruption on SSDs when the data isn't being accessed is so rare that it does not need to be guarded against. They make efforts to avoid corrupting files while writing them, and then trust in the drives (and their own SSD controllers!) to keep the data safe once on the drive.
But there are plenty of other modes of failure, especially since data spends more time in transit. How can you be sure that the data on your Time Machine backup drive is really consistent with the data on your SSD? How can you be sure that the data transmitted over the net from iCloud to your local SSD is accurate? And how about the other way around? At my alma mater, we had large, professional Sun ZFS storage arrays, and indeed, we found some corrupt data on them. Is Apple's hardware really that much more reliable than a 5-digit storage array?
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 27, 2016, 04:27 AM
 
Well, they can't guard against those other failure modes absolutely without ECC RAM, which won't happen because Intel, but they do have checksumming on a lot of levels. DDR4 has it for transfers between RAM and memory controller. PCIe has it for transfers across that bus, and USB has it as well. Any network protocol has some checksumming.
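
None of that checksumming is exotic, either - most of those links just run a CRC over each transfer unit. A tiny Python illustration of how cheaply a single flipped bit gets caught (zlib's CRC-32 standing in for whatever polynomial the bus actually uses):

[CODE]
import zlib

payload = bytes(range(256)) * 16            # 4 KiB of data "on the wire"
crc = zlib.crc32(payload)                   # checksum computed by the sender

corrupted = bytearray(payload)
corrupted[1000] ^= 0x04                     # one bit flipped in transit
print(zlib.crc32(bytes(corrupted)) == crc)  # False -- the receiver notices
[/CODE]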

The thing about silent bitrot is that it happens on a fraction of all bits stored, so a massive storage array is actually more likely to develop some bitrot, simply because it is that much bigger, so... Apple's 16GB iPhone probably is more reliable than your 5-digit storage array. Not because it is better, because it is smaller.
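
To put rough numbers on that scaling argument - the per-bit rate below is made up purely for illustration, since real rates depend on the media and aren't public - the expected number of flips is just capacity times rate:

[CODE]
# Expected silently-flipped bits per year, assuming a purely
# illustrative corruption rate of 1e-15 per bit per year.
RATE = 1e-15

def expected_flips_per_year(capacity_bytes):
    return capacity_bytes * 8 * RATE

for label, size in [("16 GB iPhone", 16e9),
                    ("1 TB iMac drive", 1e12),
                    ("100 TB storage array", 100e12)]:
    print(f"{label}: {expected_flips_per_year(size):.6f} expected flips/year")
[/CODE]

Same rate everywhere, but the array racks up thousands of times the expected flips simply by being thousands of times bigger.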

(also that storage array was mostly HDDs, right? HDDs have more bitrot than flash)

BTW, we don't know how Apple checks its flash chips. Maybe the reason a 64GB upgrade costs $100 is because they have really stringent acceptance tests?
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
besson3c
Clinically Insane
Join Date: Mar 2001
Location: yes
Status: Offline
Reply With Quote
Jun 27, 2016, 07:48 AM
 
Very interesting thread, thank you!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 27, 2016, 08:32 AM
 
Originally Posted by P View Post
Well, they can't guard against those other failure modes absolutely without ECC RAM, which won't happen because Intel, but they do have checksumming on a lot of levels. DDR4 has it for transfers between RAM and memory controller. PCIe has it for transfers across that bus, and USB has it as well. Any network protocol has some checksumming.
True, you can't guard against all failure modes, but it would guard against one more failure mode. Given that the amount of storage we have has increased significantly, in expectation value there will be bit rot on most disks. Checksumming the data on your SSD would be one step further. Personally, I think it would have been more elegant to relegate this to specialized hardware in the SSD controller, but Apple elected to design APFS in such a way that it abstracts the on-chip data structures.
Originally Posted by P View Post
The thing about silent bitrot is that it happens on a fraction of all bits stored, so a massive storage array is actually more likely to develop some bitrot, simply because it is that much bigger, so... Apple's 16GB iPhone probably is more reliable than your 5-digit storage array. Not because it is better, because it is smaller.
It's clear that this is more likely to happen on a massive storage array, but don't forget that a “massive” enterprise storage array from 5~7 years ago has a similar amount of space to what you get from a few consumer-grade drives these days. Even if the probability of bit flips and other forms of silent data corruption is very low, just the fact that we have tons of memory these days implies that in all likelihood there is data corruption on our drives and perhaps even in RAM. At a certain point we should just accept that we need to use ECC for most things.
Originally Posted by P View Post
BTW, we don't know how Apple checks its flash chips. Maybe the reason a 64GB upgrade costs $100 is because they have really stringent acceptance tests?
Oh, so this is where my money went! Makes me feel much better about my investment … 
I don't suffer from insanity, I enjoy every minute of it.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Jun 27, 2016, 04:36 PM
 
Storage needs will keep growing. We're transitioning to 4K video at 10 bits, and higher-res sound. It's likely 11.1 sound will be the standard in a few years. Even with h.265, we're still looking at over 2x the raw storage per movie or episode, with 8K video on the horizon.

Hard drives are subject to bitrot based on time-since-write. Since ATA doesn't rewrite the low-level format bits (unlike SCSI), HDs are also subject to unreadable blocks eventually, even if they were recently written. SSDs experience bitrot when they're unpowered for extended times. Both types are subject to cosmic rays.

ECC can correct a certain number of errors, but when the errors exceed the redundancy threshold, they cannot be fully corrected. I don't know the failure mode when the ECC limits are exceeded - maybe you can fix some of the errors. I suspect that you can't correct any at that point.
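
The cliff is easy to see even with the crudest code imaginable. A triple-repetition code (nothing like the BCH/LDPC codes real drives use, but the threshold behaviour is the same) corrects any single flip in a group by majority vote, and silently miscorrects two:

[CODE]
# Crude triple-repetition ECC, just to show the correction threshold.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote within each group of three.
    return [1 if sum(coded[i:i+3]) >= 2 else 0 for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
coded = encode(data)

coded[0] ^= 1                  # one flip in the first group: corrected
print(decode(coded) == data)   # True

coded[1] ^= 1                  # second flip in the same group: miscorrected
print(decode(coded) == data)   # False -- and nothing signals the failure
[/CODE]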

The FS should do precautionary checksums on all files, rather than trust the hardware. Recheck a file upon opening, if it hasn't been checked in a few weeks. Files that go unopened for extended times should be rechecked every few months.
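
Something like this, in other words - a hypothetical policy sketch in Python, not anything Apple has announced: keep a checksum and a last-verified time per file, re-verify on open when the record is stale, and sweep untouched files on a longer interval.

[CODE]
import hashlib
import time

RECHECK_ON_OPEN = 14 * 24 * 3600    # two weeks
RECHECK_IDLE = 90 * 24 * 3600       # three months for untouched files

def file_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# catalog: path -> {"sha": stored digest, "verified": last check timestamp}

def open_with_check(path, catalog):
    entry = catalog[path]
    if time.time() - entry["verified"] > RECHECK_ON_OPEN:
        if file_digest(path) != entry["sha"]:
            raise IOError(f"checksum mismatch on {path} - restore from backup")
        entry["verified"] = time.time()
    return open(path, "rb")

def background_scrub(catalog):
    for path, entry in catalog.items():
        if time.time() - entry["verified"] > RECHECK_IDLE:
            if file_digest(path) != entry["sha"]:
                print(f"bitrot detected in {path}")
            entry["verified"] = time.time()
[/CODE]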
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 28, 2016, 04:05 AM
 
I'm not so sure that storage needs will keep growing. They haven't, the last few years. That 1TB drive I got for the iMac seven years ago is still not full. File sizes are now gated by network transfer speeds, not by storage needs, and the rise of mobile only exacerbates that.

And all the things you say about hardware ECC failing is true, but the filesystem checksum is no different in principle - it is just more checksumming, more overhead. If an SSD cell can fail, why not the cell that holds your checksum? At some point, you have to make the decision about how much checksumming is enough - where that checksumming is happening is not important for reliability, and it is likely faster to do it in the SSD controller.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 28, 2016, 10:34 AM
 
Originally Posted by P View Post
And all the things you say about hardware ECC failing is true, but the filesystem checksum is no different in principle - it is just more checksumming, more overhead.
Yes, but in my opinion we not only have the resources to burn, it'd also be worth it. I'm curious: do you think it's worth doing?
Originally Posted by P View Post
If an SSD cell can fail, why not the cell that holds your checksum? At some point, you have to make the decision about how much checksumming is enough - where that checksumming is happening is not important for reliability, and it is likely faster to do it in the SSD controller.
I assumed that in this case the data is marked as bad because its integrity cannot be verified? If there exists another copy of that block, you can compute the checksums of the blocks and compare them to restore the data. I agree that it doesn't matter where checksumming happens, and I was actually expecting that Apple would relegate this bit to the SSD controller's firmware, working in concert with the rest of APFS.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 28, 2016, 11:08 AM
 
Originally Posted by OreoCookie View Post
Yes, but in my opinion we not only have the resources to burn, it'd also be worth it. I'm curious: do you think it's worth doing?
On the Mac: Yes. Apple can't know what drives people connect to it, so I believe there should be a simple error detection system to let people know if a block has gone bad, so it can be restored from a backup. Ideally this check can be integrated in Time Machine.

On other devices: No, probably not. Statistically a very large number of defects come from a small number of bad flash devices, and if Apple is confident they can weed these out at production (or at the controller level), I think that that is fine.

Note that there is already in effect error-detect checksumming of executable files on the Mac, because they are all signed and refuse to launch if the signature check fails.
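
You can poke at that error detection yourself: codesign --verify re-hashes the signed parts of an app bundle and compares them against the signature, so a flipped bit in those parts shows up as a failure. A trivial Python wrapper (the Safari path is just an example):

[CODE]
import subprocess

def signature_ok(app_path):
    # `codesign --verify` re-hashes the bundle and checks it against the
    # embedded signature; corruption or tampering makes it exit non-zero.
    result = subprocess.run(
        ["codesign", "--verify", "--deep", app_path],
        capture_output=True, text=True,
    )
    return result.returncode == 0

print(signature_ok("/Applications/Safari.app"))
[/CODE]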
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Jun 28, 2016, 03:45 PM
 
You're assuming a single original SSD for all data. People are likely to attach external HDs for media drives and backups. As SSDs get bigger/cheaper, people are likely to upgrade internal SSDs/HDs with 3rd party SSDs. And presently use external SSDs for media and backups when the size/price becomes negotiable. Work has continued on alternatives to SSDs as well.

It's a bad assumption that OEM memory controllers of one storage type will ensure your data's integrity when multiple storage types are in use. Historically, we've always had multiple storage types - the Apple ][ had your choice of Disk ][ or tape backups, for example. Real-world behavior is to add storage, often with price as a major factor.

The file system is the point in common, where the computer can ensure data integrity regardless of where it was stored. It's good if individual storage types do internal integrity checking, and good that connectivity options do checks as well. But the computer has no way to guarantee such checks, or their reliability.

As I see it, we have three options.

1. Add the checksumming at the FS level. Make it optional, so portable devices can skip it. But on by default on Macs. We've got processing power to burn, CPU performance has outpaced other components.

2. Live with silent data corruption. The current state of affairs on the Mac. Other platforms seem headed for FS checksums, so this isn't a great competitive choice.

3. Apple evaluates all storage controllers on the market, at their own pace. macOS locks out any storage device that hasn't been signed by Apple under a "made for macOS" program. Essentially a new DRM engine in the OS, with its own overhead. This choice burns CPU overhead also, while removing consumer choices, and is dependent on Apple's review speed. Signed devices would likely be costlier than alternatives, and always later to market. I don't like this option.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jun 28, 2016, 06:25 PM
 
Originally Posted by reader50 View Post
You're assuming a single original SSD for all data. People are likely to attach external HDs for media drives and backups. As SSDs get bigger/cheaper, people are likely to upgrade internal SSDs/HDs with 3rd party SSDs. And presently use external SSDs for media and backups when the size/price becomes negotiable. Work has continued on alternatives to SSDs as well.
I think they should add basic checksumming on the Mac, but probably leave it off by default on included SSDs. And the Mac is a tiny part of the equation here - iOS is much bigger.

Originally Posted by reader50 View Post
1. Add the checksumming at the FS level. Make it optional, so portable devices can skip it. But on by default on Macs. We've got processing power to burn, CPU performance has outpaced other components.
You know everyone will use the defaults, and the new MacBooks are not generally faster than a new iPad Pro. No, if the drive uses an Apple SSD and Apple has reason to think their hardware is good, leave it off - but let that SSD notify the OS if it finds an uncorrectable error.

I just checked the SMART status on my five-year-old MBA. Not only are there zero uncorrectable errors, there are zero cases where the hardware ECC was invoked. Not a single time has the hardware ECC had to step in, and you propose adding another layer on top of that? This is not spinning rust anymore; SSDs are that much better.
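
For anyone who wants to repeat that check: with smartmontools installed (e.g. via Homebrew), something along these lines dumps the relevant counters. SMART attribute names vary by drive vendor, so the strings matched below are examples rather than a definitive list:

[CODE]
import subprocess

# Requires smartmontools (`brew install smartmontools`).
# SMART attribute names differ between vendors; these are just examples.
INTERESTING = ("Uncorrect", "ECC", "Reallocated", "CRC")

output = subprocess.run(["smartctl", "-A", "/dev/disk0"],
                        capture_output=True, text=True).stdout
for line in output.splitlines():
    if any(key in line for key in INTERESTING):
        print(line)
[/CODE]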

And, again, there's ECC RAM and the lack thereof. At least on this Mac, that has (statistically) been a bigger cause for concern - I will have had soft bit flips from that.

Originally Posted by reader50 View Post
2. Live with silent data corruption. The current state of affairs on the Mac. Other platforms seem headed for FS checksums, so this isn't a great competitive choice.
I need a smiley wobbling his hand, with code :not really:. Android uses ext4, which does not have file checksumming (it has journal checksumming, but so does APFS). MS is working on ReFS, but 4 years after introduction, it remains unsupported on client Windows and lacks many fundamental features like Windows boot support. I am not so sure that MS will implement it on Win 10 - they're moving very slowly if that's the idea.

Originally Posted by reader50 View Post
3. Apple evaluates all storage controllers on the market, at their own pace. macOS locks out any storage device that hasn't been signed by Apple under a "made for macOS" program. Essentially a new DRM engine in the OS, with its own overhead. This choice burns CPU overhead also, while removing consumer choices, and is dependent on Apple's review speed. Signed devices would likely be costlier than alternatives, and always later to market. I don't like this option.
It doesn't burn CPU to check the SSD controller ID once at mount... but why would they lock out drives? Either checksum in that case only, or just pop up a box recommending frequent backups. Or do nothing, and you're no worse off than you are today.

Again: I think that optional filesystem checksums on the Mac makes sense, but if Apple has data that says they're not needed on their first-party drives, I'm inclined to believe them.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jun 28, 2016, 09:17 PM
 
Originally Posted by P View Post
On the Mac: Yes. Apple can't know what drives people connect to it, so I believe there should be a simple error detection system to let people know if a block has gone bad, so it can be restored from a backup. Ideally this check can be integrated in Time Machine.
It can only be restored if you know the blocks are bad. And very often, good data in backups eventually gets replaced by bad data.
Originally Posted by P View Post
On other devices: No, probably not. Statistically a very large number of defects come from a small number of bad flash devices, and if Apple is confident they can weed these out at production (or at the controller level), I think that that is fine.
But it's not a matter of probability, it's a matter of expectation value: I'm keeping my data for decades, and I want to be sure that the pictures of when I was in my 20s are still accessible so I can bore the crap out of my children later on
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 4, 2016, 10:36 AM
 
Originally Posted by OreoCookie View Post
It can only be restored if you know the blocks are bad. And very often, good data in backups eventually gets replaced by bad data.
The way I envision it is to store a basic checksum for every block and then when the backup runs, that checksum is computed again (by the SSD controller) and compared to the stored value - on both the backup and the actual drive. If there is a mismatch, raise hell so the user knows something is wrong, then restore the bad copy from the good copy.
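
In sketch form - with the hashes computed in software here purely for illustration, whereas the whole point would be to have the SSD controller hand them over for free:

[CODE]
import hashlib

BLOCK = 1 << 20   # 1 MiB "blocks" for the sketch

def block_checksums(path):
    sums = []
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(BLOCK), b""):
            sums.append(hashlib.sha256(chunk).hexdigest())
    return sums

def verify_at_backup(source, backup, stored_sums):
    # Assumes all three lists cover the same number of blocks.
    src, bak = block_checksums(source), block_checksums(backup)
    for i, expected in enumerate(stored_sums):
        src_ok, bak_ok = src[i] == expected, bak[i] == expected
        if src_ok and not bak_ok:
            print(f"block {i}: backup copy is bad - rewrite it from the source")
        elif bak_ok and not src_ok:
            print(f"block {i}: source copy is bad - restore it from the backup")
        elif not src_ok and not bak_ok:
            print(f"block {i}: both copies are bad - raise hell")
[/CODE]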
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 4, 2016, 10:03 PM
 
Originally Posted by P View Post
The way I envision it is to store a basic checksum for every block and then when the backup runs, that checksum is computed again (by the SSD controller) and compared to the stored value - on both the backup and the actual drive. If there is a mismatch, raise hell so the user knows something is wrong, then restore the bad copy from the good copy.
Yeah, and perhaps the system performs consistency checks in the background as well. But especially for backups or other data transfers, I really like the idea of being sure that the data that arrives coincides with the data that was sent. Having end-to-end data integrity seems like a worthy, forward-looking goal to me. Given that the iPad and the iPhone have comparable performance to low-end Macs, I think this is a benefit that they should enjoy, too. The watch is a different story, though, especially since very little data is going upstream.
I don't suffer from insanity, I enjoy every minute of it.
     
Ham Sandwich
Guest
Status:
Reply With Quote
Jul 14, 2016, 12:14 PM
 
So Apple has a new file system, but then why can't we use it in the beta (or, likely, the final version as well)?
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 14, 2016, 01:17 PM
 
Can be used on the beta, I believe, but not to boot from.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Ham Sandwich
Guest
Status:
Reply With Quote
Jul 14, 2016, 01:49 PM
 
So does that mean, after I boot up my computer, I will be using HFS, but then after I log in I can switch to APFS?
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 14, 2016, 01:57 PM
 
It means that you use APFS on a data drive but not on the drive with the operating system.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Ham Sandwich
Guest
Status:
Reply With Quote
Jul 14, 2016, 02:07 PM
 
I don't get that logic. So, if I have a 2 TB external hard drive just to store files on, I can use the elaborate APFS on that, even though all I can do is put files on it and not an OS - but on my own Mac, where the core of everything I do is, I still have to use the older file system?
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 14, 2016, 02:16 PM
 
Booting is actually quite hard to do. It took three years after the official release for Sun to get Solaris to boot on ZFS. Apple will certainly want to have the FS code rock solid and the disk format stable before they start to work on that.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Reply With Quote
Jul 14, 2016, 02:39 PM
 
Originally Posted by And.reg View Post
I don't get that logic.
What's not to get ?

This is a beta, it's in the testing stages. Once finalized, it will certainly be used for bootable drives.

-t
     
Ham Sandwich
Guest
Status:
Reply With Quote
Jul 14, 2016, 04:10 PM
 
Originally Posted by turtle777 View Post
Once finalized, it will certainly be used for bootable drives.
That's what I was not getting.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Reply With Quote
Jul 14, 2016, 04:32 PM
 
Any bugs in filesystem code can (will) corrupt data. They all have to be caught and fixed. If it were used on your system drive, a bug could corrupt your system files instead of your family pictures. This would force you to reinstall the OS. And with the bug still there, it could re-corrupt at any time.

Filesystem code really needs to be solid.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 14, 2016, 10:47 PM
 
Originally Posted by And.reg View Post
That's what I was not getting.
APFS isn't fully baked yet; they have just released a beta, and Apple doesn't intend to release it to the public until next year's version of macOS.
I don't suffer from insanity, I enjoy every minute of it.
     
Ham Sandwich
Guest
Status:
Reply With Quote
Jul 16, 2016, 12:02 PM
 
So, until 10.13?
     
besson3c
Clinically Insane
Join Date: Mar 2001
Location: yes
Status: Offline
Reply With Quote
Jul 17, 2016, 04:37 PM
 
Originally Posted by P View Post
Booting is actually quite hard to do. It took three years after the official release for Sun to get Solaris to boot on ZFS. Apple will certainly want to have the FS code rock solid and the disk format stable before they start to work on that.
It took Sun this long because they wanted to go all-in on ZFS, rather than having a separate boot partition for the kernel. Since there already is a rescue partition, they could put just the kernel on an HFS partition.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 18, 2016, 04:56 AM
 
I would suspect that they want a little time to make the VM system work well with APFS as well. Now, one CAN put the swap file on a raw partition if one so chooses, but that of course makes it less flexible - either you reserve a good chunk for VM or you remove the ability to grow the swap.

Basically, they're not in too much of a hurry. They will finish APFS to be a drop-in replacement - root, boot, and swap - before they launch.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
CharlesS
Posting Junkie
Join Date: Dec 2000
Status: Offline
Reply With Quote
Jul 18, 2016, 03:50 PM
 
Originally Posted by And.reg View Post
So, until 10.13?
They just dropped the "OS X" moniker. My guess is that the next one's going to be 11.0.

And if there were ever a change that would warrant an 11.0, it's probably this.

Ticking sound coming from a .pkg package? Don't let the .bom go off! Inspect it first with Pacifist. Macworld - five mice!
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Reply With Quote
Jul 18, 2016, 07:52 PM
 
Originally Posted by CharlesS View Post
They just dropped the "OS X" moniker. My guess is that the next one's going to be 11.0.
Well, Sierra is 10.12, even though it's not OS X anymore, right?
So who knows.

Originally Posted by CharlesS View Post
And if there were ever a change that would warrant an 11.0, it's probably this.
Yes, it would make sense.

-t
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Jul 18, 2016, 10:37 PM
 
Originally Posted by P View Post
I would suspect that they want a little time to make the VM system work well with APFS as well. Now, one CAN put the swap file on a raw partition if one so chooses, but that of course makes it less flexible - either you reserve a good chunk for VM or you remove the ability to grow the swap.
FreeBSD used to work this way, although I do not know whether this has changed after they transitioned to ZFS as their default filesystem. You'd have to dedicate a special partition for swap memory. I'm not sure whether it is really a concern that you use a raw partition for swap.
Originally Posted by P View Post
Basically, they're not in too much of a hurry. They will finish APFS to be a drop-in replacement - root, boot, and swap - before they launch.
Agreed. And if they focus on making it ship, the flexible design allows them to add many features later on without breaking compatibility.
I don't suffer from insanity, I enjoy every minute of it.
     
P  (op)
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Jul 19, 2016, 10:03 AM
 
Back in the 10.0 days when we were all trying to make it run snappier, there were people who experimented with putting the VM swap on a raw partition. It worked, it was a little bit faster in benches, but not enough to bother with.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
   