Welcome to the MacNN Forums.


macOS High Sierra (Page 2)
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 2, 2017, 06:49 AM
 
Originally Posted by P View Post
I think Apple wants to keep the checksumming in the SSD controller for power reasons. Sending all that data to the SoC will cost lots of power. They do need hooks to the OS though, we agree on that.
Agreed, and since they will be able to reuse at least the logic, I think that seems like the way to go. I'm curious, though, how they will expose checksumming to the filesystem, because that is something the filesystem should know about, not some other process that is Frankensteined on top.
Originally Posted by P View Post
Apple recently made a new dump of Darwin (open source Mac OS X kernel) to github, and this time included all the iOS parts for the first time. It truly is one development tree.
Yup, I noticed that this morning, too. Curious move, since Apple's heart never seemed to have been into the Darwin OS project. It's not as if they start developing their OSes out in the open. Do you have an idea what prompted this?
Originally Posted by P View Post
As for Mac on ARM... being as good as Intel isn't enough. Moving CPU archs again is hard and costs money, and I don't think they will unless their own ARM hardware is noticeably better. They do have that option in the back pocket, though.
At least on mobile, just the ability to integrate custom co-processors can make all the difference: if you want to do a fingerprint scanner or FaceID, you'd no longer need an ARM-computer-in-an-Intel-computer just to ensure the integrity of the data. Ditto for “neural engines” and other co-processors that they could tack on very easily on their own silicon, but that simply won't happen if they stick to Intel. That friction will add up over time, I think, and the longer I look into the future, the more likely it seems to me that Apple will end up switching — but that could be 5-10 years out.

To me, the canary in the coal mine is whether Apple will deploy its own ARM-based servers based on custom silicon, because I would take that as a sign of them developing a “power consumption is no object” version of their CPU cores. Once they do that, I am quite sure it is game over for Intel at Apple.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 3, 2017, 07:00 AM
 
Originally Posted by OreoCookie View Post
Agreed, and since they will be able to reuse at least the logic, I think that seems like the way to go. I'm curious, though, how they will expose checksumming to the filesystem, because that is something the filesystem should know about, not some other process that is Frankensteined on top.
If the error occurs on copy, trigger a write error. If the error occurs on access, trigger a read error. Regular disks have these errors and the filesystem should have routines to respond to them. The process of checking regularly must be something outside the filesystem driver, though.

Yup, I noticed that this morning, too. Curious move, since Apple's heart never seemed to have been into the Darwin OS project. It's not as if they start developing their OSes out in the open. Do you have an idea what prompted this?
I sometimes get the feeling that there are two sides to this issue at Apple: lower-level devs want to open the source, higher-ups don't see the reason. Opening the source generally helps outside developers, who can then understand how the OS works, but it of course gives away some secrets that Apple wants to keep. The current dump contains no info about the A11, despite the iPhone X being out already.

At least on mobile, just the ability to integrate custom co-processors can make all the difference: if you want to do a fingerprint scanner or FaceID, you'd no longer need an ARM-computer-in-an-Intel-computer just to ensure the integrity of the data. Ditto for “neural engines” and other co-processors that they could tack on very easily on their own silicon, but that simply won't happen if they stick to Intel. That friction will add up over time, I think, and the longer I look into the future, the more likely it seems to me that Apple will end up switching — but that could be 5-10 years out.

To me, the canary in the coal mine is whether Apple will deploy its own ARM-based servers based on custom silicon, because I would take that as a sign of them developing a “power consumption is no object” version of their CPU cores. Once they do that, I am quite sure it is game over for Intel at Apple.
There are wins to integrating, but... have you seen a recent Mac motherboard? It's not like we're down to four chips and have issues on how to integrate them. There are still several. If Apple wants to save space, it can bundle up features from several of those into one chip.

There are also wins to having secure features like TouchID on a separate chip, but more interestingly... AMD uses an ARM core for secure functions. They have also built a highly modular architecture, and could make a special Apple chip with an x86 CPU, a Radeon GPU, and whatever special bits Apple wants, all of it connected to HBM RAM on an interposer. It's still not one chip, because I think flash storage would be off-package along with the WiFi transceiver, but it is getting there.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 3, 2017, 08:32 AM
 
I just stumbled on this piece:

http://www.tomshardware.co.uk/intel-...ews-56570.html

It is about EMIB, a new interconnect that Intel is working on: basically as fast as an interposer, but cheaper. What it means is that you can connect multiple chips together cheaply and with high bandwidth. The interesting part to me is the comment that Intel wants to make EMIB a standard. Intel could then make x86 CPUs and have them packaged with other chips at a third party for a buyer such as Apple. Interesting.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Chongo
Addicted to MacNN
Join Date: Aug 2007
Location: Phoenix, Arizona
Status: Online
Oct 3, 2017, 10:00 AM
 
Originally Posted by reader50 View Post
Apple has confirmed APFS support is coming to Fusion and HDDs in a future update.

Ars confirm

Press release for High Sierra:

Since HDDs are supported already, Apple presumably means they'll eventually convert boot HDDs by default.
Hopefully it won’t be too long.

When the update is released, does the installer convert external HDDs? I plan on getting an external drive to dump all the video off my iMac.
( Last edited by Chongo; Oct 3, 2017 at 11:09 AM. )
"The blood of the martyrs is the seed of the church" Saint Tertullian, 197 AD
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 3, 2017, 12:27 PM
 
Originally Posted by Chongo View Post
When the update is released, does the installer convert external HDDs? I plan on getting an external drive to dump all the video off my iMac.
Not by default, I think, but you can do so manually in Disk Utility.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
reader50  (op)
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Oct 3, 2017, 12:43 PM
 
The automatic APFS conversion is done by the HS Installer. It only does it to the volume it's being installed to, and (currently) only if that volume is hosted entirely on an SSD.

At least as of the betas, it's only supposed to do the silent conversion if it's an Apple-supplied SSD. However, beta testers reported 3rd-party SSDs were sometimes converted. I haven't heard if that proviso still applies to the final release.
     
reader50  (op)
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Oct 3, 2017, 01:51 PM
 
I tried a High Sierra upgrade on my 4,1 Mac Pro. It already had 5,1 firmware and the installer didn't give me any extra warnings.

The firmware update was applied, probably so the MacPro can recognize APFS boot volumes. Since I used a spare HDD partition, there was no APFS conversion. Installation went without incident.

It's just a default clean install. I won't try migrating until 13.1 or 13.2, after some bugs are fixed. Especially any bugs in APFS. Observations:

• Apple's graphics drivers hang up boot on my Radeon 7970 if an SST 4K monitor is plugged in via DisplayPort, i.e. if a 4K 60 Hz display connection is present. Same if a 4K monitor connected over HDMI is the only connected monitor. I have to power down the monitors until after the boot chime, or connect via DVI without any DP connections, to work around it. This bug is still present in High Sierra.
• I seem to recall Startup Disk in Yosemite didn't always show Sierra volumes as boot options. It definitely does after the firmware update.
     
CharlesS
Posting Junkie
Join Date: Dec 2000
Status: Offline
Oct 3, 2017, 08:20 PM
 
Originally Posted by P View Post
Not by default, I think, but you can do so manually in Disk Utility.
It is very easy to do; just choose Edit -> Convert to APFS.

The lack of automatic conversion for HDDs is a non-issue.
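For what it's worth, the same conversion appears to be scriptable from Terminal as well; a hedged sketch (the identifier disk2s2 is a placeholder, check `diskutil list` first, and keep a backup, since the conversion is done in place):

```shell
# Find the HFS+ volume you want to convert
# (identifiers like disk2s2 differ from machine to machine)
diskutil list
# Convert that volume in place to APFS
# (High Sierra's diskutil gained the 'apfs' verb for this)
diskutil apfs convert disk2s2
```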

Ticking sound coming from a .pkg package? Don't let the .bom go off! Inspect it first with Pacifist. Macworld - five mice!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 3, 2017, 10:03 PM
 
Originally Posted by P View Post
If the error occurs on copy, trigger a write error. If the error occurs on access, trigger a read error. Regular disks have these errors and the filesystem should have routines to respond to them. The process of checking regularly must be something outside the filesystem driver, though.
If that is what Apple is planning on doing, IMHO they are doing it wrong. It just doesn't advance the state of the art. With ZFS you can ask the filesystem to keep copies of files in order to reconstruct the data from corrupted blocks. APFS could be smarter and automatically re-download the data from a Time Machine backup or from iCloud.
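The ZFS-style self-healing described above can be sketched in a few lines. This is a toy model only, not how ZFS or APFS is actually implemented; MirroredStore and its block layout are invented for illustration:

```python
import hashlib

class MirroredStore:
    """Toy block store that keeps N copies of each block plus a checksum,
    in the spirit of ZFS's 'copies' property (illustration only)."""

    def __init__(self, copies=2):
        self.copies = copies
        self.blocks = {}  # block_id -> list of bytearray copies
        self.sums = {}    # block_id -> expected checksum

    @staticmethod
    def _checksum(data):
        return hashlib.sha256(bytes(data)).hexdigest()

    def write(self, block_id, data):
        # Store the requested number of redundant copies and one checksum.
        self.blocks[block_id] = [bytearray(data) for _ in range(self.copies)]
        self.sums[block_id] = self._checksum(data)

    def read(self, block_id):
        """Return a copy whose checksum matches; quietly heal the bad ones."""
        expected = self.sums[block_id]
        copies = self.blocks[block_id]
        good = next((bytes(c) for c in copies
                     if self._checksum(c) == expected), None)
        if good is None:
            # Every copy failed its checksum: surface this as a read error.
            raise IOError("uncorrectable: all copies corrupt")
        for i, c in enumerate(copies):
            if self._checksum(c) != expected:
                copies[i] = bytearray(good)  # rewrite from the good copy
        return good
```

Corrupt one copy and a read still succeeds, quietly rewriting the bad copy from the good one; only when every copy fails its checksum does the error surface to the caller, which is the plain read-error case.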
Originally Posted by P View Post
I sometimes get the feeling that there are two sides to this issue at Apple: lower-level devs want to open the source, higher-ups don't see the reason. Opening the source generally helps outside developers, who can then understand how the OS works, but it of course gives away some secrets that Apple wants to keep. The current dump contains no info about the A11, despite the iPhone X being out already.
There are some hugely successful Apple open source projects (WebKit, llvm + clang and Swift come to mind), but with those there is usually not as much interest in keeping stuff under wraps. A publicly discussed and decided syntax change in Swift isn't going to spoil anything in the next WWDC keynote. Apple profits from getting input from developers, and that input actually helps further development.

But with their OSes, Apple has no incentive to actually develop them in the open as that would give away tons of stuff. Nevertheless, it would still be beneficial, because e. g. security researchers could poke the system and see what happens on the inside.
Originally Posted by P View Post
There are wins to integrating, but... have you seen a recent Mac motherboard? It's not like we're down to four chips and have issues on how to integrate them. There are still several. If Apple wants to save space, it can bundle up features from several of those into one chip.
I don't think that's a good argument: the trend is to consolidate different chips as much as possible to shrink the size of motherboards, lower power consumption and simplify the overall design. It'd be simpler if Apple didn't have to put a second computer running a second OS in a computer.

But most importantly, Apple wouldn't be beholden to Intel to design chips with the specs that Apple wants. Intel still hasn't included low-power cores on their mobile chips even though ARM-based SoCs have had that for years. The balance of CPU-to-GPU is different in Intel's chips compared to what Apple prefers. Apple has much less input on whether or not to include certain features in Intel's next integrated GPUs (e. g. to enable certain features in Metal).

When Apple switched from PowerPC to Intel, PowerPCs weren't competitive in the CPU department but GPU-wise Apple could be state-of-the-art. And Intel had three extremely convincing arguments:
(1) Its then-new Core CPU cores would have great performance-per-watt.
(2) Its then-new Core CPU cores would have great performance, period, covering everything from notebook CPUs to workstations.
(3) Intel had a promising roadmap of continuous year-over-year improvements in (1) and (2), and an established track record of keeping its promises.

What Apple needs now is different from what Intel can offer; it needs
(1) more integration between CPU cores, GPU cores and various accelerators — something that Intel can't or won't offer,
(2) better performance-per-watt,
(3) a faster turn-around from speccing a design to bringing it to market and
(4) reliable execution.

None of these points line up with Intel's strengths at the moment. Performance-wise, Apple's own cores are competitive in the mobile space already, so that is a wash right now. Since Apple will probably hit the same fundamental performance walls as Intel has, I reckon that the year-over-year speed increases will become smaller for Apple's CPU cores, too.
Originally Posted by P View Post
There are also wins to having secure features like TouchID on a separate chip, [...]
Sure, but on balance, I think the current arrangement is a net negative for Apple. Keep in mind that Apple could still decide to have a multichip design even if it designed its own SoCs for Macs.
Originally Posted by P View Post
but more interestingly... AMD uses an ARM core for secure functions. They have also built a highly modular architecture, and could make a special Apple chip with an x86 CPU, a Radeon GPU, and whatever special bits Apple wants, all of it connected to HBM RAM on an interposer. It's still not one chip, because I think flash storage would be off-package along with the WiFi transceiver, but it is getting there.
Well, I thought Apple would have taken the opportunity to buy AMD a while back, even if just for the GPU designers and all the patents. And I was thinking that it would consider putting AMD-based GPUs into its SoCs, but that was before it built its own GPU.

If AMD were open to making custom chips for Apple, that might be enticing, but it would depend largely on AMD's ability to deliver. And their recent track record is mixed. Ryzen is competitive, at least if you compare AMD's offerings to Intel CPUs at the same price point. But Apple would leave single-core performance on the table. I don't see the incentives.
I don't suffer from insanity, I enjoy every minute of it.
     
CharlesS
Posting Junkie
Join Date: Dec 2000
Status: Offline
Oct 3, 2017, 10:28 PM
 
Originally Posted by OreoCookie View Post
What Apple needs now is different from what Intel can offer; it needs
(1) more integration between CPU cores, GPU cores and various accelerators — something that Intel can't or won't offer,
(2) better performance-per-watt,
(3) a faster turn-around from speccing a design to bringing it to market and
(4) reliable execution.

None of these points line up with Intel's strengths at the moment.
You're forgetting a huge one, though:

(5) Compatibility with Apple's existing third-party software library.

When Apple switched from 680x0 to PowerPC, and from PowerPC to Intel, in each case the target architecture had surpassed the previous one to a sufficient extent that they could emulate the old architecture with acceptable performance; that's not true with ARM. Trying to emulate x86 on ARM would result in quite pitiful performance, and I don't expect that to change unless x86 gets stalled at some point in a similar fashion to 68k and PPC in their final years. Failing that, if Apple were ever to introduce ARM-based Macs, they'd have to let the developer community know well ahead of time for there to be any software available at all for the new platform. Even then, they'd risk the same sort of PR fiasco that the early non-x86-based Microsoft Surface models ran into, where people were confused by the fact that, of two supposedly Windows-based machines, only one would run all of their Windows software.

Ticking sound coming from a .pkg package? Don't let the .bom go off! Inspect it first with Pacifist. Macworld - five mice!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 4, 2017, 01:15 AM
 
Except that Apple has successfully managed 4 (!) CPU architecture transitions in the past 15 years (PowerPC —> Intel x86 —> Intel x64, ARM 32 bit —> ARM 64 bit). Apple has all of the tools to make this transition very smooth for developers and customers alike. Of course, there will be weird outliers, like probably Adobe, who will eff up.

With emulation, you're correct that this will be a factor, but it's only a temporary one and, I think, will be much more manageable than the PowerPC —> x86 one, thanks to the development environment being just Xcode (as opposed to back then, when many developers stuck with CodeWarrior for unnecessarily long) and the App Store. By forcing developers to submit fat binaries after some date, I think a lot of the popular software will make the transition rather quickly. Even software that relies on hardware-specific optimizations (say, Adobe software) has had time to implement at least part of them in their iOS apps. Of course, no transition is painless; there will be some who, say, will keep CS6 on their Macs until it stops running and then some, but overall I am confident that this is a problem with a solution.
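The fat-binary mechanics are worth spelling out: one Mach-O file carries one compiled slice per architecture, and the loader picks the slice matching the CPU. A sketch with the tools of the day (file names are made up, and since an ARM slice for Macs was hypothetical at the time, this uses the two x86 arches High Sierra-era Xcode could actually build):

```shell
# Compile the same source once per target architecture
clang -arch x86_64 -c main.c -o main_x86_64.o
clang -arch i386  -c main.c -o main_i386.o
# Glue the slices into one fat Mach-O file; at run time the loader
# selects whichever slice matches the machine's CPU
lipo -create main_x86_64.o main_i386.o -output main_fat.o
# Show which architectures the fat file contains
lipo -info main_fat.o
```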
I don't suffer from insanity, I enjoy every minute of it.
     
CharlesS
Posting Junkie
Join Date: Dec 2000
Status: Offline
Oct 4, 2017, 02:10 AM
 
The various 32->64 bit transitions are hardly comparable here, since those were simply upgrades to a new version of the current architecture, not a jump to a new one. In each case, the new processor was able to natively run code for the old one. It's apples and oranges.

Much of the third-party development was using Xcode at the time of the Intel transition, as well. It still took a long time to get everything ported. Xcode's not magic. Without the emulation layer, that transition would have been extremely painful.

Ticking sound coming from a .pkg package? Don't let the .bom go off! Inspect it first with Pacifist. Macworld - five mice!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 4, 2017, 03:24 AM
 
Originally Posted by CharlesS View Post
The various 32->64 bit transitions are hardly comparable here, since those were simply upgrades to a new version of the current architecture, not a jump to a new one. In each case, the new processor was able to natively run code for the old one. It's apples and oranges.
Of course you are right that migrating to an entirely different architecture is more difficult, but I think the two 32 bit —> 64 bit transitions are comparable in certain respects, because apps did break. As an aside, Apple's new A11 can no longer run 32 bit apps, because it apparently no longer contains the logic to run ARMv7(s) code. But even on the chips that can, iOS 11 only supports 64 bit apps, so the question of whether the underlying CPU can run 32 bit apps is moot.

The 64 bit Intel transition had to be done over a much longer period of time, because Apple did not and still does not control all the app distribution channels (which is good). At each step, there was some software that broke (e. g. my color calibration utility stopped working, and then a Cisco VPN kernel extension almost bricked an OS update; these Apple OS updates always seem to come as a surprise to Cisco). So the pain was distributed over a longer period of time (32 bit kernel with the ability to run 64 bit apps —> 64 bit kernel with the ability to also run 32 bit apps —> 64 bit only).
Originally Posted by CharlesS View Post
Much of the third-party development was using Xcode at the time of the Intel transition, as well. It still took a long time to get everything ported. Xcode's not magic. Without the emulation layer, that transition would have been extremely painful.
But the hold-outs were important, big developers that impacted one of Apple's then-core markets. And I didn't mean to imply there should be no emulation layer, there definitely should be. What I was saying was that unlike with the PowerPC —> Intel transition, there would be no huge single-core speed boost associated with it, so there would be some performance regression. But I still maintain that this performance hit would be temporary and still result in usable, albeit slow apps.

I reckon Apple would roll out ARM-based Macs just like it did when doing the switch to Intel: first, release an ARM-based Mac aimed squarely at developers and then migrate notebooks first. Here, maximum performance is less of a priority and the type of apps these machines typically run would probably be migrated over quickly. Apple would help Microsoft and Adobe along with the transition, and most of the premier Mac developers (e. g. Omnigroup) have ported or rewritten their apps for iOS anyway.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 4, 2017, 06:02 AM
 
Originally Posted by OreoCookie View Post
If that is what Apple is planning on doing, IMHO they are doing it wrong. It just doesn't advance the state of the art. With ZFS you can ask the filesystem to keep copies of files in order to reconstruct the data from corrupted blocks. APFS could be smarter and automatically re-download the data from a Time Machine backup or from iCloud.
But this is all in how the filesystem driver reacts to an error. The communication between the SSD controller and the CPU doesn't have to be more complex than that.

I don't think that's a good argument: the trend is to consolidate different chips as much as possible to shrink the size of motherboards, lower power consumption and simplify the overall design. It'd be simpler if Apple didn't have to put a second computer running a second OS in a computer.
But in the case of TouchID, I think they want it to be separate. It is a feature.

But most importantly, Apple wouldn't be beholden to Intel to design chips with the specs that Apple wants. Intel still hasn't included low-power cores on their mobile chips even though ARM-based SoCs have had that for years. The balance of CPU-to-GPU is different in Intel's chips compared to what Apple prefers. Apple has much less input on whether or not to include certain features in Intel's next integrated GPUs (e. g. to enable certain features in Metal).
Intel is being quite responsive, though. Apple only put low-power cores on its chips last year, and there hasn't been a new Intel arch since. Maybe that is in the plans for Ice Lake or whatever the next redesign is called, Sapphire Rapids IIRC (except I think that Ice Lake will have to widen the reorder buffer, because the entire chip is at least six wide but bottlenecking on a four-wide reorder buffer, and that doesn't make sense when both AMD and Apple are wider than four right now). Ironically, the best option is probably the old Atom core - not the current one. Intel hasn't done much with its graphics for some time, except updating the video decoder chips, but they did make the entire GT3 and Crystalwell in response to Apple's requests. A dual-core with GT3e runs quite well, and is probably exactly what Apple wants for the 13" right now.

When Apple switched from PowerPC to Intel, PowerPCs weren't competitive in the CPU department but GPU-wise Apple could be state-of-the-art. And Intel had three extremely convincing arguments:
(1) Its then-new Core CPU cores would have great performance-per-watt.
(2) Its then-new Core CPU cores would have great performance, period, covering everything from notebook CPUs to workstations.
(3) Intel had a promising roadmap of continuous year-over-year improvements in (1) and (2), and an established track record of keeping its promises.

What Apple needs now is different from what Intel can offer; it needs
(1) more integration between CPU cores, GPU cores and various accelerators — something that Intel can't or won't offer,
(2) better performance-per-watt,
(3) a faster turn-around from speccing a design to bringing it to market and
(4) reliable execution.

None of these points line up with Intel's strengths at the moment. Performance-wise, Apple's own cores are competitive in the mobile space already, so that is a wash right now. Since Apple will probably hit the same fundamental performance walls as Intel has, I reckon that the year-over-year speed increases will become smaller for Apple's CPU cores, too.
1) is why I posted the EMIB bit. It is clearly Intel's answer to that.
2) I'm not sure I agree. Intel has the best performance per watt right now. What they don't necessarily have is good performance at very low power requirements. That is why you do things like add low-power cores - not to get better performance per watt, but to get any performance at all at low power. Performance per watt is worse.
3) is Apple any faster? They clearly take several years to make a design.
4) yeah... Intel has stumbled on 14nm, but they used to be reliable as clockwork for a decade before that. These things happen, and I don't think Apple can improve reliability by switching.

If AMD were open to making custom chips for Apple, that might be enticing, but it would depend largely on AMD's ability to deliver. And their recent track record is mixed. Ryzen is competitive, at least if you compare AMD's offerings to Intel CPUs at the same price point. But Apple would leave single-core performance on the table. I don't see the incentives.
Arguably AMD is already making custom chips for Apple - the MBP GPU is a special variant made to be extra thin, and Vega graphics include a feature that absolutely smells Metal-compatible. I think they would leap at the chance. And remember, if it doesn't work out, Apple could go back to buying from Intel, and they could keep buying Intel for some models to keep them happy. If they go ARM, that is a one-way street.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 4, 2017, 06:52 AM
 
@P
Sorry, I hope my thoughts are not too disjointed.

Originally Posted by P View Post
But this is all in how the filesystem driver reacts to an error. The communication between the SSD controller and the CPU doesn't have to be more complex than that.
But this way, you don't actually enable new, important features that are available on other filesystems. A read or write error could mean many things, and the filesystem doesn't know whether the error is correctable or not; that seems like a wasted opportunity to improve data integrity in a meaningful way. Ditto for things like ZFS's copies feature: really important things like the kernel itself, kernel modules and essential libraries could have, say, two copies, temp files just one, and perhaps the expert user could also deem some data important enough to warrant an extra layer of protection.
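For comparison, this is roughly what that per-dataset redundancy looks like on ZFS; the pool and dataset names (tank/system, tank/tmp) are made up for the example:

```shell
# Keep two copies of every block for critical data; corruption in one copy
# can then be healed from the other on read or during a scrub
zfs set copies=2 tank/system
# Scratch data gets just the single default copy
zfs set copies=1 tank/tmp
# A scrub walks the whole pool, verifies every checksum,
# and repairs whatever it can from redundant copies
zpool scrub tank
zpool status -v tank
```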
Originally Posted by P View Post
But in the case of TouchID, I think they want it to be separate. It is a feature.
Why? It has been integrated on the SoC for iOS devices, why should that be advantageous for Macs?
Originally Posted by P View Post
Intel is being quite responsive, though. Apple only put low-power cores on its chips last year, and there hasn't been a new Intel arch since.
But Intel isn't designing just for Apple; they have to balance Apple's wishes with keeping other vendors happy as well. I know that Intel sometimes makes custom parts for Apple (e. g. the special packaging for the CPUs destined for the original MacBook Air comes to mind), but that is very different from Apple speccing a design precisely the way they want to. They also can't make Intel take a hit in margins by making their design quite large (the A11 has 4.3 billion transistors, which is quite something). Apple, if it designed the next MacBook Pro's SoC itself, could just take the hit and still make a 40 % margin on the product.

I'm just saying that Intel's and Apple's incentives and motives aren't as aligned anymore as they once were.
Originally Posted by P View Post
Intel hasn't done much with its graphics for some time, except updating the video decoder chips, but they did make the entire GT3 and Crystalwell in response to Apple's requests. A dual-core with GT3e runs quite well, and is probably exactly what Apple wants for the 13" right now.
This exemplifies the difference in priorities: Apple has been pushing hard to improve GPU performance, and that grew more rapidly than CPU performance in iOS devices. Intel hasn't continuously kept the foot on the gas in the GPU department. And there would have been 2c Crystalwell parts available for quite some time if Apple had had its way.
Originally Posted by P View Post
1) is why I posted the EMIB bit. It is clearly Intel's answer to that.
From Intel's perspective it is the right pitch to make, and for certain types of systems, particularly systems where you can have larger, more complex motherboards, this may work well. But ultimately, you have to standardize all chips around someone else's interconnect. That makes it difficult to deal with iterations: you have to wait until the new version is fully baked, which may mean waiting until the Intel CPU has made it out the door. Especially for highly integrated mobile systems, the trend will always be toward more integration rather than less. For desktop parts, you have an argument, though.
Originally Posted by P View Post
2) I'm not sure I agree. Intel has the best performance per watt right now. What they don't necessarily have is good performance at very low power requirements. That is why you do things like add low-power cores - not to get better performance per watt, but to get any performance at all at low power. Performance per watt is worse.
I am speaking of performance-per-watt for the whole SoC under real-life workloads. I am quite sure that a fictional A11X-based MacBook would have better battery life than its real-life Intel counterpart. Would you disagree? And why? An A11X (which isn't out yet, but surely exists) running at full tilt consumes less power than the Core m7/low-power Core i7 that you find in the top-of-the-line MacBook, and I reckon the A11X's graphics are faster, too. (Of course, I extrapolated from the A10X here, but we could use the A11 (non-X) in my argument.)

Plus, I haven't heard Intel talk about heterogeneous multiprocessing at all (correct me if I am just not up to speed). These parts have existed on the ARM side for quite a few years, and Apple was arguably late to the party (although that may have been a conscious decision on their part). I don't think Intel's lead time is any worse than Apple's lead time on new silicon, so it seems to me that Intel made a conscious decision not to make parts with big and small cores.
Originally Posted by P View Post
3) is Apple any faster? They clearly take several years to make a design.
Yes, because I am thinking of the time frame from when Apple specs its systems to the product hitting the shelf. If Apple has to interact with Intel here, convince them to make certain special parts, that takes additional time and Intel could always say no.

Or things in Intel's roadmap could change (e. g. the new interconnect could be postponed to Titicaca Lake). Necessary technologies (such as display interconnects with sufficient speeds) could be delayed, whereas an in-house design could just use something custom. (Imagine a MacBook Pro with a Super Retina display, for instance; the internal display would not need to be based on some standard interconnect.)
Originally Posted by P View Post
Arguably AMD is making custom chips for Apple - the MBP GPU is a special variant to be extra thin, and Vega graphics include a feature that absolutely smells Metal-compatible. I think they would leap at the chance. And remember, if it doesn't work out, Apple could buy Intel again, and they could keep doing so in some models to keep them happy.
A custom version of an existing GPU is a very different beast from a custom SoC design. AMD has done custom designs (for the Playstation and the XBox), but they would have to do that every 1~2 years. I am not sure they are up for it. If Apple went with AMD CPUs, that would definitely be a sign that it plans to stay with x86, because that'd be a great way to put the thumb screws on Intel.
Originally Posted by P View Post
If they go ARM, that is a one-way street.
Yes, but Apple's largest ecosystem is already on ARM — and I think that is the ultimate advantage of ARM, it already is on >1 billion Apple devices. And Apple has all the pieces in place. To me it seems like a risk worth taking.
I don't suffer from insanity, I enjoy every minute of it.
     
And.reg
The Mighty
Join Date: Feb 2004
Location: Well the sports issue was within arm's reach but they closed up shop and kicked me out. And I'm out of toilet paper.
Status: Online
Reply With Quote
Oct 5, 2017, 09:49 AM
 
You know, one of the supposed benefits of updating to APFS on High Sierra was that folder/file sizes were supposed to be calculated instantaneously.

*buzzer*

WRONG. I just tried that on my Documents folder, and it took a 13-second calculation (same as on Sierra 10.12) to tell me that it was using 27.6 GB.

And to be real sure, on this Kaby Lake Pro MacBook, you can bet that the format is indeed APFS.
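That kind of measurement is easy to reproduce; a minimal Python sketch (the path is just an example) that walks the tree and sums file sizes, which is essentially the work the Finder has to do when the filesystem doesn't keep a precomputed per-folder total:

```python
import os
import time

def folder_size(path):
    # Walk the tree and sum file sizes: essentially what Finder has to
    # do when the filesystem doesn't keep a precomputed per-folder total.
    total = 0
    for dirpath, _, filenames in os.walk(path):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return total

start = time.monotonic()
size = folder_size(os.path.expanduser("~/Documents"))
print(f"{size / 1e9:.1f} GB in {time.monotonic() - start:.1f} s")
```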
This one time, at Boot Camp, I stuck a flute up my PC.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 5, 2017, 01:00 PM
 
Originally Posted by OreoCookie View Post
But this way, you actually don't enable new, important features that are available on other filesystems. A read or write error could mean many things and the filesystem doesn't know whether the error is correctable or not, and that seems like a waste of an opportunity here that could really improve data integrity in a meaningful way.
So add more error codes, it is PCIe after all and Apple can do as they like - but I don't see why they should? The drive should try to make a correct read or write, retry as applicable, and then correct after if it has to (i.e. mark those blocks as bad if there was a failure). Only if it is unrecoverable should it wake up the OS and tell it something.

Ditto for things like ZFS's copies features, really important things like the kernel itself, kernel modules and essential libraries could have, say, two copies, temp files just one and perhaps the expert user could also deem some data important enough to warrant an extra layer of protection.
You can still have two copies on the file system layer, as there is no de-duping in the controller. A better fix, perhaps, is more parity information. APFS already makes multiple partitions in a default install, for temp files and swap, and could support settings like that easily. I'm not saying there are no features for which doing the checksumming in the CPU makes sense, but it costs power, and I don't see an obvious need for it.
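To make concrete what filesystem-level checksumming would add on top of the controller's own verification, here is a toy sketch (illustrative only; nothing like APFS's or ZFS's actual formats): the host stores a checksum per block, so bitrot that the drive returns as a perfectly successful read still gets caught.

```python
import zlib

def write_block(store, addr, data):
    # Filesystem-level checksumming: the host computes and stores a CRC
    # alongside each block, independent of the SSD controller's ECC.
    store[addr] = (data, zlib.crc32(data))

def read_block(store, addr):
    data, checksum = store[addr]
    if zlib.crc32(data) != checksum:
        raise IOError(f"bitrot detected in block {addr}")
    return data

store = {}
write_block(store, 0, b"kernel image bytes")
assert read_block(store, 0) == b"kernel image bytes"

# Simulate bitrot: flip one bit behind the filesystem's back.
data, checksum = store[0]
store[0] = (bytes([data[0] ^ 1]) + data[1:], checksum)
try:
    read_block(store, 0)
except IOError as err:
    print(err)  # the mismatch is caught on read, not silently returned
```

With per-block checksums in place, features like ZFS-style extra copies follow naturally: on a mismatch, the filesystem can fall back to a redundant copy instead of returning bad data.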

Why? It has been integrated on the SoC for iOS devices, why should that be advantageous for Macs?
Why integrate it? It is integrated on the iPhone to save space on the motherboard - no other reason. They don't even share RAM.

But Intel isn't designing just for Apple, they have to balance Apple's wishes against keeping other vendors happy as well. I know that Intel sometimes makes custom parts for Apple (e. g. the special packaging for their CPUs destined for the original MacBook Air comes to mind), but that is very different from Apple speccing a design precisely the way they want to. They also can't make Intel take a hit in margins by making their design quite large (the A11 has 4.3 billion transistors, that is quite something). Apple, if it designed the next MacBook Pro's SoC itself, could just take the hit and still make a 40 % margin on the product.
The point about going to Intel was that Apple would never fall behind. They could outsource the thing they're not necessarily good at - chip design - and win by having an OS that is better than the competition. If Apple is now moving things back in house, they have to always be better than Intel. Now, Apple is good, but Intel has been the leader of the pack for a decade now, and if they stumbled a bit recently, that doesn't erase that history. It is a risky move.

Yes, the transistor count on the A11 is massive, but the size of the chip is what costs money (you can have a crazy transistor count on a tiny chip if it is all SRAM). 89 mm² is not exactly large - Skylake quadcore is bigger. Intel will make as large a chip as you like; they make 500 mm² monsters, and they used to make 300 mm² chips for the mainstream. They just cost money, which is where you need to decide if the savings from having the chips fabbed at TSMC or wherever are worth it.

I'm just saying that Intel's and Apple's incentives and motives aren't as aligned anymore as they once were.

This exemplifies the difference in priorities: Apple has been pushing hard to improve GPU performance, and that grew more rapidly than CPU performance in iOS devices. Intel hasn't continuously kept the foot on the gas in the GPU department. And there would have been 2c Crystalwell parts available for quite some time if Apple had had its way.
There are bigger Crystalwell parts, it is just that nobody is buying them.

Crystalwell was GT3e, 2 full GPU slices and extra RAM on package. There is now, for Skylake, GT4e - three full slices and the extra RAM. Nobody is buying it (and so they didn't make a Kaby Lake version). Apple went with discrete graphics on the 15" MBP, where it would have been perfect.

I don't think Intel is ignoring graphics performance. They have made big steps - and the 13" MBP has a quite nice GPU, actually - but AMD and nVidia are simply better at it. For most customers, base Skylake graphics (GT2) is fine. For those who want more, you need discrete anyway. There is little market for the gap in between.

I am speaking of performance-per-watt for the whole SoC at real-life workloads. I am quite sure that a fictional A11X-based MacBook would have better battery life than its real-life Intel counterpart. Would you disagree? And why? An A11X (which isn't out yet, but surely exists) running at full tilt consumes less power than the Core m7/low-power Core i7 that you find in the top-of-the-line MacBook, and I reckon the A11X's graphics is faster, too. (Of course, I extrapolated from the A10X here, but we could use the A11 (non-X) in my argument.)
Does an A10X consume less power than the Core m7 or whatever it is called now? Because iPhone 8 battery life is not so impressive. I think that power reqs have been creeping up.

Anyway: Yes, the A11 will beat Intel's nerfed 4.5W CPUs (it beats the 15W CPUs on many tests), but some of that power is connectivity, and it generally wins when the tests are a few seconds long or so. PCIe lanes cost energy. (Hey, maybe that is why there is only one port?)

Plus, I haven't heard Intel talk about heterogeneous multiprocessing at all (although correct me if I am just not up to speed). These parts have existed on the ARM side for quite a few years, Apple was arguably late to the party (although that may have been a conscious decision on their part), and I don't think Intel's lead time is any worse than Apple's lead time on new silicon. So it seems to me that Intel made the conscious decision not to make parts with big and small cores.
I think Intel has been flailing for a while on low power cores. They had Atom, and nobody used it where it was supposed to be used (netbooks were a salvage, not the intended main usage). They made Atom more powerful, but the name was so dead that the entire chip died in phone and tablet uses. It is now a low-budget CPU (some of those Celerons and Pentiums are Atoms inside). Then they just tried to squeeze power down, relying on their superior process, but they lost too much performance. I don't know what they're up to, but they haven't said a word on what Ice Lake includes. They could very well have low-power cores that run transparently at low P states.

Yes, because I am thinking of the time frame from when Apple specs its systems to the product hitting the shelf. If Apple has to interact with Intel here, convince them to make certain special parts, that takes additional time and Intel could always say no.
If it were special features for Apple, yes - but that is the point of EMIB, I think. Apple (along with other big OEMs) can certainly guide the general design.

Or things in Intel's roadmap could change (e. g. the new interconnect could be postponed to Titicaca Lake). Necessary technologies (such as display interconnects with sufficient speeds) could be delayed, whereas an in-house design could just use something custom. (Imagine a MacBook Pro with a Super Retina display, for instance; the internal display would not need to be based on some standard interconnect.)
They could still use third-party GPUs for that... and even iPhones use a standard display interconnect (eDP). As long as Apple doesn't make the displays, they have to.

A custom version of an existing GPU is a very different beast from a custom SoC design. AMD has done custom designs (for the Playstation and the XBox), but they would have to do that every 1~2 years. I am not sure they are up for it. If Apple went with AMD CPUs, that would definitely be a sign that it plans to stay with x86, because that'd be a great way to put the thumb screws on Intel.
OTOH, AMD has enjoyed great financial success with those semi-custom SoCs for gaming consoles, and its customers are happy enough to come back and use them again. If I were AMD and needed to make money, that is certainly something I would offer to my customers.

Yes, but Apple's largest ecosystem is already on ARM — and I think that is the ultimate advantage of ARM, it already is on >1 billion Apple devices. And Apple has all the pieces in place. To me it seems like a risk worth taking.
I'm not sure. Apple doesn't seem to spend that much time on the Mac anymore. The pro-consumer split is all but gone - the new MP might change that, but it is a reaction, not what Apple wanted to do. The OS gets only minor fixes for anything except what might be shared with iOS. There are Macs in the line not updated for years, and Apple won't even talk about them. Are they really going to make special chips for those Macs? They seem to be drawing down their Mac spending in every other area.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
And.reg
The Mighty
Join Date: Feb 2004
Location: Well the sports issue was within arm's reach but they closed up shop and kicked me out. And I'm out of toilet paper.
Status: Online
Reply With Quote
Oct 6, 2017, 02:10 PM
 
Interrupting... so, it's been about a week since updating to High Sierra, and I'm noticing somewhat improved battery life on my 2017 Pro MacBook, 2.9 GHz, on iGPU and battery and Safari only. At 87% charge I have approximately 15 hours of battery life remaining, whereas on 10.12 Sierra I would have had about 10 hours. Safari seems to be more efficient.
This one time, at Boot Camp, I stuck a flute up my PC.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 6, 2017, 07:57 PM
 
@P
Great exchange, I'm enjoying this.
Originally Posted by P View Post
So add more error codes, it is PCIe after all and Apple can do as they like - but I don't see why they should? The drive should try to make a correct read or write, retry as applicable, and then correct after if it has to (i.e. mark those blocks as bad if there was a failure). Only if it is unrecoverable should it wake up the OS and tell it something.

You can still have two copies on the file system layer, as there is no de-duping in the controller. A better fix, perhaps, is more parity information. APFS already makes multiple partitions in a default install, for temp files and swap, and could support settings like that easily. I'm not saying there are no features for which doing the checksumming in the CPU makes sense, but it costs power, and I don't see an obvious need for it.
I fully admit that my views are colored by what I want.

Right now, Apple doesn't even seem convinced that it needs checksumming for data blocks, so I think we should convince them one step at a time …
Originally Posted by P View Post
Why integrate it? It is integrated on the iPhone to save space on the motherboard - no other reason. They don't even share RAM.
If you look at the size of motherboards of mobile Macs, they have steadily been getting smaller as well.
Originally Posted by P View Post
The point about going to Intel was that Apple would never fall behind. They could outsource the thing they're not necessarily good at - chip design - and win by having an OS that is better than the competition. If Apple is now moving things back in house, they have to always be better than Intel. Now, Apple is good, but Intel has been the leader of the pack for a decade now, and if they stumbled a bit recently, that doesn't erase that history. It is a risky move.
I agree it is a risky move, and five years ago the risk would have been much larger. But now I'd say it is a calculated risk, and from Intel's recent track record I think it is fair to say that it is by no means clear whether Intel will be able to catch up with Apple. In fact, I would say the biggest risk is not chip design (which Apple does in-house) but fabbing (which Apple outsources).
Originally Posted by P View Post
Crystalwell was GT3e, 2 full GPU slices and extra RAM on package. There is now, for Skylake, GT4e - three full slices and the extra RAM. Nobody is buying it (and so they didn't make a Kaby Lake version). Apple went with discrete graphics on the 15" MBP, where it would have been perfect.
As far as I remember, there were no 2c notebook Crystalwell parts until recently, and those parts would have been appreciated by Apple's best-selling MacBook Pro, the 13".
Originally Posted by P View Post
Does an A10X consume less power than the Core m7 or whatever it is called now? Because iPhone 8 battery life is not so impressive. I think that power reqs have been creeping up.

Anyway: Yes, the A11 will beat Intel's nerfed 4.5W CPUs (it beats the 15W CPUs on many tests), but some of that power is connectivity, and it generally wins when the tests are a few seconds long or so. PCIe lanes cost energy. (Hey, maybe that is why there is only one port?)
Right now Apple could conceivably use the A10X and its successors in the MacBook and the non-Touch Bar 13" MacBook Pro (the one which uses a 15 W CPU) without any impact on performance (probably even a performance boost) and with improved battery life. With improved cooling and larger batteries, I don't think throttling will be an issue. (Traditionally, Apple's SoCs haven't been prone to throttling in tests as Apple keeps the TDP closer to the amount of heat that can be dissipated by the chassis. To be fair, though, we don't know yet how the A11 fares.)

For larger machines, Apple would have to beef up its SoCs, scaling up its CPU and GPU cores. This is a non-trivial effort, but achievable.
Originally Posted by P View Post
I think Intel has been flailing for a while on low power cores. They had Atom, [...]
Yes, agreed, and it bit them in the rear end. And I don't see a sign of that getting better any time soon, because instead of focussing on having a second, lower-power core (akin to an ARM Cortex A53) in addition to its higher-power core, they now apparently sell Celerons and Pentiums with older-gen Cores. It's confusing to say the least.

I think Intel is caught between a rock and a hard place: apparently, its mainstream Core processors performed better when constrained to 4.5 W than the Atoms did, but they were still the same big and expensive cores as before. Plus, a 4.5 W TDP is far away from the Core's design specs, so it isn't exactly optimized for that. I am not sure whether to blame a relative lack of attention or resources — it is clear that their Core cores (ugh) are their bread and butter, spanning the gamut from high-end servers to MacBooks, and diverting a significant amount of resources away from that may seem like a gamble.

ARM's big.LITTLE strategy turned out to be a winning idea in the mobile space, and ARM was early and very strategic about it. Intel hasn't sent any signs that it will have heterogeneous multiprocessing in one of its CPUs anytime soon, and that's a real bummer. A 2 big + 4 LITTLE combo is a great, balanced configuration for entry-level notebooks like the MacBook, and one that can be scaled up.
Originally Posted by P View Post
I don't know what they're up to, but they haven't said a word on what Ice Lake includes. They could very well have low-power cores that run transparently at low P states.
I hope you are right, because at least for a few more years, Macs will have CPUs running the x64 instruction set.
Originally Posted by P View Post
They could still use third-party GPUs for that... and even iPhones use a standard display interconnect (eDP). As long as Apple doesn't make the displays, they have to.
No, they don't have to use something home grown if a standard fully satisfies their needs.
Originally Posted by P View Post
OTOH, AMD has enjoyed great financial success with those semi-custom SoCs for gaming consoles, and its customers are happy enough to come back and use them again. If I were AMD and needed to make money, that is certainly something I would offer to my customers.
AMD would probably kill to add Apple to its list of customers and be very accommodating. And I think they have the right products for iMacs (especially higher-end ones and the iMac Pro) as well as the Mac Pro right now. Where they are lacking is in the mobile department.

Philosophically, I would like it if Apple and the rest of the industry applied more pressure on Intel to perform, and it'd be cool if they offered Threadripper and Epyc CPUs in their professional desktops.
Originally Posted by P View Post
I'm not sure. Apple doesn't seem to spend that much time on the Mac anymore. The pro-consumer split is all but gone - the new MP might change that, but it is a reaction, not what Apple wanted to do. The OS gets only minor fixes for anything except what might be shared with iOS. There are Macs in the line not updated for years, and Apple won't even talk about them. Are they really going to make special chips for those Macs? They seem to be drawing down their Mac spending in every other area.
Honestly, I think this is your best counterargument to switching (at least as I see it). It is very easy to extrapolate from technical possibilities and upsides, but if there is no will, or if Apple does not want to risk messing with its perfect yearly cadence for iPhone (and iPad) SoCs, then they might be dissuaded from switching.

One canary in the coal mine to me is if Apple deploys custom ARM-based servers on a large scale. That is something most of the big cloud providers are at least toying with, and that would indicate an effort to build chips that scale way up from a phone or a tablet.
( Last edited by reader50; Oct 6, 2017 at 09:42 PM. Reason: remove broken quote paragraph that had no reply)
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 7, 2017, 11:11 AM
 
Originally Posted by And.reg View Post
Interrupting... so, it's been about a week since updating to High Sierra, and I'm noticing somewhat improved battery life on my 2017 Pro MacBook, 2.9 GHz, on iGPU and battery and Safari only. At 87% charge I have approximately 15 hours of battery life remaining, whereas on 10.12 Sierra I would have had about 10 hours. Safari seems to be more efficient.
Some of that is blocking some tracking, some of it is auto-pause on videos. I wonder if Apple didn't get a driver update in for the GPU as well. Kaby Lake is supposed to have full hardware decoding for a few more codecs (notably VP9, which Google likes to push to all Chrome users and which is sometimes used by others as well), and it was not enabled on Sierra.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 7, 2017, 12:38 PM
 
Originally Posted by OreoCookie View Post
@P
Great exchange, I'm enjoying this.
Me too! This is why I joined online forums in the first place. Fair warning though, this is a long, rambling post.

I fully admit that my views are colored by what I want.

Right now, Apple doesn't even seem convinced that it needs checksumming for data blocks, so I think we should convince them one step at a time …
There is some sort of verification in the writing stage in the SSD controller - it is just not verified for bitrot.

I just want the data. Apple says that they have data for why it isn't needed, so why won't they just show us?

If you look at the size of motherboards of mobile Macs, they have steadily been getting smaller as well.
They have, but slowly. The one in the newest 2016 (and 2017) models has a funny shape to fit in around the cooling fans, but in surface area it isn't a lot smaller than the Retina one - and most of that space is saved by soldering in flash. The biggest saving Apple could have made would have been to skip the discrete graphics in the 15", since they're using a fairly anaemic power-throttled GPU anyway, but they didn't.

I agree it is a risky move, and five years ago the risk would have been much larger. But now I'd say it is a calculated risk, and from Intel's recent track record I think it is fair to say that it is by no means clear whether Intel will be able to catch up with Apple. In fact, I would say the biggest risk is not chip design (which Apple does in-house) but fabbing (which Apple outsources).
Yes, the biggest risk is the fabs, but you can't separate them. If Intel messes up the process, then everyone suffers and the playing field is even - if Intel succeeds and TSMC (or GF, or Samsung, or whoever) fails, Apple is behind like in the PPC days. Remember that GF has failed spectacularly on their process development in recent years, and EVERYONE failed on the 20nm node. They both succeeded with the 14/16nm node, but there are really only two left on the cutting-edge foundry side now that GF is using Sammy's process, so there isn't a lot of data to draw on here.

As far as I remember, there were no 2c notebook Crystalwell parts until recently, and those parts would have been appreciated by Apple's best-selling MacBook Pro, the 13".
Right, I misunderstood your post here.

Crystalwell launched with Haswell in Q3 '13, quadcore parts only. The followup was going to be Broadwell on 14nm a year later, but that failed so spectacularly that we don't really know what it was supposed to include. The generation after, Skylake, DID include dualcores with Crystalwell (GT3e), launched in Q3 '15. Apple then didn't use them until a year later, which I will never understand, but the duals were there two years after the quads. Intel messed up the process which delayed the launch, but I don't think that they undervalued the need for better GPUs.

Right now Apple could conceivably use the A10X and its successors in the MacBook and the non-Touch Bar 13" MacBook Pro (the one which uses a 15 W CPU) without any impact on performance (probably even a performance boost) and with improved battery life. With improved cooling and larger batteries, I don't think throttling will be an issue. (Traditionally, Apple's SoCs haven't been prone to throttling in tests as Apple keeps the TDP closer to the amount of heat that can be dissipated by the chassis. To be fair, though, we don't know yet how the A11 fares.)

For larger machines, Apple would have to beef up its SoCs, scaling up its CPU and GPU cores. This is a non-trivial effort, but achievable.
Performance-wise, they could - if we only look at the SoC - but they can't put Thunderbolt ports on them, and their storage connection would have to be slower. Apple would need to start putting PCIe controllers on its SoCs to make either of those things work, and there goes the battery life advantage.

Scaling up the chip for higher-performance Macs is harder than it seems. I made a reference to the reorder buffer earlier, but let me expand on that a little (warning: long story about details of modern CPUs forthcoming). A modern out-of-order processor will treat the incoming program as a list of things to be done, but will then try to move them around to maximise performance. It will also rename the registers in the instruction set so it can keep more data in its physical registers, close to the CPU. If you have an instruction that says r1 + r2 => r1, the processor will first find where it stored r1 and r2 - let's say at PRF47 and PRF59, for the sake of argument. It will then add those two values and store the result somewhere, but it won't be PRF47. It will store the data in an unused location - call it PRF84 - so that if it again needs the data that used to be in r1, it doesn't have to load it from main memory again. All it does is change the pointer for r1 to point to PRF84 from that instruction on (because the CPU might still have earlier instructions to execute that haven't actually been completed yet).
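The renaming dance just described can be sketched in a few lines; a toy model with made-up structure sizes (no real CPU lays it out like this):

```python
class RenameStage:
    """Toy register renaming: architectural registers are pointers
    into a larger physical register file (PRF)."""

    def __init__(self, n_phys=128):
        self.free = list(range(n_phys))  # free physical registers
        self.map = {}                    # architectural name -> PRF slot
        self.prf = [0] * n_phys          # physical register file

    def set_arch(self, reg, value):
        slot = self.free.pop(0)
        self.prf[slot] = value
        self.map[reg] = slot

    def execute_add(self, dst, a, b):
        # e.g. r1 + r2 => r1: read the current physical locations...
        result = self.prf[self.map[a]] + self.prf[self.map[b]]
        # ...write the result to a *fresh* slot and repoint dst, so the
        # old value stays live for older, still-in-flight instructions.
        old_slot = self.map[dst]
        new_slot = self.free.pop(0)
        self.prf[new_slot] = result
        self.map[dst] = new_slot
        return old_slot, new_slot

cpu = RenameStage()
cpu.set_arch("r1", 5)   # say r1 currently lives in some PRF slot
cpu.set_arch("r2", 7)
old, new = cpu.execute_add("r1", "r1", "r2")
print(cpu.prf[cpu.map["r1"]], cpu.prf[old])  # 12 in the new slot, 5 still in the old
```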

Apple has an advantage over Intel here. Apple's SoCs can - ever since the A7 - execute 6 operations on the reorder buffer every cycle, while Intel has been stuck at 4 since the dawn of the modern era (the first Core 2, back in 2006, IIRC). Why is this? Well, because you can't just do multiple operations on the reorder buffer at the same time, so what Intel actually does is that it executes them one at a time, but at a higher clock, so it can service 4 operations per cycle. I don't know what Apple does, but most likely they do the same thing.

Apple needs to have its reorder buffer handle 15 billion operations per second - 6 per cycle at a max clock of 2.5 GHz. Intel needs its reorder buffer to handle 20 billion operations per second - 4 per cycle at a max clock of 5 GHz (on the latest Coffee Lake CPUs). I mentioned in passing that the reorder buffer is now the bottleneck on Intel's execution width. Intel is clever and has a trick to limit pressure on the ROB (micro-op fusion), but why don't they just increase ROB width? They could easily be as wide as Apple if they just dropped the max clock a bit, since they can't use that high clock in the laptops anyway. Trouble is, of course, that Intel can't drop the max clock, because they're in a fight at the high end as well, and their hardest competitor is their own old chips. Intel has tried before to make a separate CPU for mobile - Atom - but that failed, as we all know, so they're stuck with using their one good design across the entire product line.
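The arithmetic behind those two figures, spelled out (clock values as assumed above):

```python
def rename_ops_per_second(width, clock_ghz):
    # Operations per second the rename/reorder stage must sustain.
    return width * clock_ghz * 1e9

apple = rename_ops_per_second(6, 2.5)  # 6-wide at a ~2.5 GHz max clock
intel = rename_ops_per_second(4, 5.0)  # 4-wide at a ~5 GHz max clock
print(apple, intel)  # 15 billion vs 20 billion ops/s
```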

Now: if Apple were to update its SoCs to support its desktops, it would have to increase clocks. The only way they can do that significantly is by dropping performance on things like the ROB, and a million things like that (cache latency comes to mind as another big one). All of these changes drop IPC, and suddenly it's hard to beat Intel.

I believe that it is advantageous for Apple to not have the same design for all its power targets. Intel stretches theirs from about 5W to close to 200W, and I think that that is too much - it is the only thing that lets Apple match them. Apple needs a break at some point, and I think between the best-selling high-margin iOS devices and the commoditised Macs is a good point.

Yes, agreed, and it bit them in the rear end. And I don't see a sign of that getting better any time soon, because instead of focussing on having a second, lower-power core (akin to an ARM Cortex A53) in addition to its higher-power core, they now apparently sell Celerons and Pentiums with older-gen Cores. It's confusing to say the least.

I think Intel is caught between a rock and a hard place: apparently, its mainstream Core processors performed better when constrained to 4.5 W than the Atoms did, but they were still the same big and expensive cores as before. Plus, a 4.5 W TDP is far away from the Core's design specs, so it isn't exactly optimized for that. I am not sure whether to blame a relative lack of attention or resources — it is clear that their Core cores (ugh) are their bread and butter, spanning the gamut from high-end servers to MacBooks, and diverting a significant amount of resources away from that may seem like a gamble.

ARM's big.LITTLE strategy turned out to be a winning idea in the mobile space, and ARM was early and very strategic about it. Intel hasn't sent any signs that it will have heterogeneous multiprocessing in one of its CPUs anytime soon, and that's a real bummer. A 2 big + 4 LITTLE combo is a great, balanced configuration for entry-level notebooks like the MacBook, and one that can be scaled up.
The thing is... TDP isn't power consumption. It is used as a proxy for it, but it isn't the truth. TDP is how much heat the system must be able to cool away. Having a 4.5W CPU means that you can put it in tight quarters without much cooling, but it may not use less power in use.

On occasion I get dragged into debating the advantages of the various MBP models. One thing frequently stated by the uninitiated is that the non-touchbar model has better battery life (than the 13" with touchbar) because it has a smaller TDP. Since the TDP is roughly half, it uses about half as much power - right?

Wrong. The non-touchbar has better battery life because the battery is 10% bigger, because it doesn't need the cooling that the bigger CPU does (only a single fan instead of two on its bigger brother, and Apple used the space for more battery), but if the two computers do the same thing, the non-touchbar model isn't more efficient. If they're doing something that either CPU can do in 15W or less, they will use the same amount of power. If they're doing something where the bigger CPU goes over the 15W limit, it will get done sooner as the 15W model power-throttles, and frequently use less power, as it can sleep sooner (which means that display, storage etc can also sleep).
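The race-to-sleep point can be put into numbers. Everything below is made up for illustration (these are not measured MacBook Pro figures); the point is just that a higher-TDP part can come out ahead over a full window once the whole platform sleeps sooner:

```python
def energy_joules(cpu_w, platform_w, sleep_w, work, perf, window_s):
    # Whole-system energy over a fixed window: CPU plus platform power
    # (display, storage) while active, then everything sleeps once the
    # task is done (race-to-sleep).
    active_s = work / perf
    return (cpu_w + platform_w) * active_s + sleep_w * (window_s - active_s)

# A 15 W power-throttled part vs a 28 W part that is ~1.8x faster.
throttled = energy_joules(15, 5, 0.5, work=100, perf=10, window_s=20)
bigger = energy_joules(28, 5, 0.5, work=100, perf=18, window_s=20)
print(round(throttled, 1), round(bigger, 1))  # the bigger CPU uses less energy here
```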

The point about having an in-order core is that it can stay ready while sipping power. Without the OoOE hardware, you don't need to have power on to the ROB and all the long queues, so it uses less power while remaining ready to wake up at a moment's notice. It will however remain less efficient in principle. In practice this can be tweaked a bit (by using transistors and cache design optimised for power rather than for performance) but in general, the gain from a big.LITTLE is that you can wake up a small core, do a minor task and go back to sleep, having used less power than the big core would. This is vital in a phone, but maybe not in a laptop. Perhaps a better solution is a power-optimised small core? I'm not sure, but I can sort-of see why Intel didn't focus on making a big.LITTLE setup.

I wonder if there isn't more to be done on the software side. iOS can go into a power-saving mode, and it really does save power. Why can't my Mac do the same? The need is clearly there, and there is software that does this sort of thing in some limited cases (mainly by disabling the discrete GPU).

I hope you are right, because at least for a few more years, Macs will have CPUs running the x64 instruction set.

No, they don't have to use something home grown if a standard fully satisfies their needs.
This... came out strange. I mean that Apple cannot just ignore display standards, just because they make the SOC, unless they also make the display panel.

AMD would probably kill to add Apple to its list of customers and be very accommodating. And I think they have the right products for iMacs (especially higher-end ones and the iMac Pro) as well as the Mac Pro right now. Where they are lacking is in the mobile department.

Philosophically, I would like it if Apple and the rest of the industry applied more pressure to Intel to perform, and it'd be cool if they offered Threadripper and Epyc CPUs in their professional desktops.
AMD's Zen-based laptop CPUs (4 cores and 9 Vega-based CUs of graphics power) are due in limited volumes this year, and from OEMs Q1 '18. We'll see soon enough if they're good enough.

Honestly, I think this is your best counter argument to switching (at least as I see it). It is very easy to extrapolate from technical possibilities and upsides, but if there is no will or if Apple does not want to risk messing with its perfect yearly cadence for iPhone (and iPad) SoCs, then they might be dissuaded to switch.

One canary in the coal mine to me is if Apple deploys custom ARM-based servers on a large scale. That is something most of the big cloud providers are at least toying with, and that would indicate an effort to build chips that scale way up from a phone or a tablet.
The custom server market is EXTREMELY interesting. The space for makers of enterprise computers that run x86 is vanishing rapidly. The big cloud providers (Amazon, Google, MS) buy their CPUs directly from Intel and make their own motherboards. There is maybe space for one maker of big x86 servers, and Dell and HPE are competing to be that one. I don't know what Apple does with clouds right now - they bought capacity from MS and Google at different points in time, but they also have their own data centres. One wonders what computers they have in them. I'm going to bet that it isn't trashcan Mac Pros...
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 8, 2017, 03:54 AM
 
Originally Posted by P View Post
They have, but slowly. The one in the newest 2016 (and 2017) models has a funny shape to fit in around the cooling fans, but in surface area it isn't a lot smaller than the Retina one - and most of that space is saved by soldering in flash. The biggest saving Apple could have made would have been to skip the discrete graphics in the 15", since they're using a fairly anaemic power-throttled GPU anyway, but they didn't.
I don't get why Apple bothered here: the discrete GPUs are middle of the road as you write, and since Apple now officially supports external GPUs, they could just forgo a discrete one and point the <1 % to an external GPU if they need the ultimate GPU performance.
Originally Posted by P View Post
Yes the biggest risk is the fabs, but you can't separate them. If Intel messes up the process, then everyone suffers and the playing field is even - if Intel succeeds and TSMC (or GF, or Samsung, or whoever) fails, Apple is behind like in the PPC days. Remember that GF has failed spectacularly on their process development in recent years, and EVERYONE failed on the 20nm node. They both succeeded with the 14/16nm node, but there are really only two left on the cutting-edge foundry side now that GF is using Sammy's process, so there isn't a lot of data to draw on here.
You are right, although I expect that Intel will end up getting in the game here as well. I am sure Apple would be exhilarated if the A12 Superbionic or A13 Raptor (ok, I'm just making up names) were fabbed by Intel using the latest process node. Although Apple will still have that problem with its iOS devices (where, to be fair, it is several years ahead of the competition and won't have to worry).
Originally Posted by P View Post
Crystalwell launched with Haswell in Q3 '13, quadcore parts only. The followup was going to be Broadwell on 14nm a year later, but that failed so spectacularly that we don't really know what it was supposed to include. The generation after, Skylake, DID include dualcores with Crystalwell (GT3e), launched in Q3 '15. Apple then didn't use them until a year later, which I will never understand, but the duals were there two years after the quads. Intel messed up the process which delayed the launch, but I don't think that they undervalued the need for better GPUs.
I was wondering the same thing, when Intel debuted the 2c Crystalwell parts, I thought “Finally!” and expected them to show up in the next rev of the 13" MacBook Pro. I am not sure what Apple's reasoning here is.
Originally Posted by P View Post
Performance-wise, they could - if we only look at the SoC - but they can't put Thunderbolt ports on them, and their storage connection would have to be slower. Apple would need to start putting PCIe controllers on its SoCs to make either of those things work, and there goes the battery life advantage.
That's another point I didn't take into consideration in my earlier posts, although I would say it isn't so much connection to storage (Apple could just continue building its own storage controllers) but external connectors (a MacBook Pro has 4 Thunderbolt/USB3 ports).
Originally Posted by P View Post
Scaling up the chip for higher-performance Macs is harder than it seems. [...] Apple needs to have its reorder buffer handle 15 billion operations per second - 6 per cycle, max clock 2.5 GHz. Intel needs its reorder buffer to handle 20 billion operations per second - 4 per cycle, max clock 5 GHz (on the latest Coffee Lake CPUs). I mentioned in passing that the reorder buffer is now the bottleneck on Intel's execution width. Intel is clever and has a trick to limit pressure on the ROB (micro-op fusion), but why don't they just increase ROB width? They could easily be as wide as Apple if they just dropped the max clock a bit, since they can't use that high clock in the laptops anyway. Trouble is, of course, that Intel can't drop the max clock, because they're in a fight at the high-end as well, and their hardest competitor is their own old chips.
That's a nice succinct explanation, and it explains very nicely the intricacies of optimizing a design for a certain frequency range.
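For the record, the throughput figures quoted above work out like this (just multiplying the stated widths by the stated max clocks; a rough sketch, not a microarchitectural model):

```python
# Back-of-envelope ROB throughput: ops retired per cycle × max clock.
# Widths and clocks are the figures quoted in the post above.

def rob_ops_per_second(width_ops_per_cycle, max_clock_hz):
    return width_ops_per_cycle * max_clock_hz

apple = rob_ops_per_second(6, 2.5e9)  # 6-wide at 2.5 GHz
intel = rob_ops_per_second(4, 5.0e9)  # 4-wide at 5.0 GHz

# A wide-but-slower design and a narrow-but-faster one land in the same
# ballpark; the clock target, not the width, is the harder constraint.
print(apple / 1e9, intel / 1e9)  # 15.0 20.0 (billions of ops/s)
```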
Originally Posted by P View Post
Intel has tried before to make a separate CPU for the mobiles - Atom - but that failed, as we all know, so they're stuck with using their one good design across the entire production line.

Now: If Apple were to update its SOCs to support its desktops, it would have to increase clocks. The only way they can significantly do that is by dropping performance on things like the ROB, and a million things like that (cache latency comes to mind as another big one). All of these changes drop IPC, and suddenly it's hard to beat Intel.
Let me expand on that a little: if you compare ARM and Intel here, then Intel had in theory three, for a time two, and currently has only one CPU core line that is supposed to serve its whole market from 4.5 W TDP all the way to 200 W TDP. (The third one is Intel's Quark core, which I haven't heard much about and which to my knowledge is sold only as evaluation kits.) ARM on the other hand offers many more cores, each of which is optimized for a different TDP (or die size) and feature window: you have the Cortex A5/A7/A53 CPUs that are roughly on par in terms of features, die size and characteristics (in-order, narrow, very small die footprint, meant to be the LITTLE core in a big.LITTLE configuration). You have the A57/A72/A73/A75 series for larger, faster, out-of-order cores. And then you have much smaller cores in addition to that. While it is true that it is not fair to compare an A75 core to a Skylake Core core, it is still true that ARM does not need one core to span the gamut of all applications. This, in a nutshell, is Intel's Achilles heel, and now it just might be too late.

If Apple intends to switch to ARM-based processors, it probably needs to come out with a custom design for its desktops at least — unless it feels content just gluing together many (weaker) cores. That's what AMD is doing now with Ryzen, and while they are competitive in certain benchmarks, just throwing cores at the problem isn't a panacea. That's why I think an ARM-based custom Apple server is the canary in the coal mine: what better way to cross-fund development for a server SoC than also selling it in iMacs, iMac Pros and Mac Pros? Once I see rumors of that, I would be certain enough to take a bet.
Originally Posted by P View Post
I believe that it is advantageous for Apple to not have the same design for all its power targets. Intel stretches theirs from about 5W to close to 200W, and I think that that is too much - it is the only thing that lets Apple match them. Apple needs a break at some point, and I think between the best-selling high-margin iOS devices and the commoditised Macs is a good point.
Yes, I think this is a very valid point: the competition between Apple's SoCs and Intel's chips is asymmetric. And if the whole world were using mobile Macs, I would immediately say that this is why it is inevitable for Apple to switch. However, desktops are important for a small but important sliver of the computing market. As I wrote in the preceding paragraph, Apple would probably need to invest in custom core design.

Originally Posted by P View Post
The thing is... TDP isn't power consumption. It is used as that as a proxy, but it isn't the truth. TDP is how much heat the system must be able to cool away. Having a 4.5W CPU means that you can put it in tight quarters without cooling, but it may not use less power in use.

On occasion I get dragged into debating the advantages of the various MBP models. One thing frequently stated by the uninitiated is that the non-touchbar model has better battery life (than the 13" with touchbar) because it has a smaller TDP. Since the TDP is roughly half, it uses about half as much power - right?

Wrong. [...]
That's important, although I would add that at least in their current incarnations, Apple's SoCs have been known to throttle very little, so they might have thermal headroom for further speed boosts.
Originally Posted by P View Post
This... came out strange. I mean that Apple cannot just ignore display standards, just because they make the SOC, unless they also make the display panel.
You are right, but they can jerry-rig things if they have to.
Originally Posted by P View Post
AMD's Zen-based laptop CPUs (4 cores and 9 Vega-based CUs of graphics power) are due in limited volumes this year, and from OEMs Q1 '18. We'll see soon enough if they're good enough.
I'm very curious, because performance-wise AMD is competitive if you compare CPUs at the same price points.
Originally Posted by P View Post
The custom server market is EXTREMELY interesting. The space for makers of enterprise computers that run x86 is vanishing rapidly. The big cloud providers (Amazon, Google, MS) buy their CPUs directly from Intel and make their own motherboards.
I would add that all of them are also toying either with custom accelerators (such as TensorFlow co-processors) or with ARM-based servers as well. ARM-based server SoCs are just on the verge of being competitive. As far as I can tell, Apple is the only big cloud provider that doesn't deploy custom server hardware.
Originally Posted by P View Post
There is maybe space for one maker of big x86 servers, and Dell and HPE are competing to be that one. I don't know what Apple does with clouds right now - they bought capacity from MS and Google and different points in time, but they also have their own data centres. One wonders what computers they have in them. I'm going to bet that it isn't trashcan Mac Pros...
From older pictures, it looked as if they were using off-the-shelf stuff such as EMC storage arrays. Who knows what they are running now — not just hardware-wise, also in terms of software.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 10, 2017, 03:45 PM
 
Originally Posted by OreoCookie View Post
I don't get why Apple bothered here: the discrete GPUs are middle of the road as you write, and since Apple now officially supports external GPUs, they could just forgo a discrete one and point the <1 % to an external GPU if they need the ultimate GPU performance.
I would love to hear a tell-all about this project at some point, because so many things are interesting, but my guess is that it goes something like this...

Apple got requests to make a 15" Macbook Air, and it got requests to give the Airs a better display. It decided, more or less, to combine the two lines into one, and that is what we have in the new MB/MBP. While designing this, it also wanted to have a "marker" that "this line is pro!" for some of the products. It knows that it has some customers who really expect certain products to be pro, and look for things that designate them as such and not consumer (it famously sold an Ethernet adapter for the Retina MBPs at a loss). Discrete graphics was one such marker, and if we're honest, it's not a bad one - dual 5K displays is very much a pro feature - but I fear that it led them down a path of squeezing everything else to get that discrete graphics into a thin body.

I also think that external GPUs are very much a late idea, probably something they came up with after the MBP launch. Listening to Gruber's podcast, Apple staffers were flabbergasted and disappointed at the reception of the new MBPs - they thought that they had made another great product, and suddenly everyone is complaining about it "not being" pro for not having a 32GB RAM option. The presentation to five reporters about "we're going to make a new Mac Pro" was very much a reaction to that, and I think that the external GPU focus was as well.

Now, why didn't they make an integrated GPU option? Put simply, I don't think that it saved them that much money. AMD likely sold them very cheap GPUs, and Intel adds $60 for the Crystalwell chips.

You are right, although I expect that Intel will end up getting in the game here as well. I am sure Apple would be exhilarated if the A12 Superbionic or A13 Raptor (ok, I'm just making up names) were fabbed by Intel using the latest process node. Although Apple will still have that problem with its iOS devices (where, to be fair, it is several years ahead of the competition and won't have to worry).
I'm glad Schiller's team gets to come up with some cool names. This is like fighter jets now.

If TSMC's new process failed, Apple would be fine. Yes, the iPhone XIII wouldn't get a big boost over the iPhone XII, but that's OK, because nobody else would get a boost either. This is what happened with the 20nm process - the A8 in the iPhone 6 was a total dud, and Apple had to fudge the numbers to even come up with a 25% boost over the A7, but the iPhone 6 sold fairly well anyway (slight understatement).

I was wondering the same thing, when Intel debuted the 2c Crystalwell parts, I thought “Finally!” and expected them to show up in the next rev of the 13" MacBook Pro. I am not sure what Apple's reasoning here is.
There must have been a delay with the new models for some reason - touchbar, software for it, or the new displays - and Apple decided to hold off on the update to put Skylake into the expensive new model.

Let me expand on that a little: if you compare ARM and Intel here, then Intel had in theory three, for a time two, and currently has only one CPU core line that is supposed to serve its whole market from 4.5 W TDP all the way to 200 W TDP. (The third one is Intel's Quark core, which I haven't heard much about and which to my knowledge is sold only as evaluation kits.)
I forgot about Quark entirely. It seems it is still sold, but it goes into tinkering kits that compete with a Raspberry Pi.

ARM on the other hand offers many more cores, each of which is optimized for a different TDP (or die size) and feature window: you have Cortex A5/A7/A53 cpus that are roughly on par in terms of features, die size and characteristics (in order, narrow, very small die footprint, meant to be the LITTLE core in a big.LITTLE configuration). You have the A57/A72/A73/A75 series for larger, faster, out-of-order cores.
Just to enhance your point a little - there are actually two different groups in the later set. The A9 core was supposed to be replaced by the A15, with a partial focus on microservers, which is what then became the A57 and the whole big.LITTLE thing. Because that wasn't exactly a great core for mobiles, ARM kept developing the A9 into the A12, A17 and finally A73, the last one being special only in that ARM tried a marketing push for it in high-end mobiles so it could focus on microservers again with the A75. ARM doesn't have two families here - they have three. It is Apple's great luck that mobile manufacturers have kept picking the wrong core - they should have used A9 - A12 - A17 - A73 and skipped the rather lackluster A15, A57 and A72, which had far too much of a server focus to be good mobile chips.

And then you have much smaller cores in addition to that. While it is true that it is not fair to compare an A75 core to a Skylake Core core, it is still true that ARM does not need for one core to span the gamut of all applications. This, in a nutshell is Intel's Achilles Heel, and now it just might be too late.
Agree, completely. Intel is dragging around a server-focused cache system, AVX-256, and an interconnect system that scales to 24 cores even if they only use 2. This is their weakness, and I hope they do something about it. Skylake does show some indications that they are looking at it, with Skylake-E being different from Skylake in a few important characteristics (interconnect and cache system).

If Apple intends to switch to ARM-based processors, it probably needs to come out with a custom design for its desktop at least — unless it feels content just gluing together many (weaker) cores. That's what AMD is doing now with Ryzen, and while they are competitive in certain benchmarks, just throwing cores at the problem isn't a panacea. That's why I think an ARM-based custom Apple server is the canary in the coal mine: what better way to cross fund development for a server SoC than also selling it in iMacs, iMac Pros and Mac Pros? Once I see rumors of that, I would be certain enough to take a bet.
I wonder if they're experimenting with that? I'm sure they have had the full OS running on their own chips since the A7 at least, but actually making a new SoC? The many weaker cores ("flock of chickens" approach) won't help, they need single-threaded performance.

I'm very curious, because performance-wise AMD is competitive if you compare CPUs at the same price points.
I think it all comes down to process. GF 14nm has a bit of a bad rap because TSMC 16nm has been better for both mobiles and GPUs, but it isn't terrible. Power consumption goes through the roof if you push the clock too far, but mobile chips wouldn't go that far - and those low-power GPUs Apple uses in the MBP indicate that AMD can get pretty decent performance out of a low-power GPU. The things that Zen is missing compared to Core - inter-core communication, AVX bandwidth - are even less important for a mobile chip, so this could be good.

I would add that all of them are also toying either with custom accelerators (such as Tensor Flow co-processors) or with ARM-based servers as well. ARM-based server SoCs are just at the verge of being competitive. As far as I can tell, Apple is the only big cloud provider who doesn't deploy custom server hardware.

From older pictures, it looked as if they were using off-the-shelf stuff such as EMC storage arrays. Who knows what they are running now — not just hardware-wise, also in terms of software.
"doubling down on secrecy" indeed. I guess I can see why, but I'm curious, dammit!
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 10, 2017, 07:42 PM
 
Originally Posted by P View Post
I would love to hear a tell-all about this project at some point, because so many things are interesting, but my guess is that it goes something like this... [...] Discrete graphics was one such, and if we're honest, it's not a bad one - dual 5K displays is very much a pro feature - but I fear that it led them down a path of squeezing everything else to get that discrete graphics into a thin body.
Oh, yes, that might actually be a feature that pros want and need. But I like your hypothesis that Apple put it in to distinguish its different models. Many old-school pros still conflate integrated graphics with slow.
Originally Posted by P View Post
I also think that external GPUs are very much a late idea, probably something they came up with after the MBP launch.
Oh yes, although I would add that the idea of external GPUs is really old.
Originally Posted by P View Post
Listening to Gruber's podcast, Apple staffers were flabbergasted and disappointed at the reception of the new MBPs - they thought that they had made another great product, and suddenly everyone is complaining about it "not being" pro for not having a 32GB RAM option.
Yes, and asking around, most of the colleagues who need the power aren't happy with their redesigned MacBook Pros (for budgetary reasons I had to buy mine before the new, re-designed ones were released). Many don't like the TouchBar or the new keyboard, others would have liked more battery life and more performance. I think they were conceived from the same philosophy as the “new” Mac Pro — it just was a design for a different crowd. You need at least one “Ferrari” in your line-up that is just faster than anything else, has as much horsepower as you can fit in it and showcases where Apple goes next, a halo product.
Originally Posted by P View Post
Now, why didn't they make an integrated GPU option? Put simply, I don't think that it saved them that much money. AMD likely sold them very cheap GPUs, and Intel adds $60 for the Crystalwell chips.
Oh, I didn't think about the price at all (I have a PhD, not an MBA), that's a good point.
Originally Posted by P View Post
If TSMC's new process failed, Apple would be fine. Yes, the iPhone XIII wouldn't get a big boost over the iPhone XII, but that's OK, because nobody else would get a boost either. This is what happened with the 20nm process - the A8 in the iPhone 6 was a total dud, and Apple had to fudge the numbers to even come up with a 25% boost over the A7, but the iPhone 6 sold fairly well anyway (slight understatement).
Absolutely, and because Apple is ahead in the CPU design department, they don't have to worry about that.
Originally Posted by P View Post
I forgot about Quark entirely. It seems to be sold still, but it does into tinkering kits that compete with a Raspberry Pi.
To be fair, I think everyone has forgotten about Quark. I don't really see a market for it either, and it seems it was mostly an academic exercise, a proof-of-principle that you can build ultra-low-power x86 cores.
Originally Posted by P View Post
Just to enhance your point a little - there are actually two different groups in the later set. [...] they should have used A9 - A12 - A17 - A72 and skipped the rather lackluster A15, A57 and A72, which had far too much of a server focus to be good mobile chips.
Oh yes, I forgot about that split. These are literally developed by different groups if memory serves.
Originally Posted by P View Post
Agree, completely. Intel is dragging around a server-focused cache system, AVX-256, an interconnect system that scales to 24 cores even if they only use 2. This is their weakness, and I hope they do something about it. Skylake does show some indications that they are looking at it, with Skylake-E being different than Skylake in a few important characteristics (interconnect and cache system)
And don't forget the lack of customizability: while the PC manufacturer ecosystem didn't grow around this, the mobile manufacturers did, and you can get 2 big + 4 LITTLE cores, 4 fast A53s + 4 slow A53s, various graphics options, etc. And all of this for less money, as the margins are much smaller. And Intel doesn't have an answer for that right now. The interconnect you mentioned could solve this, but it would have to be widely adopted, and that at the very least will take time.
Originally Posted by P View Post
I wonder if they're experimenting with that? I'm sure they have had the full OS running on their own chips since the A7 at least, but actually making a new SoC? The many weaker cores ("flock of chickens" approach) won't help, they need single-threaded performance.
Definitely, for one simple reason: Apple wants best-of-class desktop-level performance in its iOS devices (as much as the thermal envelope permits), not “fast for a smartphone processor”. Right now they split the line-up into A# and A#X where the X variant is architecturally identical, but beefed up. Eventually, I expect them to design custom silicon for its iPad-like devices — and that may just include all Macs one day. To me, that's the right way to think about it.

Right now, the A#s are so ridiculously powerful for smartphone processors that Apple doesn't have to do that — yet. But as far as I can tell, all the “easy” measures to substantially increase IPC (going OoO, going wide, good branch prediction) have been implemented. And honestly, I'd be happy if the A12 and A13 focus on delivering the same level of performance at much lower power consumption. It feels as if I don't fully utilize the power of the A10 in my iPhone 7.

At one point Apple may conclude it is necessary to unchain the iPad from the iPhone, especially if Apple continues to release larger and larger variants. (Just imagine an iOS-powered “drawing table”.) They may then go to a “tick-tock” model as gains are harder to come by, and then they may alternate releases of these two different families of cores. And then there is the server path from the top, which I think is quite likely, too. For certain server tasks especially, a custom machine with co-processors for “machine learning” and “big data” and the right amount of cores with the right performance and performance per watt is an enticing proposition — that the competition is already playing with.

Of course (and to mention an argument to be cautious about my claim), Apple will want to delay this point in time as far into the future as it can. Its current strategy works exceptionally well and there is no competition for its iOS devices (in terms of performance) on the horizon. But once iPads compete more directly with notebooks, Apple will want to win based on performance as well.

By the way, I did forget that Apple already has different cores: they have the S# line-up, which is comparable to the Cortex A5/A7/A53 family, and the W1 core, which might be closer to either an embedded or a real-time core (ultra low power and low latency).
Originally Posted by P View Post
"doubling down on secrecy" indeed. I guess I can see why, but I'm curious, dammit!
Same here! I would love to see what they do on the back end, but I haven't seen any real leaks.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 13, 2017, 08:45 AM
 
Originally Posted by OreoCookie View Post
Yes, and asking around, most of the colleagues who need the power aren't happy with their redesigned MacBook Pros (for budgetary reasons I had to buy mine before the new, re-designed ones are released). Many don't like the TouchBar or the new keyboard, others would have liked more battery life and more performance. I think they were conceived from the same philosophy as the “new” Mac Pro — it just was a design for a different crowd. You need at least one “Ferrari” in your line-up that is just faster than anything else, has as much horsepower as you can fit in it and showcases where Apple goes next, a halo product.
Apple made a new MBP that has a faster CPU, roughly double the GPU performance, better connectivity, and a much better display in a slimmer configuration. It also adds the touchbar, and then spends the entire presentation demoing that thing, before telling everyone that by the way, you need to pay at least $1800 to get that touchbar. This makes the touchbar seem like a pro feature, but it very clearly isn't. Pros should be happy with all the other things - the new display and GPU in particular - but because Apple made it all about the touchbar, it became about that thing. This is marketing, not product.

I think Apple made some mistakes in the product as well, but the only big one was the removal of the USB-A port. If you buy a new MBP, you cannot connect it to the mouse you can buy with it, because the mouse comes with a lightning-to-USB-A cable - that alone is reason enough to say that it was a mistake. USB-A is also the only port that you need as much on the go as at your desk, and it is the one port that everyone assumes that you have. I also think that the dongle situation at launch was poor - no DisplayPort dongle, and the only HDMI dongle doesn't support 4K@60Hz.

The other things are understandable, IMO. The new keyboard is different, but it isn't worse per se. The only real issue with the touchbar is that the Esc was removed, which I do think is an error, but one that can be mitigated in software. HDMI was removed because Intel wants to move away from it, and they have been telegraphing that since 2011. The SD card is removed because it is a port whose time is past, and honestly Apple should have removed it a year sooner to avoid the debate now.

Oh yes, I forgot about that split. These are literally developed by different groups if memory serves.
Yep. They share essentially nothing other than the ISA.

And don't forget the lack of customizability: while the PC manufacturer ecosystem didn't grow around this, the mobile manufacturers did, and you can get 2 big 4 LITTLE cores, 4 fast A53s + 4 slow A53s, various graphics options, etc. And all of this for less money as the margins are much smaller. And Intel doesn't have an answer for that right now. The interconnect you mentioned could solve this, but it would have to be widely adopted and that at the very least will take time.
Intel's finances are impenetrable, but basically they pay a lot to be the first on every new process. This is where the money goes.

nVidia at one point made a five-core CPU, with 4 high-performance cores and a single low-performance core. The trick was that they were all Cortex A9 cores, just made with different transistors and clockspeed expectations. I wonder if Intel could do that? Make 4 high-power cores and 1 low-power core, except they're all the same thing. The low-power one would then be on a different process, could have a different cache setup so the L2 is bigger but the L3 is further away (latency-wise), no AVX (because Pentiums and Celerons don't have AVX today anyway) and thus no need for the very wide pathways to the caches, etc etc.

Definitely, for one simple reason: Apple wants best-of-class desktop-level performance in its iOS devices (as much as the thermal envelope permits), not “fast for a smartphone processor”. Right now they split the line-up into A# and A#X, where the X variant is architecturally identical, but beefed up. Eventually, I expect them to design custom silicon for their iPad-like devices — and that may just include all Macs one day. To me, that's the right way to think about it.

Right now, the A#s are so ridiculously powerful for smartphone processors that Apple doesn't have to do that — yet. But as far as I can tell, all the “easy” measures to substantially increase IPC (going OoO, going wide, good branch prediction) have been implemented. And honestly, I'd be happy if the A12 and A13 focus on delivering the same level of performance at much lower power consumption. It feels as if I don't fully utilize the power of the A10 in my iPhone 7.
I recently got a new phone (because I have a new job), and moved from an iPhone 5S (A7) to an SE (A9). It is honestly hard to tell the difference in speed. I know that the A9 is way faster - double the performance, most likely - but already the A7 was fast enough. I definitely think that the current crop of Apple SOCs are "too fast". I wonder if they could drop the max clocks a bit to make battery life better?

As for improvements... Apple is behind Intel on the latencies to cache and especially main memory. Apple is clearly working on that, because they have improved for several generations now (A9, A10 and A11 are all better than their predecessors), but there is still work to do. Apple could also add Simultaneous Multithreading, SMT - what Intel calls Hyper-Threading. I have expected for some time that they plan to do that, but it hasn't happened - probably Apple doesn't see a need for 4 threads all that often, and for now they can cover that with the low-power cores. SMT would be more power-efficient than having 6 cores active, though.
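For the curious, whether SMT is active on a machine shows up as a gap between logical and physical CPU counts. A minimal Python sketch of that check (the `sysctl` key is macOS-specific; on other platforms this simply falls back to the logical count, so it is illustrative rather than portable):

```python
import os
import subprocess
import sys

def cpu_counts():
    """Return (logical, physical) CPU counts.

    os.cpu_count() reports *logical* CPUs: with SMT (what Intel
    brands Hyper-Threading), each physical core shows up as two
    or more hardware threads. macOS exposes the physical count
    via sysctl; elsewhere we just fall back to the logical count.
    """
    logical = os.cpu_count() or 1
    physical = logical
    if sys.platform == "darwin":
        out = subprocess.run(["sysctl", "-n", "hw.physicalcpu"],
                             capture_output=True, text=True, check=True)
        physical = int(out.stdout.strip())
    return logical, physical

logical, physical = cpu_counts()
print(f"{logical} logical / {physical} physical CPUs "
      f"-> SMT {'on' if logical > physical else 'off/absent'}")
```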
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 14, 2017, 12:10 AM
 
Originally Posted by P View Post
Apple made a new MBP that has a faster CPU, roughly double the GPU performance, better connectivity, and a much better display in a slimmer configuration. It also adds the touchbar, and then spends the entire presentation demoing that thing, before telling everyone that, by the way, you need to pay at least $1800 to get that touchbar. This makes the touchbar seem like a pro feature, but it very clearly isn't. Pros should be happy with all the other things - the new display and GPU in particular - but because Apple made it all about the touchbar, it became about that thing. This is marketing, not product.
Just to be clear, I was talking about how the new machines are perceived, and with a few slight modifications Apple could have avoided much criticism. The main issue is that the big outward facing changes are either not what the community asked for or are not improvements, including

- the new keyboard (which very few people love, most seem to find worse than what was before and the people doing a lot of programming hate the virtual escape key; it doesn’t help that it is prone to failure),
- the TouchBar (which seems to be a meh for most, although I kinda like the idea),
- the lack of even a single USB-A port (this is a constant annoyance for most, you can’t even charge your new iPhone with it, unless you pay extra — people feel nickel-and-dimed), and
- the lack of an HDMI port (the projectors are finally losing their VGA, yes, VGA ports in favor of HDMI — and now I need another adapter, ugh!).

Couple that with a so-so CPU performance upgrade and the same RAM ceiling, and it is clear why the perception is as it is. Personally, I think the product is not as bad as the perception: the screen is a great update, the machines are lighter and I find the TouchBar intriguing (although I haven’t lived with it).
Originally Posted by P View Post
Intel's finances are impenetrable, but basically they pay a lot to be the first on every new process. This is where the money goes.
This is also what makes them vulnerable: Intel is successful because it can demand higher prices, and because volume is shifting away from one of Intel’s main businesses, traditional personal computers, this leg may actually break away to some degree. I wouldn’t call it impenetrable. And the competition is heating up in the server space as well. ARM-based CPUs compete asymmetrically, and at least outside of WinTel, they are gaining ground.
Originally Posted by P View Post
nVidia at one point made a five-core CPU, with 4 high-performance cores and a single low-performance core. The trick was that they were all Cortex A9 cores, just made with different transistors and clockspeed expectations.
By the way, whatever happened to nVidia’s ambitions in the CPU core and SoC space? It seems as if they revived ideas pioneered by Transmeta from back in the day, released one product and then canned all future projects.
Originally Posted by P View Post
I wonder if Intel could do that? Make 4 high-power cores and 1 low-power core, except they're all the same thing. The low-power one would then be on a different process, could have a different cache setup so the L2 is bigger but the L3 is further away (latency-wise), no AVX (because Pentiums and Celerons don't have AVX today anyway) and thus no need for the very wide pathways to the caches, etc etc.
Yes, that’d be an option, plenty of ARM SoC manufacturers go that route where they combine 2 or 4 performance optimized A53s (or Kryos) with 4 cores optimized for low power. I am not sure what the best core arrangement is here for notebook workloads, but I’d probably have at least two low-power cores. But so far, Intel hasn’t given any indication that it plans to make heterogeneous multiprocessing a reality. In any case, it’d be nice if Apple could use this ultra-low power mode for notebooks (where emails and backups are done in the background even if the lid is closed) more extensively. Have you heard any rumors to this effect?
Originally Posted by P View Post
I recently got a new phone (because I have a new job), and moved from an iPhone 5S (A7) to an SE (A9). It is honestly hard to tell the difference in speed. I know that the A9 is way faster - double the performance, most likely - but already the A7 was fast enough. I definitely think that the current crop of Apple SOCs are "too fast". I wonder if they could drop the max clocks a bit to make battery life better?
I went from an iPhone 5 to an iPhone 7 — and there I could tell the difference. I agree with you, though, that the phone is overpowered for what it does, and I hope that Apple finds ways to make use of the CPU horsepower with their software. But their focus on AR, “Machine Learning” and other things indicates that they do have an idea how to convert CPU cycles into more advanced functionality for the user. With iOS 11 I feel like I can exploit much more of my iPad Pro’s CPU prowess.
Originally Posted by P View Post
As for improvements... Apple is behind Intel on the latencies to cache and especially main memory. Apple is clearly working on that, because they have improved for several generations now (A9, A10 and A11 are all better than their predecessors), but there is still work to do. Apple could also add Simultaneous Multithreading, SMT - what Intel calls Hyper-Threading. I have expected for some time that they plan to do that, but it hasn't happened - probably Apple doesn't see a need for 4 threads all that often, and for now they can cover that with the low-power cores. SMT would be more power-efficient than having 6 cores active, though.
Regarding caches, yes, Apple has been changing things around with caches quite a bit (e. g. eliminating the L3 cache in favor of a much larger L2 cache), and have chosen to include a whole ton of it compared to other ARM processors (which typically feature 2 MB L2 cache or so instead of Apple’s 8). Seeing as Apple has already included 6 cores, I’m not sure whether SMT would do that much to improve performance outside of multi core CPU benchmarks.

However, the other thing is that Apple seems to be going all-in on dedicated co-processors for special functionality such as “Machine Learning” and image processing. Perhaps their focus on increasing CPU performance and chip area for the CPU cores will make way for more powerful special-function hardware. The key difficulty here is to expose that functionality to developers, either via APIs or by giving them more direct access to the special-function hardware itself. Given their proprietary nature and the reliance on software, I think it is much harder to compare that with the competition or earlier hardware.
I don't suffer from insanity, I enjoy every minute of it.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Oct 14, 2017, 12:01 PM
 
Originally Posted by OreoCookie View Post
- the new keyboard (which very few people love, most seem to find worse than what was before and the people doing a lot of programming hate the virtual escape key; it doesn’t help that it is prone to failure),
This is not at all what I'm seeing.

A small handful are extremely vocal, many seem to absolutely love it (including myself), and most people don't care enough to bother ever voicing an opinion.

Failure rates seem very low from what I'm hearing from my friends in service. I had a brief issue early on with a key sort of gumming up due to a small crumb underneath. Blew into the key (just as I have with previous keyboards) and never had another issue since (eleven months now).

And the virtual escape key appears to be an issue for those few developers who fail to realise that muscle memory works just fine with the Touch Bar.

Almost all of the complaints seem to disappear after a few days of use.

(This is the MacBook *Pro*, mind you — the 2015 MacBook keyboard is much different.)
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 14, 2017, 08:24 PM
 
@Spheric
It’s good to also have input from what is probably a more representative sample of users, and I am not disputing your experience.

I‘m just relaying the experiences of my colleagues, which, admittedly, is a very selective audience. These are the types who buy keyboards with mechanical key switches and the like (as I do). And a lot of them see the new keyboard as a compromise, where the quest was not to make a better keyboard, but the priority was to make it thinner. (I don’t think a version of the new butterfly mechanism with more key travel would weigh significantly more, it’d just be a millimeter or three thicker.)

One of them had to have their (MacBook) keyboard replaced already, because a key stopped working after something got trapped underneath. Personally, I think I could get used to the feel. (I’m typing this on my keyboard cover, and for some reason I immediately took a liking to that. Curiously, I did not have the same reaction to Apple’s new notebook keyboard.) However, I would really, really like more key travel.

Regarding the escape key, if you use it as one of the most commonly used keys (I use it for auto completions all the time), it’d be higher on your list of priorities, too. For regular users, though, I don’t think this is an issue.

I think the biggest problem with the new keyboard and the new machines in general is just one of perception, which even if objectively inaccurate could negatively affect Apple’s relation to its customers. Apple’s stinginess regarding adapters and cables is IMHO not ok. If you buy a new iPhone 8 and a new MacBook Pro, you can’t connect one to the other, you need to buy another cable. In my business, academia, the machines are paid for by your institution, so the money doesn’t come out of your own pocket, but nevertheless people feel a bit cheated.
I don't suffer from insanity, I enjoy every minute of it.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Oct 15, 2017, 05:43 AM
 
Originally Posted by OreoCookie View Post
I think the biggest problem with the new keyboard and the new machines in general is just one of perception, which even if objectively inaccurate could negatively affect Apple’s relation to its customers. Apple’s stinginess regarding adapters and cables is IMHO not ok. If you buy a new iPhone 8 and a new MacBook Pro, you can’t connect one to the other, you need to buy another cable.
See, but this is a completely imaginary argument. People do not connect their iPhones to their computers via cable, unless something goes wrong.
I actually bought a Lightning-USB-C cable, but use it only to fast-charge them off the MacBook's power supply or the ports of the MacBook, not for data.*

It is quintessentially Apple, not thickening the edge of their machines to include a twenty-year-old USB-A (which, I agree, is mildly annoying, but in my bag, the USB-A adapter completely disappears next to the power supply and even the keychain — it is objectively irrelevant).

What I find the more relevant question (USB-A isn't coming back, get over it) is when Apple will switch iPhone and iPad to USB-C by default.

In my business, academia, the machines are paid for by your institution, so the money doesn’t come out of your own pocket, but nevertheless people feel a bit cheated.
I'm trying to remember a time or a product Apple made where someone, somewhere didn't feel "a bit cheated", but I'm coming up with nought.


*) This may be about to change, now that iOS transfers MIDI via USB. Certainly not a mainstream case, though.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 15, 2017, 08:06 AM
 
Originally Posted by Spheric Harlot View Post
See, but this is a completely imaginary argument. People do not connect their iPhones to their computers via cable, unless something goes wrong.
I do this all the time — to charge it. This way, when I'm traveling, I don't need to bring any other power adapters. And from my experience, I'm not alone. Plus, if you have a new machine, you could charge it directly with your notebook's power adapter instead of plugging it into the machine.
Originally Posted by Spheric Harlot View Post
I actually bought a Lightning-USB-C cable, but use it only to fast-charge them off the MacBook's power supply or the ports of the MacBook, not for data.*
It's not about data, it's about charging.
I don't suffer from insanity, I enjoy every minute of it.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Oct 15, 2017, 03:41 PM
 
If it’s about charging: it comes with a power supply.

If you want the added convenience: buy a USB-C cable. That’s exactly what I did, and why.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 15, 2017, 03:50 PM
 
There was a rumor that Apple planned to include USB-A ports on the charger. That would have been a great solution, in my opinion - the port is there when you need it, but doesn't thicken the laptop itself. I think that the Magic Mouse 2 is what betrays that it was too soon - Apple pushes a mouse as an option with the new MBPs, but if you add it, you get a warning that you also have to add a USB adapter.

Anyway: I'm not sure what the majority opinion of the newest keyboard is, because the people that hate it seem to think it is the worst thing since the puck mouse and assume that everyone thinks the same. Me, I'm OK with it. I think evolution will move us towards thinner keyboards with shorter strokes, for packaging and for ergonomic reasons, and I'm loath to stand in the way of progress.

The touchbar is a good idea, but removing the Esc key was a mistake. It is a key that is still used in the OS (Cancel in every dialog box, bringing up the force-quit dialog) and not just for terminal commands. I think Apple removed it only because of where it was placed. It would have been a better idea to squeeze it in one row down and make the top-left key cap half size (it is § on my keyboard), or make the touchbar a bit narrower.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Oct 15, 2017, 04:12 PM
 
Originally Posted by OreoCookie View Post
Just to be clear, I was talking about how the new machines are perceived, and with a few slight modifications Apple could have avoided much criticism.
And I agree, completely. The way the MBP was presented was a mistake from Apple, but the only big design mistake was the removal of the USB-A port (IMO).

(Keyboard discussion in the other post, following on from what Spheric wrote.)

- the lack of an HDMI port (the projectors are finally losing their VGA, yes, VGA ports in favor of HDMI — and now I need another adapter, ugh!).
The thing is... Intel has said ever since 2011 that the HDMI port is going away, and every big OEM has supported that. HDMI is a shitty protocol and it needs to die. I can see why they did that, especially if the specs were locked in long before and they assumed that HDMI would truly start to die sometime in 2014 or so.

This is also what makes them vulnerable: Intel is successful because it can demand higher prices, and because volume is shifting away from one of Intel’s main businesses, traditional personal computers, this leg may actually break away to some degree. I wouldn’t call it impenetrable. And the competition is heating up in the server space as well. ARM-based CPUs compete asymmetrically, and at least outside of WinTel, it is gaining ground.
Intel is currently the poster child for low-end disruption. It's adding features for the high end - a mesh for multiple cores, AVX-256 and 512 for vector math, TSX for multithreaded code - and leaves itself open to cheaper competitors. I think that disruption theory has been oversold a bit, but it does apply here.

By the way, whatever happened to nVidia’s ambitions in the CPU core and SoC space? It seems as if they revived ideas pioneered by Transmeta from back in the day, released one product and then canned all future projects.
NVidia almost certainly wanted to make an x86 core, but they just couldn't get a license from Intel. I'm pretty sure that they played extreme hardball to get it, but Intel wouldn't budge. They repurposed it as an ARM core, but the whole shebang simply had too high a TDP to work in a phone. nVidia put a nice spin on it and said they wanted to use it in tablets, in game consoles, whatever, but that cannot be the market they built it for. The Switch is based on an nVidia chip, but other than that, the answer appears to be... self-driving cars.

Yes, that’d be an option, plenty of ARM SoC manufacturers go that route where they combine 2 or 4 performance optimized A53s (or Kryos) with 4 cores optimized for low power. I am not sure what the best core arrangement is here for notebook workloads, but I’d probably have at least two low-power cores. But so far, Intel hasn’t given any indication that it plans to make heterogeneous multiprocessing a reality. In any case, it’d be nice if Apple could use this ultra-low power mode for notebooks (where emails and backups are done in the background even if the lid is closed) more extensively. Have you heard any rumors to this effect?
TBH, I don't know what Intel is planning right now. The last forward-looking thing on their roadmap was the Purley platform, which is Skylake-E (with the mesh network) and Optane. I'm just not sure what else they can do. They can make the core a little wider - the ROB width, as I went on about, or splitting the scheduler into integer and FP like everyone else is doing right now - they can add SMT4 or SMT8 (so one core can look like 4 or 8 to the OS to extract more parallelism), but that won't help much on laptops, and they can change the cache hierarchy a bit (they gave themselves some leeway to do that in Skylake). I just think that Intel needs to focus on the regular desktop and laptop market again, and this is one idea.

Regarding caches, yes, Apple has been changing things around with caches quite a bit (e. g. eliminating the L3 cache in favor of a much larger L2 cache), and have chosen to include a whole ton of it compared to other ARM processors (which typically feature 2 MB L2 cache or so instead of Apple’s 8). Seeing as Apple has already included 6 cores, I’m not sure whether SMT would do that much to improve performance outside of multi core CPU benchmarks.
When Apple moved to a bigger L2 plus a victim L3 for the A9, I thought they were crazy - and then Zen and Skylake-E did the same thing. Clearly they are on to something there. A9X dropped the victim L3 - likely because the display controller can self refresh in the iPad - and that improved main memory latency.
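The latency numbers being traded here are usually produced with a pointer-chasing microbenchmark, where every load depends on the previous one, so the core can neither overlap nor prefetch them. A toy Python version of the technique (interpreter overhead dominates the absolute figures, so real measurements need C or a tool like lmbench; this only sketches the idea):

```python
import random
import time

def pointer_chase_ns_per_load(n, iters=200_000):
    """Estimate dependent-load latency by chasing a random cycle.

    Every load depends on the previous one, so the CPU can't
    overlap or prefetch them - the classic trick behind cache and
    memory latency benchmarks. In Python the interpreter overhead
    dominates the absolute numbers; this is only a sketch.
    """
    # Sattolo's algorithm: build a single random cycle over all
    # indices, so the chase visits every element before repeating.
    chain = list(range(n))
    for i in range(n - 1, 0, -1):
        j = random.randrange(i)
        chain[i], chain[j] = chain[j], chain[i]
    idx = 0
    start = time.perf_counter()
    for _ in range(iters):
        idx = chain[idx]
    return (time.perf_counter() - start) / iters * 1e9

# A working set that fits in cache vs. one that mostly doesn't:
small = pointer_chase_ns_per_load(1 << 10)   # a few KB, cache-resident
large = pointer_chase_ns_per_load(1 << 20)   # tens of MB of objects
print(f"small: {small:.0f} ns/load, large: {large:.0f} ns/load")
```

In a compiled language the large working set shows a several-fold jump in nanoseconds per load once the chase falls out of L2/L3 into DRAM, which is exactly the gap being discussed.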

I agree that SMT makes much less sense now than it did before Apple added all those extra "throughput cores", but it would save some energy to be able to turn those cores off.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 15, 2017, 08:37 PM
 
Originally Posted by Spheric Harlot View Post
If it’s about charging: it comes with a power supply.
Yes, but a higher-wattage power supply allows you to charge faster. Alternatively, you can charge your phone and/or iPad and your computer at the same time. (It doesn't help that the wall wart that comes with the iPhone requires a cumbersome adapter if you are in a country with different outlets. For Apple's other power adapters, you just need to change one small piece at the end or take a cable with a Euro plug on one end with you.)
Originally Posted by Spheric Harlot View Post
If you want the added convenience: buy a USB-C cable. That’s exactly what I did, and why.
If you spend €€€€ on a computer or €€€ on an iPhone, I don't think it is outrageous to expect that Apple includes a USB-C cable free of charge — especially considering that it is switching to USB-C.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 15, 2017, 10:17 PM
 
Originally Posted by P View Post
There was a rumor that Apple planned to include USB-A ports on the charger. That would have been a great solution, in my opinion - the port is there when you need it, but doesn't thicken the laptop itself.
That's another thing I'd advocate for: Apple should give its chargers multiple USB ports (whether they should all be USB-C is another discussion). I recently bought a 5-port 60 W Anker USB power brick, and I am loving this thing. I put it in the bedroom, and now my wife and I can charge our iPhones, iPads and one of my bike lights all at the same time. And quality-wise it feels quite good — not Apple good, but it definitely oozes much more quality than the other multi-port USB chargers I have seen. To be honest, that's another reason Apple should include USB-C cables in addition to multiple USB-C ports.
Originally Posted by P View Post
Anyway: I'm not sure what the majority opinion of the newest keyboard is, because the people that hate it seem to think it is the worst thing since the puck mouse and assume that everyone thinks the same. Me, I'm OK with it. I think evolution will move us towards thinner keyboards with shorter strokes, for packaging and for ergonomic reasons, and I'm loath to stand in the way of progress.
I am in the same camp, and the latest revision they include on the MacBook Pros feels much better. Plus, I kinda like the feel of the TouchCover keyboard, so there are short-stroke keyboards out there that I like.
Originally Posted by P View Post
The touchbar is a good idea, but removing the Esc key was a mistake. It is a key that is still used in the OS (Cancel in every dialog box, bringing up the force-quit dialog) and not just for terminal commands.
Seconded. To me that's the story of the latest machine: if Apple had made just slight tweaks (include one single USB-A port and make the escape key a physical key), a lot of the bad press and negative perception could have been avoided.
Originally Posted by P View Post
The thing is... Intel has said ever since 2011 that the HDMI port is going away, and every big OEM has supported that. HDMI is a shitty protocol and it needs to die. I can see why they did that, especially if the specs were locked in long before and they assumed that HDMI would truly start to die sometime in 2014 or so.
I agree with you, and to be honest VGA is worlds worse. The problem is that projectors are swapped so rarely that in many places the only plug available is still VGA. In 2017. Some “newer” ones have a DVI port and most of the really new ones have HDMI in. I always need to bring a cornucopia of adapters to cover all my options. (Of course, people in academia are a bit of an outlier; I reckon most people use their display ports (small d) to connect external monitors. And for that, the current plug is great.)
Originally Posted by P View Post
Intel is currently the poster child for low-end disruption. It's adding features for the high end - a mesh for multiple cores, AVX-256 and 512 for vector math, TSX for multithreaded code - and leaves itself open to cheaper competitors. I think that disruption theory has been oversold a bit, but it does apply here.
Judging by Intel's products, it looks as if they are shifting away from the PC market and optimizing their designs for small-to-big-iron servers instead. Intel knows that the traditional PC market will contract and has given up on its tablet- and smartphone-focussed SoCs. The high-margin, low-volume server market remains, and to pay for their fabs (which need high volume to get good and remain profitable), they are opening them up to fab other people's designs. So perhaps the lackluster efforts in the notebook and desktop spaces really are intentional.

On the other hand, Intel can't resist the temptation to play old-school Intel games: Withholding certain features from their desktops (e. g. ECC RAM support) limits the appeal of its chips for new applications, say, NASes.
Originally Posted by P View Post
NVidia almost certainly wanted to make an x86 core, but they just couldn't get a license from Intel. I'm pretty sure that they played extreme hardball to get it, but Intel wouldn't budge. They repurposed it as an ARM core, but the whole shebang simply had too high a TDP to work in a phone. [...] but other than that, the answer appears to be... self-driving cars.
To be honest I wasn't paying much attention to the CPU they stuck into their automotive chips, the other bits (GPU, “AI” co-processors, etc.) seemed more interesting at the time. But good to see that they are continuing their CPU efforts. The German Postal Service has plans to use nVidia's automotive parts to make some of its new electric StreetScooters self-driving.
Originally Posted by P View Post
TBH, I don't know what Intel is planning right now. The last forward-looking thing on their roadmap was the Purley platform, which is Skylake-E (with the mesh network) and Optane. I'm just not sure what else they can do. They can make the core a little wider [...] I just think that Intel needs to focus on the regular desktop and laptop market again, and this is one idea.
The last sentence pretty much captures it, although if they want to get really serious I think that would really require separate CPU designs for their server and mobile line-up. They are really hampered by the lack of a clear design target, instead their Cores have to straddle 4.5-180 W TDP, 2-22 cores, and give high-FP performance (which is more suitable for server- and workstation-type loads).
Originally Posted by P View Post
When Apple moved to a bigger L2 plus a victim L3 for the A9, I thought they were crazy - and then Zen and Skylake-E did the same thing. Clearly they are on to something there. A9X dropped the victim L3 - likely because the display controller can self refresh in the iPad - and that improved main memory latency.
It seems that Apple's chip teams really are top notch, and while the back and forth in their cache strategy was confusing, it really seems that each time they nailed the bull's-eye of their design targets. I can't judge the A11's new co-processors, but from all the benchmarks I have seen, Apple's GPU department went from 0 to competitive with best-of-breed competition in a single release.
I don't suffer from insanity, I enjoy every minute of it.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Oct 16, 2017, 09:37 AM
 
Originally Posted by OreoCookie View Post
Yes, but a higher-wattage power supply allows you to charge faster. Alternatively, you can charge your phone and/or iPad and your computer at the same time. (It doesn't help that the wall wart that comes with the iPhone requires a cumbersome adapter if you are in a country with different outlets. For Apple's other power adapters, you just need to change one small piece at the end or take a cable with a Euro plug on one end with you.)

If you spend €€€€ on a computer or €€€ on an iPhone, I don't think it is outrageous to expect that Apple includes a USB-C cable free of charge — especially considering that it is switching to USB-C.
Okay, so let’s look at the options:

1.) they include the previous-generation USB-A Charger and cable. Compact, familiar, cable works with every car charger and car hi-fi, and all existing setups. Does not support fast charging. Will not connect to MacBook chargers (which these devices never have), needs extra cable ONLY for charging from MacBook power supplies or MacBook ports. Result: tiny handful of people buys USB-C cable, some percentage of those feel jilted.

2.) they include only a new USB-C charger and cable. Probably supports fast charging; won’t connect to anything except new-ish higher-end laptops and MacBook power supplies without an adapter. Result: virtually everybody goes and buys cables or adapter, some (likely high, judging from the outrage over the MBP) percentage feel jilted. Most power supplies and cables probably go unused.

3.) they include only new USB-C charger, but both a USB-A and a USB-C cable. Result: everything works everywhere, but most power supplies and cables probably go unused.

4.) they include only old USB-A charger, but both cables. Result: everybody except those who can obviously afford a higher-priced laptop doesn’t use their cable, some percentage feels jilted because Apple included a wasted cable but not a power supply capable of fast-charging.

I think they’re going for 2.), myself, but not yet.

No idea how inductive charging affects those options. Is it a widespread thing yet?
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Oct 16, 2017, 10:40 AM
 
In my opinion option 3 would have been the right move (with the added condition that all USB-C cables Apple makes should support fast charging; right now they offer one that does and another that doesn't). Having cables go to waste would be an acceptable compromise for the transition period. (BTW, even if you have inductive charging, you still need to run a cable from a power supply to your Qi-compatible inductive charger.)

Regarding the Qi standard, it's been around, but officially only supports 5 W, the same as Apple's small wall warts. Samsung and soon also Apple have a proprietary 7.5 W fast charging mode with their products. I actually haven't seen inductive chargers for phones out in the wild.
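To put those wattages in perspective, here is a back-of-the-envelope charge-time estimate. The ~7 Wh battery figure and the 80% conversion efficiency are illustrative assumptions, not specs, and real charging tapers off near full:

```python
def charge_hours(battery_wh, charger_w, efficiency=0.8):
    """Rough time to fill a battery from empty.

    Real charging tapers off near 100%, so treat the result as an
    optimistic lower bound; the 80% efficiency is a guess, too.
    """
    return battery_wh / (charger_w * efficiency)

# Assumed ~7 Wh phone battery - an illustrative figure, not a spec:
for watts in (5, 7.5, 12):
    print(f"{watts:>4} W charger: ~{charge_hours(7, watts):.1f} h")
```

Even with these rough numbers, the step from 5 W to 7.5 W shaves roughly a third off the charge time, which is why the fast-charging modes matter.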
I don't suffer from insanity, I enjoy every minute of it.
     
And.reg
The Mighty
Join Date: Feb 2004
Location: Well the sports issue was within arm's reach but they closed up shop and kicked me out. And I'm out of toilet paper.
Status: Online
Reply With Quote
Yesterday, 07:31 PM
 
Why do the Touch Bar controls no longer work to Play/Pause iTunes when I am browsing in Safari? They worked just fine in 10.12. But in 10.13, when I press Play, nothing happens. Even when I click on the iTunes Dock icon and press Play on the Touch Bar, nothing happens.
This one time, at Boot Camp, I stuck a flute up my PC.
     
 