
Apple's Oct 30, 2018 Mac/iPad event
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 7, 2018, 05:54 PM
 
Originally Posted by subego View Post
Storage on the base model i5/i7 is already 256GB, so if that’s enough, Oreo’s got the best bang for the buck recommendation there.

32GB of the appropriate RAM from Crucial is US$285 right now. To round out the earlier discussion of self-upgrades, I’ve heard opening the case needs a special torx screwdriver, and may void your warranty.

To save a little more money, use a USB3 enclosure. It’s slower than Tbolt, but not by a lot.
Just a quick clarification... I was only looking at the higher tier base model. I didn’t notice the lower tier had a 128GB option.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 08:40 PM
 
Originally Posted by ShortcutToMoncton View Post
One interesting thing is the heat issue—the i7 is well known to have overheating problems and mine got super hot very quickly on any intensive tasks (like playing a 24-bit ALAC/FLAC file or H.265 video, for example)—there are a bunch of tricks people do like re-applying thermal paste inside, etc. (not an easy mod for the mini).
I don't think this is an accurate generalization: it may be true for some machines, but it doesn't apply to most Macs.
Originally Posted by ShortcutToMoncton View Post
One of the early reviews noted that the new i7 ran the fans a fair bit under load, whereas the i3 was almost always quiet; and disabling the i7's Turbo Boost seemed to mostly turn off that fan. Since the i5 also has Turbo Boost, I may wait and see if there are any heat/fan differences between the i5 and i7; it could be that the i5 has the same heat and fan noise concerns, and if so, I'd probably just go for the i7 at that point. (The i7 with 128GB and the i5 with 256GB are the same price up here.)
According to Marco Arment's review, the unit he had (that had the top-end CPU built-in) was silent unless he pushed it really, really hard. I would definitely get one of the six-core CPUs.
Originally Posted by ShortcutToMoncton View Post
I’ve used original Thunderbolt drive enclosures since early 2013 (currently a 6-bay) and couldn’t love the standard more. (MacOS had all sorts of USB 3 drive/sleep issues at first which drove me bananas.) I was also thinking about adding more internal space but to be honest, given that this is a media centre/desktop setup for me and I’m not concerned about portability, an extra TB3 SSD enclosure with a decent 1TB drive is probably the smarter and far cheaper bet.
Once you connect more than one external drive, getting cheap single-drive enclosures can get messy and decrease reliability.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 09:01 PM
 
Originally Posted by P View Post
Because there is a lot of pain at the transition, the benefits at the end aren't all that enticing this time, I haven't seen a solution for how to replace PCIe, I dislike Apple's current move towards more closed, and I actually like the possibility of being able to run Windows.
Apart from not being able to run Windows, what other pain points do you see forthcoming? I'm honestly asking, because the PowerPC-to-x86 transition was completely painless for me: I think only one piece of software broke, namely the software for my hardware screen calibration tool, which would no longer work. Did you encounter more problems during the last CPU architecture transition?

And do you know what the situation with PCIe is? What would prevent Apple from building a PCIe complex into its Mac SoCs? Are there any licensing issues I am not aware of? Also, I think PCIe will be tied to Thunderbolt (most Macs are mobile Macs), so the utility of PCIe may be coupled to Apple being able to license Thunderbolt from Intel. Presumably licensing is an issue that can be solved with money.
Originally Posted by P View Post
But one at a time. There is pain at the transition because everyone has to port things again, and Apple will cut off backwards compatibility sooner than I am really comfortable with. The Mac market isn't that large, so I think that the main source of ported apps will be ports from iOS. That is an idea that scares me. I don't ever want to have to rely on ported iOS apps on the Mac.
I think you are conflating two things: iOS apps on the Mac are coming independently of the underlying CPU ISA, and I am worried about that. But since that is independent of the CPU architecture, it doesn't say anything one way or the other about the transition.
Originally Posted by P View Post
The benefits at the end are mostly about power consumption. I don't really care that much.
I don't understand this point: one clear benefit is that the year-over-year improvements in terms of performance and performance-per-watt on the Apple side outpace what Intel is doing by probably a factor of 5-10, depending on your metric. Even if the slope levels off, there will be a growing performance and performance-per-watt advantage on the Apple side of the ledger.
Originally Posted by P View Post
That isn't the goal here. I'm sure it will be a little better, but it isn't the doubling of performance overnight that we had last time - and that means that the emulation stage will be even more painful.
What about GPU performance? That is a big factor where Intel's efforts are lackluster and the timing unreliable. This has been Apple's beef with Intel since forever. (I remember when Apple put in nVidia 9400 chipsets/GPUs because it wasn't convinced that Intel's were good enough.) The other factor is the other types of co-processors that are getting increasingly important for Apple, and Intel has nothing to offer here.

And lastly, it would give Apple a way to make better use of investments it is making anyway: the development of custom CPUs, GPUs and other co-processors.
Originally Posted by P View Post
PCIe, then. None of the iPads have it - in fact, they have nothing like it. There are no high-bandwidth ports out from the SoC at all. I don't think that Apple will replace it at all, because high-bandwidth ports take a lot of energy, and Apple doesn't want that.
PCIe is not necessary for an iPad, but the situation is different for at least some Macs. I think PCIe (especially in the form of external expansion slots) is on Apple's minds, and is crucially important for quite a few niche applications. A former colleague of mine uses it to significantly accelerate his numerical simulations.
Originally Posted by P View Post
As for closed... do you think that this new ARM-based Mac will have DIMM slots? I already said that I don't think we'll see Thunderbolt again. We already can't replace storage, and if you remove PCIe you kill Thunderbolt, which means no fast external storage. Connecting an external display? Sounds like something you'd need a "Pro" model for.
Why should there be versions with DIMM slots? If the new Mac mini and the iMac Pro are any indication, Apple has started listening to its customers again.
Originally Posted by P View Post
At the end of the day, this isn't a Mac on a new CPU ISA - this is an iPad under another name. I have an iPad, I probably use it more than my Mac because I bring it on every trip, but I want a Mac too.
I don't understand this point: the difference between a Mac and an iPad is the UI paradigm, so if an ARM-based Mac runs OS X, why should that be closer to an iPad than the predecessor that sported an Intel CPU? That strikes me as a bit weird, like some of the Apple fans who didn't like the transition away from PowerPC to Intel, fearing that Macs would become less Mac-like.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 09:01 PM
 
Originally Posted by subego View Post
Just a quick clarification... I was only looking at the higher tier base model. I didn’t notice the lower tier had a 128GB option.
Yup. That's why I mentioned it. Even if you think 128 GB is enough, they'll be hard to sell afterwards. And running out of internal storage is just a pain.
I don't suffer from insanity, I enjoy every minute of it.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 7, 2018, 11:19 PM
 
Originally Posted by OreoCookie View Post
Yup. That's why I mentioned it. Even if you think 128 GB is enough, they'll be hard to sell afterwards. And running out of internal storage is just a pain.
I disagree. For laptops it's a killer, but minis are overwhelmingly used as desktop machines with attached peripherals. Almost everyone I know with a mini has some external enclosure attached. I think the people who buy these things are content with a fast, bare-bones package that they can upgrade as required....basically, the antithesis of all that is Apple hahaha.

I for one stuck a 1TB SSD in my 2012 mini, and ended up with 400 GB of (mostly) music and some locally stored pictures/videos, and 12TB (!) of all my real media in the external enclosure. Honestly, I probably don't need anything but the OS on the actual computer.

Originally Posted by OreoCookie View Post
I don't think this is an accurate generalization: while true for some machines, I don't think this applies to most Macs.
Well I was specifically talking about the mini. Google "i7 mini heat" and you'll get lots of talk about various mods to the 2012 models in particular.

Originally Posted by OreoCookie View Post
Once you connect more than one external drive, getting cheap single-drive enclosures can get messy and decrease reliability.
I’m not talking cheap—Apple’s charging a fortune for their internal SSD upgrade. I’m sure you could get a nice TB3 SSD enclosure like OWC’s Express 4M2 and a very nice Samsung 2TB SSD for about the same price as Apple’s 1TB upgrade, and then you have a super cool external SSD drive enclosure as well for future SSD expansion. I’m sure it will not be as blazingly fast, but is that really the biggest complaint these days? Or hell, get one of their Thunderblades for around the same price and you can take your HD with you anywhere, if you’re into that....
Mankind's only chance is to harness the power of stupid.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 7, 2018, 11:46 PM
 
Question:

What’s making you think Mini vs. a dedicated file server?
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 12:10 AM
 
Originally Posted by ShortcutToMoncton View Post
I disagree. For laptops it's a killer, but minis are overwhelmingly used as desktop machines with attached peripherals. Almost everyone I know with a mini has some external enclosure attached. I think the people who buy these things are content with a fast, bare-bones package that they can upgrade as required....basically, the antithesis of all that is Apple hahaha.
In my experience, this is not really correct. Back when I used my 2010 MacBook Pro as my primary machine, I lived off of a 180 GB SSD + 1 TB hard drive, first configured as two separate volumes. It just wasn't enough, and I ended up configuring it as a Fusion Drive.

I needed to link some folders to other folders on the second hard drive, and parts of the OS and some software just didn't like that. I'd run out of space. Plus, the SSD would significantly slow down once I filled it to 80+ % capacity (about 140-150 GB in my case; with a 128 GB SSD that is a mere ~104 GB). Here are some of the ways space got eaten up:

- Copying RAM to disk for some of the deeper sleep modes (equal to the amount of main memory, so at least 8 GB, possibly more).
- Swap space.
- Software (e. g. samples from GarageBand that weigh in at several GB).
- Years of emails.
- Space to download software and updates (an OS update or Xcode can amount to several GB).
- Time Machine.
- Headroom: you should not fill SSDs to the brim, as that significantly shortens their life span and slows them down.

Originally Posted by ShortcutToMoncton View Post
Well I was specifically talking about the mini. Google "i7 mini heat" and you'll get lots of talk about various mods to the 2012 models in particular.
I believe you. I am just saying that this does not need to apply to this iteration of the Mac mini, and other “fastest” Macs have not suffered from increased CPU failure rates in recent memory.
Originally Posted by ShortcutToMoncton View Post
I’m not talking cheap—Apple’s charging a fortune for their internal SSD upgrade.
I think you misunderstood what I wrote: I was agreeing with you, and just added that getting a nice enclosure is even more important once you add more than one physical drive/SSD to the Mac mini.
( Last edited by OreoCookie; Nov 8, 2018 at 01:52 AM. )
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 12:25 AM
 
Originally Posted by subego View Post
What’s making you think Mini vs. a dedicated file server?
First of all, a mini could be a dedicated file server. Depending on what you mean by file server, that may be cheaper, equally expensive or more expensive. I have recently set up a Xeon workstation with a 4-core CPU, 16 GB of RAM, a 512 GB SSD and two 8 TB NAS hard drives as a file server. A Mac mini would have been cheaper. My Synology NAS at home was cheaper, but is also much wimpier, so I can't really run Plex with transcoding on it.

Pros for the Mac mini:
- It runs macOS, is small, reliable and has everything most people need.
- It integrates nicely with many macOS apps, e. g. for software development via Xcode or to use it to encode video.
- Energy efficient.


Cons for the Mac mini:
- No ECC RAM, which to me limits it in some scenarios.
- Obvious limitations by the form factor.


Pros for other file servers:
- More flexibility, including all-internal storage.
- More flexibility when it comes to software (e. g. FreeNAS or the Linux derivatives that run on commercial NASes).
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 8, 2018, 01:27 AM
 
I guess I'm leading up to the question of what exactly is going to be done with this.

If it’s for a home theatre type dealie, the best bang for the buck might be one of the used Minis about to glut the market. I don’t think that’s too wimpy.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Nov 8, 2018, 09:44 AM
 
Originally Posted by ShortcutToMoncton View Post
Dumb it down for a stupid person, haha. I was trying to compare my old 2.3GHz four-core i7 vs. the new 3.6GHz four-core i3 vs. the new 3.4GHz six-core i5.

The initial benchmarks I've seen suggest that even the new i3 (base) is still a little faster than my old i7. Does that make sense to you, or are there situations when that might not be the case?
I am sure that even the i3 is faster. A higher TDP and a higher base clock are very hard to beat.

Intel's advertising of its chips is getting absurd, but the way it works is this: there is a power level 1 (PL1) that the chip can run at "forever", and a power level 2 (PL2) that it can run at for a short period of time. This is meant for Turbo Boost. (There are more power levels, but I'm trying to keep it simple.) There is exactly one thing that is solid in the specification: the base clock is the clock speed at which the CPU will run when all the cores are running 100% on a task defined by Intel as being "the hardest possible task" and the CPU is running at PL1. There is one caveat in that if you're running AVX code the base clock isn't valid anymore (it drops 200-300 MHz), but for everything else, this is true.

Now, if you're only running four cores out of six, you can maintain a higher clock than base while staying at PL1, because you simply have two fewer cores to keep powered. This means that a six-core is going to be faster than a quad-core with the same rated base clock.

In practice, Intel bins these things and sets the power levels to make sure that the cheaper chip is never faster at anything than the more expensive one, so a dual-core will never be faster than a quad, even if only two threads are active.
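To make the base-clock point concrete, here is a toy calculation in Swift (entirely made-up numbers and a crude linear power model on my part, nothing from Intel's spec sheets): with the same long-term power budget, a six-core running only four threads can clock well above base, while a quad-core with the same rated base clock cannot.

Code:
// Toy model only: assumes sustained power scales linearly with (active cores x clock),
// which is a simplification, and uses invented numbers for base and turbo clocks.
struct Chip {
    let cores: Int
    let baseClockGHz: Double   // all-core clock the chip can hold at PL1
    let maxTurboGHz: Double    // hard ceiling set by PL2/binning
}

func sustainedClockGHz(_ chip: Chip, activeCores: Int) -> Double {
    // Idle cores free up part of the PL1 budget for the cores still working.
    let scale = Double(chip.cores) / Double(activeCores)
    return min(chip.baseClockGHz * scale, chip.maxTurboGHz)
}

let quadCore = Chip(cores: 4, baseClockGHz: 3.6, maxTurboGHz: 4.6)
let hexaCore = Chip(cores: 6, baseClockGHz: 3.6, maxTurboGHz: 4.6)

print(sustainedClockGHz(quadCore, activeCores: 4))  // 3.6 GHz: all cores busy, stuck at base
print(sustainedClockGHz(hexaCore, activeCores: 4))  // 4.6 GHz: capped by turbo, not by PL1

In reality power grows faster than linearly with clock, and Intel bins parts exactly so that the cheaper chip never wins, but the direction of the effect is the same.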
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 10:40 AM
 
To add to P's very concise explanation: it makes CPUs exceedingly hard to compare, both to other Intel CPUs and to non-Intel CPUs. The base clock (sustained at PL1) in Intel's lowest-power parts is significantly lower than in its higher-power parts, while the Turbo Boost (PL2) frequency can be much closer to that of the higher-power chips. So the old days when you could just compare CPUs based on a single clock speed are long gone.

Apple's cores, for example, have a much smaller frequency gamut (I'm just speaking of maximum frequencies here): on Apple's A12, the big cores are clocked between 2.083 GHz ("PL1") and 2.380 GHz ("PL2"), so only about 300 MHz difference. For the small cores, it is roughly 100 MHz. For the Intel CPU that is built into the MacBook (with a comparable TDP of about 5 W), the frequency range for the fastest model is 1.7 GHz ("PL1") versus 3.6 GHz ("PL2"). Put another way, Intel has opted for a very different strategy than Apple. That is why for many mundane tasks the MacBook feels as fast as a MacBook Pro with a much beefier CPU: in both cases the CPUs are built around the same cores, so at the same frequency, performance will be very, very similar. But when you have longer tasks where the MacBook needs to throttle down to PL1, you really feel the difference.

Both in real-life applications and in various benchmarks, this will favor one strategy over the other: short, bursty workloads benefit from Intel's strategy of raising the frequency through the roof, while Apple's CPU cores shine when you have a high, sustained workload.
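A back-of-the-envelope sketch of that last paragraph (all numbers are assumed and purely illustrative, not a model of any specific chip): a part with a huge turbo-to-sustained gap looks identical on a short, bursty task and falls far behind on a long one.

Code:
// Toy model: a task is a fixed number of CPU cycles; the chip runs at its turbo clock
// until an assumed turbo budget (in seconds) is used up, then drops to its sustained clock.
func secondsToFinish(cycles: Double, turboGHz: Double, sustainedGHz: Double,
                     turboBudgetSeconds: Double) -> Double {
    let cyclesDuringTurbo = turboGHz * 1e9 * turboBudgetSeconds
    if cycles <= cyclesDuringTurbo { return cycles / (turboGHz * 1e9) }
    return turboBudgetSeconds + (cycles - cyclesDuringTurbo) / (sustainedGHz * 1e9)
}

let shortTask = 5.0 * 3.6e9    // roughly 5 s of work at full turbo
let longTask  = 300.0 * 3.6e9  // roughly 5 min of work at full turbo

// "MacBook-class" part: high turbo, low sustained clock, 30 s of assumed turbo headroom.
print(secondsToFinish(cycles: shortTask, turboGHz: 3.6, sustainedGHz: 1.7, turboBudgetSeconds: 30))  // 5 s
print(secondsToFinish(cycles: longTask,  turboGHz: 3.6, sustainedGHz: 1.7, turboBudgetSeconds: 30))  // ~602 s

// "MacBook-Pro-class" part: same cores, but a sustained clock close to its turbo clock.
print(secondsToFinish(cycles: longTask,  turboGHz: 3.6, sustainedGHz: 3.3, turboBudgetSeconds: 30))  // ~325 s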
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Nov 8, 2018, 11:26 AM
 
My 2010 i7 iMac is not compatible with Mojave, so I've started looking into replacements. I entertained the idea of a Mini + display, but it looks like even the 2014 i7 3.0 gets crushed by the mighty 2010 iMac, at least according to EveryMac's Geekbench scores.

I could pick up a late-2012 27" iMac i7 for $500-600 that would be a nice speed upgrade and give me USB3 and Thunderbolt. 2013 iMacs offered a little speed bump and are running ~$850. 2014 and up look like they're over $1000. Any idea how much longer the late-2012s with Ivy Bridge will be supported?
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 8, 2018, 11:38 AM
 
Originally Posted by subego View Post
I guess I’m leading up to the question what exactly is going to be done with this.

If it’s for a home theatre type dealie, the best bang for the buck might be one of the used Minis about to glut the market. I don’t think that’s too wimpy.
Well the one nice benefit in keeping the old mini would be the SD card slot. I used that thing all the time for my GoPro, camera etc.....I guess there’s an adaptor for everything now? Also I don’t think there’s optical out....I use USB to a DAC/integrated amp so it’s not a concern, but many people may still have optical setups.

Otherwise, the TB3 ports, HDMI 2 (to some limited extent) and some of that extra processing power would be helpful moving forward, particularly as media transcoding is still a concern. If I'm pushing a hi-res music file or 4K or H.265 video, my 2012 i7 would disintegrate into flames very quickly—the 5500 RPM fan sounds like a vacuum cleaner in such a small enclosure.
Mankind's only chance is to harness the power of stupid.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 8, 2018, 11:41 AM
 
I just realized that I have one USB port filled by a mini-HTPC wireless keyboard dongle, and another by my audio out. So I’ve maxed out the two USB ports from the get-go.
( Last edited by ShortcutToMoncton; Nov 8, 2018 at 03:59 PM. )
Mankind's only chance is to harness the power of stupid.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 08:12 PM
 
Originally Posted by Laminar View Post
My 2010 i7 iMac is not compatible with Mojave, so I've started looking into replacements. I entertained the idea of a Mini + display, but it looks like even the 2014 i7 3.0 gets crushed by the mighty 2010 iMac, at least according to EveryMac's Geekbench scores.
I wouldn't waste my time on any non-Retina machine, because a Retina screen makes a huge difference in everyday usage, much more than a 20 % boost of CPU performance.
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Nov 9, 2018, 10:03 AM
 
Originally Posted by OreoCookie View Post
I wouldn't waste my time on any non-Retina machine, because a Retina screen makes a huge difference in everyday usage, much more than a 20 % boost of CPU performance.
I'm not a screen junkie, my parents have a Retina iMac and I can't tell the difference. A late 2012 27" i7 went for $500 on eBay and I almost nabbed it but couldn't bring myself to pull the trigger.
     
sek929
Posting Junkie
Join Date: Nov 1999
Location: Cape Cod, MA
Status: Offline
Nov 9, 2018, 06:15 PM
 
Originally Posted by Laminar View Post
I'm not a screen junkie, my parents have a Retina iMac and I can't tell the difference. A late 2012 27" i7 went for $500 on eBay and I almost nabbed it but couldn't bring myself to pull the trigger.
My sister gave me her old late 2012 27" i7; it had a failing HDD, but after an SSD swap this machine is lightning fast. Getting the RAM to 24GB wasn't too expensive either.
     
Doc HM
Professional Poster
Join Date: Oct 2008
Location: UKland
Status: Offline
Nov 10, 2018, 02:55 PM
 
Originally Posted by P View Post
(In practice OS X is so much better at caching and the modern iMacs have way more RAM to use for that, so it will hide the terrible random read performance to some extent).
Indeed. That hides some of the performance issues until the drive croaks. In addition to their truly abysmal performance, the drives fitted to these iMacs are horrifically unreliable, suffering performance degradation after far too short a time (based entirely on my customer experience). That Apple fits these drives into ANY iMac in 2018 is shameful. That they charge a premium price for the product is pretty much just scummy. For f's sake, just slam in an SSD and be done with it, you cheapskating b*****ds.

And while I'm annoyed. 32GB SSD on the 1TB fusion drive? F**k OFF!
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Nov 11, 2018, 04:35 PM
 
Originally Posted by OreoCookie View Post
Apart from not being able to run Windows, what other pain points do you see forthcoming? I'm honestly asking, because the PowerPC-to-x86 transition was completely painless, I think only one piece of software broke, the software to my hardware screen calibration tool would no longer work. Did you encounter more problems during the last CPU architecture transition?
I had some hardware that no longer works, but mainly, I'm tired of Apple deprecating APIs for shits and giggles.

This is perhaps a silly example, but bear with me: I'm a fan of the Civilization games. There are currently 6, plus 4 official spinoffs, and they nicely slot into the various eras of Mac hardware.

The first was 68k Mac only (and I think its spinoff, Colonization, was as well).
The second was both 68k and PPC.
The spin-off, Alpha Centauri (SMAC), was a PPC game for Classic Mac OS that got a semi-unofficial port to Mac OS X, but of course no x86 version.
The third was a game for both Classic and OS X, PPC only.
The fourth and its spinoff (another Colonization) were OS X, PPC and x86 - but only 32-bit.
The fifth is Intel x86 only, but actually still 32-bit (the Windows version isn't)
The Beyond Earth spinoff and the sixth are 64-bit x86.

So right now, I can play the last three games and their spin-offs. As soon as Apple drops the hammer on the 32-bit x86 libraries, I lose everything but the last. It is sad for me that I'll be left with only the dumbed-down modern versions, but that's not the big problem. Each of these games includes code from the older games (except that IV killed every trace of the very broken III code base), code that presumably has to be ported again. This is an extra burden for the developers, for a small platform. This is what I mean by pain. I get that if there is a big benefit on the horizon, it makes sense to do this. The 68k and PPC platforms were both dying, and Apple had to do something. This time, the benefit is an even thinner MBP. That isn't enough of a benefit to me.

(I am still on Sierra, and I may stay on this version forever. High Sierra seems problematic and buggy, Mojave kills sub-pixel rendering, and whatever 10.15 turns out to be will kill 32-bit apps. These are not features, they're regressions. If the tradeoff is playing Civ IV or having a secure Mac, I just might disable Wifi and become a hermit.)

Originally Posted by OreoCookie View Post
And do you know what the situation with PCIe is? What would prevent Apple from building a PCIe complex into its Mac SoCs? Are there any licensing issues I am not aware of? Also, I think PCIe will be tied to Thunderbolt (most Macs are mobile Macs), so the utility of PCIe may be coupled to Apple being able to license Thunderbolt from Intel. Presumably licensing is an issue that can be solved with money.
They can build PCIe lanes into their chips, and Thunderbolt is becoming license-free at some point during this year to speed adoption. That isn't my worry. My worry is that high-speed connections are expensive, power-wise, and a big part of the reason Apple's SoCs are lower power. I don't think they want to give up that advantage, so I think we're getting less high-speed I/O down the line. They will have dedicated connections for storage and whatever port they want to put on there, but nothing general.

Originally Posted by OreoCookie View Post
I think you are conflating two things: iOS apps on the Mac are coming independently of the underlying CPU ISA, and I am worried about that. But since that is independent of the CPU architecture, it doesn't say anything one way or the other about the transition.
This ties in to my point above. Apple is burning some developers again by changing the platform. Continuous upgrades are how you make money in the business, and Apple is doing its best to kill that revenue stream. I think that if you have to port your app to the Mac again with a new ISA, most companies will just port the iOS app.

Originally Posted by OreoCookie View Post
I don't understand this point: one clear benefit is that the year-over-year improvements in terms of performance and performance-per-watt on the Apple side outpace what Intel is doing by probably a factor of 5-10, depending on your metric. Even if the slope levels off, there will be a growing performance and performance-per-watt advantage on the Apple side of the ledger.
What is that wording they use in all those ads from financial advisors - "Past performance is no guarantee of future performance"? Something like that. It is far from certain that Apple will outperform Intel going forward. The last time they switched, they had to - there was no other option. This time, Intel will stay in the game. If Intel's Ice Lake or Sapphire Rapids is a fantastic new platform that beats everything Apple has, what will Apple do then? Stay behind what all other PC manufacturers can deliver?

Furthermore... Do you think Apple can measurably improve absolute performance by a significant number over what Intel is delivering? On a platform designed to run at some 2W? I think the key to Apple's advantage is that they placed their power target much lower. Move up to 65W or so (desktop levels), and I have a hard time seeing a big performance advantage. Performance per watt, sure, but I'm not so concerned about that right now.

Originally Posted by OreoCookie View Post
What about GPU performance? That is a big factor where Intel's efforts are lackluster and the timing unreliable. This has been Apple's beef with Intel since forever. (I remember when Apple put in nVidia 9400 chipsets/GPUs because it wasn't convinced that Intel's were good enough.) The other factor is the other types of co-processors that are getting increasingly important for Apple, and Intel has nothing to offer here.
Apple's graphics in the 2018 iPad Pro are a massive improvement over past years, but they're nowhere near their own 15" MBP in Geekbench. That MBP uses an old GPU - 2016 for that specific chip, 2012 if you want to count the basic design - and it smashes what Apple has, even if it is a low-end model. If you try to compare it to a desktop chip, it isn't even funny.

Furthermore, Apple's graphics use deferred rendering (like many mobile chips). It is not at all clear how well they will run on an API designed for immediate rendering. Apple doesn't care - just use Metal! - but if your app is written for OpenGL, it may not be easy to port with good performance.

Originally Posted by OreoCookie View Post
And lastly, it would give Apple a way to make better use of investments it is making anyway: the development of custom CPUs, GPUs and other co-processors.
That's great for Apple. Me, I don't really care. Also, what should I use those co-processors for in a Mac?

Originally Posted by OreoCookie View Post
PCIe is not necessary for an iPad, but the situation is different for at least some Macs. I think PCIe (especially in the form of external expansion slots) is on Apple's minds, and is crucially important for quite a few niche applications. A former colleague of mine uses it to significantly accelerate his numerical simulations.
But building that in will mean that power budgets go up. Will Apple really do that? I could see them using the same core design for a different SoC - A12Y if you will - but then it becomes another chip entirely, and the big advantage of reusing designs is lost. 7nm masks are hideously expensive, apparently, so will we see the iPhone chip, the iPad chip (also for laptops? Without PCIe in that case) and the desktop chip? A single one, for everything from mini to Mac Pro? That Mac Pro that will in all likelihood have a 28-core option in a few weeks time? Again, I can make up ideas (take a look at what AMD is doing with the latest Epyc for one idea) but they cost design investment. Will Apple take that investment for the tiny sliver of a sliver that is the Mac desktop market?

Originally Posted by OreoCookie View Post
Why should there be versions with DIMM slots? If the new Mac mini and the iMac Pro are any indication, Apple has started listening to its customers again.
And those customers want to upgrade their RAM?

Originally Posted by OreoCookie View Post
I don't understand this point: the difference between a Mac and an iPad is the UI paradigm, so if an ARM-based Mac runs OS X, why should that be closer to an iPad than the predecessor that sported an Intel CPU? That strikes me as a bit weird, like some of the Apple fans who didn't like the transition away from PowerPC to Intel, fearing that Macs would become less Mac-like.
Because of the combination of all the above. The pressure to reuse mobile chips for desktop machines, even more churn in existing programs when they have to be ported again, a GPU that is likely to be incompatible (at anything resembling decent performance) with existing graphics APIs...all so the MBP can become even thinner. Not worth it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 12, 2018, 01:45 AM
 
Originally Posted by P View Post
I had some hardware that no longer works, but mainly, I'm tired of Apple deprecating APIs for shits and giggles.

This is perhaps a silly example, but bear with me: I'm a fan of the Civilization games.
Much of this seems completely independent from the expected Intel-to-ARM transition. Abandoning 32-bit x86 kills my beloved Star Wars games, too, but since I do see the reasoning behind Apple's transitions, that is a worthwhile trade-off IMHO. Look at Microsoft and ask enterprise software developers what they think of Microsoft's "we'll support every piece of legacy technology" approach. It means that for mission critical pieces of software, you must have huge testing teams. (A close friend of mine worked for one of the big companies making a very prominent hypervisor. His colleagues had to validate 79 different versions of Windows 10 alone.)

I think for people like us who want to play legacy games, a VM solution sounds like a much better idea than keeping old APIs on life support.
Originally Posted by P View Post
They can build PCIe lanes into their chips, and Thunderbolt is becoming license-free at some point during this year to speed adoption. That isn't my worry. My worry is that high-speed connections are expensive, power-wise, and a big part of the reason Apple's SoCs are lower power. I don't think they want to give up that advantage, so I think we're getting less high-speed I/O down the line. They will have dedicated connections for storage and whatever port they want to put on there, but nothing general.
How can you be so sure of that? I think this may become a differentiating feature between pro and non-pro lines. Non-pro machines get USB-C whereas pro machines get Thunderbolt ports. Given Apple's investment in external GPU support, I think it'd be odd if they stopped supporting Thunderbolt peripherals.
Originally Posted by P View Post
This ties in to my point above. Apple is burning some developers again by changing the platform. Continuous upgrades are how you make money in the business, and Apple is doing its best to kill that revenue stream. I think that if you have to port your app to the Mac again with a new ISA, most companies will just port the iOS app.
You make it sound as if the effort is comparable to going from OS 9 to Mac OS X or from 68k to PowerPC. I don't see how the new ISA will be a pain point for vanilla Mac developers: if you don't have hand-optimized, platform-specific code and just rely on Apple's standard APIs, it may be as easy as recompiling your app with a new version of Xcode. Some developers will have to spend some time hand-optimizing some code, but in many cases, they may have already had to do just that when they ported their app to iOS.
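To make the "just recompiling" claim concrete, here is a minimal Swift sketch (my own illustration, not anyone's shipping code) of the only kind of source that is actually ISA-aware; an app that sticks to Apple's standard APIs has none of these branches and simply needs to be rebuilt for the new architecture.

Code:
// Hypothetical example: only the hand-tuned branches would need attention in an
// ISA transition; the portable branch recompiles unchanged for any target.
func dotProduct(_ a: [Float], _ b: [Float]) -> Float {
    #if arch(x86_64)
    // Place for a hand-optimized SSE/AVX path (e.g. via C intrinsics).
    return zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    #elseif arch(arm64)
    // Place for a hand-optimized NEON path.
    return zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    #else
    // Portable Swift: what most "vanilla" apps consist of entirely.
    return zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    #endif
}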
Originally Posted by P View Post
What is that wording they use in all those ads from financial advisors - "Past performance is no guarantee of future performance"? Something like that. It is far from certain that Apple will outperform Intel going forward. The last time they switched, they had to - there was no other option. This time, Intel will stay in the game. If Intel's Ice Lake or Sapphire Rapids is a fantastic new platform that beats everything Apple has, what will Apple do then? Stay behind what all other PC manufacturers can deliver?
You are right that past performance is not a predictor of future growth. However, Apple presumably knows much more about Intel's road map than we do, and it knows its own SoC road map even better — and can make its decision based on that. It knows what performance and efficiency improvements Intel promises and what it thinks it can achieve with its A-series. And Apple can rely on economies of scale to make the development worthwhile, because Apple's yearly cadence of new and improved CPUs, GPUs and assortment of co-processors is dictated by iOS.

Plus, what we do know about Intel doesn't exactly fill me with confidence: according to semiaccurate, Intel axed its 10 nm process and will skip directly to 7 nm, which is expected to arrive no earlier than 2020. That means the bulk of its products will be produced in a manufacturing process that is 1.5-2 generations behind*.

* I don't want to get into the weeds of what x nanometer means to each manufacturer and the like, and whether Intel's 14 nm are closer to TSMC's 10 nm. Intel is behind, and by the time 2020 rolls around, Intel's manufacturing competitors won't have rested on their laurels but improved their own processes
Originally Posted by P View Post
Furthermore... Do you think Apple can measurably improve absolute performance by a significant number over what Intel is delivering? On a platform designed to run at some 2W? I think the key to Apple's advantage is that they placed their power target much lower. Move up to 65W or so (desktop levels), and I have a hard time seeing a big performance advantage. Performance per watt, sure, but I'm not so concerned about that right now.
Definitely. And not because Apple cooks with something other than water, but because Apple can leverage co-processors and make them easily accessible to developers and consumers. If you use Core ML on an iPad Pro, your algorithms may automatically be run on the Neural Engine co-processor instead of the CPU, for example. If you use certain image processing APIs, the ISP may do the heavy lifting. In both cases, these specialized co-processors will be faster and more energy efficient than a general-purpose CPU.
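As a concrete illustration (a minimal sketch with a hypothetical model path, not taken from this thread): with Core ML the caller only declares which compute units are allowed, and the framework decides whether the work lands on the Neural Engine, the GPU or the CPU.

Code:
import Foundation
import CoreML

// "Classifier.mlmodelc" is a made-up compiled Core ML model, used here only for illustration.
let config = MLModelConfiguration()
config.computeUnits = .all        // allow CPU, GPU and Neural Engine
// config.computeUnits = .cpuOnly // force the general-purpose CPU, for comparison

do {
    let url = URL(fileURLWithPath: "/path/to/Classifier.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)
    print(model.modelDescription.inputDescriptionsByName.keys)
} catch {
    print("Could not load model: \(error)")
}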
Originally Posted by P View Post
Apple's graphics in the 2018 iPad Pro are a massive improvement over past years, but they're nowhere near their own 15" MBP in Geekbench. That MBP uses an old GPU - 2016 for that specific chip, 2012 if you want to count the basic design - and it smashes what Apple has, even if it is a low-end model. If you try to compare it to a desktop chip, it isn't even funny.
I don't see any reason why Apple couldn't offer support for external GPUs with its higher-end ARM-based Macs. In fact, it'd be mandatory.
Originally Posted by P View Post
Furthermore, Apple's graphics use deferred rendering (like many mobile chips). It is not at all clear how well they will run on an API designed for immediate rendering. Apple doesn't care - just use Metal! - but if your app is written for OpenGL, it may not be easy to port with good performance.
OpenGL on the Mac is or will be deprecated, but that is again independent of the CPU or GPU architecture macOS runs on.
Originally Posted by P View Post
That's great for Apple. Me, I don't really care. Also, what should I use those co-processors for in a Mac?
These would be used automatically when you call the respective APIs. Every time you use your fingerprint to log into your Mac, you use a co-processor.
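For example, this is roughly what the biometric path looks like from an app's point of view (a minimal sketch using the LocalAuthentication framework, with a made-up reason string): the app never talks to the sensor or the Secure Enclave directly, it just gets a yes/no back.

Code:
import LocalAuthentication

// The biometric matching happens on the co-processor side; the app only sees the verdict.
let context = LAContext()
var availabilityError: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &availabilityError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock the demo") { success, error in
        print(success ? "Authenticated" : "Failed: \(String(describing: error))")
    }
} else {
    print("Biometrics unavailable: \(String(describing: availabilityError))")
}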
Originally Posted by P View Post
But building that in will mean that power budgets go up. Will Apple really do that? I could see them using the same core design for a different SoC - A12Y if you will - but then it becomes another chip entirely, and the big advantage of reusing designs is lost. 7nm masks are hideously expensive, apparently, so will we see the iPhone chip, the iPad chip (also for laptops? Without PCIe in that case) and the desktop chip? A single one, for everything from mini to Mac Pro?
I think if you made what you dubbed the A12Y multiprocessing capable and able to connect to a PCIe complex, then yes, I think there is a way to do just that.

A12 - iPhones and entry-level iPads
A12X - iPad Pros and (some?) non-Pro Macs (e. g. the MacBook and the MacBook Air).
A12Y - Pro mobile Macs and desktop Macs.

For example, you could differentiate the 13" MacBook Pro from the 15" MacBook Pro by adding a second A12Y onto the 15 inch model's motherboard. That'd roughly double performance. Plus, when Apple releases a larger touch-based device (think of a 15" iPad Pro or an iMac analog), it needs even more powerful chips for those machines as well, and power is less of an issue.
Originally Posted by P View Post
That Mac Pro that will in all likelihood have a 28-core option in a few weeks time? Again, I can make up ideas (take a look at what AMD is doing with the latest Epyc for one idea) but they cost design investment. Will Apple take that investment for the tiny sliver of a sliver that is the Mac desktop market?
I think this was the strongest argument against Apple switching to ARM. However, ARM is not PowerPC in that apart from Apple (and a handful of IBM workstations and servers) nobody used PowerPC. ARM is literally the most commonly used CPU platform on the planet.

For example, do we know whether Apple sells more Macs than AMD sells, say, mobile CPUs (honest question, I have not followed AMD's financials)? It is safe to say that Apple sells more Macs now than it ever did PowerPC-based Macs, and back then Apple made that investment work for them. I don't think it'll be a problem to make it work financially or technologically. The biggest issue is the shortage of talent, I would say.
Originally Posted by P View Post
Because of the combination of all the above. The pressure to reuse mobile chips for desktop machines, even more churn in existing programs when they have to be ported again, a GPU that is likely to be incompatible (at anything resembling decent performance) with existing graphics APIs...all so the MBP can become even thinner. Not worth it.
I think you misstate the reason why Apple would switch: I don't think their motivation would be to make their computers thinner. Their motivation would be that comparing the long-term road maps of both platforms, one has a brighter future than the other.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Nov 12, 2018, 07:22 AM
 
Originally Posted by OreoCookie View Post
Much of this seems completely independent from the expected Intel-to-ARM transition. Abandoning 32-bit x86 kills my beloved Star Wars games, too, but since I do see the reasoning behind Apple's transitions, that is a worthwhile trade-off IMHO. Look at Microsoft and ask enterprise software developers what they think of Microsoft's "we'll support every piece of legacy technology" approach. It means that for mission critical pieces of software, you must have huge testing teams. (A close friend of mine worked for one of the big companies making a very prominent hypervisor. His colleagues had to validate 79 different versions of Windows 10 alone.)

I think for people like us who want to play legacy games, a VM solution sounds like a much better idea than keeping old APIs on life support.
What is the reason for abandoning 32-bit x86 right now? The platform is very much alive and well-supported, and if your application was supposed to be used by as many people as possible, it made a lot of sense to make it 32-bit. When we had 68k or PPC applications being emulated, there was a performance issue with using the emulator. There isn't one here. There are admittedly fewer architectural registers, but that is a very minor thing. If it is so easy to just recompile your programs for ARM, how come Apple can't compile their libraries for x86 32-bit as well as 64-bit? Don't tell me those libraries use RAM (we have virtual memory to take care of that) or space on disk (it is minuscule compared to all the other things Apple ships by default, like a hundred language files per application). What, beyond some obscene sense of neatness, is the reason?

Originally Posted by OreoCookie View Post
How can you be so sure of that? I think this may become a differentiating feature between pro and non-pro lines. Non-pro machines get USB-C whereas pro machines get Thunderbolt ports. Given Apple's investment in external GPU support, I think it'd be odd if they stopped supporting Thunderbolt peripherals.
There can only be so many chips developed, is my point. There will always be one for the iPhone and that one will not be compromised by having features for something else. I think there can be at most two more. If the middle one, the current "A12X", is supposed to cover the iPad (Pro) and the thin and light Mac models, it will either have to gain a few PCIe lanes (which would use power and make the chip bigger) or those light models will lose Thunderbolt. Remember that all Macs except the 12" Macbook now support Thunderbolt. I had thought the MBA would not get Thunderbolt, but it does have it.

And then we have one more chip. That chip will then have to cover everything from 13" MBP to Mac Pro, a range in TDP from 15W to 200W. How many PCIe lanes? The 13" MBP has 4, and can't even support an external GPU. The top Xeons have 48 lanes, and AMD's have 64. They go from 2 cores to 28. Those extra cores and lanes are needed if the Mac Pro is going to do what it should, and Apple has reconfirmed that they will support replaceable GPUs going forward.

Doesn't seem possible, does it? Can we have two chips? One for the laptops "A12Y", and one for the desktops, "A12Z"? Sure - but Apple makes 80% laptops. Now you have to fund the development of the A12Z desktop chip off of the 20% that is desktops - and most of those 20% are iMacs that would probably be pretty OK with the A12Y. I don't see the economics working out. Remember that we didn't even get an A11X, presumably because that mask was too expensive.

Originally Posted by OreoCookie View Post
You make it sound as if the effort is comparable to going from OS 9 to Mac OS X or from 68k to PowerPC. I don't see how the new ISA will be a pain point for vanilla Mac developers: if you don't have hand-optimized, platform-specific code and just rely on Apple's standard APIs, it may be as easy as recompiling your app with a new version of Xcode. Some developers will have to spend some time hand-optimizing some code, but in many cases, they may have already had to do just that when they ported their app to iOS.
It is never as easy as just recompiling - there is always something that you have to fix, and Apple will cut some older APIs loose again, because they can.

Originally Posted by OreoCookie View Post
You are right that past performance is not a predictor of future growth. However, Apple presumably knows much more about Intel's road map than we do, and it knows its own SoC road map even better — and can make its decision based on that. It knows what performance and efficiency improvements Intel promises and what it thinks it can achieve with its A-series. And Apple can rely on economies of scale to make the development worthwhile, because Apple's yearly cadence of new and improved CPUs, GPUs and assortment of co-processors is dictated by iOS.
But that development will be of the core itself. They still need to make masks for the new designs. They didn't fund the A11X - and the A11 was a big improvement over the lackluster A10 - and there must have been a reason for that. My guess is money. Now you want to make one or two new masks, for an even smaller volume? How is that economy of scale?

Originally Posted by OreoCookie View Post
Plus, what we do know about Intel doesn't exactly fill me with confidence: according to semiaccurate, Intel axed its 10 nm process and will skip directly to 7 nm, which is expected to arrive no earlier than 2020. That means the bulk of its products will be produced in a manufacturing process that is 1.5-2 generations behind*.

* I don't want to get into the weeds of what x nanometer means to each manufacturer and the like, and whether Intel's 14 nm are closer to TSMC's 10 nm. Intel is behind, and by the time 2020 rolls around, Intel's manufacturing competitors won't have rested on their laurels but improved their own processes
Semiaccurate isn't always correct, and Intel has denied that rumor in terms that would get them sued by the SEC if they were not true. 10nm is a disaster, but according to Intel, they're still working on it.

The confusion in naming might have something to do with it. According to reports, Intel's 10nm is better than TSMC's 7nm even after the simplifications, so maybe they're rebranding it?

In either case - I would not expect any moves to 5nm any time soon. AMD had a presentation recently, and indicated that they believed the industry would stay on 7nm for a long time. I got the feeling that we would have something like the 28nm situation at least, when we were all stuck on that node for over four years.

Originally Posted by OreoCookie View Post
I don't see any reason why Apple couldn't offer support for external GPUs with its higher-end ARM-based Macs. In fact, it'd be mandatory.
If Apple is moving towards deferred rendering GPUs and making its APIs for that, will they spend the money to support current immediate rendering GPUs for a tiny sliver of the market? Apple's GPUs are very different from current desktop GPUs in how they work, and reconciling that won't be easy.

Originally Posted by OreoCookie View Post
OpenGL on the Mac is or will be deprecated, but that is again independent of the CPU or GPU architecture macOS runs on.
I don't think it is independent of the GPU it runs on. Everything I read indicates that the mobile GPUs cannot run desktop APIs well, and games written for those APIs will not run well on mobile GPUs. The key is to make the engine compatible with the mobile GPUs (like Unity is), but will any developer of real 3D programs do that work?

Originally Posted by OreoCookie View Post
These would be used automatically when you call the respective APIs. Every time you use your fingerprint to log into your Mac, you use a co-processor.
And it works fine with an x86 CPU as the main CPU, arguably even more securely.

Originally Posted by OreoCookie View Post
I think if you made what you dubbed the A12Y multiprocessing capable and able to connect to a PCIe complex, then yes, I think there is a way to do just that.

A12 - iPhones and entry-level iPads
A12X - iPad Pros and (some?) non-Pro Macs (e. g. the MacBook and the MacBook Air).
A12Y - Pro mobile Macs and desktop Macs.

For example, you could differentiate the 13" MacBook Pro from the 15" MacBook Pro by adding a second A12Y onto the 15 inch model's motherboard. That'd roughly double performance. Plus, when Apple releases a larger touch-based device (think of a 15" iPad Pro or an iMac analog), it needs even more powerful chips for those machines as well, and power is less of an issue.
See what I wrote above. The TL;DR is that the A12Y would have to stretch over a very large TDP range, a factor of over 10, and widely varying PCIe lane counts.

But going dual socket is interesting as an idea, because everyone is moving away from that. Those that do it do so for reasons of memory channels and I/O lanes. The "flock of chickens" isn't seen as a good idea for performance right now.

Originally Posted by OreoCookie View Post
I think this was the strongest argument against Apple switching to ARM. However, ARM is not PowerPC in that apart from Apple (and a handful of IBM workstations and servers) nobody used PowerPC. ARM is literally the most commonly used CPU platform on the planet.

For example, do we know whether Apple sells more Macs than AMD sells, say, mobile CPUs (honest question, I have not followed AMD's financials)? It is safe to say that Apple sells more Macs now than it ever did PowerPC-based Macs, and back then Apple made that investment work for them. I don't think it'll be a problem to make it work financially or technologically. The biggest issue is the shortage of talent, I would say.
We don't know how many mobile chips AMD sells. AMD supposedly had a market share of 12% in the first half of 2018, and they have been as high as 30% in the past. They are aiming for those 30% again, because they seem to need a share like that to be comfortably competitive. Apple had a 7.1% share among desktops and a 9.4% share among laptops in the last estimate I saw (Q2 2018). Note that these are Gartner numbers, and they're infamously unreliable, but we don't have anything better.

Originally Posted by OreoCookie View Post
I think you misstate the reason why Apple would switch: I don't think their motivation would be to make their computers thinner. Their motivation would be that comparing the long-term road maps of both platforms, one has a brighter future than the other.
Every chassis change Apple has made since forever has been to make its computers thinner. The 2012 iMac redesign still bugs me. They removed the 21" RAM door and 3.5" drive, reduced max RAM on all the models and let the cooling capacity crater, which limited GPU options - all because they wanted to make it thinner. Besides, what happens if the x86 platform crashes? Apple can just keep selling its current models for a year more (they have NO problem with that) and then make the switch. No reason to switch preemptively.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 12, 2018, 09:23 AM
 
Originally Posted by CharlesS View Post
Huh? They specifically mentioned that the RAM is on SO-DIMMs in the keynote.
I found a video of how to upgrade the RAM in a Mac mini: it is definitely on the dual-USB-iBook-hard-drive-upgrade side of things. You need a special screwdriver and quite a bit of patience. Plus, if you do not watch the instructions, you could rip off the antenna cable. Ouch. The video makes the point that the procedure does not void the warranty, so if that is correct, then I was wrong to claim that the RAM needs to be upgraded by an authorized Apple service professional. I'll leave it to you to decide whether that is a distinction with or without a difference.

The design seems a bit hostile to end users and service professionals.
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Nov 12, 2018, 10:24 AM
 
Originally Posted by sek929 View Post
My sister gave me her old late 2012 27" i7; it had a failing HDD, but after an SSD swap this machine is lightning fast. Getting the RAM to 24GB wasn't too expensive either.
I threw an SSD alongside a 3TB HD in my 2010, along with 12GB of RAM. Bought it for a grand like 6-7 years ago and it's been rock solid since then. I honestly have no complaints about the performance, but now that it's unsupported, I'm looking for an excuse to upgrade.

Hmmm...I can pick up a 12-core 2012 Mac Pro for ~$800. Now that's tempting...
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 12, 2018, 11:06 AM
 
Originally Posted by P View Post
There can only be so many chips developed, is my point. [...] That chip will then have to cover everything from 13" MBP to Mac Pro, a range in TDP from 15W to 200W. How many PCIe lanes? The 13" MBP has 4, and can't even support an external GPU. The top Xeons have 48 lanes, and AMD's have 64. They go from 2 cores to 28. Those extra cores and lanes are needed if the Mac Pro is going to do what it should, and Apple has reconfirmed that they will support replaceable GPUs going forward.

Doesn't seem possible, does it? Can we have two chips? One for the laptops "A12Y", and one for the desktops, "A12Z"? Sure - but Apple makes 80% laptops. Now you have to fund the development of the A12Z desktop chip off of the 20% that is desktops - and most of those 20% are iMacs that would probably be pretty OK with the A12Y. I don't see the economics working out.
You argue that it wouldn't be economically feasible for Apple to design four variants of their CPU architecture, and you seem quite adamant about that. I'm very confused as to why you think this has to be the case, because Intel already shows one way you may go about it: You rev the consumer parts more often and let the workstation and server parts skip generations. Oh, and you charge a crapload for Xeons. Moreover, Apple used to design its own chipsets for years while selling way fewer machines and still making a healthy profit.

Another way relates to the multi-chip approach that has been used in AMD's latest designs and by Intel (in combination with an AMD GPU), which further drives down cost.

I don't think the financial side is a problem at all; it is priced in already, if you will. If an A12Z costs $800 to make, so what? It is destined for a machine that currently uses even more expensive CPUs. The problem I see is with talent and time, not economics.
Originally Posted by P View Post
Semiaccurate isn't always correct, and Intel has denied that rumor in terms that would get them sued by the SEC if they were not true. 10nm is a disaster, but according to Intel, they're still working on it.

The confusion in naming might have something to do with it. According to reports, Intel's 10nm is better than TSMCs 7nm even after the simplifications, so maybe they're rebranding it?
Poteto, potato.
Feature size is a touchy subject anyway, but so far we know for sure (because Intel told us so) that their next-gen process will come online on a mass scale in 2020 at the earliest, and that at least until then they are at least one, perhaps 1.5 generations behind.

Even if Intel finally catches up in terms of feature size while TSMC and Samsung haven't yet made the jump to 5 nm, it stands to reason that they will reach 5 nm before Intel does, and they will have had more time to optimize their 7 nm process node.
Originally Posted by P View Post
I don't think it is independent of the GPU it runs on. Everything I read indicate that the mobile GPUs cannot run desktop APIs well, and games written for those APIs will not run well on mobile GPUs. The key is to make the engine compatible with the mobile GPUs (like Unity is), but will any developer of real 3D programs do that work?
You seem to assume that Apple won't support external GPUs in their pro desktop Macs the way they do now. Why? As far as I can tell, the most logical strategy is that Apple just continues to support external GPUs (by nVidia and AMD) in addition to their internal GPU.
Originally Posted by P View Post
If it is so easy to just recompile your programs for ARM, how come Apple can't compile their libraries for x86 32-bit as well as 64-bit? Don't tell me those libraries use RAM (we have virtual memory to take care of that) or space on disk (it is minuscule compared to all the other things Apple ships by default, like a hundred language file per application). What, beyond some obscene sense of neatness, is the reason?
As far as I understand one big motivating factor has to do with the Swift/Objective C runtime: the 64 bit version uses optimizations that are not backwards compatible, and apparently keeping 32 bit support alive is a major pain because it prevents Apple from shifting AppKit towards Swift. That seems like a big reason to me.

Taken from Chris Lattner's interview on ATP:
Originally Posted by Chris Lattner
One other technology problem [37:00] that is hilarious but also really important is that the Apple frameworks stack has to support 32-bit Mac apps. 32-bit Mac apps have this interesting challenge: they have the “classic” Objective-C runtime, which doesn't support things like non-fragile instance variables and things like that. At some point in time, the Swift team will need to make the Swift runtime work in that mode, or figure out some other solution to adapt it, because until that happens, it won't be [37:30] possible to use Swift in AppKit, for example.
Originally Posted by P View Post
Every chassis change Apple has made since forever has been to make its computers thinner. The 2012 iMac redesign still bugs me. They removed the 21" RAM door and 3.5" drive, reduced max RAM on all the models and let the cooling capacity crater, which limited GPU options - all because they wanted to make it thinner. Besides, what happens if the x86 platform crashes? Apple can just keep selling its current models for a year more (they have NO problem with that) and then make the switch. No reason to switch preemptively.
IMHO thinness is just scratching the surface here. Apple has had an opinion that they know better what their computers should look like, because customers tell them they want a faster horse anyway. Their focus on thinness is only a part of it. Why isn't RAM easily upgradable on their machines, including the Mac mini? I understand that for mobile Macs, the time has gone and there the trade-offs are (to some at least) worth it. Why don't MacBook Pros sport a cornucopia of ports just like the pro machines of the past did? Why are their machines so hard to repair? (That doesn't seem environmentally friendly either.) Not all of this can be explained by thinness, because in some instances, thickness would not be impacted. But I see the quest for thinness as part of it.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 12, 2018, 05:07 PM
 
Originally Posted by OreoCookie View Post
You argue that it wouldn't be economically feasible for Apple to design four variants of their CPU architecture, and you seem quite adamant about that. I'm very confused as to why you think this has to be the case, because Intel already shows one way you may go about it: You rev the consumer parts more often and let the workstation and server parts skip generations. Oh, and you charge a crapload for Xeons. Moreover, Apple used to design its own chipsets for years while selling way fewer machines and still making a healthy profit.
Everything I read on the topic says that designing masks for 7nm is insanely expensive, and that this cost has really exploded recently. This is one article on the topic:

https://www.extremetech.com/computin...m-process-node

(I don't vouch for the source, and the forward-looking stuff may be BS, but I suspect that the current figures are reliable because I have seen similar figures elsewhere.) According to that, it costs a cool $300 million to make a 7nm mask. In financial 2018, Apple sold 18 million Macs. According to Gruber, 20% of that is desktop, so 3.6 million. That is a cost of just under $100 per machine if you can make one chip that covers all the desktops - but I don't think we can. We'd need a Mac Pro chip. The Mac Pro is a "single-digit percentage" of all Macs, but Gruber thinks it is essentially 1%, or 180 000 Mac Pros per year. That's $1666 in pure mask cost per chip - and that is on the current node. This cost goes up. Sure, we can leave the design in the market for two years and get twice the sales, and the cost becomes $833 - still a decent chunk of change. With Apple's margins being what they are, that's a lot of oncost for the consumer.
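Spelling that back-of-the-envelope math out in a few lines (all inputs are the rough estimates above - the $300 million mask figure and Gruber's unit guesses - nothing official):

# Rough amortization of a $300M 7nm mask set over estimated Mac volumes.
# All inputs are the ballpark figures quoted above, not official numbers.
MASK_COST = 300_000_000          # one 7nm mask set, per the article linked above

volumes = {
    "all desktop Macs (20% of 18M)": 3_600_000,
    "Mac Pro only (~1% of Macs)":      180_000,
    "Mac Pro, sold for two years":     360_000,
}

for segment, units in volumes.items():
    print(f"{segment}: ${MASK_COST / units:,.0f} of mask cost per machine")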

I don't see how Apple can amortise that cost on the small number of high-end desktops it sells. I can maybe see a single design that covers the MBP and the iMac, something like the current A12X but with some real I/O (and sure, let's say it has 8 performance cores - Apple can develop a ringbus if they don't already have one), but that will be a much weaker chip than what the Mac Pro usually has.

Another way relates to the multi-chip approach that has been used in AMD's latest designs and by Intel (in combination with an AMD GPU), which further drives down cost.
Yes, this is an interesting setup, but it has real downsides. AMD has made a bunch of "chiplets" with just the CPU cores (and maybe PCIe, which I don't understand?), which are then all connected to a single I/O chip, still made on 14nm. This is NOT on an interposer or EMIB or anything like that, so it seems to be essentially the old front-side-bus design of multiple sockets to a single memory controller. This has advantages in that AMD can rev one chip and keep the others standard, and the I/O chip can be on a cheaper process, but you lose the integrated memory controller. With that loss, your main memory latency goes up. It isn't a good design for the desktop. It will work for servers, because you get the improvement that memory latency is now uniform, but you will lose performance in a desktop setup.

I don't think the financial side is a problem at all; it is priced in already, if you will. If an A12Z costs $800 to make, so what? It is destined for a machine that currently uses even more expensive CPUs. The problem I see is with talent and time, not economics.
The problem isn't that the cost for each chip is $800 or whatever. The problem is that the first one costs $300 million today, and maybe $1.5 billion in a few years.

And I don't think that Apple pays $800 on average even for the Mac Pro CPUs. They pay far below list, and a lot of them will have to be more basic Xeons that cost far less even at list.

Poteto, potato.
Feature size is a touchy subject anyway, but so far we know for sure (because Intel told us so) that their next-gen process will come online on a mass scale in 2020 at the earliest, and that at least until then they are at least one, perhaps 1.5 generations behind.
Consumer 10nm chips are still due for 2019. The Xeons are in 2020.

Even if Intel finally catches up in terms of feature size while TSMC and Samsung haven't yet made the jump to 5 nm, it stands to reason that they will reach 5 nm before Intel does, and they will have had more time to optimize their 7 nm process node.
Right, because Intel got to 14nm first, and this gave them a head start on getting to 10nm?

You seem to assume that Apple won't support external GPUs in their pro desktop Macs the way they do now. Why? As far as I can tell, the most logical strategy is that Apple just continues to support external GPUs (by nVidia and AMD) in addition to their internal GPU.
No, that isn't what I'm saying. My point is that Apple's internal GPUs are based on the technique of tile-based deferred rendering and current desktop GPUs use immediate mode rendering. This difference is too large to be hidden by a driver, as I understand it, and code written for a regular immediate-mode GPU will usually run very slowly on a TBDR-based GPU and vice versa. Supporting both on one platform may not be feasible and, as far as I know, has not been done.
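To make the difference concrete, here is a toy sketch - not real GPU code, just the control flow, with made-up axis-aligned "primitives" standing in for triangles and a counter standing in for shading work:

# Toy illustration (pure Python) of the two rendering styles: an immediate-mode
# renderer shades fragments primitive-by-primitive straight into the full
# framebuffer, while a tile-based deferred renderer first bins primitives into
# small screen tiles and then resolves one tile at a time, so only the
# front-most fragment per pixel ever gets shaded.

W, H, TILE = 8, 8, 4          # tiny framebuffer and tile size for the demo
# a "primitive" here is just an axis-aligned rect: (x0, y0, x1, y1, depth, color)
prims = [(0, 0, 6, 6, 0.8, "red"), (2, 2, 8, 8, 0.2, "blue")]

def covered(p):
    x0, y0, x1, y1, _, _ = p
    return [(x, y) for y in range(y0, y1) for x in range(x0, x1)]

def immediate_mode(prims):
    fb, depth, shaded = {}, {}, 0
    for p in prims:                            # submission order
        for px in covered(p):
            shaded += 1                        # every submitted fragment is shaded (overdraw)
            if p[4] < depth.get(px, 1.0):
                depth[px], fb[px] = p[4], p[5]
    return fb, shaded

def tile_based_deferred(prims):
    fb, shaded = {}, 0
    tiles = {}
    for p in prims:                            # pass 1: bin primitives into tiles
        for px in covered(p):
            tiles.setdefault((px[0] // TILE, px[1] // TILE), {}).setdefault(px, []).append(p)
    for tile in tiles.values():                # pass 2: resolve tile by tile, "on chip"
        for px, plist in tile.items():
            front = min(plist, key=lambda q: q[4])
            shaded += 1                        # only the visible fragment is shaded
            fb[px] = front[5]
    return fb, shaded

fb_imm, n_imm = immediate_mode(prims)
fb_tbdr, n_tbdr = tile_based_deferred(prims)
assert fb_imm == fb_tbdr                       # same picture...
print(n_imm, "fragments shaded immediate-mode vs", n_tbdr, "with TBDR")

Same picture either way, but the work is scheduled completely differently, which is why code tuned for one style tends to fall off a cliff on the other.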

As far as I understand one big motivating factor has to do with the Swift/Objective C runtime: the 64 bit version uses optimizations that are not backwards compatible, and apparently keeping 32 bit support alive is a major pain because it prevents Apple from shifting AppKit towards Swift. That seems like a big reason to me.
So freeze the 32-bit libraries in time and never touch them again. It is fine on any UNIX system to have multiple versions of the libraries installed, all it takes is memory and disk space.

IMHO thinness is just scratching the surface here.
It is the root problem, though. Apple has anorexia.

Apple has had an opinion that they know better what their computers should look like, because customers tell them they want a faster horse anyway. Their focus on thinness is only a part of it. Why isn't RAM easily upgradable on their machines, including the Mac mini?
The iMac is because of thinness - it was upgradeable before, and then someone decided that being thin was more important (According to Don Melton, that someone was Steve Jobs, which is why it won't ever be reversed. Might as well cancel the fatwa on Salman Rushdie). The mini is probably because they really are running out of space, and it isn't that hard to replace anyway.

I understand that for mobile Macs, the time has gone and there the trade-offs are (to some at least) worth it.
LPDDR-anything isn't available as DIMMs. There is also the fact that one of the most common failure modes on laptops in the past, according to Apple, was that the DIMMs got dislodged, so there is a real gain there.

Why don't MacBook Pros sport a cornucopia of ports just like the pro machines of the past did?
Because they are...altogether now!... too thin! You can't fit an HDMI port on there, it isn't physically possible. You could fit a USB-A or an Ethernet port if you did one of those fold-down ports, but those are of course unseemly. I don't think there are any other ports anyone would really want? Well OK, there are people clamoring for an SD-card, but they haven't realised how few and behind the times they are.

Why are their machines so hard to repair? (That doesn't seem environmentally friendly either.) Not all of this can be explained by thinness, because in some instances, thickness would not be impacted. But I see the quest for thinness as part of it.
I don't know why they can't be repaired. Maybe it is because Apple realised that they had high costs from people trying to repair iPhones when they didn't know what they were up to, but that still doesn't excuse the pentalobe screws. Nintendo is the same way, by the way. Some of them are for thinness - the iMac moving to double-adhesive tape is one - but that isn't everything.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 14, 2018, 09:12 AM
 
Originally Posted by P View Post
Everything I read on the topic says that designing masks for 7nm is insanely expensive, and that this cost has really exploded recently. This is one article on the topic:

https://www.extremetech.com/computin...m-process-node
Quick aside: is extremetech.com a reliable source of information? I just became aware of them recently.
Originally Posted by P View Post
(I don't vouch for the source, and the forward-looking stuff may be BS, but I suspect that the current figures are reliable because I have seen similar figures elsewhere.) According to that, it costs a cool $300 million to make a 7nm mask. In financial 2018, Apple sold 18 million Macs. According to Gruber, 20% of that is desktop, so 3.6 million. That is a cost of just under $100 per machine if you can make one chip that covers all the desktops - but I don't think we can. We'd need a Mac Pro chip. The Mac Pro is a "single-digit percentage" of all Macs, but Gruber thinks it is essentially 1%, or 180 000 Mac Pros per year. That's $1666 in pure mask cost per chip - and that is on the current node. This cost goes up. Sure, we can leave the design in the market for two years and get twice the sales, and the cost becomes $833 - still a decent chunk of change. With Apple's margins being what they are, that's a lot of oncost for the consumer.
First of all, that was why I proposed a multi-chip option where you essentially repurpose the same chip across different desktops to reduce cost. We can argue about what a chip would cost in that scenario, and I don't feel knowledgeable enough to quantify it. (Although I agree that, just like with cars, a big cost is the tooling, and you need economies of scale to reduce the per-item cost.) If you used the A12Z in the high-end iMacs, high-end Mac mini, iMac Pros and Mac Pros, you'd suddenly be talking about 1+ million units. That seems quite reasonable.

Moreover, I fully expect that eventually Apple will expand its line-up to include more touch-based computers that need as much horsepower as an iMac or an iMac Pro has these days. And Apple could also dogfood its Mac Pro SoCs in its own data centers.

An architectural switch will be made on the basis of what is best for Apple, and for the vast majority of hardware they sell — iPhones, iPads and MacBooks (in all variants and shades of silver) — switching to ARM is a huge net benefit.
Originally Posted by P View Post
I don't see how Apple can amortise that cost on the small number of high-end desktops it sells. I can maybe see a single design that covers the MBP and the iMac, something like the current A12X but with some real I/O (and sure, let's say it has 8 performance cores - Apple can develop a ringbus if they don't already have one), but that will be a much weaker chip than what the Mac Pro usually has.
Apple has plenty of experience designing workstation-class I/O, so while I agree this is a non-trivial problem, it is a field they already have expertise in. So yes, it is a problem to be solved, but it seems like one a company like Apple can solve.
Originally Posted by P View Post
Yes, this is an interesting setup, but it has real downsides. AMD has made a bunch of "chiplets" with just the CPU cores (and maybe PCIe, which I don't understand?), which are then all connected to a single I/O chip, still made on 14nm. This is NOT on an interposer or EMIB or anything like that, so it seems to be essentially the old front-side-bus design of multiple sockets to a single memory controller. This has advantages in that AMD can rev one chip and keep the others standard, and the I/O chip can be on a cheaper process, but you lose the integrated memory controller. With that loss, your main memory latency goes up. It isn't a good design for the desktop. It will work for servers, because you get the improvement that memory latency is now uniform, but you will lose performance in a desktop setup.
I understand the downsides; my point was more that this is a common theme in the CPU space these days. That's also how Intel builds its monster many-core Xeons. But no matter how Apple implements this specifically, multi-chip solutions seem like one potential way forward here.
Originally Posted by P View Post
Consumer 10nm chips are still due for 2019. The Xeons are in 2020.
You are right, I stand corrected. But Intel is still behind and will remain behind.
Originally Posted by P View Post
Right, because Intel got to 14nm first, and this gave them a head start on getting to 10nm?
Catching up is much harder than defending a lead. Overtaking the competition is harder still. And this is happening at a very bad time for Intel: the traditional PC business is in decline, and its death grip on CPU architectures seems to be loosening.
Originally Posted by P View Post
No, that isn't what I'm saying. My point is that Apple's internal GPUs are based on the technique of tile-based deferred rendering and current desktop GPUs use immediate mode rendering.
I understand this difference (not least because my brother and I discussed his purchase of a Kyro 2-based graphics card back in the day in quite some detail ). But I don’t think this is a problem for lower-end Macs, because macOS uses the same Metal APIs as iOS, and most of the software comes from iOS these days anyway. And I expect that higher-end Macs will retain discrete GPUs, so running high-end software at full speed doesn’t seem to be a problem either.
Originally Posted by P View Post
So freeze the 32-bit libraries in time and never touch them again. It is fine on any UNIX system to have multiple versions of the libraries installed, all it takes is memory and disk space.
But it would need to be tested and could encourage some companies to rely on legacy technology. Dumb question: is it hard to run an older version of macOS in a simulator? I have never had to, so I don't know.
Originally Posted by P View Post
It is the root problem, though. Apple has anorexia.

(Seriously, that was funny.)
Originally Posted by P View Post
The iMac is because of thinness - it was upgradeable before, and then someone decided that being thin was more important (According to Don Melton, that someone was Steve Jobs, which is why it won't ever be reversed. Might as well cancel the fatwa on Salman Rushdie). The mini is probably because they really are running out of space, and it isn't that hard to replace anyway.
While I understand your argument to be "that's in the name of thinness", I would just say "not just thinness": ever since the iPod, Apple has moved to make their Macs harder to upgrade. Replacing RAM on my iBook was easy: I needed to turn a plastic screw 90 degrees and release two spring-loaded tabs. It was meant to be easily accessible. The iPhone and iPad have further accelerated the trend. As you correctly pointed out, on mobile computers a closed design really has benefits to the user, as it makes the machine more reliable and smaller. But I don't see it as the only factor.
Originally Posted by P View Post
Because they are...altogether now!... too thin! You can't fit an HDMI port on there, it isn't physically possible. You could fit a USB-A or an Ethernet port if you did one of those fold-down ports, but those are of course unseemly.
I was referring to the number of ports. I don’t mind not having ports if they are literally too large to fit the machine. But I mind taking away ports because of some misguided belief that “on an infinite time scale” a machine should have no ports at all.
Originally Posted by P View Post
I don't think there are any other ports anyone would really want? Well OK, there are people clamoring for an SD-card, but they haven't realised how few and behind the times they are.
If I owned a MacBook, I'd want at least two, preferably three ports (two USB-C and one USB-A). If I owned a MacBook Air, I'd want a USB-A port in addition. The current Mac mini is quite alright in terms of ports, although I wish they offered 10 Gbit Ethernet by default.
Originally Posted by P View Post
I don't know why they can't be repaired. [...] Some of them are for thinness - the iMac moving to double-adhesive tape is one - but that isn't everything.
Fortunately, I think this is a trend that is reversing: Apple's bad experience with glued-in batteries and keyboards has shown them that this doesn't just mean very high expenses for customers (or for Apple if the machines are under warranty, AppleCare or part of a recall program), but also reduces recyclability. And if they want to extend the life of their machines, customers should be able to ask a certified Apple service technician to replace their batteries, for example.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 15, 2018, 12:14 PM
 
Originally Posted by OreoCookie View Post
Quick aside: is extremetech.com a reliable source of information? I just became aware of them recently.
No idea. They appeared to link to sources for their statements in that article, so it isn't wccftech at least.

First of all, that was why I proposed a multi-chip option where you essentially repurpose the same chip across different desktops to reduce cost. We can argue about what a chip would cost in that scenario, and I don't feel knowledgeable enough to quantify it. (Although I agree that, just like with cars, a big cost is the tooling, and you need economies of scale to reduce the per-item cost.) If you used the A12Z in the high-end iMacs, high-end Mac mini, iMac Pros and Mac Pros, you'd suddenly be talking about 1+ million units. That seems quite reasonable.
It is a balance; I just think that it will be very expensive. Apple currently uses the following different masks across its products:

A12
A12X
Mobile Y-series
Mobile U-series
Mobile H-series
Desktop S-series
Xeon LCC
Xeon HCC
Xeon XCC
(and S4 for the watch if you want to include that).

Did I miss one? Strictly speaking I'm cheating because Intel has different graphics solutions available and Apple just picked one from each, but they can do that again. I don't think that the iPhone chip will be compromised by being shared with something else, so we have 8 different masks that we now need to cover. Remember that a mask costs $300 million or so. Let's say that we have one for the A12X and the Y-series. That chip would be a lot like the current A12X. Can we stretch it to the U-series, if we add some I/O? This makes the iPad more expensive, but maybe they can eat that, and keep the cheap iPad on the A12. It also makes the 13" Pro something very similar to the weaker models, but maybe we can do it - let's be generous here.

So we have the H-series, the desktop chips, and all the Xeons up to 28 cores left to cover. Can we do that with one chip? I'm having a really hard time seeing that, squeezing something designed for a server into a mini or a 15" MBP. It would make those (comparatively high runners) use a chip that was big, hot and expensive for no good reason. And if we split again, it would be between the desktops and the Xeons. Maybe that makes sense - make the Xeon replacement a flock of chickens with lots of cores, running hot and damn the torpedoes - but can you really pay for that $300 million mask? I really, really doubt it.
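To put the stretch in numbers: the TDP ranges below are the public figures for the Intel parts in today's machines, while the volume shares are nothing more than my own rough guesses.

# Rough sketch of the coverage problem: the segments that one shared "big Mac"
# die would have to span if we only split off the A12X/Y/U class.
# TDP ranges are public Intel figures; volume shares are my own rough guesses.
segments = {
    'U-series (13" MBP, MBA)':        ((15, 28),   0.45),
    'H-series (15" MBP)':             ((45, 45),   0.15),
    'Desktop S-series (iMac, mini)':  ((65, 95),   0.30),
    'Xeon W/SP (iMac Pro, Mac Pro)':  ((120, 205), 0.02),
}

big_die = ['H-series (15" MBP)', 'Desktop S-series (iMac, mini)',
           'Xeon W/SP (iMac Pro, Mac Pro)']
tdp_lo = min(segments[s][0][0] for s in big_die)
tdp_hi = max(segments[s][0][1] for s in big_die)
share = sum(segments[s][1] for s in big_die)
print(f"one shared die would have to span {tdp_lo}-{tdp_hi} W "
      f"and would cover roughly {share:.0%} of Mac volume")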

An architectural switch will be made on the basis of what is best for Apple, and for the vast majority of hardware they sell — iPhones, iPads and MacBooks (in all variants and shades of silver) — switching to ARM is a huge net benefit.
How is the iPhone helped by switching the Mac to ARM?

I understand the downsides; my point was more that this is a common theme in the CPU space these days. That's also how Intel builds its monster many-core Xeons. But no matter how Apple implements this specifically, multi-chip solutions seem like one potential way forward here.
It isn't, actually. Intel has used exclusively monolithic Xeons with integrated memory controllers for some time, and only changed course very recently and announced another MCM to try to steal some of AMD's thunder - but even that one has an integrated memory controller for each die (ie, NUMA again). AMD moving back to having an external memory controller is a bold move - let's see how it works out for them, ie what the main memory latency is. AMD is extremely tight-lipped about that.

I understand this difference (not least because my brother and I discussed his purchase of a Kyro 2-based graphics card back in the day in quite some detail ). But I don’t think this is a problem for lower-end Macs, because macOS uses the same Metal APIs as iOS, and most of the software comes from iOS these days anyway. And I expect that higher-end Macs will retain discrete GPUs, so running high-end software at full speed doesn’t seem to be a problem either.
I think it is a problem that your optimization target shifts so completely that a program that runs well on a low-end model might be disastrous on a higher-end model. I am not aware of any platform that has successfully navigated a split like that.

But it would need to be tested and could encourage some companies to rely on legacy technology. Dumb question: is it hard to run an older version of macOS in a simulator? I have never had to, so I don't know.
There is no simulator for older MacOS, I think. It would have to be a VM. This was in violation of the license for a long time. It is now supposedly OK, but it isn't exactly easy to get going. If you own an older version of the OS - remember, everything before 10.9 was something you had to pay for, and Apple isn't selling those versions anymore - you can get it from the App Store, but it isn't straightforward at all.

Apple could solve this by offering this as a service - run Mac OS version 10.9 (or whatever) on a server in their datacenter, and access it over VNC - but they don't seem to get that this is a problem.

I was referring to the number of ports. I don’t mind not having ports if they are literally too large to fit the machine. But I mind taking away ports because of some misguided belief that “on an infinite time scale” a machine should have no ports at all.
I agree with that, but it really only affects one machine - the 12" MacBook. It has one port and there is no reason why it doesn't have two. The new MBA and the non-TB MBP only have a single TB controller, and that controller only supports two ports. I'm not sure about the mini, because someone said that it was four ports from a single controller, but at least for the first-gen TB3 controllers, that wasn't possible. Perhaps Intel added that in the second-gen controllers (in which case they should add two more ports to the other models).

If I owned a MacBook, I'd want at least two, preferably three ports (two USB-C and one USB-A). If I owned a MacBook Air, I'd want a USB-A port in addition. The current Mac mini is quite alright in terms of ports, although I wish they offered 10 Gbit Ethernet by default.
Put the extra ports on the charger!

I know that I go on about this, but I have that and it's amazing. It solves all my port problems.

Fortunately, I think this is a trend that is reversing: Apple's bad experience with glued-in batteries and keyboards has shown them that this doesn't just mean very high expenses for customers (or for Apple if the machines are under warranty, AppleCare or part of a recall program), but also reduces recyclability. And if they want to extend the life of their machines, customers should be able to ask a certified Apple service technician to replace their batteries, for example.
I hope you're right, but I'm not sure. Note that the overall repair frequency of the 2016 MBP was LOWER than the 2015, even with the keyboard issues. Without those, it would be fantastically better. Apple's more closed designs do appear to be improving reliability for them. This is a single data point and it could be wrong - the 2012-2015 models could be the biggest lemon in the history of the MBP line, and the 2016 a reversion to the mean - but it is an indication.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Reply With Quote
Nov 15, 2018, 12:19 PM
 
Question—is there any benefit to 10Gbit Ethernet if your computer can’t be plugged into your network?

My computer has to rely on Wifi because it’s located on the opposite side of the house from the wireless router, and since it’s a finished house with many plaster walls running Ethernet cables was just too much work. So I got a nice router instead and Wifi has been fine.

There's no reason related to locally attached devices to add 10Gbit, is there? I'm going to get the i5.
Mankind's only chance is to harness the power of stupid.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Nov 15, 2018, 01:06 PM
 
Originally Posted by P View Post
Note that the overall repair frequency of the 2016 MBP was LOWER than the 2015, even with the keyboard issues. Without those, it would be fantastically better.
Do you have a source for that data?

Because it doesn't jibe with what I've heard from within service.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Reply With Quote
Nov 15, 2018, 03:13 PM
 
Originally Posted by ShortcutToMoncton View Post
Question—is there any benefit to 10Gbit Ethernet if your computer can’t be plugged into your network?

My computer has to rely on Wifi because it’s located on the opposite side of the house from the wireless router, and since it’s a finished house with many plaster walls running Ethernet cables was just too much work. So I got a nice router instead and Wifi has been fine.

There's no reason related to locally attached devices to add 10Gbit, is there? I'm going to get the i5.
My (perhaps incorrect) thought process is if it becomes an issue, I can Tbolt dongle a 10Gbit port.

One doesn’t exist now, but I assume it will by the time it’s a problem. Spend the hundie on RAM.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 15, 2018, 04:57 PM
 
Originally Posted by Spheric Harlot View Post
Do you have a source for that data?

Because it doesn't jibe with what I've heard from within service.
https://appleinsider.com/articles/18...s-older-models

Look at the raw data. The 2016 had 1402 warranty events in the first year. The 2017 had 1161 in the first 11 months. Meanwhile, the 2015 had 1904 in its first year, and the 2014 had 2120. According to Apple's analyst calls, the 2016 MBP sold very well after its release, so the number of sold MBPs was likely higher in 2016 than in 2014 or 2015 - yet the number of repair events was roughly a quarter to a third lower.
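A quick sanity check on those numbers (the event counts are taken straight from the article; there is no sales weighting, since we don't have per-year unit figures):

# First-year warranty events per MBP model year, as reported in the article above.
events = {"2014": 2120, "2015": 1904, "2016": 1402, "2017": 1161}  # 2017: first 11 months

baseline = events["2016"]
for year in ("2014", "2015"):
    drop = 1 - baseline / events[year]
    print(f"2016 vs {year}: {drop:.0%} fewer warranty events")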
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Reply With Quote
Nov 15, 2018, 07:14 PM
 
Thanks!
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 15, 2018, 08:57 PM
 
Originally Posted by P View Post
I don't think that the iPhone chip will be compromised by being shared with something else, so we have 8 different masks that we now need to cover. Remember that a mask costs $300 million or so. Let's say that we have one for the A12X and the Y-series. That chip would be a lot like the current A12X. Can we stretch it to the U-series, if we add some I/O? This makes the iPad more expensive, but maybe they can eat that, and keep the cheap iPad on the A12. It also makes the 13" Pro something very similar to the weaker models, but maybe we can do it - let's be generous here. So we have H-series, desktop, and all the Xeons up to 28 cores to cover. Can we do that with one chip? I'm having a real hard time seeing that, squeezing in something designed for a server into a mini or a 15" MBP.
I can see a way with adding only one more variant, and let me call it the A12Y as you did.

iPhone, cheap iPad: A12
iPad Pro, MacBook, MacBook Air: A12X (yes, you lose Thunderbolt, but USB-C is fast enough for consumers and allows you to drive an external display, for example). You could differentiate the MacBook from the Air by allowing the A12X to run at a higher TDP in the latter model.

Now on to the A12Y. Let me assume that Apple adopts a technology similar to AMD where the A12Y features an Apple Infinite Bus that can be used to connect it to an IO controller. Further, let me assume that Apple designs, say, two variants of this IO controller and fabs those in a cheaper process node.

For the sake of concreteness, allow me to make up some numbers on how many A12Ys connect to these IO controllers. Assume the smaller variant can take up to 3 A12Ys whereas the bigger one up to 8. The bigger one can also support more PCIe lanes, ECC RAM and other goodies.

13" MacBook Pro: 2xA12Ys + small IO
15" MacBook Pro: up to 3xA12Y + small IO + discrete GPU (Apple could offer a cheaper 2xA12Y entry-level model as well; alternatively, you could allow for a larger TDP and bump up the clock speeds a little)

iMacs: 3xA12Y + small IO + external GPU
iMac Pro: up to 6xA12Y + big IO + big external GPU
Mac Pro: up to 8xA12Y + big IO + big external GPU(s) + modularity

That seems to cover the whole spectrum, and Apple could amortize the investment for the A12Y across the MacBook Pro, iMac, iMac Pro and Mac Pro. The small IO controller would have a rather large volume as well, and because you fab it in a larger process node (say, 10 nm), it would be cheaper as well. Only the big IO controller would be limited to lower-volume models, but given that you can manufacture it in a cheaper process node, you can still make that more economical. In your opinion, why wouldn't that work?
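To put some numbers behind that amortization argument - the $300 million mask figure is the estimate from earlier in the thread, and the per-model unit volumes are pure guesses for illustration:

# Hypothetical line-up from above: every model shares the same A12Y die, so the
# (estimated) $300M mask cost is spread across all of them. The IO controllers
# sit on an older, cheaper node and are not counted here. Unit volumes are
# made-up ballpark figures, not Apple's numbers.
MASK_COST = 300_000_000

lineup = {                      # model: (A12Y dies per machine, est. units/year)
    '13" MacBook Pro': (2, 2_000_000),
    '15" MacBook Pro': (3, 1_200_000),
    "iMac":            (3, 1_500_000),
    "iMac Pro":        (6,   150_000),
    "Mac Pro":         (8,   100_000),
}

total_machines = sum(units for _, units in lineup.values())
total_dies = sum(dies * units for dies, units in lineup.values())

print(f"machines sharing the mask: {total_machines:,}")
print(f"A12Y dies fabbed per year: {total_dies:,}")
print(f"mask cost per machine:     ${MASK_COST / total_machines:,.2f}")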
Originally Posted by P View Post
How is the iPhone helped by switching the Mac to ARM?
I was arguing that it is most beneficial to the iPads, which (software- and hardware-wise) are in turn derived from iPhones. Apple believes that more and more computers that people use adopt a touch-centric UI paradigm, which currently means iOS. And hence, eventually Apple will want to release larger-screen iPad-like devices that are also more powerful.
Originally Posted by P View Post
It isn't, actually. Intel has used exclusively monolithic Xeonswith integrated memory controllers for some time, and only changed course very recently and announced another MCM to try to steal some of AMD's thunder - but even that one has an integrated memory controller for each die (ie, NUMA again). AMD moving back to having an external memory controller is a bold move - let's see how it works out for them, ie what the main memory latency is. AMD is extremely tight-lipped about that.
How do you know it isn't? To me it seems more like a technical solution with obvious trade-offs (e. g. inconsistent cache latency and transfer rates on the negative side; being able to easily offer fewer core variants without having to change much on the manufacturing side on the other is a plus, as is being able to fab the IO controller in a cheaper process). AMD seems to think the trade-offs are worth it, at least for the workloads they target. I am not claiming that AMD has a winning strategy, but given that Intel also combines CPU dies (in admittedly different ways), the overarching theme is the same.

Given these trends, I think it is reasonable to speculate that Apple may use such a solution as well. And I wouldn't dismiss it out of hand.
Originally Posted by P View Post
I think it is a problem that your optimization target shifts that completely, that a program that will run well on a low-end model might be disastrous on a higher-end model. I am not aware of any platform that has successfully navigated a split like that.
When we are talking about integrated graphics, which the majority of Macs sold have (most Macs sold today are notebooks, and while I don't have exact numbers, most are not the 15 inch MacBook Pro), we have to take into account the performance boost an Apple-designed GPU would bring and whether, compared to what Intel offers, you wouldn't end up with a huge performance gain in most situations.
Originally Posted by P View Post
I agree with that, but it really only affects one machine - the 12" Macbook. It has one port and there is no reason why it doesn't have two. For the new MBA and the non-TB MBP, they only have a single TB controller, and that controller only supports two ports.
Correction: it only supports two Thunderbolt ports. There is no limitation on including e. g. USB-A ports or more USB-C ports. Two ports, especially if one is lost to the power cable, is not enough. If I had a machine with two USB ports on my desk, and I plugged power into one TB port and the display into the other, I would have no ports left (unless the display sports a built-in USB hub).

My current machine (a 2015 13" Pro) has power, two USB-A ports, two Thunderbolt ports, one HDMI port, an SD card reader and a headphone jack. Discounting the SD card reader and the headphone jack, I have five ports, and I regularly use three of them. Hence, I'd have problems with the models that only have two ports, the new Air and the non-TouchBar 13" Pro.
Originally Posted by P View Post
Put the extra ports on the charger!
That, too! But I want extra ports for e. g. USB sticks, SD card readers, external displays and the like that I can use simultaneously, and sometimes even while charging!
Originally Posted by P View Post
I know that I go on about this, but I have that and it's amazing. It solves all my port problems.
I got a 60 W Anker USB charger with five ports, 2 high-power ports, and the thing is amazing. It is in our bedroom and my wife and I can now charge all of our doo-dads while sleeping without cable salad.

For travel, I would really love Apple to reward their best customers by giving us two, perhaps three USB-C ports on their chargers. This way, we can charge our Macs, iPhones and perhaps another device simultaneously.
Originally Posted by P View Post
I hope you're right, but I'm not sure. Note that the overall repair frequency of the 2016 MBP was LOWER than the 2015, even with the keyboard issues. Without those, it would be fantastically better. Apple's more closed designs do appear to be improving reliability for them. This is a single data point and it could be wrong - the 2012-2015 models could be the biggest lemon in the history of the MBP line, and the 2016 a reversion to the mean - but it is an indication.
Let's see. But usually Apple is serious when they make such big announcements as they did in the last few keynotes: they want better recyclability, longevity and utility of their products. Of course only time will tell how well they keep their promises. At the very least, I think those are the right priorities for this age.
I don't suffer from insanity, I enjoy every minute of it.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Reply With Quote
Nov 15, 2018, 11:23 PM
 
Originally Posted by subego View Post
My (perhaps incorrect) thought process is if it becomes an issue, I can Tbolt dongle a 10Gbit port.

One doesn’t exist now, but I assume it will by the time it’s a problem. Spend the hundie on RAM.
Exactly mine as well. But I’m probably gonna get the biggest standard SSD I can instead, and maybe wait a year or so to step up to 32 gigs of RAM.
Mankind's only chance is to harness the power of stupid.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 16, 2018, 12:12 AM
 
Originally Posted by P View Post
https://appleinsider.com/articles/18...s-older-models

Look at the raw data. The 2016 had 1402 warranty events in the first year. The 2017 had 1161 in the first 11 months. Meanwhile, the 2015 had 1904 in its first year, and the 2014 had 2120. According to Apple's analyst calls, the 2016 MBP sold very well after its release, so the number of sold MBPs was likely higher in 2016 than in 2014 or 2015 - yet the number of repair events was roughly a quarter to a third lower.
Thanks for that data point, this is indeed quite helpful, because we can check to some degree whether it agrees with the perceived quality of Apple products of late.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 16, 2018, 07:58 AM
 
Originally Posted by OreoCookie View Post
Now on to the A12Y. Let me assume that Apple adopts a technology similar to AMD where the A12Y features an Apple Infinite Bus that can be used to connect it to an IO controller. Further, let me assume that Apple designs, say, two variants of this IO controller and fabs those in a cheaper process node.

For the sake of concreteness, allow me to make up some numbers on how many A12Ys connect to these IO controllers. Assume the smaller variant can take up to 3 A12Ys whereas the bigger one up to 8. The bigger one can also support more PCIe lanes, ECC RAM and other goodies.

13" MacBook Pro: 2xA12Ys + small IO
15" MacBook Pro: up to 3xA12Y + small IO + discrete GPU (Apple could offer a cheaper 2xA12Y entry-level model as well; alternatively, you could allow for a larger TDP and bump up the clock speeds a little)

iMacs: 3xA12Y + small IO + external GPU
iMac Pro: up to 6xA12Y + big IO + big external GPU
Mac Pro: up to 8xA12Y + big IO + big external GPU(s) + modularity

That seems to cover the whole spectrum, and Apple could amortize the investment for the A12Y across the MacBook Pro, iMac, iMac Pro and Mac Pro. The small IO controller would have a rather large volume as well, and because you fab it in a larger process node (say, 10 nm), it would be cheaper as well. Only the big IO controller would be limited to lower-volume models, but given that you can manufacture it in a cheaper process node, you can still make that more economical. In your opinion, why wouldn't that work?
It would work, and it is what I was thinking of when I brought up the AMD Epyc solution. I can even make it better - make the I/O chip on 14nm (or GF 12nm) to be even cheaper, and include the southbridge functions in it, including TB3. There are ways to make this work, but it has one major weakness in that the main memory latency of all the A12Y products is higher than for the other CPUs. The key here is HOW much higher that latency is. AMD isn't saying - they are explicitly avoiding the question - so my worry is that it is quite a big deal.

Note that main memory latency has been a focus of Apple's in its improvements of the Ax series of chips. Almost every generation is a little better than the previous one, and it is a big part of the generation-to-generation improvements. It was also AMD's silver bullet when they last gained massive market share with Opteron, and Intel could only counter by doing the same thing. Backing off on that improvement will come with very noticeable costs, and we can't say how large they will be until Rome launches.
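For a rough feel of how much that could cost, here is the standard average-memory-access-time arithmetic with completely made-up (but not crazy) numbers:

# Average memory access time (AMAT) with made-up but plausible numbers, to show
# how much a chiplet-style detour through an external I/O die could cost.
# AMAT = LLC hit time + LLC miss rate * DRAM latency (everything in nanoseconds).
LLC_HIT_NS = 10.0
DRAM_NS_MONOLITHIC = 80.0      # integrated memory controller (assumed)
DRAM_NS_CHIPLET = 110.0        # plus a hop over an external I/O die (assumed)

for miss_rate in (0.02, 0.05, 0.10):
    amat_mono = LLC_HIT_NS + miss_rate * DRAM_NS_MONOLITHIC
    amat_chip = LLC_HIT_NS + miss_rate * DRAM_NS_CHIPLET
    print(f"LLC miss rate {miss_rate:>4.0%}: "
          f"{amat_mono:.1f} ns vs {amat_chip:.1f} ns "
          f"({amat_chip / amat_mono - 1:+.0%})")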

(There is also one minor complaint in that the chips for both MacBook Pros will be physically much larger than the current ones, which eats into battery capacity. To some extent you can compensate for that if you integrate the southbridge like I suggested above, but it is a concern. The 13" uses the U-series chips, which already have the southbridge on the package, so this chip will be bigger than those.)

I was arguing that it is most beneficial to the iPads, which (software- and hardware-wise) are in turn derived from iPhones. Apple believes that more and more computers that people use adopt a touch-centric UI paradigm, which currently means iOS. And hence, eventually Apple will want to release larger-screen iPad-like devices that are also more powerful.
iPads might benefit, but the iPad business is currently a good chunk smaller than the Mac business. The market for a large, artist-focused iPad will be even smaller. This is the tail wagging the dog.

How do you know it isn't?
Current Xeons use a mesh with a couple of squares for memory controller and the others for cores. This is the very epitome of using a big chip to get the main memory latency as low as it can be for all cores.

To me it seems more like a technical solution with obvious trade-offs (e. g. inconsistent cache latency and transfer rates on the negative side; being able to easily offer fewer core variants without having to change much on the manufacturing side on the other is a plus, as is being able to fab the IO controller in a cheaper process). AMD seems to think the trade-offs are worth it, at least for the workloads they target. I am not claiming that AMD has a winning strategy, but given that Intel also combines CPU dies (in admittedly different ways), the overarching theme is the same.
But the tradeoff might very well be worth it for the servers, which is where AMD is doing this, while not being worth it on the desktop. AMD has not said that the next Ryzen chips will use the same setup and it has in fact said that Navi, the next 7nm GPU, will not work like this. Apple's server presence is essentially zero.

Given these trends, I think it is reasonable to speculate that Apple may use such a solution as well. And I wouldn't dismiss it out of hand.
Speculation is always fun, but the main memory latency is a problem that needs to be solved. Remember that the performance improvement from going to Arm is not that large, so if you lose some of it with a slower memory solution, you risk ending up with lots of weak cores - exactly the flock of chickens that Android manufacturers have tried, and failed, to counter the iPhone with.

When we are talking about integrated graphics, which the majority of Macs sold have (most Macs sold today are notebooks, and while I don't have exact numbers, most are not the 15 inch MacBook Pro), we have to take into account the performance boost an Apple-designed GPU would bring and whether, compared to what Intel offers, you wouldn't end up with a huge performance gain in most situations.
Do we know that the Apple-designed GPU will beat Intel's? Because in the only tests I can google up, it is almost exactly even with my old Iris 550 (2016 13" MBP) on a clearly mobile-focused synthetic benchmark. I can't find anything remotely resembling a real-world bench, but if the synthetic is true, Apple is barely even with a three year old GPU that is two process nodes behind. Add in that Intel has recently hired Raja Koduri and now has a roadmap for discrete GPUs, and I'm far from certain that Apple will win on integrated graphics.

Correction: it only supports two Thunderbolt ports. There is no limitation on including e. g. USB-A ports or more USB-C ports. Two ports, especially if one is lost to the power cable, is not enough. If I had a machine with two USB ports on my desk, and I plugged power into one TB port and the display into the other, I would have no ports left (unless the display sports a built-in USB hub).
Yes, but you can't fit a USB-A port on the current MBP (because...yeah, you know). The alternative then would be multiple USB-C ports with different characteristics, something that Apple would never do.

My current machine (a 2015 13" Pro) has power, two USB-A ports, two Thunderbolt ports, one HDMI port, an SD card reader and a headphone jack. Discounting the SD card reader and the headphone jack, I have five ports, and I regularly use three of them. Hence, I'd have problems with the models that only have two ports, the new Air and the non-TouchBar 13" Pro.
I know. I have four ports, and it is fine for me - two wouldn't be enough. I'm just saying that the restriction is there for a reason.

That, too! But I want extra ports for e. g. USB sticks, SD card readers, external displays and the like that I can use simultaneously, and sometimes even while charging!
That is what they are. If I plug in my charger and put a USB stick on the charger's USB-A port, it shows up on my desktop, and works at USB 3.0 speeds.

I got a 60 W Anker USB charger with five ports, 2 high-power ports, and the thing is amazing. It is in our bedroom and my wife and I can now charge all of our doo-dads while sleeping without cable salad.
I buy the three-port IKEA chargers and put them everywhere. Dirt cheap, but no USB-C ports yet.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
mindwaves  (op)
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Reply With Quote
Nov 17, 2018, 05:24 AM
 
Just got the 2018 MBA with 16GB RAM and a 512GB SSD. Looking good so far. The keyboard does feel different compared to the 2nd gen butterfly in my 2017 MBA. Actually, the mechanism is exactly the same; they just added the membrane.
     
mindwaves  (op)
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Reply With Quote
Nov 17, 2018, 08:47 AM
 
Originally Posted by mindwaves View Post
Just got the 2018 MBA with 16GB RAM and a 512GB SSD. Looking good so far. The keyboard does feel different compared to the 2nd gen butterfly in my 2017 MBA. Actually, the mechanism is exactly the same; they just added the membrane.
Although Apple will never do this (sadly), they should put a USB charging port (A or C) on their power adapter. There is at least one company that makes an adapter that attaches to the MB power brick, which I won't be buying now because I have two USB ports this time.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 17, 2018, 09:35 AM
 
Originally Posted by P View Post
There are ways to make this work, but it has one major weakness in that the main memory latency of all the A12Y products is higher than for the other CPUs. The key here is HOW much higher that latency is. AMD isn't saying - they are explicitly avoiding the question - so my worry is that it is quite a big deal.
Yes, it’ll be interesting to see what the numbers will look like for AMD’s new Rome processors. Who knows what Apple will do. The only thing I am sure of is that if they switch to ARM, they at least have a chip on their roadmap that slots into the iMac Pro.
Originally Posted by P View Post
Note that main memory latency has been a focus of Apple's in its improvements of the Ax series of chips. Almost every generation is a little better than the previous one, and it is a big part of the generation-to-generation improvements. It was also AMD's silver bullet when they last gained massive market share with Opteron, and Intel could only counter by doing the same thing. Backing off on that improvement will come with very noticeable costs, and we can't say how large they will be until Rome launches.
Now that you mention it, that's a good point. Memory latency is particularly important because of the shared memory architecture iOS devices use. So perhaps it'll be less important on a device with a discrete GPU. But CPU performance will nevertheless suffer as well.
Originally Posted by P View Post
iPads might benefit, but the iPad business is currently a good chunk smaller than the Mac business. The market for a large, artist-focused iPad will be even smaller. This is the tail wagging the dog.
Currently, you are right, but I still think Apple thinks touch-based computers are the future for the vast majority of the people — and if it does, Apple should put its money where its mouth is. What is holding the iPad back at this point is software, not hardware.
Originally Posted by P View Post
Current Xeons use a mesh with a couple of squares for memory controller and the others for cores. This is the very epitome of using a big chip to get the main memory latency as low as it can be for all cores.
According to AMD’s performance projections (which have been relatively accurate lately), Rome will be more than competitive. However, this is for server workloads. Even for workstation workloads, returns are diminishing at a core count that is much lower than what the highest-end parts of Intel and AMD have.
Originally Posted by P View Post
Speculation is always fun, but the main memory latency is a problem that needs to be solved. Remember that the performance improvement from going to Arm is not that large, so if you lose some of it with a slower memory solution, you risk ending up with lots of weak cores - exactly the flock of chickens that Android manufacturers have tried, and failed, to counter the iPhone with.
I don't think memory latency is the only problem that needs solving; you very correctly pointed to another, namely that making chips at smaller and smaller structure sizes becomes increasingly expensive. A third is that we are running out of process shrinks: as far as I have heard, people expect that you can get to 5 nm, but after that it is not clear whether there will ever be another shrink. So perhaps you have to fab some parts at larger, older and cheaper process nodes to make a financially viable product.
Originally Posted by P View Post
Do we know that the Apple-designed GPU will beat Intel's? Because in the only tests I can google up, it is almost exactly even with my old Iris 550 (2016 13" MBP) on a clearly mobile-focused synthetic benchmark. I can't find anything remotely resembling a real-world bench, but if the synthetic is true, Apple is barely even with a three year old GPU that is two process nodes behind.
To be honest, I don’t know of any good benchmarks here. Before Anandtech was able to run parts of the SPEC CPU suite on the A12, all we had was Geekbench. We are missing the equivalent in the GPU space.
Originally Posted by P View Post
Add in that Intel has recently hired Raja Koduri and now has a roadmap for discrete GPUs, and I'm far from certain that Apple will win on integrated graphics.
Intel has taken several stabs at building a discrete GPU, and while I would like to see another serious contender in the GPU space, I am pessimistic. I have no doubt they have the technical acumen, but you have things like patent portfolios and a go-to-market strategy to worry about. The PC gaming market isn't that big either for a company of Intel's size. The compute market is exploding, so perhaps this is what Intel could be after, but they'd be way behind.

Out of curiosity, do you see any other options for Apple here? If Apple tasked you with the transition, what would be your answer for Apple’s higher-end machines?
I don't suffer from insanity, I enjoy every minute of it.
     
mindwaves  (op)
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Reply With Quote
Nov 19, 2018, 05:20 AM
 
Just wanted to say that I'm happy that the new MBA has a 30W power adapter, which is the same or about the same as the old MBA and the 12" MB (non-Air).

People always say that the MBA is 2.75 lbs and the 13" MBP is 3 lbs, and that for 0.25 lbs more you can get a lot more power. That is only partially true, because that MBP comes with a 60W power adapter which is significantly heavier and bigger than the one for my retina MBA, and that can be a big deal in some laptop bags and backpacks.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 19, 2018, 09:10 AM
 
Originally Posted by OreoCookie View Post
Currently, you are right, but I still think Apple thinks touch-based computers are the future for the vast majority of the people — and if it does, Apple should put its money where its mouth is. What is holding the iPad back at this point is software, not hardware.
I agree, completely, and this is where Apple needs to work right now - make it easy to port Mac apps to iOS. This has a few implications for the OS - mainly we need to be able to access files - but that may be coming. Transition to ARM by eating the Mac from below.

Originally Posted by OreoCookie View Post
According to AMD’s performance projections (which have been relatively accurate lately), Rome will be more than competitive. However, this is for server workloads. Even for workstation workloads, returns are diminishing at a core count that is much lower than what the highest-end parts of Intel and AMD have.
I think Rome will be competitive because it will have lots of cores, lots of bandwidth, and good communication between each group of four cores. In a case where few tasks use more than 8 threads, it could very well be competitive. For anything where you rent computing power as a resource, I think it could be great. This is low-end disruption at its finest.

Originally Posted by OreoCookie View Post
Intel has taken several stabs at building a discrete GPU, and while I would like to see another serious contender in the GPU space, I am pessimistic. I have no doubt they have the technical acumen, but you also have things like patent portfolios and a go-to-market strategy to worry about. For a company of Intel’s size, the PC gaming market isn’t that big either. The compute market is exploding, so perhaps this is what Intel could be after, but they’d be way behind.
The Intel 740 (their last discrete GPU) was 20 years ago. Larrabee was an attempt to do GPU by using x86, something they seem to have given up on. This time they have decided to make a GPU like a GPU, and hired Raja Koduri from AMD to make it happen. I think they're allowed one try per decade, and this one looks a lot better than Larrabee.

Originally Posted by OreoCookie View Post
Out of curiosity, do you see any other options for Apple here? If Apple tasked you with the transition, what would be your answer for Apple’s higher-end machines?
Stay on x86 as the main CPU for the desktops for now, but with a smallish ARM CPU to make sure that ARM applications can still run. Work on a coherent low-latency bus, and evaluate a different cache hierarchy, to be able to support something like the AMD Epyc design if the day comes to kick x86 finally. If it doesn't, that coherent bus will be excellent for those co-processors that Apple seems to love so much. If it does, put those little chips on one big interposer (which isn't what AMD is doing) to minimize latency. I don't think it needs to have crazy core counts - 16 cores is a lot - but it needs to have a good average latency, so if all else fails, NUMA with one big block of low-latency memory on the die itself.

But ideally, this isn't what I would do. What I would rather do is this: the difference between MacOS and iOS should be whether or not there is a touchscreen, not which architecture the main CPU uses. There is nothing wrong with having MacOS support more than one arch for more than two generations. iOS needs more pro features, and it needs real file system access, no matter what CPU it uses.

The question is what makes a low-end Mac laptop sell in a day when the iPad is both faster and cheaper. I think that part of it is inertia - that people think the iPad is less capable, less flexible, and slower. Two of those are no longer true, so it is time to fix the one that is - flexibility. What do common users need to do that they can't on an iPad? Here is where I think Apple has misapplied the 80-20 rule. They have looked at what the average user needs to do, and decided to support what they do 80% of the time. They need to support what 80% of users need to do 100% of the time.

(This, btw, is also why MS Word won the word processing war. MS understood the rule, and its competitors didn't. MS could add features that a tiny part of its user base would use, because that made their application the only one that did 100% of what those users wanted. One example of that is the Equation Editor. If you're going to write a paper with lots of mathematical symbols, you're not going to do that in MS Word with Equation Editor. You would go insane clicking at those tiny boxes, so everyone who does that decides to learn LaTeX. MS knows that - Equation Editor isn't for them. It is for the regular word processing users that need the ability to write that equation about twice a year. If you have the tool, MS Word becomes your tool 100% of the time, which is why it won.)
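To put the 80-20 point in numbers, here is a toy sketch (Python, with invented task counts and a made-up long tail, purely to illustrate the difference between covering 80% of tasks and fully covering 80% of users):

[CODE]
# Toy illustration of "support what 80% of users do 100% of the time"
# versus "support what the average user does 80% of the time".
# All numbers are invented just to make the distinction concrete.
import random

random.seed(1)
COMMON = {f"common_{i}" for i in range(8)}       # tasks everyone does
RARE_POOL = [f"rare_{i}" for i in range(50)]     # long tail of odd tasks

# Every user needs the common tasks plus two random rare ones.
users = [COMMON | set(random.sample(RARE_POOL, 2)) for _ in range(1000)]

def coverage(supported):
    tasks = sum(len(u & supported) for u in users) / sum(len(u) for u in users)
    happy_users = sum(u <= supported for u in users) / len(users)
    return tasks, happy_users

for name, supported in [("common tasks only", COMMON),
                        ("common + most of the long tail",
                         COMMON | set(RARE_POOL[:40]))]:
    tasks, happy = coverage(supported)
    print(f"{name}: {tasks:.0%} of tasks covered, "
          f"{happy:.0%} of users fully covered")
[/CODE]

Covering only the common tasks covers 80% of what people do but leaves essentially nobody able to switch completely; only when you chase the long tail does the fully-covered-users number move, and that is the number that decides whether someone can drop the other device.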

The other thing is what it looks like. A Mac laptop looks like a tool - an iPad with a flimsy keyboard looks like a toy. Make a real first-party keyboard - not a flimsy foldable thing, but a real solid one that you can put your iPad in and close it like a laptop. It should look like a small Macbook Air unless you look closely. Do everything you can to eat your own low-end Mac laptop market from below.

Integrate the App Stores, so you can sell one application for all platforms. I think they wanted to do this, which is why they made the Mac App Store so restrictive, but backed off because nobody stayed in the Mac App Store. Instead, they need to make the iOS App Store a little bit more forgiving - enable paid upgrades and let an app require the presence of a keyboard, stylus or heck a game controller to be sold (now they all have to work with touch exclusively). I think we all suspect that freer file system access is coming to the iPad anyway.

Sell a service to let me run Mac Apps on Apple's servers and show the results on my iPad screen, using what input devices I have connected. Give it a cost structure so that the base variant, for someone who just needs to run an app every now and then, is reasonably cheap (or even included with iCloud) while there are more powerful instances available. Let the AppleTV connect to this service as well.

Do all this, and you can eat the Mac laptop market from below. It won't kill it, but it will eliminate all but the real Pro usage. That one can use an x86 CPU with discrete graphics for now, because it doesn't need to have great battery life.

The desktops can just keep running x86. Apple still can't match Intel when it has 100W for the CPU and 300W for the GPU, and it doesn't need to.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 19, 2018, 08:16 PM
 
Originally Posted by P View Post
(This, btw, is also why MS Word won the word processing war. [...] One example of that is the Equation Editor. If you're going to write a paper with lots of mathematical symbols, you're not going to do that in MS Word with Equation Editor. You would go insane clicking at those tiny boxes, so everyone who does that decides to learn LaTeX. MS knows that - Equation Editor isn't for them. It is for the regular word processing users that need the ability to write that equation about twice a year. If you have the tool, MS Word becomes your tool 100% of the time, which is why it won.)
First of all, why did you have to bring up Word's equation editor?!? And unfortunately, you (and, until last weekend, I) underestimate how widespread this evil piece of software really is. I spent most of Friday and half of Saturday cursing at Word, because we had to edit a paper for one of the lower-tier Nature journals. The preferred format is indeed Microsoft Word. (Which I didn't know until Friday evening.)

I don't want to go into too much detail, but it has been years since I last saw a piece of software simply stop saving documents and randomly delete equations. That cost us quite a few man-hours of work.

Sorry, I just had to get that out there :argh:
Originally Posted by P View Post
I think Rome will be competitive because it will have lots of cores, lots of bandwidth, and good communication between each group of four cores. In a case where few tasks use more than 8 threads, it could very well be competitive. For anything where you rent computing power as a resource, I think it could be great. This is low-end disruption at its finest.
Yes, and AMD has been relatively honest when it comes to managing expectations around performance levels. So I am optimistic here, too.
Originally Posted by P View Post
The Intel 740 (their last discrete GPU) was 20 years ago. Larrabee was an attempt to do GPU by using x86, something they seem to have given up on. This time they have decided to make a GPU like a GPU, and hired Raja Koduri from AMD to make it happen. I think they're allowed one try per decade, and this one looks a lot better than Larrabee.
Sure, they can throw money at it and hope for a Hail Mary, but at this stage I am not sure what the point of a discrete GPU is for Intel. I (as a complete non-expert) thought the idea of creating a GPU but forcing it to be based on x86 cores was completely bonkers, like a company that had drunk too much of its own x86 Kool-Aid. But the main problem is the go-to-market strategy. The high-margin parts of the market are dominated by nVidia, not just because of their cards, but also because of their SDK and platform-specific APIs. Intel's GPUs would have to be a lot better for people to give up on existing code bases in the high-performance computing market. Being on par doesn't mean anything IMHO.
Originally Posted by P View Post
Stay on x86 as the main CPU for the desktops for now, but with a smallish ARM CPU to make sure that ARM applications can still run. Work on a coherent low-latency bus, and evaluate a different cache hierarchy, to be able to support something like the AMD Epyc design if the day comes to kick x86 finally.
But then all you are saying is that it would take Apple a tad longer to make a larger processor. If it started switching in 2020 and the transition lasted two years, that'd give Apple at least four, more realistically five or six years (as the decision to switch would have been made a year or two ago). Given the speed at which Apple took its iPhone SoCs from embedded-system levels of performance to legitimately standing toe-to-toe with most laptops, don't you think the better option is to simply transition the big systems last instead of having to create and support hybrid-architecture systems?
Originally Posted by P View Post
The desktops can just keep running x86. Apple still can't match Intel when it has 100W for the CPU and 300W for the GPU, and it doesn't need to.
Indefinitely? In the preceding paragraphs you seem to argue that Apple still needs more time to develop the necessary technologies to make an ARM-based chip powerful enough to compete with Xeons and AMD's workstation processors.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 19, 2018, 09:30 PM
 
@P
PS I think there is also the question of how many cores even a Mac Pro needs. My Mac Pro has 8, and that is plenty for what I do. If you were to spec a “mainstream” workstation CPU, I doubt you'd go higher than 32 cores, and even that already feels way out on the tail end. So would 16-24-32 cores look like a reasonable offering for a Mac Pro? What about in 5 years?
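To put a rough number on those diminishing returns, here is a minimal Amdahl's-law sketch; the 95% parallel fraction is an assumption picked for illustration, not a measurement of any real workload:

[CODE]
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that parallelizes and n is the core count. p = 0.95 is assumed.
def speedup(cores, parallel_fraction=0.95):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (8, 16, 24, 32, 64):
    print(f"{cores:3d} cores -> {speedup(cores):4.1f}x speedup")
[/CODE]

Under that assumption, doubling from 32 to 64 cores adds less than 25% more speed, which is why 16-32 cores already looks like the tail end for a workstation part.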
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 20, 2018, 06:05 AM
 
To summarize it a little better, perhaps - I don't think Apple should transition the Mac away from x86. I think they should improve the iPad line to do more of what the low-end Macbooks do, through all of these things I wrote, and simply stop making the 5W and 7W Macbooks, leaving their spots to be taken by a more powerful iPad. But since you asked for how I would manage the transition, my answer is that I would transition only the laptops and keep the desktops until and unless Apple has figured out how to make a good desktop chip at a cost they can live with.

The thing I've been thinking of as an idea for solving the main memory latency problem is eDRAM as a big cache. If you put in a big L2, ditch the L3 and put a big chunk of eDRAM into the I/O die (as a memory-side cache, like Intel does for the Skylake and later CPUs with Iris Plus graphics), that can hide some of the main memory latency. It will also give the GPU (in case of an iGPU, which we will probably have for the Pro laptops at least, and likely many iMacs as well) some high bandwidth memory for its framebuffer, just like Iris Plus does. This means that the connections between the CPU chiplets and the I/O die need to be high bandwidth, but that is probably easier to solve than getting fantastically low latency.
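As a back-of-the-envelope illustration of how a memory-side cache hides latency (Python, with made-up latencies and hit rate that don't describe any actual part):

[CODE]
# Average latency of last-level-cache misses, with and without a
# memory-side eDRAM cache in the I/O die. All numbers are invented.
DRAM_NS = 100.0        # assumed latency to main memory
EDRAM_NS = 40.0        # assumed latency to the eDRAM in the I/O die
EDRAM_HIT_RATE = 0.6   # assumed fraction of misses the eDRAM catches

without_edram = DRAM_NS
with_edram = EDRAM_HIT_RATE * EDRAM_NS + (1.0 - EDRAM_HIT_RATE) * DRAM_NS

print(f"without eDRAM: {without_edram:.0f} ns average")
print(f"with eDRAM:    {with_edram:.0f} ns average")
[/CODE]

Even a modest hit rate pulls the average way down, and because it is a memory-side cache, software never has to manage it.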

I think that a Mac Pro with 16 cores will probably be enough. You have to consider that Apple is still missing SMT, so they probably need more cores than Intel, but 16 is a lot.

As for the GPU... the gaming market is finally, finally turning on nVidia. The overpriced 20x0 cards have really riled people up, and the constant price gouging and refusal to discount the 10x0 series have now driven people to look at anything else. The fact that some of the 20x0 cards are apparently burning up at the moment (literally - as in, open flame inside the case) with only limited acknowledgement from nVidia isn't helping either. Since many of them are so stuck in their fanboi-ism that they will never consider AMD, a lot of them are hoping for an Intel return. The problem for compute remains CUDA, but Intel has more than a little skill at writing tools and compilers, and Red Hat (i.e., IBM soon) is working on replacing CUDA as well. Those customers are also more than a little pissed off at nVidia's pricing.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 20, 2018, 09:26 AM
 
Speaking of nVidia and price gouging: have a look at what the compute cards cost; it'll make you cry, or laugh if it isn't your money. I thought my colleague was joking and surely mistaken. He was not. Here in Japan, the higher-specced cards cost $10k (about ¥1,000,000). The Xeon box you plug them into is almost free by comparison.
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Reply With Quote
Nov 20, 2018, 08:06 PM
 
Good news on the price gouging front...

     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 20, 2018, 09:40 PM
 
Originally Posted by P View Post
But since you asked for how I would manage the transition, my answer is that I would transition only the laptops and keep the desktops until and unless Apple has figured out how to make a good desktop chip at a cost they can live with.
Given Apple's history and Intel's roadmap, how long do you think that'd take (from making the decision to switch to bringing a Mac Pro-class chip to market)? Four, five years? In terms of CPU cores, arguably Apple has already arrived at parity.
Originally Posted by P View Post
The thing I've been thinking of as an idea for solving the main memory latency problem is eDRAM as a big cache. If you put in a big L2, ditch the L3 and put a big chunk of eDRAM into the I/O die (as a memory-side cache, like Intel does for the Skylake and later CPUs with Iris Plus graphics), that can hide some of the main memory latency.
That's what the XBox used to do with its unified memory architecture, until they just merged a GPU memory architecture with a CPU and could do away with the eDRAM.

Earlier you said you were worried about the problems you encounter when you switch Macs to tile-based deferred rendering-based GPUs. I think there is a second, perhaps equally important issue, and that is whether or not you have a unified memory architecture. As the latest XBox shows, that doesn't have to be slow. But it requires a different set of optimizations and places different bandwidth requirements on the constituent parts of your machine (e. g. you no longer need to worry about shuffling data or textures from main memory to video RAM, but on the other hand you probably have less bandwidth and you need to share said bandwidth).
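A crude way to picture that bandwidth trade-off (Python, with invented bandwidth figures that don't correspond to any real machine):

[CODE]
# Toy model of split versus unified memory. In the split design, streaming
# assets to the GPU costs bandwidth in both pools; in the unified design
# there is no copy, but CPU and GPU contend for the same pool.
# All figures are invented for illustration.
CPU_POOL_GBS = 40.0     # assumed CPU DDR4 bandwidth (split design)
GPU_POOL_GBS = 200.0    # assumed dedicated VRAM bandwidth (split design)
UNIFIED_GBS = 180.0     # assumed shared-pool bandwidth (unified design)
STREAMING_GBS = 10.0    # assumed texture/asset upload traffic
GPU_DEMAND_GBS = 150.0  # assumed GPU bandwidth demand under load

# Split: the upload is read from the CPU pool and written into the GPU pool.
split_cpu_left = CPU_POOL_GBS - STREAMING_GBS
split_gpu_left = GPU_POOL_GBS - STREAMING_GBS - GPU_DEMAND_GBS

# Unified: no copy at all, but the GPU's demand comes out of the shared pool.
unified_cpu_left = UNIFIED_GBS - GPU_DEMAND_GBS

print(f"split:   CPU has {split_cpu_left:.0f} GB/s left, "
      f"GPU has {split_gpu_left:.0f} GB/s left")
print(f"unified: no copies, everything else shares "
      f"the remaining {unified_cpu_left:.0f} GB/s")
[/CODE]

With these made-up numbers the two come out roughly even: the unified pool saves the copies, but every GB/s the GPU takes is a GB/s the CPU and the co-processors lose.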

If the new ARM-based Macs were to place even more emphasis on co-processors, memory bandwidth and latency would play an even more important role here as well, since you need to feed all these beasts from a single trough.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 21, 2018, 02:32 PM
 
Originally Posted by OreoCookie View Post
Given Apple's history and Intel's roadmap, how long do you think that'd take (from making the decision to switch to bringing a Mac Pro-class chip to market)? Four, five years? In terms of CPU cores, arguably Apple has already arrived at parity.
That’s probably a good guess. It took AMD just over four years to make a new CPU architecture from scratch.

Originally Posted by OreoCookie View Post
That's what the XBox used to do with its unified memory architecture, until they just merged a GPU memory architecture with a CPU and could do away with the eDRAM.
It is exactly what Skylake does today. What the Xbox One did was to combine slow DDR3 RAM with a small chunk of high-bandwidth eSRAM to get a high total bandwidth, not mainly to cut latency. It wasn’t strictly a cache; it was something you had to address manually.

But you’re right that the Xbox One X ditched this setup and went all in on what the PS4 did, a high bandwidth/high latency GDDR5 setup for main memory. It did work out, I suppose.
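The cache-versus-scratchpad distinction, sketched as Python pseudocode; the helper names (esram_alloc, main_ram_alloc, copy_out, draw) are hypothetical placeholders, not any real console or driver API:

[CODE]
# Sketch of a transparent cache versus a manually addressed scratchpad.
# These functions are illustrative placeholders only.

def render_with_cache(size, draw, alloc):
    # Memory-side cache (eDRAM on Skylake parts with Iris Plus): one address
    # space; the hardware decides what stays in the fast memory.
    target = alloc(size)
    draw(target)
    return target

def render_with_scratchpad(size, draw, esram_alloc, main_ram_alloc, copy_out):
    # Scratchpad (eSRAM on the Xbox One): software explicitly places the hot
    # buffer in the small fast region and copies the result back out.
    fast_target = esram_alloc(size)      # must fit in the small fast region
    draw(fast_target)                    # render into fast memory
    final = main_ram_alloc(size)
    copy_out(src=fast_target, dst=final)
    return final
[/CODE]

With the cache, placement is the hardware's problem; with the scratchpad, deciding what lives in the small fast region is the developer's problem, which is why it wasn't strictly a cache.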

Originally Posted by OreoCookie View Post
Earlier you said you were worried about the problems you encounter when you switch Macs to tile-based deferred rendering-based GPUs. I think there is a second, perhaps equally important issue, and that is whether or not you have a unified memory architecture. As the latest XBox shows, that doesn't have to be slow. But it requires a different set of optimizations and places different bandwidth requirements on the constituent parts of your machine (e. g. you no longer need to worry about shuffling data or textures from main memory to video RAM, but on the other hand you probably have less bandwidth and you need to share said bandwidth).

If the new ARM-based Macs were to place even more emphasis on co-processors, memory bandwidth and latency would play an even more important role here as well, since you need to feed all these beasts from a single trough.
Macs support both memory architectures today, though - the integrated graphics use a shared memory setup, while discrete graphics use a split setup. For gaming, there are no real advantages to shared memory (except possibly that you can get away with less total memory in some situations).

One possible way around this is HBM. AMD uses this in their graphics, including in the newest Vega graphics in the MBP. I know that it is higher latency than plain DDR4 in current GPUs, but that is because it is configured that way. You should be able to improve that with some clever configuration.

(Sidenote: Vega 20 in the latest MBP is killing even gaming laptops in early reviews. This GPU matches a desktop 1050 Ti, the most efficient GPU of the Pascal generation, at less than half the power.)
( Last edited by P; Nov 21, 2018 at 05:18 PM. )
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 21, 2018, 02:36 PM
 
Originally Posted by subego View Post
Good news on the price gouging front...

I know. I picked up a Vega 56 with a good deal last week. First upgrade in almost five years.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
 