
Apple's Oct 30, 2018 Mac/iPad event (Page 3)
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 7, 2018, 05:54 PM
 
Originally Posted by subego View Post
Storage on the base model i5/i7 is already 256GB, so if that’s enough, Oreo’s got the best bang for the buck recommendation there.

32 GB of the appropriate RAM from Crucial is US$285 right now. To round out the earlier discussion of self-upgrades, I’ve heard opening the case needs a special Torx screwdriver, and may void your warranty.

To save a little more money, use a USB 3 enclosure. It’s slower than Tbolt, but not by a lot.
Just a quick clarification... I was only looking at the higher tier base model. I didn’t notice the lower tier had a 128GB option.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 08:40 PM
 
Originally Posted by ShortcutToMoncton View Post
One interesting thing is the heat issue—the i7 is well known to have overheating problems and mine got super hot very quickly on any intensive tasks (like playing a 24-bit ALAC/FLAC or H.265 file, for example)—there are a bunch of tricks people do like re-applying thermal paste inside, etc. (not an easy mod for the mini).
I don't think this is an accurate generalization: while true for some machines, I don't think this applies to most Macs.
Originally Posted by ShortcutToMoncton View Post
One of the early reviews noted that the new i7 ran the fans a fair bit under load, whereas the i3 was almost always quiet; and disabling the i7’s Turbo Boost seemed to mostly turn off that fan. Since the i5 also has Turbo Boost, I may wait and see if there are any heat/fan differences between the i5 and i7; it could be that the i5 has the same heat and fan noise concerns, and if so, I’d probably just go for the i7 at that point. (The i7 with 128 GB and the i5 with 256 GB are the same price up here.)
According to Marco Arment's review, the unit he had (which had the top-end CPU) was silent unless he pushed it really, really hard. I would definitely get one of the six-core CPUs.
Originally Posted by ShortcutToMoncton View Post
I’ve used original Thunderbolt drive enclosures since early 2013 (currently a 6-bay) and couldn’t love the standard more. (MacOS had all sorts of USB 3 drive/sleep issues at first which drove me bananas.) I was also thinking about adding more internal space but to be honest, given that this is a media centre/desktop setup for me and I’m not concerned about portability, an extra TB3 SSD enclosure with a decent 1TB drive is probably the smarter and far cheaper bet.
Once you connect more than one external drive, getting cheap single-enclosure disk drives can get messy and decreases reliability.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 09:01 PM
 
Originally Posted by P View Post
Because there is a lot of pain at the transition, the benefits at the end aren't all that enticing this time, I haven't seen a solution for how to replace PCIe, I dislike Apple's current move towards a more closed platform, and I actually like the possibility of being able to run Windows.
Apart from not being able to run Windows, what other pain points do you see forthcoming? I'm honestly asking, because the PowerPC-to-x86 transition was completely painless: I think only one piece of software broke, namely the software for my hardware screen-calibration tool, which would no longer work. Did you encounter more problems during the last CPU architecture transition?

And do you know what the situation with PCIe is? What would prevent Apple from building a PCIe complex into its Mac SoCs? Are there any licensing issues I am not aware of? Also, I think PCIe will be tied to Thunderbolt (most Macs are mobile Macs), so the utility of PCIe may be coupled to Apple being able to license Thunderbolt from Intel. Presumably licensing is an issue that can be solved with money.
Originally Posted by P View Post
But one at a time. There is pain at the transition because everyone has to port things again, and Apple will cut off backwards compatibility sooner than I am really comfortable with. The Mac market isn't that large, so I think that the main source of ported apps will be ported from iOS. That is an idea that scares me. I don't ever want to have to rely on ported iOS apps on the Mac.
I think you are conflating two things: iOS apps on the Mac is happening independently of the underlying CPU ISA, and I am worried about that. But since it is independent of the CPU architecture, I don't think this is saying something one way or the other.
Originally Posted by P View Post
The benefits at the end are mostly about power consumption. I don't really care that much.
I don't understand this point: one clear benefit is that the year-over-year improvements in terms of performance and performance-per-watt on the Apple side outpace what Intel is doing by probably a factor of 5-10, depending on your metric. Even if the slope levels off, there will be a growing performance and performance-per-watt advantage on the Apple side of the ledger.
Originally Posted by P View Post
That isn't the goal here. I'm sure it will be a little better, but it isn't the doubling of performance overnight that we had last time - and that means that the emulation stage will be even more painful.
What about GPU performance? That is a big factor where Intel's efforts are lackluster and the timing unreliable. This has been Apple's beef with Intel since forever. (I remember when Apple put in Nvidia 9400 chipsets/GPUs because it wasn't convinced that Intel's were good enough.) The other factor is other types of co-processors that are getting increasingly important for Apple, and Intel has nothing to offer here.

And lastly, it would give Apple a way to make better use of investments it is making anyway: the development of custom CPUs, GPUs and other co-processors.
Originally Posted by P View Post
PCIe, then. None of the iPads have it - in fact, they have nothing like it. There are no high-bandwidth ports out from the SoC at all. I don't think that Apple will replace it at all, because high-bandwidth ports take a lot of energy, and Apple doesn't want that.
PCIe is not necessary for an iPad, but the situation is different for at least some Macs. I think PCIe (especially in the form of external expansion slots) is on Apple's minds, and is crucially important for quite a few niche applications. A former colleague of mine uses it to significantly accelerate his numerical simulations.
Originally Posted by P View Post
As for closed... do you think that this new ARM-based Mac will have DIMM slots? I already said that I don't think we'll see Thunderbolt again. We already can't replace storage, and if you remove PCIe you kill Thunderbolt, which means no fast external storage. Connecting an external display? Sounds like something you'd need a "Pro" model for.
Why should there be versions with DIMM slots? If the new Mac mini and the iMac Pro are any indication, Apple has started listening to its customers again.
Originally Posted by P View Post
At the end of the day, this isn't a Mac on a new CPU ISA - this is an iPad under another name. I have an iPad, I probably use it more than my Mac because I bring it on every trip, but I want a Mac too.
I don't understand this point: the difference between a Mac and an iPad is the UI paradigm, so if an ARM-based Mac runs OS X, why should that be closer to an iPad than the predecessor that sported an Intel CPU? That strikes me as a bit weird, like some of the Apple fans who didn't like the transition away from PowerPC to Intel, fearing that Macs would become less Mac-like.
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 7, 2018, 09:01 PM
 
Originally Posted by subego View Post
Just a quick clarification... I was only looking at the higher tier base model. I didn’t notice the lower tier had a 128GB option.
Yup, that's why I mentioned it. Even if you think 128 GB is enough, those machines will be hard to sell afterwards. And running out of internal storage is just a pain.
I don't suffer from insanity, I enjoy every minute of it.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 7, 2018, 11:19 PM
 
Originally Posted by OreoCookie View Post
Yup. That's why I mentioned it. Even if you think 128 GB is enough, they'll be hard to sell afterwards. And running out of internal storage is just a pain.
I disagree. For laptops it’s a killer, but minis are overwhelmingly used as desktop machines with attached peripherals. Almost everyone I know with a mini has some external enclosure attached. I think the people who buy these things are content with a fast, bare-bones package that they can upgrade as required... basically, the antithesis of all that is Apple hahaha.

I for one stuck a 1 TB SSD in my 2012 mini, and ended up with 400 GB of (mostly) music and some locally stored pictures/videos, and 12 TB (!) of all my real media in the external enclosure. Honestly, I probably don’t need anything but the OS on the actual computer.

Originally Posted by OreoCookie View Post
I don't think this is an accurate generalization: while true for some machines, I don't think this applies to most Macs.
Well, I was specifically talking about the mini. Google “i7 mini heat” and you’ll get lots of talk about various mods to the 2012 models in particular.

Originally Posted by OreoCookie View Post
Once you connect more than one external drive, getting cheap single-enclosure disk drives can get messy and decreases reliability.
I’m not talking cheap—Apple’s charging a fortune for their internal SSD upgrade. I’m sure you could get a nice TB3 SSD enclosure like OWC’s Express 4M2 and a very nice Samsung 2TB SSD for about the same price as Apple’s 1TB upgrade, and then you have a super cool external SSD drive enclosure as well for future SSD expansion. I’m sure it will not be as blazingly fast, but is that really the biggest complaint these days? Or hell, get one of their Thunderblades for around the same price and you can take your HD with you anywhere, if you’re into that....
Mankind's only chance is to harness the power of stupid.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 7, 2018, 11:46 PM
 
Question:

What’s making you think Mini vs. a dedicated file server?
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 12:10 AM
 
Originally Posted by ShortcutToMoncton View Post
I disagree. For laptops it’s a killer, but minis are overwhelmingly used as desktop machines with attached peripherals. Almost everyone I know with a mini has some external enclosure attached. I think the people who buy these things are content with a fast, bare ones package that they can upgrade as required....basically, the antithesis of all that is Apple hahaha.
In my experience, this is not really correct. I lived off a 180 GB SSD + 1 TB hard drive, first configured as two separate volumes, back when I used my 2010 MacBook Pro as my primary machine. It just wasn't enough, and I ended up configuring it as a Fusion Drive.

I needed to link some folders to other folders on the second hard drive, and parts of the OS and some software just didn't like that. I'd run out of space. Plus, the SSD would significantly slow down once I filled it past 80% capacity (about 140-150 GB in my case, but with a 128 GB SSD that is a mere 104 GB). Here are some of the ways space got eaten up:

- Copying RAM to disk for some of the deeper sleep modes (equal to the amount of main memory, so at least 8 GB, possibly more).
- Swap space.
- Software (e. g. samples from GarageBand that weigh in at several GB).
- Years of emails.
- Space to download software and updates (an OS update or Xcode can amount to several GB).
- Time Machine
Also, you should not fill SSDs to the brim; that significantly shortens their lifespan and slows them down.
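To make the arithmetic concrete, here is a quick back-of-the-envelope tally in Python. Every figure below is an illustrative assumption for a hypothetical setup, not a measurement:

```python
# Illustrative tally of what eats a small boot SSD.
# All sizes below are assumptions, not measurements.
ssd_gb = 128

consumers_gb = {
    "macOS itself": 15,
    "sleep image (roughly = RAM size)": 8,
    "swap space": 4,
    "apps incl. GarageBand samples": 20,
    "years of email": 10,
    "headroom for OS/Xcode updates": 8,
    "Time Machine local snapshots": 10,
    "user files (music, photos)": 40,
}

used = sum(consumers_gb.values())
free = ssd_gb - used
# SSDs slow down (and wear out faster) once filled past ~80%
threshold = 0.8 * ssd_gb

print(f"used: {used} GB, free: {free} GB")
print("over" if used > threshold else "under", "the 80% comfort limit")
```

Even with these modest assumed numbers, the 128 GB drive ends up past the 80% mark, which matches the experience described above.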

Originally Posted by ShortcutToMoncton View Post
Well I was specifically talking about the mini. Google i7 mini heat and you’ll get lots of talk about various mods to the 2012 models in particular.
I believe you. I am just saying that this does not need to apply to this iteration of the Mac mini, and other “fastest” Macs have not suffered from increased CPU failure rates in recent memory.
Originally Posted by ShortcutToMoncton View Post
I’m not talking cheap—Apple’s charging a fortune for their internal SSD upgrade.
I think you misunderstood what I wrote: I was agreeing with you, and just added that getting a nice enclosure is even more important once you add more than one physical drive/SSD to the Mac mini.
( Last edited by OreoCookie; Nov 8, 2018 at 01:52 AM. )
I don't suffer from insanity, I enjoy every minute of it.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 12:25 AM
 
Originally Posted by subego View Post
What’s making you think Mini vs. a dedicated file server?
First of all, a mini could be a dedicated file server. Depending on what you mean by file server, that may be cheaper, equally expensive, or more expensive. I have recently set up a Xeon workstation with a 4-core CPU, 16 GB of RAM, a 512 GB SSD and two 8 TB NAS hard drives as a file server. A Mac mini would have been cheaper. My Synology NAS at home was cheaper, but is also much wimpier, so I can't really run Plex with transcoding on it.

Pros for the Mac mini:
- It runs macOS, is small, reliable and has everything most people need.
- It integrates nicely with many macOS apps, e. g. for software development via Xcode or to use it to encode video.
- Energy efficient.


Cons for the Mac mini:
- No ECC RAM, which to me limits it in some scenarios.
- Obvious limitations by the form factor.


Pros for other file servers:
- More flexibility, including all-internal storage.
- More flexibility when it comes to software (e. g. FreeNAS or the Linux derivatives that run on commercial NASes).
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Online
Nov 8, 2018, 01:27 AM
 
I guess I’m leading up to the question of what, exactly, is going to be done with this.

If it’s for a home theatre type dealie, the best bang for the buck might be one of the used Minis about to glut the market. I don’t think that’s too wimpy.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Nov 8, 2018, 09:44 AM
 
Originally Posted by ShortcutToMoncton View Post
Dumb it down for a stupid person, haha. I was trying to compare my old 2.3 GHz four-core i7 vs. the new 3.6 GHz four-core i3 vs. the new 3.4 GHz six-core i5.

The initial benchmarks I’ve seen recently appeared to suggest even the new i3 (base) is still a little faster than my old i7. Does that make sense to you, or are there situations when that might not be the case?
I am sure that even the i3 is faster. A higher TDP and a higher base clock are very hard to beat.

Intel's advertising of its chips is getting absurd, but the way it works is this: there is a power level 1 (PL1) that the CPU can run at "forever", and a power level 2 (PL2) that it can run at for a short period of time. This is meant for Turbo Boost. (There are more power levels, but I'm trying to keep it simple.) There is exactly one thing that is solid in the specification: the base clock is the clock speed which the CPU will run at when all the cores are running 100% on a task defined by Intel as "the hardest possible task" and the CPU is running at PL1. There is one caveat: if you're running AVX code the base clock isn't valid anymore, and drops by 200-300 MHz, but for everything else, this is true.

Now, if you're only running four cores out of six, you can maintain a higher clock than base while staying at PL1, because you simply have two fewer cores to keep powered. This means that a six-core is going to be faster than a quad-core with the same rated base clock.

In practice, Intel bins these things and sets the power levels to make sure that the cheaper chip is never faster at anything than the more expensive one, so a dual-core will never be faster than a quad-core, even if only two threads are active.
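P's description can be turned into a toy model. The clock figures and the linear scaling below are made up purely for illustration; real chips use Intel's binned per-core turbo tables:

```python
# Toy model of Intel-style power levels: at PL1 the chip sustains its
# base clock with all cores active; with fewer active cores the same
# power budget allows higher clocks. All numbers are illustrative.

def sustained_clock_ghz(active_cores, total_cores=6,
                        base_ghz=3.0, max_turbo_ghz=4.6):
    """Clock sustainable at PL1 with a given number of active cores.

    Simple linear interpolation: all cores active -> base clock,
    one core active -> max turbo. Real chips use binned tables.
    """
    if not 1 <= active_cores <= total_cores:
        raise ValueError("active_cores out of range")
    # fraction of the power budget freed up by idle cores
    idle_frac = (total_cores - active_cores) / (total_cores - 1)
    return base_ghz + idle_frac * (max_turbo_ghz - base_ghz)

for n in (1, 4, 6):
    print(f"{n} active core(s): ~{sustained_clock_ghz(n):.1f} GHz")
```

With these assumed numbers, a six-core part running only four cores sustains roughly 3.6 GHz, comfortably above its 3.0 GHz base clock, which is exactly the effect described above.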
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 10:40 AM
 
To add to P's very concise explanation: all this makes CPUs exceedingly hard to compare, both to other Intel CPUs and to non-Intel CPUs. The base clock (PL1) in Intel's lowest-power parts is significantly lower than in its higher-power parts, but the Turbo Boost (PL2) frequency can be much closer to that of higher-power chips. So the old days when you could just compare CPUs based on a single clock speed are long gone.

Apple's cores, for example, have a much smaller frequency gamut (I'm just speaking of maximum frequencies here): on Apple's A12, the big cores are clocked between roughly 2.08 GHz sustained (“PL1”) and 2.38 GHz in bursts (“PL2”), so only about a 300 MHz difference. For the small cores, it is roughly 100 MHz. For the Intel CPU that is built into the MacBook (with a comparable TDP of about 5 W), the frequency range for the fastest model is 1.7 GHz (“PL1”) versus 3.6 GHz (“PL2”). Put another way, Intel has opted for a very different strategy than Apple. That is why for many mundane tasks the MacBook feels as fast as a MacBook Pro with a much beefier CPU: in both cases the CPUs are built around the same cores, so at the same frequency, performance will be very, very similar. But on longer tasks, where the MacBook needs to throttle down to PL1, you really feel the difference.

In both real-life applications and various benchmarks, this will advantage one strategy over the other: short, bursty workloads benefit from Intel's strategy of raising the frequency through the roof, while Apple's CPU cores shine when you have a high, sustained workload.
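To quantify the two strategies with the (approximate) figures quoted above:

```python
# Sustained vs. burst clocks, using the approximate figures from the
# post above; the point is the ratio, not the exact numbers.
chips = {
    "Apple A12 big core": (2.1, 2.4),        # (sustained, burst) in GHz
    "Intel Core (MacBook, ~5 W)": (1.7, 3.6),
}

for name, (sustained, burst) in chips.items():
    print(f"{name}: {sustained}-{burst} GHz, "
          f"burst = {burst / sustained:.2f}x sustained")
```

Intel's burst clock is over twice its sustained clock, while Apple's is only about 15% higher, which is why short tasks can feel equally fast on both while sustained loads diverge.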
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Nov 8, 2018, 11:26 AM
 
My 2010 i7 iMac is not compatible with Mojave, so I've started looking into replacements. I entertained the idea of a mini + display, but it looks like even the 2014 i7 3.0 gets crushed by the mighty 2010 iMac, at least according to EveryMac's Geekbench scores.

I could pick up a late-2012 27" iMac i7 for $500-600 that would be a nice speed upgrade and give me USB3 and Thunderbolt. 2013 iMacs offered a little speed bump and are running ~$850. 2014 and up look like they're over $1000. Any idea how much longer the late-2012s with Ivy Bridge will be supported?
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 8, 2018, 11:38 AM
 
Originally Posted by subego View Post
I guess I’m leading up to the question what exactly is going to be done with this.

If it’s for a home theatre type dealie, the best bang for the buck might be one of the used Minis about to glut the market. I don’t think that’s too wimpy.
Well, the one nice benefit of keeping the old mini would be the SD card slot. I used that thing all the time for my GoPro, camera, etc. I guess there’s an adaptor for everything now? Also, I don’t think there’s optical out... I use USB to a DAC/integrated amp so it’s not a concern, but many people may still have optical setups.

Otherwise, the TB3 ports, HDMI 2.0 (to some limited extent) and some of that extra processing power would be helpful moving forward, particularly as media transcoding is still a concern. If I’m pushing a hi-res music file or 4K or H.265 video, my 2012 i7 would disintegrate into flames very quickly—a 5500 RPM fan sounds like a vacuum cleaner in such a small enclosure.
Mankind's only chance is to harness the power of stupid.
     
ShortcutToMoncton
Addicted to MacNN
Join Date: Sep 2000
Location: The Rock
Status: Offline
Nov 8, 2018, 11:41 AM
 
I just realized that I have one USB port filled by a mini-HTPC wireless keyboard dongle, and another by my audio out. So I’ve maxed out the two USB ports from the get-go.
( Last edited by ShortcutToMoncton; Nov 8, 2018 at 03:59 PM. )
Mankind's only chance is to harness the power of stupid.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 8, 2018, 08:12 PM
 
Originally Posted by Laminar View Post
So my 2010 i7 iMac is not compatible with Mojave, so I've started looking into replacements. I entertained the idea of a Mini + display, but it looks like even the 2014 i7 3.0 gets crushed by the mighty 2010 iMac, at least according to EveryMac's Geekbench scores.
I wouldn't waste my time on any non-Retina machine, because a Retina screen makes a huge difference in everyday usage, much more than a 20% boost in CPU performance.
I don't suffer from insanity, I enjoy every minute of it.
     
Thorzdad
Moderator
Join Date: Aug 2001
Location: Nobletucky
Status: Offline
Nov 9, 2018, 09:17 AM
 
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Nov 9, 2018, 10:03 AM
 
Originally Posted by OreoCookie View Post
I wouldn't waste my time on any non-Retina machine, because a Retina screen makes a huge difference in everyday usage, much more than a 20 % boost of CPU performance.
I'm not a screen junkie; my parents have a Retina iMac and I can't tell the difference. A late 2012 27" i7 went for $500 on eBay and I almost nabbed it, but couldn't bring myself to pull the trigger.
     
sek929
Posting Junkie
Join Date: Nov 1999
Location: Cape Cod, MA
Status: Offline
Nov 9, 2018, 06:15 PM
 
Originally Posted by Laminar View Post
I'm not a screen junkie, my parents have a Retina iMac and I can't tell the difference. A late 2012 27" i7 went for $500 on eBay and I almost nabbed it but couldn't bring myself to pull the trigger.
My sister gave me her old late 2012 27" i7. It had a failing HDD, but after an SSD swap this machine is lightning fast. Getting the RAM to 24 GB wasn't too expensive either.

Classist bigot and incurable ideologue
     
Doc HM
Mac Elite
Join Date: Oct 2008
Location: UKland
Status: Offline
Nov 10, 2018, 02:55 PM
 
Originally Posted by P View Post
(In practice OS X is so much better at caching and the modern iMacs have way more RAM to use for that, so it will hide the terrible random read performance to some extent).
Indeed. That hides some of the performance issues until the drive croaks. On top of their truly abysmal performance, the drives fitted to these iMacs are horrifically unreliable, suffering performance degradation after far too short a time (based entirely on my customer experience). That Apple fits these drives into ANY iMac in 2018 is shameful. That they charge a premium price for the product is pretty much just scummy. For f's sake, just slam in an SSD and be done with it, you cheapskate b*****ds.

And while I'm annoyed: a 32 GB SSD on the 1 TB Fusion Drive? F**k OFF!
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Nov 11, 2018, 04:35 PM
 
Originally Posted by OreoCookie View Post
Apart from not being able to run Windows, what other pain points do you see forthcoming? I'm honestly asking, because the PowerPC-to-x86 transition was completely painless, I think only one piece of software broke, the software to my hardware screen calibration tool would no longer work. Did you encounter more problems during the last CPU architecture transition?
I had some hardware that no longer works, but mainly, I'm tired of Apple deprecating APIs for shits and giggles.

This is perhaps a silly example, but bear with me: I'm a fan of the Civilization games. There are currently 6, plus 4 official spinoffs, and they nicely slot into the various eras of Mac hardware.

The first was 68k Mac only (and I think its spinoff, Colonization, was as well).
The second was both 68k and PPC.
The spin-off, Alpha Centauri (SMAC), was a PPC game for Classic Mac OS that got a semi-unofficial port to Mac OS X, but of course no x86 version.
The third was a game for both Classic and OS X, PPC only.
The fourth and its spinoff (another Colonization) were OS X, PPC and x86 - but only 32-bit.
The fifth is Intel x86 only, but actually still 32-bit (the Windows version isn't)
The Beyond Earth spinoff and the sixth are 64-bit x86.

So right now, I can play the last three games and their spin-offs. As soon as Apple drops the hammer on the 32-bit x86 libraries, I lose everything but the last. It is sad that I will have to play the dumbed-down modern versions, but that's not the big problem. Each of these games includes code from the older games (except that IV killed every trace of the very broken III code base), code that presumably has to be ported again. This is an extra burden for the developers, on a small platform. This is what I mean by pain. I get that if there is a big benefit on the horizon, it makes sense to do this. The 68k and PPC platforms were both dying, and Apple had to do something. This time, the benefit is an even thinner MBP. That isn't enough of a benefit to me.

(I am still on Sierra, and I may stay on this version forever. High Sierra seems problematic and buggy, Mojave kills sub-pixel rendering, and whatever 10.15 is will kill 32-bit apps. These are not features, they're regressions. If the tradeoff is playing Civ IV or having a secure Mac, I just might disable Wifi and become a hermit.)

Originally Posted by OreoCookie View Post
And do you know what the situation with PCIe is? What would prevent Apple from building a PCIe complex into its Mac SoCs? Are there any licensing issues I am not aware of? Also, I think PCIe will be tied to Thunderbolt (most Macs are mobile Macs), so the utility of PCIe may be coupled to Apple being able to license Thunderbolt from Intel. Presumably licensing is an issue that can be solved with money.
They can build PCIe lanes into their chips, and Thunderbolt is becoming license-free at some point during this year to speed adoption. That isn't my worry. My worry is that high-speed connections are expensive, power-wise, and a big part of the reason Apple's SoCs are lower power. I don't think they want to give up that advantage, so I think we're getting less high-speed I/O down the line. They will have dedicated connections for storage and whatever port they want to put on there, but nothing general.

Originally Posted by OreoCookie View Post
I think you are conflating two things: iOS apps on the Mac is happening independently of the underlying CPU ISA, and I am worried about that. But since it is independent of the CPU architecture, I don't think this is saying something one way or the other.
This ties in to my point above. Apple is burning some developers again by changing the platform. Continuous upgrades are how you make money in the business, and Apple is doing its best to kill that revenue stream. I think that if you have to port your app to the Mac again with a new ISA, most companies will just port the iOS app.

Originally Posted by OreoCookie View Post
I don't understand this point: one clear benefit is that the year-over-year improvements in terms of performance and performance-per-watt on the Apple side outpace what Intel is doing by probably a factor of 5-10, depending on your metric. Even if the slope levels off, there will be a growing performance and performance-per-watt advantage on the Apple side of the ledger.
What is that wording they use in all those ads from financial advisors - "Past performance is no guarantee of future performance"? Something like that. It is far from certain that Apple will outperform Intel going forward. The last time they switched, they had to - there was no other option. This time, Intel will stay in the game. If Intel's Ice Lake or Sapphire Rapids is a fantastic new platform that beats everything Apple has, what will Apple do then? Stay behind what all other PC manufacturers can deliver?

Furthermore... do you think Apple can measurably improve absolute performance by a significant margin over what Intel is delivering? On a platform designed to run at some 2 W? I think the key to Apple's advantage is that they set their power target much lower. Move up to 65 W or so (desktop levels), and I have a hard time seeing a big performance advantage. Performance per watt, sure, but I'm not so concerned about that right now.

Originally Posted by OreoCookie View Post
What about GPU performance? That is a big factor where Intel's efforts are lackluster and the timing unreliable. This has been Apple's beef with Intel since forever. (I remember when Apple put in Nvidia 9400 chipsets/GPUs because it wasn't convinced that Intel's were good enough.) The other factor is other types of co-processors that are getting increasingly important for Apple, and Intel has nothing to offer here.
Apple's graphics in the 2018 iPad Pro are a massive improvement over past years, but they're nowhere near their own 15" MBP in Geekbench. That MBP uses an old GPU - 2016 for that specific chip, 2012 if you want to count the basic design - and it smashes what Apple has, even if it is a low-end model. If you try to compare it to a desktop chip, it isn't even funny.

Furthermore, Apple's graphics use deferred rendering (like many mobile chips). It is not at all clear how well they will run on an API designed for immediate rendering. Apple doesn't care - just use Metal! - but if your app is written for OpenGL, it may not be easy to port with good performance.

Originally Posted by OreoCookie View Post
And lastly, it would give Apple a way to make better use of investments it is making anyway: the development of custom CPUs, GPUs and other co-processors.
That's great for Apple. Me, I don't really care. Also, what should I use those co-processors for in a Mac?

Originally Posted by OreoCookie View Post
PCIe is not necessary for an iPad, but the situation is different for at least some Macs. I think PCIe (especially in the form of external expansion slots) is on Apple's minds, and is crucially important for quite a few niche applications. A former colleague of mine uses it to significantly accelerate his numerical simulations.
But building that in will mean that power budgets go up. Will Apple really do that? I could see them using the same core design for a different SoC - an A12Y, if you will - but then it becomes another chip entirely, and the big advantage of reusing designs is lost. 7 nm masks are apparently hideously expensive, so will we see the iPhone chip, the iPad chip (also for laptops? Without PCIe, in that case) and the desktop chip? A single one, for everything from mini to Mac Pro? That Mac Pro that will in all likelihood have a 28-core option in a few weeks' time? Again, I can make up ideas (take a look at what AMD is doing with the latest Epyc for one idea) but they cost design investment. Will Apple take that investment for the tiny sliver of a sliver that is the Mac desktop market?

Originally Posted by OreoCookie View Post
Why should there be versions with DIMM slots? If the new Mac mini and the iMac Pro are any indication, Apple has started listening to its customers again.
And those customers want to upgrade their RAM?

I don't understand this point: the difference between a Mac and an iPad is the UI paradigm, so if an ARM-based Mac runs OS X, why should that be closer to an iPad than the predecessor that sported an Intel CPU? That strikes me as a bit weird, like some of the Apple fans who didn't like the transition from PowerPC to Intel, fearing that Macs would become less Mac-like.
Because of the combination of all the above. The pressure to reuse mobile chips for desktop machines, even more churn in existing programs when they have to be ported again, a GPU that is likely to be incompatible (at anything resembling decent performance) with existing graphics APIs...all so the MBP can become even thinner. Not worth it.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 12, 2018, 01:45 AM
 
Originally Posted by P View Post
I had some hardware that no longer works, but mainly, I'm tired of Apple deprecating APIs for shits and giggles.

This is perhaps a silly example, but bear with me: I'm a fan of the Civilization games.
Much of this seems completely independent from the expected Intel-to-ARM transition. Abandoning 32-bit x86 kills my beloved Star Wars games, too, but since I do see the reasoning behind Apple's transitions, that is a worthwhile trade-off IMHO. Look at Microsoft and ask enterprise software developers what they think of Microsoft's “we'll support every piece of legacy technology” approach. It means that for mission critical pieces of software, you must have huge testing teams. (A close friend of mine worked for one of the big companies making a very prominent hypervisor. His colleagues had to validate 79 different versions of Windows 10 alone.)

I think for people like us who want to play legacy games, a VM solution sounds like a much better idea than keeping old APIs on life support.
Originally Posted by P View Post
They can build PCIe lanes into their chips, and Thunderbolt is becoming license-free at some point during this year to speed adoption. That isn't my worry. My worry is that high-speed connections are expensive, power-wise, and a big part of the reason Apple's SoCs are lower power. I don't think they want to give up that advantage, so I think we're getting less high-speed I/O down the line. They will have dedicated connections for storage and whatever port they want to put on there, but nothing general.
How can you be so sure of that? I think this may become a differentiating feature between pro and non-pro lines. Non-pro machines get USB-C whereas pro machines get Thunderbolt ports. Given Apple's investment in external GPU support, I think it'd be odd if they stopped supporting Thunderbolt peripherals.
Originally Posted by P View Post
This ties in to my point above. Apple is burning some developers again by changing the platform. Continuous upgrades are how you make money in the business, and Apple is doing its best to kill that revenue stream. I think that if you have to port your app to the Mac again with a new ISA, most companies will just port the iOS app.
You make it sound as if the effort is comparable to going from OS 9 to Mac OS X or 68k to PowerPC. I don't see how the new ISA will be a pain point for vanilla Mac developers: unless you have hand-optimized, platform-specific code, if you just rely on Apple's standard APIs, it may be as easy as recompiling your app with a new version of Xcode. Some developers will have to spend some time hand-optimizing some code, but in many cases, they may have already had to do just that when they ported their app to iOS.
Originally Posted by P View Post
What is that wording they use in all those ads from financial advisors - "Past performance is no guarantee of future performance"? Something like that. It is far from certain that Apple will outperform Intel going forward. The last time they switched, they had to - there was no other option. This time, Intel will stay in the game. If Intel's Ice Lake or Sapphire Rapids is a fantastic new platform that beats everything Apple has, what will Apple do then? Stay behind what all other PC manufacturers can deliver?
You are right that past performance is not a predictor of future growth. However, Apple presumably knows much more about Intel's road map than we do, and it knows its own SoC road map even better — and can make its decision based on that. It knows what performance and efficiency improvements Intel promises and what it thinks it can achieve with its A-series. And Apple can rely on economies of scale to make the development worthwhile, because Apple's yearly cadence of new and improved CPUs, GPUs and assortment of co-processors is dictated by iOS.

Plus, what we do know about Intel doesn't exactly fill me with confidence: according to SemiAccurate, Intel axed its 10 nm process and will skip directly to 7 nm, which is expected to arrive no earlier than 2020. That means the bulk of its products will be produced in a manufacturing process that is 1.5-2 generations behind*.

* I don't want to get into the weeds of what x nanometer means to each manufacturer and the like, and whether Intel's 14 nm are closer to TSMC's 10 nm. Intel is behind, and by the time 2020 rolls around, Intel's manufacturing competitors won't have rested on their laurels but improved their own processes
Originally Posted by P View Post
Furthermore... Do you think Apple can measurably improve absolute performance by a significant number over what Intel is delivering? On a platform designed to run at some 2W? I think key in Apple's advantage is that they placed their power target much lower. Move up to 65W or so (desktop levels), and I have a hard time seeing a big performance advantage. Performance per watt sure, but I'm not so concerned about that right now.
Definitely. And not because Apple cooks with something other than water, but because Apple can leverage co-processors and make them easily accessible to developers and consumers. If you use Core ML on an iPad Pro, your algorithms may automatically run on the Neural Engine co-processor instead of the CPU, for example. If you use certain image processing APIs, then the ISP may do the heavy lifting. In both cases, these specialized co-processors will be faster and more energy efficient than a general purpose CPU.
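The routing principle can be sketched as follows (hypothetical names throughout - this is not a real Apple API, just an illustration of how a framework can pick the execution unit so the caller never has to):

```python
# Toy sketch of API-level dispatch to co-processors (hypothetical unit names).
# The caller describes the kind of work; the framework picks the hardware.

UNITS = {
    "ml_inference":   "neural_engine",  # e.g. ML APIs -> Neural Engine
    "image_pipeline": "isp",            # e.g. image APIs -> image signal processor
}

def dispatch(task_kind):
    """Pick an execution unit; anything unrecognized falls back to the CPU."""
    return UNITS.get(task_kind, "cpu")

print(dispatch("ml_inference"))   # -> neural_engine
print(dispatch("spreadsheet"))    # -> cpu
```

The design point being made in the post is exactly this indirection: because the API, not the app, chooses the unit, Apple can add or improve co-processors every year without developers rewriting anything.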
Originally Posted by P View Post
Apple's graphics in the 2018 iPad Pro are a massive improvement over past years, but they're nowhere near their own 15" MBP in Geekbench. That MBP uses an old GPU - 2016 for that specific chip, 2012 if you want to count the basic design - and it smashes what Apple has, even if it is a low-end model. If you try to compare it to a desktop chip, it isn't even funny.
I don't see any reason why Apple couldn't offer support for external GPUs with its higher-end ARM-based Macs. In fact, it'd be mandatory.
Originally Posted by P View Post
Furthermore, Apple's graphics use deferred rendering (like many mobile chips). It is not at all clear how well they will run on an API designed for immediate rendering. Apple doesn't care - just use Metal! - but if your app is written for OpenGL, it may not be easy to port with good performance.
OpenGL on the Mac is or will be deprecated, but that is again independent of the CPU or GPU architecture macOS runs on.
Originally Posted by P View Post
That's great for Apple. Me, I don't really care. Also, what should I use those co-processors for in a Mac?
These would be used automatically when you call the respective APIs. Every time you use your fingerprint to log into your Mac, you use a co-processor.
Originally Posted by P View Post
But building that in will mean that power budgets go up. Will Apple really do that? I could see them using the same core design for a different SoC - A12Y if you will - but then it becomes another chip entirely, and the big advantage of reusing designs is lost. 7nm masks are hideously expensive, apparently, so, will we see the iPhone chip, the iPad chip (also for laptops? Without PCIe in that case) and the desktop chip? A single one, for everything from mini to Mac Pro?
If you made what you dubbed the A12Y multiprocessing-capable and able to connect to a PCIe complex, then yes, I think there is a way to do just that.

A12 - iPhones and entry-level iPads
A12X - iPad Pros and (some?) non-Pro Macs (e.g. the MacBook and the MacBook Air).
A12Y - Pro mobile Macs and desktop Macs.

For example, you could differentiate the 13" MacBook Pro from the 15" MacBook Pro by adding a second A12Y onto the 15 inch model's motherboard. That'd roughly double performance. Plus, when Apple releases a larger touch-based device (think of a 15" iPad Pro or an iMac analog), it needs even more powerful chips for those machines as well, and power is less of an issue.
Originally Posted by P View Post
That Mac Pro that will in all likelihood have a 28-core option in a few weeks' time? Again, I can make up ideas (take a look at what AMD is doing with the latest Epyc for one idea) but they cost design investment. Will Apple take that investment for the tiny sliver of a sliver that is the Mac desktop market?
I think this was the strongest argument against Apple switching to ARM. However, ARM is not PowerPC in that apart from Apple (and a handful of IBM workstations and servers) nobody used PowerPC. ARM is literally the most commonly used CPU platform on the planet.

For example, do we know whether Apple sells more Macs than AMD sells, say, mobile CPUs (honest question, I have not followed AMD's financials)? It is safe to say that Apple sells more Macs now than it ever sold PowerPC-based Macs, and back then Apple made that investment work for them. I don't think it'll be a problem to make it work financially or technologically. The biggest issue is the shortage of talent, I would say.
Originally Posted by P View Post
Because of the combination of all the above. The pressure to reuse mobile chips for desktop machines, even more churn in existing programs when they have to be ported again, a GPU that is likely to be incompatible (at anything resembling decent performance) with existing graphics APIs...all so the MBP can become even thinner. Not worth it.
I think you misstate the reason why Apple would switch: I don't think their motivation would be to make their computers thinner. Their motivation would be that comparing the long-term road maps of both platforms, one has a brighter future than the other.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 12, 2018, 07:22 AM
 
Originally Posted by OreoCookie View Post
Much of this seems completely independent from the expected Intel-to-ARM transition. Abandoning 32-bit x86 kills my beloved Star Wars games, too, but since I do see the reasoning behind Apple's transitions, that is a worthwhile trade-off IMHO. Look at Microsoft and ask enterprise software developers what they think of Microsoft's “we'll support every piece of legacy technology” approach. It means that for mission critical pieces of software, you must have huge testing teams. (A close friend of mine worked for one of the big companies making a very prominent hypervisor. His colleagues had to validate 79 different versions of Windows 10 alone.)

I think for people like us who want to play legacy games, a VM solution sounds like a much better idea than keeping old APIs on life support.
What is the reason for abandoning 32-bit x86 right now? The platform is very much alive and well-supported, and if your application was supposed to be used by as many people as possible, it made a lot of sense to make it 32-bit. When we had 68k or PPC applications being emulated, there was a performance issue with using the emulator. There isn't one here. There are admittedly fewer architectural registers, but that is a very minor thing. If it is so easy to just recompile your programs for ARM, how come Apple can't compile their libraries for x86 32-bit as well as 64-bit? Don't tell me those libraries use RAM (we have virtual memory to take care of that) or space on disk (it is minuscule compared to all the other things Apple ships by default, like a hundred language files per application). What, beyond some obscene sense of neatness, is the reason?

How can you be so sure of that? I think this may become a differentiating feature between pro and non-pro lines. Non-pro machines get USB-C whereas pro machines get Thunderbolt ports. Given Apple's investment in external GPU support, I think it'd be odd if they stopped supporting Thunderbolt peripherals.
There can only be so many chips developed, is my point. There will always be one for the iPhone and that one will not be compromised by having features for something else. I think there can be at most two more. If the middle one, the current "A12X", is supposed to cover the iPad (Pro) and the thin and light Mac models, it will either have to gain a few PCIe lanes (which would use power and make the chip bigger) or those light models will lose Thunderbolt. Remember that all Macs except the 12" Macbook now support Thunderbolt. I had thought the MBA would not get Thunderbolt, but it does have it.

And then we have one more chip. That chip will then have to cover everything from 13" MBP to Mac Pro, a range in TDP from 15W to 200W. How many PCIe lanes? The 13" MBP has 4, and can't even support an external GPU. The top Xeons have 48 lanes, and AMD's have 64. They go from 2 cores to 28. Those extra cores and lanes are needed if the Mac Pro is going to do what it should, and Apple has reconfirmed that they will support replaceable GPUs going forward.

Doesn't seem possible, does it? Can we have two chips? One for the laptops "A12Y", and one for the desktops, "A12Z"? Sure - but Apple makes 80% laptops. Now you have to fund the development of the A12Z desktop chip off of the 20% that is desktops - and most of those 20% are iMacs that would probably be pretty OK with the A12Y. I don't see the economics working out. Remember that we didn't even get an A11X, presumably because that mask was too expensive.

You make it sound as if the effort is comparable to going from OS 9 to Mac OS X or 68k to PowerPC. I don't see how the new ISA will be a pain point for vanilla Mac developers: unless you have hand-optimized, platform-specific code, if you just rely on Apple's standard APIs, it may be as easy as recompiling your app with a new version of Xcode. Some developers will have to spend some time hand-optimizing some code, but in many cases, they may have already had to do just that when they ported their app to iOS.
It is never as easy as just recompiling - there is always something that you have to fix, and Apple will cut some older APIs loose again, because they can.

You are right that past performance is not a predictor of future growth. However, Apple presumably knows much more about Intel's road map than we do, and it knows its own SoC road map even better — and can make its decision based on that. It knows what performance and efficiency improvements Intel promises and what it thinks it can achieve with its A-series. And Apple can rely on economies of scale to make the development worthwhile, because Apple's yearly cadence of new and improved CPUs, GPUs and assortment of co-processors is dictated by iOS.
But that development will be of the core itself. They still need to make masks for the new designs needed. They didn't fund the A11X - and A11 was a big improvement over the lackluster A10 - and there must have been a reason for that. My guess is money. Now you want to make one or two new masks, for an even smaller volume? How is that economy of scale?

Plus, what we do know about Intel doesn't exactly fill me with confidence: according to SemiAccurate, Intel axed its 10 nm process and will skip directly to 7 nm, which is expected to arrive no earlier than 2020. That means the bulk of its products will be produced in a manufacturing process that is 1.5-2 generations behind*.

* I don't want to get into the weeds of what x nanometer means to each manufacturer and the like, and whether Intel's 14 nm are closer to TSMC's 10 nm. Intel is behind, and by the time 2020 rolls around, Intel's manufacturing competitors won't have rested on their laurels but improved their own processes
Semiaccurate isn't always correct, and Intel has denied that rumor in terms that would get them sued by the SEC if they were not true. 10nm is a disaster, but according to Intel, they're still working on it.

The confusion in naming might have something to do with it. According to reports, Intel's 10nm is better than TSMC's 7nm even after the simplifications, so maybe they're rebranding it?

In either case - I would not expect any moves to 5nm any time soon. AMD had a presentation recently, and indicated that they believed the industry would stay on 7nm for a long time. I got the feeling that we would have something like the 28nm situation at least, when we were all stuck on that node for over four years.

I don't see any reason why Apple couldn't offer support for external GPUs with its higher-end ARM-based Macs. In fact, it'd be mandatory.
If Apple is moving towards deferred rendering GPUs and making its APIs for that, will they spend the money to support current immediate-rendering GPUs for a tiny sliver of the market? Apple's GPUs are very different from current desktop GPUs in how they work, and reconciling that won't be easy.

OpenGL on the Mac is or will be deprecated, but that is again independent of the CPU or GPU architecture macOS runs on.
I don't think it is independent of the GPU it runs on. Everything I read indicates that the mobile GPUs cannot run desktop APIs well, and games written for those APIs will not run well on mobile GPUs. The key is to make the engine compatible with the mobile GPUs (like Unity is), but will any developer of real 3D programs do that work?

These would be used automatically when you call the respective APIs. Every time you use your fingerprint to log into your Mac, you use a co-processor.
And it works fine with an x86 CPU as the main CPU, arguably even more securely.

If you made what you dubbed the A12Y multiprocessing-capable and able to connect to a PCIe complex, then yes, I think there is a way to do just that.

A12 - iPhones and entry-level iPads
A12X - iPad Pros and (some?) non-Pro Macs (e.g. the MacBook and the MacBook Air).
A12Y - Pro mobile Macs and desktop Macs.

For example, you could differentiate the 13" MacBook Pro from the 15" MacBook Pro by adding a second A12Y onto the 15 inch model's motherboard. That'd roughly double performance. Plus, when Apple releases a larger touch-based device (think of a 15" iPad Pro or an iMac analog), it needs even more powerful chips for those machines as well, and power is less of an issue.
See what I wrote above. The TL;DR is that the A12Y would have to stretch over a very large TDP range, a factor of more than 10, and widely varying PCIe lane counts.

But going dual socket is interesting as an idea, because everyone is moving away from that. Those that do it do so for reasons of memory channels and I/O lanes. The "flock of chickens" isn't seen as a good idea for performance right now.
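The intuition that a second socket rarely doubles real-world performance can be put in numbers with Amdahl's law (a generic scaling model, not a claim about any specific chip): whatever fraction of the workload is serial caps the speedup, no matter how many chickens are in the flock.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the workload and n the number of sockets/cores.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Doubling the hardware (n = 2) only doubles performance when p = 1.0:
for p in (1.0, 0.9, 0.5):
    print(f"p = {p}: speedup with 2 sockets = {amdahl_speedup(p, 2):.2f}x")
    # -> 2.00x, 1.82x, 1.33x respectively
```

And this model is optimistic: it ignores the cross-socket memory latency and coherence traffic that make dual-socket desktops even less attractive in practice.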

I think this was the strongest argument against Apple switching to ARM. However, ARM is not PowerPC in that apart from Apple (and a handful of IBM workstations and servers) nobody used PowerPC. ARM is literally the most commonly used CPU platform on the planet.

For example, do we know whether Apple sells more Macs than AMD sells, say, mobile CPUs (honest question, I have not followed AMD's financials)? It is safe to say that Apple sells more Macs now than it ever sold PowerPC-based Macs, and back then Apple made that investment work for them. I don't think it'll be a problem to make it work financially or technologically. The biggest issue is the shortage of talent, I would say.
We don't know how many mobile chips AMD sells. AMD supposedly had a market share of 12% in the first half of 2018, and they have been as high as 30% in the past. They are aiming for those 30% again, because they seem to need a share like that to be comfortably competitive. Apple had a 7.1% share among desktops and 9.4% market share among laptops in the last estimate I saw (Q2 2018). Note that these are Gartner numbers, and they're infamously unreliable, but we don't have anything better.

I think you misstate the reason why Apple would switch: I don't think their motivation would be to make their computers thinner. Their motivation would be that comparing the long-term road maps of both platforms, one has a brighter future than the other.
Every chassis change Apple has made since forever has been to make its computers thinner. The 2012 iMac redesign still bugs me. They removed the 21" RAM door and 3.5" drive, reduced max RAM on all the models and let the cooling capacity crater, which limited GPU options - all because they wanted to make it thinner. Besides, what happens if the x86 platform crashes? Apple can just keep selling its current models for a year more (they have NO problem with that) and then make the switch. No reason to switch preemptively.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 12, 2018, 09:23 AM
 
Originally Posted by CharlesS View Post
Huh? They specifically mentioned that the RAM is on SO-DIMMs in the keynote.
I found a video of how to upgrade the RAM in a Mac mini — it is definitely on the dual-USB iBook hard-drive-upgrade side of things. You need a special screwdriver and quite a bit of patience. Plus, if you do not follow the instructions, you could rip off the antenna cable, ouch. The video makes the point that the procedure does not void the warranty, so if that is correct, then I was wrong claiming that RAM needs to be upgraded by an authorized Apple service professional. I'll leave it to you to decide whether this is a distinction with or without a difference.

The design seems a bit hostile to end users and service professionals.
I don't suffer from insanity, I enjoy every minute of it.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Reply With Quote
Nov 12, 2018, 10:24 AM
 
Originally Posted by sek929 View Post
My sister gave me her old late 2012 27" i7, had a failing HDD but after an SSD swap this machine is lightning fast. Getting the RAM to 24GB wasn't too expensive either.
I threw an SSD alongside a 3TB HD in my 2010, along with 12GB of RAM. Bought it for a grand like 6-7 years ago and it's been rock solid since then. I honestly have no complaints about the performance, but now that it's unsupported, I'm looking for an excuse to upgrade.

Hmmm...I can pick up a 12-core 2012 Mac Pro for ~$800. Now that's tempting...
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Nov 12, 2018, 11:06 AM
 
Originally Posted by P View Post
There can only be so many chips developed, is my point. [...] That chip will then have to cover everything from 13" MBP to Mac Pro, a range in TDP from 15W to 200W. How many PCIe lanes? The 13" MBP has 4, and can't even support an external GPU. The top Xeons have 48 lanes, and AMD's have 64. They go from 2 cores to 28. Those extra cores and lanes are needed if the Mac Pro is going to do what it should, and Apple has reconfirmed that they will support replaceable GPUs going forward.

Doesn't seem possible, does it? Can we have two chips? One for the laptops "A12Y", and one for the desktops, "A12Z"? Sure - but Apple makes 80% laptops. Now you have to fund the development of the A12Z desktop chip off of the 20% that is desktops - and most of those 20% are iMacs that would probably be pretty OK with the A12Y. I don't see the economics working out.
You argue that it wouldn't be economically feasible for Apple to design four variants of their CPU architecture, and you seem quite adamant about that. I'm very confused as to why you think this has to be the case, because Intel already shows one way you may go about it: You rev the consumer parts more often and let the workstation and server parts skip generations. Oh, and you charge a crapload for Xeons. Moreover, Apple used to design its own chipsets for years while selling way fewer machines and still making a healthy profit.

Another way is the multi-chip packaging used in AMD's latest designs (and by Intel, in combination with an AMD GPU), which further drives down cost.

I don't think the financial side is a problem at all, it is priced in already if you will. If an A12Z costs $800 to make, so what if it is destined for a machine that currently uses even more expensive CPUs. The problem I see is with talent and time, not economics.
Originally Posted by P View Post
Semiaccurate isn't always correct, and Intel has denied that rumor in terms that would get them sued by the SEC if they were not true. 10nm is a disaster, but according to Intel, they're still working on it.

The confusion in naming might have something to do with it. According to reports, Intel's 10nm is better than TSMC's 7nm even after the simplifications, so maybe they're rebranding it?
Poteto, potato.
Feature size is a touchy subject anyway, but so far we know for sure (because Intel told us so) that their next-gen process will come online on a mass scale in 2020 at the earliest, and that until then they are at least one, perhaps 1.5, generations behind.

Even if Intel catches up in terms of feature size while TSMC and Samsung haven't yet made the jump to 5 nm, it stands to reason that they will reach 5 nm before Intel does, and they will have had more time to optimize their 7 nm process node.
Originally Posted by P View Post
I don't think it is independent of the GPU it runs on. Everything I read indicate that the mobile GPUs cannot run desktop APIs well, and games written for those APIs will not run well on mobile GPUs. The key is to make the engine compatible with the mobile GPUs (like Unity is), but will any developer of real 3D programs do that work?
You seem to assume that Apple won't support external GPUs in their pro desktop Macs the way they do now. Why? As far as I can tell, the most logical strategy is that Apple just continues to support external GPUs (by nVidia and AMD) in addition to their internal GPU.
Originally Posted by P View Post
If it is so easy to just recompile your programs for ARM, how come Apple can't compile their libraries for x86 32-bit as well as 64-bit? Don't tell me those libraries use RAM (we have virtual memory to take care of that) or space on disk (it is minuscule compared to all the other things Apple ships by default, like a hundred language file per application). What, beyond some obscene sense of neatness, is the reason?
As far as I understand it, one big motivating factor has to do with the Swift/Objective-C runtime: the 64-bit version uses optimizations that are not backwards compatible, and apparently keeping 32-bit support alive is a major pain because it prevents Apple from shifting AppKit towards Swift. That seems like a big reason to me.

Taken from his interview on ATP:
Originally Posted by Chris Lattner
One other technology problem [37:00] that is hilarious but also really important is that the Apple frameworks stack has to support 32-bit Mac apps. 32-bit Mac apps have this interesting challenge: they have the “classic” Objective-C runtime, which doesn't support things like non-fragile instance variables and things like that. At some point in time, the Swift team will need to make the Swift runtime work in that mode, or figure out some other solution to adapt it, because until that happens, it won't be [37:30] possible to use Swift in AppKit, for example.
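What "fragile instance variables" means can be simulated with a toy layout model (this is not the actual Objective-C runtime, just the offset arithmetic): under the classic runtime, a subclass's field offsets are frozen at compile time against the superclass's layout as it was back then, so a framework update that adds a superclass ivar silently corrupts the subclass; the modern runtime computes offsets at load time instead.

```python
# Toy model of fragile vs. non-fragile instance-variable layout.
# Offsets are in "slots"; each ivar occupies one slot.

def compiled_offsets(superclass_ivars, subclass_ivars):
    """'Fragile': subclass offsets are baked in at compile time, right after
    the superclass ivars as they existed when the subclass was compiled."""
    base = len(superclass_ivars)
    return {name: base + i for i, name in enumerate(subclass_ivars)}

def runtime_offsets(superclass_ivars, subclass_ivars):
    """'Non-fragile': offsets are computed at load time from the superclass
    that is actually present on the system."""
    layout = list(superclass_ivars) + list(subclass_ivars)
    return {name: layout.index(name) for name in subclass_ivars}

old_super = ["isa", "x"]
sub = ["y"]

frozen = compiled_offsets(old_super, sub)    # {'y': 2}, baked into the app binary

# The framework ships an update: the superclass gains an ivar.
new_super = ["isa", "x", "extra"]

print(frozen["y"])                           # -> 2, now overlapping 'extra'!
print(runtime_offsets(new_super, sub)["y"])  # -> 3, correct under the new layout
```

This is why supporting the classic 32-bit runtime holds the frameworks back: every class layout in shipped binaries becomes part of the ABI.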
Originally Posted by P View Post
Every chassis change Apple has made since forever has been to make its computers thinner. The 2012 iMac redesign still bugs me. They removed the 21" RAM door and 3.5" drive, reduced max RAM on all the models and let the cooling capacity crater, which limited GPU options - all because they wanted to make it thinner. Besides, what happens if the x86 platform crashes? Apple can just keep selling its current models for a year more (they have NO problem with that) and then make the switch. No reason to switch preemptively.
IMHO thinness is just scratching the surface here. Apple has long had the attitude that it knows better what its computers should look like, because customers would just ask for a faster horse anyway. The focus on thinness is only a part of it. Why isn't RAM easily upgradable on their machines, including the Mac mini? I understand that for mobile Macs that time has passed, and there the trade-offs are (to some at least) worth it. Why don't MacBook Pros sport a cornucopia of ports just like the pro machines of the past did? Why are their machines so hard to repair? (That doesn't seem environmentally friendly either.) Not all of this can be explained by thinness, because in some instances, thickness would not be impacted. But I see the quest for thinness as part of it.
I don't suffer from insanity, I enjoy every minute of it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Reply With Quote
Nov 12, 2018, 05:07 PM
 
Originally Posted by OreoCookie View Post
You argue that it wouldn't be economically feasible for Apple to design four variants of their CPU architecture, and you seem quite adamant about that. I'm very confused as to why you think this has to be the case, because Intel already shows one way you may go about it: You rev the consumer parts more often and let the workstation and server parts skip generations. Oh, and you charge a crapload for Xeons. Moreover, Apple used to design its own chipsets for years while selling way fewer machines and still making a healthy profit.
Everything I read on the topic says that designing masks for 7nm is insanely expensive, and that this cost has really exploded recently. This is one article on the topic:

https://www.extremetech.com/computin...m-process-node

(I don't vouch for the source, and the forward-looking stuff may be BS, but I suspect that the current figures are reliable because I have seen similar figures elsewhere.) According to that, it costs a cool $300 million to make a 7nm mask. In financial 2018, Apple sold 18 million Macs. According to Gruber, 20% of that is desktop, so 3.6 million. That is a cost of just under $100 if you can make one chip that covers all the desktops - but I don't think we can. We'd need a Mac Pro chip. The Mac Pro is "single-digit percentage" of all Macs, but Gruber thinks it is essentially 1%, or 180,000 Mac Pros per year. That's $1666 in pure cost for just making the chip - and that is on the current node. This cost goes up. Sure, we can leave it on the market for two years and double the sales, and the cost becomes $833 - still a decent chunk of change. With Apple's margins being what they are, that's a lot of added cost for the consumer.
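The per-unit arithmetic can be checked directly (same figures as in the paragraph above: a ~$300M 7 nm mask set from the linked article, 18M Macs per year, Gruber's 20% desktop and ~1% Mac Pro estimates):

```python
# Amortizing a one-time mask cost over unit volume, using the post's figures.

mask_cost = 300e6                 # ~$300M for a 7nm mask set (per the cited article)
macs_per_year = 18e6              # Macs sold in financial 2018
desktops = 0.20 * macs_per_year   # ~3.6M desktops (Gruber's 20% estimate)
mac_pros = 0.01 * macs_per_year   # ~180,000 Mac Pros (Gruber's ~1% estimate)

print(f"one chip for all desktops: ${int(mask_cost / desktops)} per unit")        # $83
print(f"a Mac Pro-only chip:       ${int(mask_cost / mac_pros)} per unit")        # $1666
print(f"same, amortized over 2 yr: ${int(mask_cost / (2 * mac_pros))} per unit")  # $833
```

So the "just under $100" and "$1666" figures hold up - the mask cost per unit is negligible across all desktops and brutal for a Mac Pro-only design.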

I don't see how Apple can amortise that cost on the small number of high-end desktops it sells. I can maybe see a single design that covers the MBP and the iMac, something like the current A12X but with some real I/O (and sure, let's say it has 8 performance cores - Apple can develop a ringbus if they don't already have one), but that will be a much weaker chip than what the Mac Pro usually has.

Another way is related to the multi-chip support that has been used by AMD's latest designs and (in combination with an AMD GPU by Intel), which further drives down cost.
Yes, this is an interesting setup, but it has real downsides. AMD has made a bunch of "chiplets" with just the CPU cores (and maybe PCIe, which I don't understand?) and then all of them connected to a single I/O chip, still made on 14nm. This is NOT on an interposer or EMIB or anything like that, so it seems to be essentially the old front side bus design of multiple sockets to a single memory controller. This has advantages in that AMD can rev one chip and keep the others standard, and the I/O chip can be on a cheaper process, but you lose the integrated memory controller. With that loss, your main memory latency goes up. It isn't a good design for the desktop. It will work for servers, because you get the improvement in that memory latency is now uniform, but you will lose performance in a desktop setup.
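A toy model of that trade-off; every latency figure below is hypothetical, picked only to illustrate the argument (real numbers vary widely by product):

```python
# Toy latency model for the separate-I/O-die trade-off. All figures are
# made up for illustration only.
LOCAL_ACCESS = 70    # ns: integrated on-die memory controller
IO_DIE_HOP = 30      # ns: extra hop through a separate I/O die
REMOTE_PENALTY = 60  # ns: cross-socket penalty in an old NUMA-style design

# Desktop, single chip: the monolithic design with an integrated memory
# controller simply wins.
monolithic_desktop = LOCAL_ACCESS                 # 70 ns
chiplet_desktop = LOCAL_ACCESS + IO_DIE_HOP       # 100 ns

# Server, many chips: with the I/O die, *every* access pays the hop, but no
# access pays the remote penalty -- latency is uniform and predictable.
chiplet_server_any = LOCAL_ACCESS + IO_DIE_HOP    # 100 ns, always
numa_server_worst = LOCAL_ACCESS + REMOTE_PENALTY # 130 ns, remote socket

print(monolithic_desktop, chiplet_desktop)   # 70 100
print(chiplet_server_any, numa_server_worst) # 100 130
```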

Originally Posted by OreoCookie View Post
I don't think the financial side is a problem at all; it is priced in already, if you will. If an A12Z costs $800 to make, so what, if it is destined for a machine that currently uses even more expensive CPUs? The problem I see is with talent and time, not economics.
The problem isn't that the cost for each chip is $800 or whatever. The problem is that the first one costs $300 million today, and maybe $1.5 billion in a few years.

And I don't think that Apple pays $800 on average even for the Mac Pro CPUs. They pay far below list, and a lot of them will have to be more basic Xeons that cost far less even at list price.

Originally Posted by OreoCookie View Post
Poteto, potato.
Feature size is a touchy subject anyway, but so far we know for sure (because Intel told us so) that their next-gen process will come online on a mass scale in 2020 at the earliest, and that at least until then they are at least one, perhaps 1.5, generations behind.
Consumer 10nm chips are still due for 2019. The Xeons are in 2020.

Originally Posted by OreoCookie View Post
Even if Intel eventually catches up in terms of feature size while TSMC and Samsung haven't yet made the jump to 5 nm, it stands to reason that they will reach 5 nm before Intel does, and they will have had more time to optimize their 7 nm process node.
Right, because Intel got to 14nm first, and this gave them a head start on getting to 10nm?

Originally Posted by OreoCookie View Post
You seem to assume that Apple won't support external GPUs in their pro desktop Macs the way they do now. Why? As far as I can tell, the most logical strategy is that Apple just continues to support external GPUs (by nVidia and AMD) in addition to their internal GPU.
No, that isn't what I'm saying. My point is that Apple's internal GPUs are based on the technique of tile-based deferred rendering and current desktop GPUs use immediate mode rendering. This difference is too large to be hidden by a driver, as I understand it, and code written for a regular immediate mode rendering GPU will usually run very slowly on a TBDR-based GPU and vice versa. Supporting both on one platform may not be feasible and, as far as I know, has not been done.
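For readers who haven't run into the distinction, a minimal conceptual sketch (toy Python, nothing GPU-specific; the pixel counts are made up) of why the two approaches behave so differently:

```python
# Immediate mode: every triangle is rasterized and shaded as it is submitted,
# so overlapping geometry gets shaded multiple times (overdraw).
def immediate_mode(triangles):
    return sum(tri["pixels"] for tri in triangles)

# Tile-based deferred rendering (TBDR): triangles are first binned into screen
# tiles, visibility is resolved per tile, and only then is anything shaded.
def tbdr(triangles):
    bins = {}
    for tri in triangles:
        bins.setdefault(tri["tile"], []).append(tri)
    # Toy hidden-surface removal: per tile, shade only the front-most surface.
    return sum(max(tri["pixels"] for tri in tris) for tris in bins.values())

# Two fully overlapping triangles in the same tile: immediate mode shades both,
# TBDR shades only the visible one.
scene = [{"tile": 0, "pixels": 100}, {"tile": 0, "pixels": 100}]
print(immediate_mode(scene), tbdr(scene))  # 200 100
```

Code and hardware tuned for one model effectively waste the machinery of the other, which is why a driver can't simply paper over the difference.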

Originally Posted by OreoCookie View Post
As far as I understand, one big motivating factor has to do with the Swift/Objective-C runtime: the 64-bit version uses optimizations that are not backwards compatible, and apparently keeping 32-bit support alive is a major pain because it prevents Apple from shifting AppKit towards Swift. That seems like a big reason to me.
So freeze the 32-bit libraries in time and never touch them again. It is fine on any UNIX system to have multiple versions of the libraries installed; all it takes is memory and disk space.

Originally Posted by OreoCookie View Post
IMHO thinness is just scratching the surface here.
It is the root problem, though. Apple has anorexia.

Originally Posted by OreoCookie View Post
Apple has long held the opinion that they know better what their computers should look like, because customers will just tell them they want a faster horse anyway. Their focus on thinness is only a part of it. Why isn't RAM easily upgradable on their machines, including the Mac mini?
The iMac is because of thinness - it was upgradeable before, and then someone decided that being thin was more important (According to Don Melton, that someone was Steve Jobs, which is why it won't ever be reversed. Might as well cancel the fatwa on Salman Rushdie). The mini is probably because they really are running out of space, and it isn't that hard to replace anyway.

Originally Posted by OreoCookie View Post
I understand that for mobile Macs, the time has gone and there the trade-offs are (to some at least) worth it.
LPDDR-anything isn't available as DIMMs. There is also the fact that one of the most common failure modes on laptops in the past, according to Apple, was that the DIMMs got dislodged, so there is a real gain there.

Originally Posted by OreoCookie View Post
Why don't MacBook Pros sport a cornucopia of ports just like the pro machines of the past did?
Because they are...altogether now!... too thin! You can't fit an HDMI port on there, it isn't physically possible. You could fit a USB-A or an Ethernet port if you did one of those fold-down ports, but those are of course unseemly. I don't think there are any other ports anyone would really want? Well OK, there are people clamoring for an SD-card, but they haven't realised how few and behind the times they are.

Originally Posted by OreoCookie View Post
Why are their machines so hard to repair? (That doesn't seem environmentally friendly either.) Not all of this can be explained by thinness, because in some instances, thickness would not be impacted. But I see the quest for thinness to be part of it.
I don't know why they can't be repaired. Maybe it is because Apple realised that they had high costs from people trying to repair iPhones when they didn't know what they were up to, but that still doesn't excuse the pentalobe screws. Nintendo is the same way, by the way. Some of them are for thinness - the iMac moving to double-adhesive tape is one - but that isn't everything.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Reply With Quote
Today, 09:12 AM
 
Originally Posted by P View Post
Everything I read on the topic says that designing masks for 7nm is insanely expensive, and that this cost has exploded recently. Here is one article on the topic:

https://www.extremetech.com/computin...m-process-node
Quick aside: is extremetech.com a reliable source of information? I just became aware of them recently.
Originally Posted by P View Post
(I don't vouch for the source, and the forward-looking stuff may be BS, but I suspect that the current figures are reliable because I have seen similar figures elsewhere.) According to that, it costs a cool $300 million to make a 7nm mask set. In fiscal 2018, Apple sold 18 million Macs. According to Gruber, 20% of that is desktop, so 3.6 million. That is a cost of just under $100 per machine if you can make one chip that covers all the desktops - but I don't think we can. We'd need a Mac Pro chip. The Mac Pro is a "single-digit percentage" of all Macs, but Gruber thinks it is essentially 1%, or 180,000 Mac Pros per year. That's $1,666 in pure cost for just making the chip - and that is on the current node; this cost only goes up. Sure, we could keep the design for two years and double the unit count, and the cost becomes $833 - still a decent chunk of change. With Apple's margins being what they are, that's a lot of on-cost for the consumer.
First of all, that was why I proposed a multi-chip option where you essentially repurpose the same chip across different desktops to reduce cost. We can argue about what a chip would cost in that scenario, and I don’t feel knowledgeable enough to quantify it. (Although I agree that, just like with cars, a big cost is the tooling, and you need economies of scale to reduce the per-item cost.) If you used the A12Z in the high-end iMacs, the high-end Mac mini, the iMac Pro and the Mac Pro, you’d suddenly be talking about 1+ million units. That seems quite reasonable.
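A quick sanity check of the per-unit share of a $300 million mask cost (the figure cited earlier in the thread; all volumes are rough estimates) at those higher volumes:

```python
# Per-unit share of a rough $300M mask cost at different annual volumes.
mask_cost = 300_000_000
for units in (180_000, 1_000_000, 2_000_000):
    print(f"{units:>9,} units -> ${mask_cost // units:,} per unit")
# 180,000 units -> $1,666; 1,000,000 units -> $300; 2,000,000 units -> $150
```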

Moreover, I fully expect that eventually Apple will expand its line-up to include more touch-based computers that need as much horsepower as an iMac or an iMac Pro has these days. And Apple could also dogfood its Mac Pro SoCs in its data centers.

An architectural switch will be made on the basis of what is best for Apple, and for the vast majority of hardware they sell — iPhones, iPads and MacBooks (in all variants and shades of silver) — switching to ARM is a huge net benefit.
Originally Posted by P View Post
I don't see how Apple can amortise that cost on the small number of high-end desktops it sells. I can maybe see a single design that covers the MBP and the iMac, something like the current A12X but with some real I/O (and sure, let's say it has 8 performance cores - Apple can develop a ringbus if they don't already have one), but that will be a much weaker chip than what the Mac Pro usually has.
Apple has plenty of experience designing workstation-class I/O, so while I agree this is a non-trivial problem, it is a field Apple already has expertise in. So yes, it is a problem to be solved, but it seems like a straightforward one for a company like Apple.
Originally Posted by P View Post
Yes, this is an interesting setup, but it has real downsides. AMD has made a bunch of "chiplets" with just the CPU cores (and maybe PCIe, which I don't understand?) and then all of them connected to a single I/O chip, still made on 14nm. This is NOT on an interposer or EMIB or anything like that, so it seems to be essentially the old front side bus design of multiple sockets to a single memory controller. This has advantages in that AMD can rev one chip and keep the others standard, and the I/O chip can be on a cheaper process, but you lose the integrated memory controller. With that loss, your main memory latency goes up. It isn't a good design for the desktop. It will work for servers, because you get the improvement in that memory latency is now uniform, but you will lose performance in a desktop setup.
I understand the downsides; my point was more that this is a common theme in the CPU space these days. That’s also how Intel builds its many, many core monster Xeons. But no matter how Apple implements this specifically, multi-chip solutions seem like one potential way forward here.
Originally Posted by P View Post
Consumer 10nm chips are still due for 2019. The Xeons are in 2020.
You are right, I stand corrected. But Intel is still behind and will remain behind.
Originally Posted by P View Post
Right, because Intel got to 14nm first, and this gave them a head start on getting to 10nm?
Catching up is much harder than defending a lead, and overtaking the competition is harder still. And this is happening at a very bad time for Intel: the traditional PC business is in decline, and Intel’s death grip on CPU architectures seems to be loosening.
Originally Posted by P View Post
No, that isn't what I'm saying. My point is that Apple's internal GPUs are based on the technique of tile-based deferred rendering and current desktop GPUs use immediate mode rendering.
I understand this difference (not least because my brother and I discussed his purchase of a Kyro 2-based graphics card back in the day in quite some detail). But I don’t think this is a problem for lower-end Macs, because macOS uses the same Metal APIs as iOS, and most of the software comes from iOS these days anyway. And I expect that higher-end Macs will retain discrete GPUs, so running high-end software at full speed doesn’t seem to be a problem either.
Originally Posted by P View Post
So freeze the 32-bit libraries in time and never touch them again. It is fine on any UNIX system to have multiple versions of the libraries installed, all it takes is memory and disk space.
But it would need to be tested, and it could encourage some companies to keep relying on legacy technology. Dumb question: is it hard to run an older version of macOS in a simulator? I have never had to, so I don’t know.
Originally Posted by P View Post
It is the root problem, though. Apple has anorexia.

(Seriously, that was funny.)
Originally Posted by P View Post
The iMac is because of thinness - it was upgradeable before, and then someone decided that being thin was more important (According to Don Melton, that someone was Steve Jobs, which is why it won't ever be reversed. Might as well cancel the fatwa on Salman Rushdie). The mini is probably because they really are running out of space, and it isn't that hard to replace anyway.
Whereas I understand your argument to be “that’s in the name of thinness”, I would say “not just thinness”: ever since the iPod, Apple has moved to make its Macs harder to upgrade. Replacing the RAM in my iBook was easy: I needed to turn a plastic screw 90 degrees and release two spring-loaded tabs. It was meant to be easily accessible. The iPhone and iPad have further accelerated this trend. As you correctly pointed out, a closed design has real benefits for the user of a mobile computer, as it makes the machine more reliable and smaller. But I don’t see thinness as the only factor.
Originally Posted by P View Post
Because they are...altogether now!... too thin! You can't fit an HDMI port on there, it isn't physically possible. You could fit a USB-A or an Ethernet port if you did one of those fold-down ports, but those are of course unseemly.
I was referring to the number of ports. I don’t mind losing ports that are literally too large to fit the machine. But I do mind ports being taken away because of some misguided belief that “on an infinite time scale” a machine should have no ports at all.
Originally Posted by P View Post
I don't think there are any other ports anyone would really want? Well OK, there are people clamoring for an SD-card, but they haven't realised how few and behind the times they are.
If I owned a MacBook, I’d want at least two, better three, ports (two USB-C and one USB-A). If I owned a MacBook Air, I’d want a USB-A port in addition. The current Mac mini is quite alright in terms of ports, although I wish it offered 10 Gbit Ethernet by default.
Originally Posted by P View Post
I don't know why they can't be repaired. [...] Some of them are for thinness - the iMac moving to double-adhesive tape is one - but that isn't everything.
Fortunately, I think this is a trend that is reversing: Apple’s bad experience with glued-in batteries and keyboards has shown them that this doesn’t just mean very high repair costs for customers (or for Apple itself, if the machines are under warranty, covered by AppleCare, or part of a recall program), but that it also reduces recyclability. And if Apple wants to extend the life of its machines, customers should be able to ask a certified Apple service technician to replace their batteries, for example.
I don't suffer from insanity, I enjoy every minute of it.
     
 
All contents of these forums © 1995-2017 MacNN. All rights reserved.
Branding + Design: www.gesamtbild.com
vBulletin v.3.8.8 © 2000-2017, Jelsoft Enterprises Ltd.,