iPhone Apple XS Max? (Page 4)
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 1, 2018, 08:55 PM
 
Originally Posted by P View Post
But it was powerful when it launched:

https://www.anandtech.com/show/5365/...or-smartphones
Intel Atom launched in 2008, not 2012, and the SoC aimed at smartphones was arguably its high water mark. It was used in netbooks, where it had abysmal performance and mediocre battery life. (The CPUs themselves were reasonably low-power, but Intel seemingly forgot that its chipset was using a crapton of energy, too.)
Originally Posted by P View Post
And while it was indeed one notch behind the bleeding edge at 32nm, it wasn't crazy behind. The issue is that it wasn't enough better to displace ARM.
My brother had an Android phone back then with said SoC. Performance was good for a phone in its price class, but it was hampered by emulation (which, of course, degrades performance). On the other hand, it wasn't really any better than its ARM-based competitors.
Originally Posted by P View Post
The original Atom for phones (Medfield) was better than anything ARM had at the time, but at some point, making an efficient in-order core is a solved problem. Remember - according to your bench early on, the Cortex A7 was more efficient than what came later. Intel moved on to making an efficient out-of-order core in Silvermont, while ARM moved to making a slightly less efficient (than A7), but more powerful, in-order core in the A53.
Correction: ARM had already moved to big.LITTLE by that point, and the point is that you don't need to (in fact, probably can't) make in-order cores much more powerful. Because Intel insists on using a single type of core, it needed to go OoO, whereas ARM didn't, because it has big.LITTLE.
Originally Posted by P View Post
I was referring to the Cortex-A9 versus Cortex-A8 scenario. It's pretty clear in that case that the A9 was more efficient. The Apple A5 was way more than twice as powerful than the A4 (partly, of course, because it went dualcore), on the same process, and it didn't cut battery life in half. I linked to the power consumption stats further up.
This is a back-of-the-envelope calculation that compares a modded Cortex A8 with a highly hand-optimized Cortex A9, and Apple included a second core for good measure. Because battery life of smartphones isn't tested with the CPU running at full tilt, I think it isn't that easy to quantify the relative performance.

I would like evidence apart from that, e. g. you mentioned a book that supposedly explains why OoO cores are more energy efficient. I'm still interested in that, but unless you provide additional evidence, I think I have said all I can say.
Originally Posted by P View Post
What is a power consumption test that includes periods of low activity - isn't that just a battery life test with mixed usage? Because that is the only case where the in-order can gain something back - short bursts of non-latency critical load that require less than the performance available at the lowest clock.
But those are precisely the types of workloads you have on standard desktop computers and phones.
Originally Posted by P View Post
And if that low-load scenario is the key, why include so many cores? If it is a low-load scenario, include one - or two, if you want to implement them as power states of the main cores - but why more?
I think Apple especially includes only as many cores as it thinks is optimal. I don't know the reasoning why 4 are better than 2 or 1, but I don't think they arrive at this conclusion by throwing darts at a board. A while back, Android SoC manufacturers were advertising high core counts in their designs (higher = better!!), but Apple has never done that. It stuck with two cores for a long time, then offered 2+2 when Android phones had long had 4+4. Now Apple is at 2+4, which is still lower, but arguably better.

That's yet another difficulty if we want to make these comparisons: some decisions are not made by prudent engineers (who would, e. g., clock some SoCs lower to make them more energy efficient) but by marketing people (more = better, higher clocks = better benchmarks = better).
Originally Posted by P View Post
2004, so more than a few at this point. (2004 was the time when Intel introduced the "1% higher power consumption requires at least 2% higher performance" as a design guideline).
I thought they had to reconsider that more recently (Haswell, maybe?).
Originally Posted by P View Post
Well, then Intel's workloads match the actual case better, don't they? All phones today run modified desktop OSes.
That's debatable: Intel's simulated workloads span a much wider gamut and include workstation-style and server-style workloads, which are not well-suited optimization points for battery-conscious devices.

ARM can use narrower optimization points (Apple narrower still). But you do have a point when you look at the transition period starting from the Cortex A8, which was designed just as the iPhone came out, until probably the Cortex A57. The Cambrian smartphone explosion took everyone, including ARM, by surprise.

I think Intel could do better by doing what it originally did when it switched from the Pentium 4 design philosophy to the Core architecture: design dedicated mobile cores and dedicated desktop cores.
Originally Posted by P View Post
My point is that a well-designed in-order core with a focus on efficiency to the exclusion of all else will be less efficient than a well-designed OoOE core with a focus on efficiency to the exclusion of all else, when running general computing tasks. Just want to make that clear. You can certainly make an OoOE core that is inefficient but powerful.
I have understood your opinion for quite a few posts now, but I am not seeing any evidence except for a back-of-the-envelope computation. It just seems that none of the SoC manufacturers agree with you on that. The only exception is Intel, because its CPUs only sport a single core type. But here we (I think) both agreed that this is less efficient than offering two types of cores optimized for different performance and efficiency sweet spots (be they in-order or OoO).
Originally Posted by P View Post
And I think that Apple included those in-order cores for specific tasks that are run on programmable GPUs on desktops - like their image manipulation stuff. Apple's GPUs aren't very programmable right now (or very powerful).
Do you have any evidence for that? If so, that'd be interesting, because this is not how big.LITTLE is used outside of Apple.
Originally Posted by P View Post
I don't think you can send your task to a core at all, can you? I think that if you just send a task to be executed, it goes to the OoOE cores if they're awake and maybe one of the weak cores if they're not. I think that tasks get sent to the array of in-order cores when you call a specific API that is programmed to make use of them.
Of course you can, with techniques like processor affinity (aka CPU pinning) or bound multiprocessing. In the simplest version, it allows you to assign a thread to a specific core or a range of cores. Processor affinity can also be used to avoid virtual (aka hyperthreaded) cores: on a 2-core Intel CPU, say, cores 0 and 1 are the “real” cores and cores 2 and 3 are the “virtual” ones. If you want to avoid hyperthreading for a specific workload, you could pin it to CPUs 0 and 1.

Of course, I don't know whether this is possible with Apple's SoCs or other SoCs that use heterogeneous multiprocessing: by convention, you would start numbering with, say, the big cores and then the little cores. So on an A12, CPU0 and CPU1 would be big cores and CPU2-CPU5 would be small cores.
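To make that concrete, here is what pinning looks like on Linux with glibc (a minimal sketch only; the core numbering is the hypothetical one from above, and AFAIK iOS exposes no comparable public affinity API):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Pin the calling thread to CPUs 0 and 1, e.g. to stay off the
       "virtual" hyperthreaded cores of a 2-core/4-thread CPU. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    CPU_SET(1, &set);

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);

    /* ... run the workload that should avoid the virtual cores ... */
    return 0;
}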
Originally Posted by P View Post
This isn't actually the advantage of an in-order core. In any core, you can turn off most units completely - you can turn off decoders, execution units, load/store units, etc. You can make it as wide as you like and it doesn't matter, because all those extra execution units are just turned off. All you need to do to keep it functional is to keep power to the registers. The problem for an OoOE core is that you have lots of registers in your physical register file (PRF), and a translation table to keep track of which is which, and a lot of state to keep track of which instruction should go in which order. If you turn it off, you need to save all of this data, and then restore it when you wake it up. This means that waking a core requires an investment of power.
In my mind, that is exactly what I said: you have to keep the registers powered so as to not lose the data, which consumes power. Or offload the data and then invest power (and time) when you wake up an OoO core.
Originally Posted by P View Post
The power consumption during running is effectively a case of big caches using power.
I understand that it is a trade-off, but I haven't seen evidence that it is a net benefit. Why don't you provide evidence for that from sources other than your back-of-the-envelope calculation?
Originally Posted by P View Post
Do you have details on any of this, that iOS will move things between the small and big cores over time, other than a very simple case of moving something from the small to the big core when the phone wakes up and dropping it back down when it is locked?
Nope, I don't. But this is how it works with newer incarnations of Android, for example.
Originally Posted by P View Post
But that can be the case either way. You could have multiple cores with different specialities, and we don't know what the small cores in the A12 are. The fact that Apple claims them to be "50% more efficient" made me think - because if it is a simple in-order core, the old one must have been terribly inefficient for that to even be possible - but they're probably just counting the process improvements.
I think it is the latter: Apple clearly didn't aim for massive CPU performance boosts in this generation — and I think it didn't have to. It is still easily 1-2 generations ahead of the competition. They decided to invest the silicon into the GPU and especially the Neural Engine.
Originally Posted by P View Post
The problem with Apple's setup - from the perspective of a general workload - is that they have more of the low-power cores than the powerful ones, which doesn't make sense if they're just there to offload the big cores when they're not working hard.
Do you want to argue that Apple and other SoC manufacturers are basing their decision to include 4 small cores on bad reasoning (e. g. marketing over engineering)? Or are you arguing that you don't understand why it is objectively better to include 4 small cores? When Peter Greenhalgh was asked about his personal SoC, he also picked a 2 big + 4 little configuration. So that seems to be the sweet spot for mobile workloads.
Originally Posted by P View Post
I can find more if you want to count them. There is clearly a core in the flash controller, for one. What I'm after are things the application developer can send code to.
Clearly, and we can start arguing whether we should count the core(s) in the modem, etc.

But that's why I limited myself to the types of cores I have mentioned: if you use CoreML, you use CPUs, GPUs and the Neural Engine, although AFAIK this is controlled by the API. Ditto for image manipulation: if you use Apple's APIs, you will automatically use the ISP (in part).
Originally Posted by P View Post
Do we know what the Neural Engine is, by the way? Because it seems like a complete buzzword right now. Is it some sort of tensor manipulation engine?
No, but I think it should be optimized for tensors and perhaps lower-precision computations. In any case, what we do know for sure is that Apple has dedicated a lot more silicon to the Neural Engine and made it up to 8x faster (and usually Apple's performance claims hold up).
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Oct 2, 2018, 07:21 PM
 
Originally Posted by reader50 View Post
FaceID has had legal and privacy concerns for a while. Namely, whether you can refuse to unlock, the way you presumably can with a passcode. Those concerns are no longer theoretical. The Feds have forced a suspect to unlock their iPhone X. By looking at it.

Activating the SOS mode disables FaceID, but only if you recognize the threat in advance. And the list of people who might be targeted for intrusive searches is growing.

• Anyone crossing a US border. Warrantless electronics searches are way up.
• I seem to recall DHS arguing any international airport is a "border", allowing border exceptions to the 4th amendment.
• Anyone who looks "middle-eastern", or wears a turban.
• Anyone walking around, who happens to be black.
• Walking around while brown might work too.
• If you look too poor to own an iPhone X.
• If your car looks too expensive vs your clothes.
• If you might be a drug dealer. Or look similar to a drug dealer's face. Or if a drug dealer ever parked in your driveway.

During suspicionless car stops, police have argued that everything is suspicious. Car interior too clean. Car too messy. Driving on an interstate. Out-of-state plates. Air freshener hanging from mirror. Possessing cash. Being too calm talking to officers. Being too nervous talking to armed people (officers).

If I get a phone with TouchID or FaceID, I expect to turn them both off. Passcode-only, to protect my rights. I don't think I'm in any of the suspicious groups yet, but the "group" is getting so large, it's only a matter of time before we're all in it. Only whites with a million or more seem definitely immune from warrantless searches.
This might be iOS 12 only, but it looks like you don’t have to actually go into SOS mode or turn the phone off.

Just pulling up the power/SOS screen disables Face ID until the unlock code is entered.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Oct 2, 2018, 07:36 PM
 
I forgot to mention, there are some pluses. If you're dead, FaceID will prevent authorities from snooping. TouchID has been used to unlock some phones. Stop by the morgue, apply cold fingers to the sensor.

But from what I've read, several FaceID attempts have been made. And Apple did a good enough job on live-person detection. All the attempts ended in failure. After the drive-by, your phone will keep your secrets as safe as you will.
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Oct 3, 2018, 09:17 AM
 
Originally Posted by reader50 View Post
But from what I've read, several FaceID attempts have been made. And Apple did a good enough job on live-person detection. All the attempts ended in failure. After the drive-by, your phone will keep your secrets as safe as you will.
I've noticed that my phone requires me to look at it to unlock, so I can see where an unwilling or dead person wouldn't unlock the phone. But then the phone works with sunglasses on, so I'm not sure what's stopping police from throwing some Ray Bans on an unconscious (or worse) person and unlocking the phone.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Oct 3, 2018, 10:43 AM
 
Originally Posted by Laminar View Post
I've noticed that my phone requires me to look at it to unlock, so I can see where an unwilling or dead person wouldn't unlock the phone. But then the phone works with sunglasses on, so I'm not sure what's stopping police from throwing some Ray Bans on an unconscious (or worse) person and unlocking the phone.
You can turn the requirement for a direct look on and off.
People turn it off because it sometimes causes issues with face recognition.

-t
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Oct 3, 2018, 11:34 AM
 
Face ID is actually quite clever. It even works with one eye open or a spoon in my mouth, though the spoon took two attempts.
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Oct 3, 2018, 06:17 PM
 
Originally Posted by Laminar View Post
I've noticed that my phone requires me to look at it to unlock, so I can see where an unwilling or dead person wouldn't unlock the phone. But then the phone works with sunglasses on, so I'm not sure what's stopping police from throwing some Ray Bans on an unconscious (or worse) person and unlocking the phone.
Face ID sees through glasses and still detects whether you’re looking at it — unless the shades filter the specific light frequency used by the Face ID projector. None of mine do, so Face ID and attention detection work just fine with all three pairs of sunshades and my regular glasses.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 6, 2018, 12:14 PM
 
Originally Posted by OreoCookie View Post
Intel Atom launched in 2008, not 2012, and the SoC aimed at smartphones was arguably its high water mark. It was used in Netbooks, and had abysmal performance and mediocre battery life. (The CPUs themselves were reasonably low power, but Intel seemingly forgot about the fact that their chipset was using a crapton of energy, too.)
OK yes, it launched in 2008 as a design. It was intended as a chip for "mobile internet devices" or whatever, a vision a lot like what the iPad would become. The chipset was inefficient because Intel quickly retooled the chip for netbooks when that design idea didn't fly, and it didn't have a PC chipset ready.

Correction: ARM had already moved to big.LITTLE by that point, and the point is that you don't need to (in fact, probably can't) make in-order cores much more powerful. Because Intel insists on using a single type of core, it needed to go OoO, whereas ARM didn't, because it has big.LITTLE.
You can make in-order cores as powerful as you like - again, Itanium, but we also have POWER6 and UltraSPARC. They're all just crazy inefficient.

This is a back-of-the-envelope calculation that compares a modded Cortex A8 with a highly hand-optimized Cortex A9, and Apple included a second core for good measure. Because battery life of smartphones isn't tested with the CPU running at full tilt, I think it isn't that easy to quantify the relative performance.
Why is one modded and the other hand-optimized? Both of them are stock ARM designs. They are on the same process node. And yes, it is a back-of-the-envelope calculation, because the difference is massive. The Apple A5 is easily 2.5 times faster than the Apple A4, and its power consumption isn't 2.5 times higher in mixed usage, or the battery life would be less than half.
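Spelled out, the implicit arithmetic is simply this (a sketch, taking the 2.5x figure above as given, and noting that with the same battery capacity, halved battery life would mean doubled average power draw):

    perf(A5) / perf(A4) ≈ 2.5, while P(A5) / P(A4) < 2.5
    => perf(A5) / P(A5) > perf(A4) / P(A4)

That is, if performance went up 2.5x while average power went up by less than that factor, performance per watt must have improved.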

I would like evidence apart from that, e. g. you mentioned a book that supposedly explains why OoO cores are more energy efficient. I'm still interested in that, but unless you provide additional evidence, I think I have said all I can say.
There isn't a single quote I can pull from that book that explains it all, but I'll try to paraphrase (just spent some time re-reading it).

One of the chips that is reviewed in some detail in that book is the PowerPC 603. That CPU, if you recall, was designed to be the more power-efficient variant of the PowerPC architecture for use in Mac laptops. I remembered it as the last in-order core Apple ever used, but it wasn't - it turns out it was the first true out-of-order one. The execution window is very small, but instructions are issued out of order and then reordered at the end to go back into program order. There is a total of 5 rename registers for the GPRs, another 4 for the FPRs, and one each for the special registers - a total of 12. There are 32 architectural GPRs, 32 FPRs and 3 more special registers, so the number of registers increased by less than 25%.

The way the OoOE works here is that the dispatcher sends out the instructions in program order, but for each execution unit there is a reservation station that holds 1 or 2 instructions. Inside each of these queues, things are also issued in order (something the 604 changed, but let's ignore that for now), but the various units can and do issue out of order - in particular, the load/store unit will issue without waiting for the other, more heavily loaded, units to finish what they're doing.

This is an extremely limited form of out-of-order processing, and it seems obvious to me that this will be more efficient. The power required to keep 12 more registers active cannot possibly be enough to outweigh the performance advantage of just getting those load/store instructions out of the way and fetching data.

So doing this extremely limited form of OoOE is a win. Going a little further is probably a win. Going a lot further probably isn't, but if we're trying to design the most efficient processor available, it should probably be a little bit OoOE.

Looking around, who designs in-order cores anymore? Intel did with the Atom, but they didn't design it new - it was an old Pentium core lightly modified - and they moved away from it. Other than that, it is only ARM. Why? It may be marketing - pay us more for the big OoOE core! - but it doesn't seem likely. It may be patent-related: WARF has been suing people for a patent that seems to apply here, and only recently got smacked down, so ARM may have wanted to avoid licensing for the small cores. It may be related to size, as ARM is clearly focused on at least having some really tiny cores available. I don't know why, but they are alone. Samsung, which is now making its own cores, keeps using ARM cores for the small ones, and I can't see anyone else trying. AMD's "cat" cores are OoOE, and Intel gave up on in-order. And Apple? Well, I have doubts about that, but I'll put that at the bottom because it is better as a separate segment.

I think Apple especially includes only as many cores as it thinks is optimal. I don't know the reasoning why 4 are better than 2 or 1, but I don't think they arrive at this conclusion by throwing darts at a board. A while back, Android SoC manufacturers were advertising high core counts in their designs (higher = better!!), but Apple has never done that. It stuck with two cores for a long time, then offered 2+2 when Android phones had long had 4+4. Now Apple is at 2+4, which is still lower, but arguably better.

That's yet another difficulty if we want to make these comparisons: some decisions are not made by prudent engineers (who would, e. g., clock some SoCs lower to make them more energy efficient) but by marketing people (more = better, higher clocks = better benchmarks = better).
I have tried quite hard to find a good argument for why Apple would put multiple small cores on their SoC, and I have completely struck out. Nobody seems to understand why. The fact that we don't have a good analysis of the A11 anywhere doesn't help either.

Do you have any evidence for that? If so, that'd be interesting, because this is not how big.LITTLE is used outside of Apple.
No, I'm trying to figure why they would do a 2+4 design, because it doesn't make sense to me - and apparently not to anyone else either, at least that I can find.

Of course you can, with techniques like processor affinity (aka CPU pinning) or bound multiprocessing. In the simplest version, it allows you to assign a thread to a specific core or a range of cores. Processor affinity can also be used to avoid virtual (aka hyperthreaded) cores: on a 2-core Intel CPU, say, cores 0 and 1 are the “real” cores and cores 2 and 3 are the “virtual” ones. If you want to avoid hyperthreading for a specific workload, you could pin it to CPUs 0 and 1.

Of course, I don't know whether this is possible with Apple's SoCs or other SoCs that use heterogeneous multiprocessing: by convention, you would start numbering with, say, the big cores and then the little cores. So on an A12, CPU0 and CPU1 would be big cores and CPU2-CPU5 would be small cores.
I have investigated this a bit, and it seems that the key is that you can set a QoS marking on each thread:

https://developer.apple.com/library/...rkWithQoS.html

This sounds a lot like it would control which type of core the thread would run on.
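For what it's worth, this is what the QoS tagging looks like at the pthread level (a sketch using Apple's QoS extensions from <pthread/qos.h>; whether the scheduler then actually confines such a thread to the small cores is exactly the open question):

#include <pthread.h>
#include <pthread/qos.h> /* Apple-specific QoS extensions */
#include <stddef.h>

/* Work that is not latency-critical: tag the thread as background,
   which should steer it towards the high-efficiency cores. */
static void *background_worker(void *arg)
{
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    /* ... long-running, non-urgent work ... */
    return NULL;
}

int main(void)
{
    /* Alternatively, set the QoS class before the thread starts. */
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_set_qos_class_np(&attr, QOS_CLASS_BACKGROUND, 0);

    pthread_t t;
    pthread_create(&t, &attr, background_worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}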

In my mind, that is exactly what I said: you have to keep the registers powered so as to not lose the data, which consumes power. Or offload the data and then invest power (and time) when you wake up an OoO core.
Maybe it is splitting hairs, but you could have a core with a short pipeline, very limited or no OoOE resources, but lots of execution units. Such a core would only require a very small amount of power to wake up, and it would be useful for low-power work. An example of such a core is the PowerPC 7450, i.e. the G4, which is still used as an embedded core.

Nope, I don't. But this is how it works with newer incarnations of Android, for example.
(This was about moving a task from low-power to high-power cores.) If you look at the QoS link above, you will see that iOS will change priority levels, presumably moving tasks, in certain situations, but, given the wording, it clearly wants to avoid doing that. An interactive task should be on the big cores and never leave; a background task should be on the small cores and never leave. Which is interesting...

Do you want to argue that Apple and other SoC manufacturers are basing their decision to include 4 small cores on bad reasoning (e. g. marketing over engineering)? Or are you arguing that you don't understand why it is objectively better to include 4 small cores? When Peter Greenhalgh was asked about his personal SoC, he also picked a 2 big + 4 little configuration. So that seems to be the sweet spot for mobile workloads.
I'm arguing that I haven't seen a good argument for why multiple small cores make sense for general purpose computing. I can understand that they make sense for specific tasks that are otherwise run on a programmable GPU.

Anyway: Anandtech has published a review, and it is quite interesting, partly because of what it says about the A11:

https://www.anandtech.com/show/13392...ilicon-secrets

After reading this, I think that we have been wasting electrons arguing about in-order or not when that is irrelevant to the question. I don't think that the high-efficiency cores in the A11 (and presumably A12) are in-order at all. They're much too big for that, almost the size of ARM's A76 cores. I think that Apple put in-order cores in the A10, saw that they were inefficient, and decided to replace them with efficient OoOE cores with a much more limited execution window and fewer rename registers than their big cores. This matches much better with what they're saying in the iOS documentation: tasks that are not latency-critical should always run on the low-power cores, even when the CPU is all woken up and the big cores are available. Also remember that Apple sold the A11 with an argument about how much faster the small cores were - 70% faster, when the clock only went up by 45%. Not only would that be a fantastic IPC improvement if they had stayed in-order - why spend resources improving the performance of the small cores at all, if they're just supposed to be used when nobody is looking at the phone?
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Jawbone54
Posting Junkie
Join Date: Mar 2005
Location: Louisiana
Status: Offline
Oct 8, 2018, 03:04 PM
 
I tried the original Pixel XL for a while, but the camera's autofocus wasn't quick enough to keep up with my kids. Luckily Google had a very forgiving return policy.

The camera was supposedly incredible in the Pixel 2. Tomorrow's Pixel 3 livestream should be interesting. If the price point is more appealing than the iPhone XS, I may be looking at a more permanent switch.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Oct 24, 2018, 10:58 AM
 
One more on the topic of the little core:

https://www.anandtech.com/show/13453...nn-performance

TL;DR is that Anandtech thinks that the little core is essentially the Swift core from the A6, and the A11 had something very similar. Very much not an in-order core.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Oct 24, 2018, 09:44 PM
 
I read that, too. But I was too busy to post something intelligent here. Apparently even the small cores are on par with a Cortex A73, which is quite a feat in itself. But to briefly add on to our discussion:

(1) In iOS you are apparently able to pin workloads to specific cores.

(2) At the performance level of Apple's little cores, which rival the big cores of competitors, the in- vs. out-of-order discussion may be moot: if you target that level of performance, I believe you have to design OoO cores. Even if in-order cores are more efficient, their single-thread performance is no longer sufficient for Apple's iOS devices.

PS I really like that Anandtech got SPECmarks to run (with two caveats: none of the benchmarks written in Fortran would run, and for one of the benchmarks they had to reduce the RAM footprint of the workload). I really wish, though, that they had been more specific in their comparison to Intel CPUs and quantified which Intel CPU offers comparable performance.
( Last edited by OreoCookie; Oct 25, 2018 at 01:45 AM. Reason: Misremembered: the little cores are on par with an A73, not an A72. Even better.)
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Nov 10, 2018, 05:05 PM
 
For people with OLEDs, zoom in to the coast, and then move the page around.

     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Nov 10, 2018, 05:41 PM
 
That’s weird.
     
Doc HM
Professional Poster
Join Date: Oct 2008
Location: UKland
Status: Offline
Nov 10, 2018, 06:55 PM
 
Replaced my old 7 with a shiny red XR. Couldn't really justify the extra ££ on the XS, and the specs on the R are really very good; plus it's a big enough jump from the 7 (the 8 didn't feel like it was).

I was pretty into FaceID before using it, and I have to say I'm sold. Seems pretty flawless, and the revised UI works pretty well. I'm not 100% sold on the size, though. I find it a wee bit clumsy in my hands, which may be a bit small.

All in, it feels like Apple didn't hobble the low-end product for once in their cheese-paring lives. And the red colour is lush!
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
Ham Sandwich
Guest
Status:
Nov 10, 2018, 09:06 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:06 AM. )
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Nov 11, 2018, 04:39 AM
 
Originally Posted by Doc HM View Post
Replaced my old 7 with a shiny red XR. Couldn't really justify the extra ££ on the XS and the specs on the R are really very good plus its a big enough jump from the 7 (the 8 didn't feel like it was).
All the reviews have confirmed what you can read off the spec page. To quote Gruber, who settled this masterfully in his first paragraph:
Originally Posted by Gruber
There’s got to be a catch. […] Well, there is no catch.
I don't suffer from insanity, I enjoy every minute of it.
     
Doc HM
Professional Poster
Join Date: Oct 2008
Location: UKland
Status: Offline
Nov 11, 2018, 10:27 AM
 
Originally Posted by And.reg View Post
Well I wouldn't call a $749 phone "low end"...
Well, the dual-core 21in iMac with a spinning drive wasn’t a low-end product. It was still brain-numbingly awful.

So it’s not like Apple doesn’t have form here.
This space for Hire! Reasonable rates. Reach an audience of literally dozens!
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Nov 11, 2018, 08:25 PM
 
I went and checked out the XR at the Apple Store on launch day and was quite impressed. If the X/XS/Max didn’t exist, people would be quite happy. It really is flagship and flagship+, but unfortunately people see the XR as the budget phone. That said, I'm really loving the XS Max and wouldn’t take an XR.
     
Ham Sandwich
Guest
Status:
Nov 12, 2018, 11:39 AM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:06 AM. )
     
Ham Sandwich
Guest
Status:
Apr 18, 2019, 02:41 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:06 AM. )
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 18, 2019, 09:08 PM
 
@And.reg
That depends on the settlement: we all know that Apple has a big team working on a modem. So if part of the deal is that Apple can use all of Qualcomm's FRAND patents, it may be worth it for our favorite fruit company. The big issue here is that Qualcomm and Apple had lawsuits all over the world, and Qualcomm was bound to get lucky in some of them — at huge expense to Apple. Perhaps that isn't fair, but $6-8 billion is a small amount to pay compared to Apple's profits from iPhones and iPads.
I don't suffer from insanity, I enjoy every minute of it.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Apr 18, 2019, 10:47 PM
 
Let's not forget that the FTC is suing Qualcomm for unfair competition (licensing terms) and monopoly behavior. If the FTC wins, any such terms will become null and void. And we don't have access to the terms of the Apple agreement.

Suppose Apple's payments are spread out over the 6-8 years of the agreement. So if the FTC wins, Qualcomm only gets a fraction of the payout, and Apple can make all their iPhones in the meantime. The agreement might even require Qualcomm to refund the payment if it's ruled illegal.

Should the FTC lose (or drop the case), Apple has to eventually pay it all. But in the meantime, they can still make any and all iPhones.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Apr 21, 2019, 08:45 PM
 
@reader
I completely agree that Qualcomm has won, even though IMHO they are stretching what FRAND means to the absolute limit — and probably beyond. This nonsense is one of the major reasons why I view patents as causing more trouble than they are worth: they stifle innovation even when FRAND licensing terms should prevent abuse.
I don't suffer from insanity, I enjoy every minute of it.
     
Ham Sandwich
Guest
Status:
May 1, 2019, 05:18 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:06 AM. )
     
Chongo
Addicted to MacNN
Join Date: Aug 2007
Location: Phoenix, Arizona
Status: Offline
May 1, 2019, 05:53 PM
 
Originally Posted by And.reg View Post
Here's a little more:

https://www.macrumors.com/2019/05/01...ent-4-billion/

Qualcomm gets $4.5 Billion.

And likely will be the primary supplier of 5G modems in iPhones circa... September 2020...?
That will compensate for the $1 billion Qualcomm paid my company after the merger fell through.
45/47
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
May 22, 2019, 02:21 PM
 
The FTC has won their case against Qualcomm.
"Qualcomm must not condition the supply of modem chips on a customer's patent license status," says Koh's ruling, "...and must negotiate or renegotiate license terms with customers in good faith under conditions free from the threat of lack of access to or discriminatory provision of modem chip supply or associated technical support or access to software."
I suspect Apple's agreement allowed for this. So Qualcomm will be getting a lot less money, and the patent-licensing requirements will go away. Just sell the chips already, without the extortion.
     
 