iPhone Apple XS Max? (Page 2)
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Sep 16, 2018, 04:31 PM
 
Originally Posted by Waragainstsleep View Post
The Watch is absolutely the star of that show. Within a few years every elderly care home resident in the western world plus China is going to have one, plus every elderly person living alone who has a reasonably well-off kid or grandkid. With all the ageing populations, that's several million buttloads of money from a whole new market segment.
An interesting plan. But I think the Watch will need far better standby time for it to work. Or motion charging. Or a solar wristband. Elderly folks are not good with recurring tasks, like charging things up.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 16, 2018, 04:31 PM
 
“I’ve fallen, and I can’t get up!”

“Playing ‘Get Up’ by James Brown.”
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 16, 2018, 05:43 PM
 
Originally Posted by reader50 View Post
An interesting plan. But I think the Watch will need far better standby time for it to work. Or motion charging. Or a solar wristband. Elderly folks are not good with recurring tasks, like charging things up.
Yep.

And before some genius says to set a timer to charge the watch, that takes care of charging only.
Now you need another reminder to put the charged watch back on. Not sure this is going to work reliably.

-t
     
Waragainstsleep
Posting Junkie
Join Date: Mar 2004
Location: UK
Status: Offline
Sep 17, 2018, 06:42 AM
 
Good points about the batteries. I still think that's where Apple are heading; it'll be interesting to see how they solve that one.
I have plenty of more important things to do, if only I could bring myself to do them....
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Sep 17, 2018, 10:40 AM
 
Originally Posted by P View Post
It saves mm2 in the core, but Apple hasn’t worried about that before. I can only see one case where it makes sense - when there is specially tuned code that will work on the in-order core with minimal efficiency loss, and where having lots of cores is more important than one fast. Which is why I think that having an A11 or higher only matters for AR applications, not general code. [...] Yes, I’m sure that they have ways to move tasks from an in-order to an OoOE core. My question was why they have more in-order cores than OoOE cores, because the OoOE will be more efficient for general purpose computing.
Having more little cores than big cores, and the 2 + 4 core configuration in particular, predates the focus on AR. I remember that the designer of the Cortex A53, when asked in an Anandtech interview (and in the comments, by yours truly) what his personal SoC for a mid-range phone would look like, also answered 2 big + 4 LITTLE cores. So this core configuration wasn't chosen just to suit AR workloads; it suits general computing workloads, too. I'm not sure where the optimum lies, or whether it has shifted as programmers learn to use the available resources, but I don't think this has anything to do with AR.

Moreover, the consensus seems to be that it is indeed more energy efficient to have a number of large OoO cores and at least as many, if not more, in-order cores.
Originally Posted by P View Post
LLC=Last Level Cache. I wrote it that way because I’m not sure Apple has an L3 right now, it varies between models.
Apple had a quasi-L4 cache in earlier designs if memory serves, which was also shared with the GPU. Nowadays, presumably because Anand continues doing his in-depth SoC reverse-engineering analyses in-house for Apple, we know way less about them.
Originally Posted by P View Post
(Note: Apple’s OoOE cores as implemented may not be, because Apple may just be running them at very high clocks while they’re even active and power usage goes as the clockspeed cubed at these speeds, but in general, an OoOE core will use less power to get the task done if clocks and voltages are equal)
AFAIK the efficiency of cores isn't as simple and clear-cut as you present it; it really depends on the load. In-order cores are way more efficient at lower loads, and the crossover between the two sits at the top end of the little core's power envelope and the low end of the big core's.
Originally Posted by P View Post
I also don’t like that Apple didn’t launch a phone for me, and the SE being killed clearly means that there won’t be another phone in that case, but I’m trying to see the bright side here. Apple made three new phones here, as many as they have ever made. They probably couldn’t do four. What if these three were the ones where they could make real updates over the current model, and XR mini would be functionally very similar to an iPhone 8. I can understand them skipping it then.
I think this sounds like a very plausible argument for why Apple went with the sizes and models they did. On the other hand, I don't think the engineering effort of making size variations of one model is the same as that of making an entirely different model. They make more size variations of their laptops, and given how big a company Apple is, I think they should try to do that with phones, too.
Originally Posted by P View Post
Other than design, the advantages of these phones were all about AR. I don’t care about AR, at least not yet. If I don’t, I might as well stick with my SE.
I noticed on my wife's phone that a lot of apps no longer play nice with the SE form factor; the UIs seem positively cramped, optimized for decidedly larger screens. But of course, if there is a long-term perspective for more than two sizes (apparently the Xr has the same effective resolution as the Xs Max), I think there will be developer support.
I don't suffer from insanity, I enjoy every minute of it.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 17, 2018, 02:46 PM
 
Originally Posted by OreoCookie View Post
@subego
I can say that there is a noticeable speed difference, but my 7 has been fast enough for me. When I wait it is usually for networking, not the CPU.
I was mistaken to focus on the hardware. That’s part of the issue, but the bigger one is Apple loses interest in optimizing iOS for older hardware long before they stop forcing updates.

I hear iOS 12 is surprisingly good in this regard, but my guess is the 7 will get gimped by 13 in a way the XR won’t.

Which, for someone buying now, should be a concern. Not as much for someone who already has a 7. Waiting a year isn’t going to be particularly painful, especially if iOS 12 is as nice as claimed.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Sep 17, 2018, 08:35 PM
 
Installing iOS 12 on my 10.5" iPad Pro now, and if all goes well, I'll be installing it on my iPhone 7 also.
     
reader50
Administrator
Join Date: Jun 2000
Location: California
Status: Offline
Sep 17, 2018, 09:27 PM
 
Ars has benchmarked iOS 12 vs 11 and (sometimes) 10 on the same phones. 12 is faster than 11 in all cases, on the same hardware. In a majority of tests, it's even faster than iOS 10.

Apple put a lot of effort into optimization for older phones.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 17, 2018, 09:47 PM
 
Or, you could say Apple didn’t put any effort into optimizing 11.

I just put 11.4.1 on my X so I wouldn't need to get 12 until it's cooked, but so far this is hot garbage. 11.2.whatever was better. I may have to go early adopter.
     
electrode17
Fresh-Faced Recruit
Join Date: Aug 2018
Status: Offline
Sep 18, 2018, 04:05 AM
 
Every OS update makes me anxious. I hope this update is worth it.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Sep 18, 2018, 08:44 AM
 
Originally Posted by OreoCookie View Post
Having more little cores than big cores, and the 2 + 4 core configuration in particular, predates the focus on AR. I remember that the designer of the Cortex A53, when asked in an Anandtech interview (and in the comments, by yours truly) what his personal SoC for a mid-range phone would look like, also answered 2 big + 4 LITTLE cores. So this core configuration wasn't chosen just to suit AR workloads; it suits general computing workloads, too. I'm not sure where the optimum lies, or whether it has shifted as programmers learn to use the available resources, but I don't think this has anything to do with AR.
The in-order versus OoOE debate goes back to 1995 at least. An OoOE core needs to keep its reorder buffer and scheduler powered at all times, and it seems like there should be energy to be saved there. Furthermore, it has several identical execution units when most of the time only one is in use - of course we can be more efficient! Surely we can fix this somewhere else and schedule the instructions correctly? We can fix it in the compiler! We can clock higher at a lower power requirement, so it doesn't matter! We can use Hyperthreading to hide the latencies!

Guess what? Every single attempt to make an in-order core compete on efficiency has failed. Intel tried with Itanic and lost. They tried again with the first-gen Atom and lost. ARM tried for a very long time to make in-order work, and it was only with their first OoOE core (the Cortex A9) that they really managed to break out and make something that performed well.

The reason is that you need to fetch data before it can be used, and we have so far not managed to beat the speed of light for moving it around (transfers in the chip run at something like 70% of that speed). I'm not talking about main memory access - nothing can hide that - I'm talking about a hit in the L2. If a memory access goes to an address that first needs to be calculated, that calculation has to be done early enough that the load can be issued 10-15 cycles before the data is needed. There is just no way a dual-issue in-order core can do that. It will sit there waiting until the data is available.

So to make the in-order cores more efficient (which is how Apple brands them), they need to run code carefully designed to avoid that trap: every load is issued long enough before use that the data has time to show up, and there are no complicated address calculations feeding loads. Such code can be more efficient - but general-purpose code won't be written this way. The in-order cores will never be more efficient on general-purpose code.
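
To make that concrete, here's a toy C sketch (mine, nothing Apple ships) of the trap and the workaround:

/* The trap: every iteration's load address depends on another load,
   so an in-order core just stalls while data crawls in from the L2. */
long sum_indirect(const long *idx, const long *data, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += data[idx[i]];    /* load feeds load: nothing to do but wait */
    return sum;
}

/* Carefully scheduled version: the next index is fetched one iteration
   ahead, so it is already in flight while the current element is summed.
   An OoOE core does this rearrangement in hardware, on any code. */
long sum_scheduled(const long *idx, const long *data, int n)
{
    if (n <= 0)
        return 0;
    long sum = 0;
    long next = idx[0];
    for (int i = 0; i < n; i++) {
        long cur = next;
        if (i + 1 < n)
            next = idx[i + 1];  /* issue the next load early to hide latency */
        sum += data[cur];
    }
    return sum;
}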

Moreover, the consensus seems to be that it is indeed more energy efficient to have a number of large OoO cores and at least as many, if not more, in-order cores.
Sure, but that is because they don't think we need more than two cores for general-purpose code. That's fine. The in-order cores are working like a quasi-GPU in this context.

Apple had a quasi-L4 cache in earlier designs if memory serves, which was also shared with the GPU. Nowadays, presumably because Anand continues doing his in-depth SoC reverse-engineering analyses in-house for Apple, we know way less about them.
The A9, which is the last one I saw a good analysis of, had a massive 1.5MB L2 for each core and a joint 4MB L3 victim cache (i.e., it was not inclusive; it stored what had just been evicted from an L2). The A9X ditched the victim cache entirely, which I speculated was due to the display doing self-refresh in the iPad Pro, but of course I don't know.

AFAIK the efficiency of cores isn't as simple and clear-cut as you present it; it really depends on the load. In-order cores are way more efficient at lower loads, and the crossover between the two sits at the top end of the little core's power envelope and the low end of the big core's.
The evidence does not bear this out. This idea of in-order versus OoOE has been tested so many times, and the OoOE model wins every time. It is only in GPUs where wide, weak designs win. I can see the in-order cores doing something similar to a GPU compute workload, but not general purpose computing.

I think this sounds like a very plausible argument for why Apple went with the sizes and models they did. On the other hand, I don't think the engineering effort of making size variations of one model is the same as that of making an entirely different model. They make more size variations of their laptops, and given how big a company Apple is, I think they should try to do that with phones, too.
I think that manufacturing and testing are the big things that are different for making another size phone.

I noticed on my wife's phone that a lot of apps no longer play nice with the SE form factor; the UIs seem positively cramped, optimized for decidedly larger screens. But of course, if there is a long-term perspective for more than two sizes (apparently the Xr has the same effective resolution as the Xs Max), I think there will be developer support.
This is a problem. I hear sometimes that developers are annoyed at the low SE resolution when designing their apps, and if application support for the SE begins to dry up, that is a real problem for the platform. Apple needs to communicate clearly what the future for that size is.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 18, 2018, 10:20 AM
 
Originally Posted by P View Post
This is a problem. I hear sometimes that developers are annoyed at the low SE resolution when designing their apps, and if application support for the SE begins to dry up, that is a real problem for the platform. Apple needs to communicate clearly what the future for that size is.
Seems to me that Apple has spoken clearly. Small form factors are dead.

-t
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Sep 18, 2018, 10:51 AM
 
Originally Posted by turtle777 View Post
Seems to me that Apple has spoken clearly. Small form factors are dead.

-t
I was going to say that, too.

A glance at the models available on the iPhone product page speaks very clearly.

Unfortunately.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 18, 2018, 02:57 PM
 
X vs. XS pictures of Gruberpuss showing “Smart HDR”.



     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Sep 18, 2018, 03:23 PM
 
Wow.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 18, 2018, 03:48 PM
 
Yeah. That’s pretty incredible.

IIUC, that’s all happening in real-time. So that’s what you see on the display, too. Sort of an offshoot of the AR tech.

https://daringfireball.net/2018/09/the_iphones_xs
     
ort888
Addicted to MacNN
Join Date: Feb 2001
Location: Your Anus
Status: Offline
Sep 18, 2018, 03:59 PM
 
Do reviewers have watches? Anyone know the embargo date for the watch?

My sig is 1 pixel too big.
     
OreoCookie
Moderator
Join Date: May 2001
Location: Hilbert space
Status: Offline
Sep 18, 2018, 08:58 PM
 
Originally Posted by P View Post
The in-order versus OoOE debate goes back to 1995 at least. An OoOE core needs to keep its reorder buffer and scheduler powered at all times, and it seems like there should be energy to be saved there. [...] Guess what? Every single attempt to make an in-order core compete on efficiency has failed. Intel tried with Itanic and lost. They tried again with the first-gen Atom and lost.
I think we are talking past each other. You seem to be arguing a point I am not contesting, namely that if you want to maximize absolute performance, you need OoO. That much is obvious. But when performance per watt counts, in-order cores are more energy efficient (see the evidence below).
Originally Posted by P View Post
The evidence does not bear this out. This idea of in-order versus OoOE has been tested so many times, and the OoOE model wins every time. It is only in GPUs where wide, weak designs win. I can see the in-order cores doing something similar to a GPU compute workload, but not general purpose computing.
That is really weird; there are plenty of hard numbers backing the claim that in-order cores of the same generation, produced on the same process, are more energy efficient. Have a look at the BaseMark OS II — XML Parsing Energy Efficiency benchmark comparing the in-order Cortex A53 to the out-of-order Cortex A57: the in-order cores come out ahead. The same goes for tests that combine GPU and CPU workloads. If you want a broader perspective across different generations of ARM cores, have a look here:

Of course, if you run both CPUs at full tilt, the difference in energy efficiency will be appreciable, but not earth shattering. Once you consider these cores at partial loads, it's not even a contest, though:


The in-order cores are not just a little bit more energy efficient, but a lot more energy efficient. (Both images are taken from here.)

There is also a graph that I can't find right now (I soon have to catch my next flight) that shows exactly the crossover point in terms of performance, i.e. the little cores are optimized for lower-power, more efficient operation (e.g. by choosing the appropriate transistor types) whereas the big cores are optimized for speed. It helps that the little cores cost comparatively little die area.

Of course, there is the question whether a big.LITTLE implementation is more energy efficient than just using big or little cores alone. And here, it really depends. Samsung's first two attempts resulted in worse performance. But that has since been fixed, in part because newer big.LITTLE SoCs can run all cores concurrently, and the OS's scheduler knows that there are different types of cores and distributes the workload accordingly.

Lastly, I do not know what the optimal number of big vs. little cores is, but Apple in particular has historically shown that it gives zero sh*ts about core counts (which were and are mostly a marketing stunt in the Android world). I am certain that if 2 big + 1 little or 3 big + 2 little offered a better balance of total performance and energy efficiency, Apple would build just such a CPU.
I don't suffer from insanity, I enjoy every minute of it.
     
mindwaves
Registered User
Join Date: Sep 2000
Location: Irvine, CA
Status: Offline
Sep 19, 2018, 03:26 AM
 
I got an iPhone XS 64GB in silver in the mail. I bought it for portrait mode, the overall bigger screen, and Memoji. Coming from an iPhone 7 64GB, which I will be selling. I also feel like switching between apps on the XS will be a lot better than double-pressing my home button.

Will miss Touch ID. Will not be buying a wireless charger.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 19, 2018, 06:05 AM
 
Originally Posted by mindwaves View Post
I got an iPhone XS 64GB silver in the mail. Bought for the portrait mode and overall bigger screen and Memoji. Coming from an iPhone 7 64GB, which I will be selling. I also feel like switching between apps on the XS will be a lot better than double pressing my home button.

Will miss Touch ID. Will not be buying a wireless charger.
Of course, I really love FaceID and wireless charging, but won’t try to sell you on them.

I think you’ll like the display. I won’t say it’s mind-blowing like others have, but it’s pleasant in a way most displays aren’t.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 19, 2018, 07:40 AM
 
One more reason against iPhone X:

Control Center - swipe from the upper right?

WTF? It's essentially impossible to do with one hand. So now I'd need TWO hands just to reach Control Center?
Apple, this is f&cking stupid.

-t
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 19, 2018, 08:33 AM
 
It’s awkward, but not really impossible for a right-hander.

More irritating in daily use is that they decided to sacrifice the Do Not Disturb indicator to the notch. I forget it’s on all the time.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 19, 2018, 09:16 AM
 
Originally Posted by subego View Post
It’s awkward, but not really impossible for a right-hander.
Sure, it also depends on the size of your hands.

For me, it's already almost impossible on an iPhone 7. With the bigger screen on the X, I couldn't do it.

There is no reason for this.
They could have made it a swipe from the lower left or lower right. Swipe from the top is so stupid on a big phone.

-t
     
Laminar
Posting Junkie
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Status: Offline
Sep 19, 2018, 09:22 AM
 
Originally Posted by mindwaves View Post
Will miss Touch ID. Will not be buying a wireless charger.
I got one for $20 off of Amazon. Setting it on my nightstand at night to charge is really nice.
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Sep 19, 2018, 09:23 AM
 
Originally Posted by OreoCookie View Post
I think we are talking past each other. You seem to be arguing a point I am not contesting, namely that if you want to maximize absolute performance, you need OoO. That much is obvious. But when performance per watt counts, in-order cores are more energy efficient (see the evidence below).
No, I'm talking about performance/watt. Absolute performance isn't a super-useful metric anymore.

That is really weird; there are plenty of hard numbers backing the claim that in-order cores of the same generation, produced on the same process, are more energy efficient. Have a look at the BaseMark OS II — XML Parsing Energy Efficiency benchmark comparing the in-order Cortex A53 to the out-of-order Cortex A57: the in-order cores come out ahead.
That is only because of clockspeed, though. The A57 is running at a higher clock, which in itself makes it less efficient. We don't know the actual clock it ran at during the test, but its ceiling is 1.9 GHz. According to the graph in the same article, it uses 5.48 W for three cores at that clock (the XML test is described in the text as using three cores). Dropping down to 1.3 GHz (the A53's top clock) drops the power requirement to 2.19 W - a neat 60% reduction. Let's do a silly back-of-the-envelope calculation: dropping the clock from 1.9 to 1.3 GHz means the test takes longer to run, but since the wattage is so much lower, the total energy used at 1.3 GHz comes to 58% of the energy used at 1.9 GHz.

Let's plug these figures into the table. The A57 got 155.29 MB/s at 1.9 GHz, so it would get 106.25 MB/s at 1.3 GHz. Energy consumption goes down to 58% of what it was before, so it is now 16.1 mWh, for a "performance factor" of 6.61 - more than the A53 got (6.39). Now, I have a million complaints about both the methodology used in the article and the one I used in my back-of-the-envelope math here, but it gives us a decent estimate. What is curious is that the 106.25 MB/s is very, very close to what the in-order core got (109.36 MB/s), so without the clockspeed boost they're essentially even. This is probably a task that is bottlenecking on something else, at least part of the time. The A57 wins the efficiency game by using less overall power. I expect that it can power down execution units more quickly and save some power there - or maybe the methodology is just not exact enough, and they're effectively even on this bench.
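
The same math in throwaway C, in case anyone wants to poke at it (all figures are the ones quoted above from the article; the 16.1 mWh is the scaled energy from the previous paragraph):

#include <stdio.h>

int main(void)
{
    double f_hi = 1.9, f_lo = 1.3;    /* GHz: A57 ceiling vs A53 ceiling  */
    double p_hi = 5.48, p_lo = 2.19;  /* W for three cores, per the graph */
    double perf_hi = 155.29;          /* MB/s: A57 XML result at 1.9 GHz  */

    double perf_lo = perf_hi * (f_lo / f_hi);       /* ~106.25 MB/s       */
    double energy = (p_lo / p_hi) * (f_hi / f_lo);  /* ~0.58x the energy  */
    double factor = perf_lo / 16.1;                 /* ~6.6 vs A53's 6.39 */

    printf("A57 at 1.3 GHz: %.2f MB/s, %.0f%% of the energy, factor %.2f\n",
           perf_lo, energy * 100, factor);
    return 0;
}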

I went into that article expecting to find a microbenchmark that the in-order cores would win, especially since the A57 isn't a very impressive OoOE core while the A53 is pretty decent for an in-order core. There are lots of such microbenchmarks, but the in-order cores lose in more general benches. It is quite amusing that the A53 actually lost this test as well, once we correct for the clockspeed difference.

(It is also interesting that the Cortex A7 has a better "performance per energy" than the A53. Should we clearly use the older, weaker design, then, to be efficient on that measure? No - I suspect it is the clockspeed thing again.)

Isn't that just memory bottlenecking we see there?

If you want a broader perspective across different generations of ARM cores, have a look here:
But that graph doesn't make sense! It assumes that performance is linear with clockspeed, which is fine, but it also assumes that power consumption is linear with clockspeed, which is insane! Power goes as clockspeed^3 right now: dynamic power scales roughly with frequency times voltage squared, and at these speeds voltage has to rise roughly linearly with frequency.

Of course, if you run both CPUs at full tilt, the difference in energy efficiency will be appreciable, but not earth shattering. Once you consider these cores at partial loads, it's not even a contest, though:
It isn't a contest because they're running on different tracks. More efficient would mean that a dotted line (in-order core) sits higher than a solid line (OoOE cores) at the same X coordinate, and that doesn't happen very often, because they rarely exist at the same performance level. The exceptions are the Snapdragon 810, which has terrible performance (but it is also made on the ancient 20nm process and competing with more modern chips), and the Kirin 950 (A53), which nudges out above the older OoOE cores at the 500 point of the X-axis - most likely because of an L2 cache twice the size of the Exynos 7420's. (Caches growing like this is why SPEC2000 was abandoned for PCs in the first place - tasks stayed entirely resident in cache, which made the test worthless.)

My issue is that the curves on that graph are all strange. The in-order curves are what I would expect, but the OoOE curves are all linear. This indicates that they're operating very far from their optimum, because those curves should be the same shape. Performance/watt being constant implies that the voltage is constant - performance/watt being linear implies that the voltage is being reduced by the square root of the clockspeed, which is also non-optimal.

The in-order cores are not just a little bit more energy efficient, but a lot more energy efficient. (Both images are taken from here.)
Not because of their design, but because of their clockspeed and voltage target. If you need performance X and you have an in-order and an OoOE core that can both deliver it, the most efficient way is to use the OoOE core and clock it down to that performance. That will use less power than the in-order core, unless you have to clock it so low that you're past its peak efficiency.

Again - this is all for general-purpose computing. There are tasks where the in-order core will be efficient. My issue is with general-purpose computing tasks, where it will be less efficient. Remember that this all started with me saying that I don't care about the A10 and newer because I don't care about AR. I think those cores do some of the photo-manipulation tricks as well - that is another task that would work well on an in-order core - but the general point stands: they won't improve my experience browsing macnn.com one bit.

There is also a graph that I can't find right now (I soon have to catch my next flight) that shows exactly the crossover point in terms of performance, i.e. the little cores are optimized for lower-power, more efficient operation (e.g. by choosing the appropriate transistor types) whereas the big cores are optimized for speed. It helps that the little cores cost comparatively little die area.
But that's orthogonal. If that A53 is more efficient on the special transistors, an equivalently clocked A73 on the same special transistors will be even more efficient.

Of course, there is the question whether a big.LITTLE implementation is more energy efficient than just using big or little cores alone. And here, it really depends. Samsung's first two attempts resulted in worse performance. But that has since been fixed, in part because newer big.LITTLE SoCs can run all cores concurrently, and the OS's scheduler knows that there are different types of cores and distributes the workload accordingly.
But how does the scheduler know that? Are the threads marked, somehow? If they are, that just further reinforces my point that the in-order cores aren't cut out for general purpose computing.
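
For what it's worth, on Apple's platforms threads really are marked: you tag work with a QoS class, and the scheduler can use that as a placement hint. A minimal sketch in plain C against libdispatch (that BACKGROUND QoS actually lands on the little cores is my assumption about the scheduler, not documented behavior):

#include <dispatch/dispatch.h>
#include <stdio.h>

static void housekeeping(void *context)
{
    /* Background-QoS work: the kernel may steer this toward the
       efficiency cores and lower clocks - energy over latency. */
    puts("running low-priority maintenance");
}

int main(void)
{
    dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
    dispatch_sync_f(bg, NULL, housekeeping);  /* blocks until the work ran */
    return 0;
}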
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 19, 2018, 04:12 PM
 
Originally Posted by subego View Post
More irritating in daily use is that they decided to sacrifice the Do Not Disturb indicator to the notch. I forget it’s on all the time.
Fixed in 12!
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 19, 2018, 04:37 PM
 
Originally Posted by turtle777 View Post
Sure, it also depends on the size of your hands.

For me, it's already almost impossible on an iPhone 7. With the bigger screen on the X, I couldn't do it.
You’re holding it wrong.

I’m actually serious. Your grip adjusts after a few days of using it.

Now that I’m trained, I can get my thumb a good two centimeters past the swipe point.


Edit: I have smallish hands.
( Last edited by subego; Sep 19, 2018 at 05:04 PM. )
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Sep 19, 2018, 11:39 PM
 
I ordered the Max. Highly doubt I’ll be able to one-hand CC or notifications.
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 20, 2018, 03:48 AM
 
Due on Tuesday!
     
P
Moderator
Join Date: Apr 2000
Location: Gothenburg, Sweden
Status: Offline
Sep 20, 2018, 08:26 AM
 
Originally Posted by subego View Post
You’re holding it wrong.

I’m actually serious. Your grip adjusts after a few days of using it.

Now that I’m trained, I can get my thumb a good two centimeters past the swipe point.


Edit: I have smallish hands.
The difference is that with a 5-sized phone, you hold it fixed with all fingers from the last joint and move your thumb. If you move up in size, the thumb doesn't reach anymore, so you change the grip a bit. With a 6-size phone, I rest the phone on my palm with my little finger below to stop it from sliding down, and the very tips of my fingers on the far edge. The trick is that you can then move the phone relative to your hand with your fingers so the thumb can reach. The downside is that your grip is much less secure, and the rounded edges of the newer phones make this even worse.
The new Mac Pro has up to 30 MB of cache inside the processor itself. That's more than the HD in my first Mac. Somehow I'm still running out of space.
     
Ham Sandwich
Guest
Status:
Sep 20, 2018, 08:27 AM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
Spheric Harlot
Clinically Insane
Join Date: Nov 1999
Location: 888500128, C3, 2nd soft.
Status: Offline
Sep 20, 2018, 01:44 PM
 
The camera bump is supposedly different, so older cases won't fit, I hear.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 20, 2018, 01:56 PM
 
Originally Posted by Spheric Harlot View Post
The camera bump is supposedly different, so older cases won't fit, I hear.
I read it will fit, but not perfectly. See pics in the link.

https://9to5mac.com/2018/09/19/iphon...might-not-fit/

YMMV, since every case has different tolerances.

-t
     
Ham Sandwich
Guest
Status:
Sep 20, 2018, 02:39 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 20, 2018, 02:45 PM
 
That’s what it sounds like.

The Neural Engine probably can’t do the required processing at 60 fps.
     
Ham Sandwich
Guest
Status:
Sep 20, 2018, 02:50 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
Ham Sandwich
Guest
Status:
Sep 20, 2018, 03:09 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 20, 2018, 03:12 PM
 
Originally Posted by And.reg View Post
5 trillion ops isn’t enough?

Possible dealbreaker for me.

Unclear what Apple means by “extended” as in, MORE HDR than the HDR capabilities at 60 FPS?
Maybe it’s an I/O problem then.

I assume they mean “extended” as in, more HDR than footage with no HDR.
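
To put the quoted figure in perspective, here's the rough per-frame budget (my arithmetic, taking the advertised 5 trillion ops/s at face value and assuming every frame gets the full treatment):

#include <stdio.h>

int main(void)
{
    double ops_per_s = 5e12;  /* Apple's advertised Neural Engine throughput */
    printf("budget per frame at 30 fps: %.0f Gops\n", ops_per_s / 30 / 1e9);
    printf("budget per frame at 60 fps: %.0f Gops\n", ops_per_s / 60 / 1e9);
    return 0;  /* ~167 vs ~83 Gops: doubling the frame rate halves the budget */
}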
     
Ham Sandwich
Guest
Status:
Sep 20, 2018, 04:47 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 20, 2018, 07:36 PM
 
Ok, I don’t have any idea if this makes sense, but here we go.

Isn’t HDR taking two or more “pictures” at different exposures, and then overlaying them into ONE picture that shows both highlights and shadows more accurately?

In shooting video, wouldn’t that necessitate capturing MORE frames than you ultimately display, and “calculating” better frames out of them?
In other words, if the physical limit is capturing 60 frames per second, there is no way to do HDR at 60 fps, because the camera can’t record 120 frames first and combine the exposures.
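
Something like this, per pixel (a toy C sketch of the idea, nothing like Apple's actual pipeline; the one-stop spacing is my assumption):

/* Merge a short and a long exposure of the same scene, taken one stop
   apart: prefer the long exposure, fall back to the short one (scaled
   up to match) wherever the long exposure has blown out. */
unsigned char merge_px(unsigned char short_exp, unsigned char long_exp)
{
    if (long_exp < 250)              /* long exposure holds shadow detail */
        return long_exp;
    int scaled = short_exp * 2;      /* one stop = 2x: recover highlights */
    return scaled > 255 ? 255 : (unsigned char)scaled;
}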

-t
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Sep 21, 2018, 02:56 AM
 
Originally Posted by turtle777 View Post
Ok, I don’t have any idea if this makes sense, but here we go.

Isn’t HDR taking two or more “pictures” at different exposures, and then overlaying them into ONE picture that shows both highlights and shadows more accurately?

In shooting video, wouldn’t that necessitate capturing MORE frames than you ultimately display, and “calculating” better frames out of them?
In other words, if the physical limit is capturing 60 frames per second, there is no way to do HDR at 60 fps, because the camera can’t record 120 frames first and combine the exposures.

-t
If it’s using the “double print” method, then you’re absolutely correct. It would need to shoot 120 frames to get 60.

This could be a bottleneck anywhere along the chain. That’s a hair shy of a billion pixels per second it would need to sling around.
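
(The figure checks out - 4K UHD at a doubled 120 fps capture rate:)

#include <stdio.h>

int main(void)
{
    long long w = 3840, h = 2160;  /* 4K UHD frame */
    long long fps = 120;           /* capture rate for 60 fps "double print" HDR */
    printf("%lld pixels per second\n", w * h * fps);  /* 995,328,000 */
    return 0;
}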
     
subego
Clinically Insane
Join Date: Jun 2001
Location: Chicago, Bang! Bang!
Status: Offline
Reply With Quote
Sep 21, 2018, 03:46 AM
 
I misunderstood this post the first time around, so let me take another swing.

Originally Posted by And.reg View Post
So, like, if I take 4K at 30 FPS on the XS, that’s guaranteed smart HDR video,

but if I go to 4K at 60 FPS, that means “not extended,” so, what does that mean? Does it mean, “no longer any HDR at all, and instead the video looks no different than on the iPhone X”? Or, does it mean, “Not quite as much correction for light and shadow as at 30 FPS, but still HDR”?
In the Apple universe, the term HDR (smart or otherwise) only applies to still images. As far as Apple is concerned, there’s no such thing as HDR video.

Apple does have something called “extended dynamic range for video”, which is only on the XS, and only at 30 fps or slower.

Without anything else to go on, this is HDR video, but Apple doesn’t want to call it that for whatever reason.
     
turtle777
Clinically Insane
Join Date: Jun 2001
Location: planning a comeback !
Status: Offline
Sep 21, 2018, 07:31 AM
 
Originally Posted by subego View Post
Apple does have something called “extended dynamic range for video”, which is only on the XS, and only at 30 fps or slower.

Without anything else to go on, this is HDR video, but Apple doesn’t want to call it that for whatever reason.
Apple's photo HDR uses multiple pictures to get to the final HDR pic.
Can’t do the same with video, unless 10 FPS is acceptable.

They probably don’t want to create the impression that the two HDRs are the same, because they aren't.

-t
     
Ham Sandwich
Guest
Status:
Sep 21, 2018, 02:18 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Sep 21, 2018, 02:35 PM
 
Originally Posted by subego View Post
I misunderstood this post the first time around, so let me take another swing.



In the Apple universe, the term HDR (smart or otherwise) only applies to still images. As far as Apple is concerned, there’s no such thing as HDR video.

Apple does have something called “extended dynamic range for video”, which is only on the XS, and only at 30 fps or slower.

Without anything else to go on, this is HDR video, but Apple doesn’t want to call it that for whatever reason.
Are you sure it isn't shooting actual HDR video (à la Dolby Vision) rather than just using some image magic to suss out details? The entire concept of HDR photos was to show more dynamic range than monitor tech allowed - now that we have wider gamuts and thousands of nits of brightness, that could start to change.
     
ort888
Addicted to MacNN
Join Date: Feb 2001
Location: Your Anus
Status: Offline
Sep 21, 2018, 03:03 PM
 
Played with all the new stuff. The new jumbo phone is pretty freaking bad ass. Feels smaller than the current plus in my hand.

The new watch is sweet as well. The big screen looks really nice.

My sig is 1 pixel too big.
     
OAW
Addicted to MacNN
Join Date: May 2001
Status: Offline
Sep 21, 2018, 06:15 PM
 
Originally Posted by ort888 View Post
Played with all the new stuff. The new jumbo phone is pretty freaking bad ass. Feels smaller than the current plus in my hand.

The new watch is sweet as well. The big screen looks really nice.
It actually is smaller, which is amazing!

iPhone 8+: 6.24 x 3.07 x 0.30 inches
iPhone XS Max: 6.20 x 3.05 x 0.30 inches

OAW
     
Ham Sandwich
Guest
Status:
Sep 21, 2018, 08:50 PM
 
[...deleted...]
( Last edited by Ham Sandwich; Apr 23, 2020 at 10:00 AM. )
     
Brien
Professional Poster
Join Date: Jun 2002
Location: Southern California
Status: Offline
Sep 22, 2018, 12:14 AM
 
Originally Posted by ort888 View Post
Played with all the new stuff. The new jumbo phone is pretty freaking bad ass. Feels smaller than the current plus in my hand.

The new watch is sweet as well. The big screen looks really nice.
Same here. Only a mm narrower, but somehow it feels smaller than a Plus. Probably helps that my new case isn't as bulky.
     
justinn007
Fresh-Faced Recruit
Join Date: Sep 2018
Status: Offline
Sep 22, 2018, 03:51 AM
 
I might keep my 2017 X for as long as I can stand it just because I don't like the prices on the XS or XR.
That is, unless I require dual mic input for my 4K videos and better photo quality for regular outdoor photos.
( Last edited by justinn007; Sep 30, 2018 at 04:00 AM. )
     
 