> That will help save enormous amounts of power: up to 48 percent on a single charge,
Why does refresh rate have such a large impact on power consumption? I understand that the control electronics are 60x more active at 60 Hz than 1 Hz, but shouldn't the light emission itself be the dominant source of power consumption by far?
I used to be a display architect about 15 years back (for Qualcomm mirasol, et al), so my knowledge of the specifics / numbers is outdated. Sharing what I know.
High pixel density displays have disproportionately higher display refresh power (not just proportional to the total number of pixels, since the column line capacitances need to be driven again for writing each row of pixels). This was an important concern as high pixel densities were coming along.
Displays need fast refreshing not just because pixels would lose charge, but because a refresh can be visible or result in flicker. Some pixel technologies require flipping polarity on each refresh, but the drive curves are not exactly symmetric between polarities, and further, this mismatch can vary across the panel. A fast enough refresh hides the mismatch.
Since you are knowledgeable about this, do you have any idea what happened to Mirasol technology? I was fascinated by those colour e-paper-like displays, and disappointed when plans to manufacture it were shelved. Then I learned Apple purchased it, but it looks more like a patent-padding purchase than one for tech development, as nothing has come out of it from Apple either. Is it in some way still being developed, or are parts of its research being used in display development?
Being a key technology architect for it (not the core inventor), I know all about it, and then some more!
I cannot however talk publicly about it. :-(
It has been a disappointment for me as well. I had worked on it for nearly eight years. The idea was so interesting--using thin-film interference to create images is akin to shaping Newton's rings into arbitrary images, something which even Newton would not have imagined! The demos and comparisons we showed to various industry leaders, and sometimes publicly, were often instantly compelling. The people/engineers on the team were mostly the best I have ever worked with, and I still maintain a great connection with them. But unfortunately, there were problems (not saying how much tech, how much people) that were recognized by some but never got addressed in time. And a tech like it does not exist to date.
I do not think anything on it is being developed further.
The earliest of the patents would have expired by now.
Liquavista, Pixtronix, etc., were alternative display technologies that also ultimately didn't make the desired impact, AFAIK.
Meanwhile, LCDs developed high pixel densities (which put pressure on mirasol tech too), and Plasma got sidelined. E Ink displays have since made good progress, though, in my opinion, they are still far from the colors and speeds that mirasol had. And of course, OLED, quantum dots, ...
My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.
I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals, so it isn't always inherently dimmer than the environment. I can only imagine this being either:
1. some wild materials science stuff that manages interference
2. some wild materials science stuff that controls multi-photon fluorescence
3. some wild materials science stuff to fuse photoelectric and electroemissive functions in the same panel. i.e. not really passive but extremely low loss active system to double-convert the ambient light that can follow the power curve of available light
>> My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.
What about cost? :-) It is an important factor too, outside of the fantasy world, and can kill new display technologies. The latter often suffer from yield issues (dead pixels, etc.) during early phases of R&D, which can make initial costs even higher compared to already-matured technologies.
>> I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals
Reflecting a filtered version of the ambient light, if done efficiently, makes the display as bright as other natural/common objects around it. So it should be good enough for most purposes, even in somewhat darker ambient light with eyes adjusted.
It would not however be attention-grabbing by being brighter than those surrounding objects. So many users, often used to seeing brighter emissive displays, still do not pick those as a preference.
>> I can only imagine this being either:
>> ...
Another way to make it look brighter is to reflect more light towards the users/eyes while capturing it from broader directions. This would compromise on viewing angle (unless more fantasy tech is brought in), but I think this in itself takes the display to wow levels.
Well, the reflectivity of color MIP LCD is not very satisfactory. It is barely adequate, even for people like me who are fans. This is both because of the narrow-band RGB filtering and the inherent losses of the polarization-based switching method. Even the "white" state is discarding most polarizations of the ambient light, and then the darker colors are even blocking that.
My fantasy is having the reflectivity be at least as good as good white paper, and with deep contrast too.
It also needs to be brighter in practice than normal objects because, no matter what, it will have to overcome some glare from whatever protective glass and touch sensing layers there are over the actual display.
> the column lines capacitances need to be driven again for writing each row of pixels
Not my field so please forgive a possibly obvious question: That seems true regardless of the pixel count (?), so for that process why wouldn't power also be proportional to the pixel count?
I notice I'm saying 'pixel count' and you are saying 'pixel density'; does it have something to do with their proximity to each other?
Total column line capacitance is impacted by the number of pixels hanging onto it as each transistor (going to the pixel capacitance) adds some parasitic capacitance of its own. Hope that answers your question. You are right in the sense that a part of the total column capacitance would depend on just the length and width of it, irrespective of the number of pixels hanging onto it.
I had back then developed what was perhaps the most sophisticated system-level model for display power, including refresh, illumination, etc., and it included all those terms for capacitance, a simplified transistor model, pixel model, etc.
I did not carefully distinguish pixel density vs. pixel count while writing my previous comments here, just to keep it simple. You can perhaps imagine that increasing display size without changing pixel count can lead to higher active pixel area percentage, which in turn would lead to better light generation/transmission/reflection efficiency. So it ultimately comes down to mathematical modeling, and the scaling laws / derivatives depend on the actual numbers chosen.
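To make the capacitance argument above concrete, here is a back-of-envelope toy model. Every number in it is an illustrative assumption, not a value from any real panel or from the model mentioned above; it just shows how column-drive power scales with both refresh rate and row count.

```python
# Back-of-envelope model of display refresh (column-drive) power,
# P = C * V^2 * toggles/sec. All numbers are illustrative assumptions.

def refresh_power(rows, cols, c_line_pf, c_parasitic_per_pixel_ff,
                  v_swing, refresh_hz):
    """Rough column-drive power, in watts, for a full-panel refresh."""
    # Each column line's capacitance: the trace itself plus a small
    # parasitic contribution from every pixel transistor hanging on it.
    c_column = c_line_pf * 1e-12 + rows * c_parasitic_per_pixel_ff * 1e-15
    # Every row write can re-drive every column line, so (worst case,
    # full voltage swing) each column toggles rows * refresh_hz times/sec.
    return cols * c_column * v_swing ** 2 * rows * refresh_hz

# Illustrative panel: 1440 rows, 2560 RGB columns.
p60 = refresh_power(1440, 2560 * 3, 20, 5, 5.0, 60)
p1 = refresh_power(1440, 2560 * 3, 20, 5, 5.0, 1)
print(f"60 Hz: {p60:.3f} W, 1 Hz: {p1:.4f} W")
# Doubling the rows more than doubles the power: more row writes AND more
# parasitic capacitance per column line -- the density effect above.
```

Note the superlinear scaling: doubling the row count doubles the toggle rate and also grows each column's capacitance, which is the "disproportionately higher refresh power" point.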
With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapps with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating-point numbers behave, I imagine there's nearly always some animation somewhere, almost but not quite eased to a stop, that's making subtle color changes across some chunk of the screen - not enough to notice, but enough to change some pixel values several times per second.
I wonder what existing mitigations are at play to prevent redisplay churn? It probably wouldn't matter on Windows today, but will matter with those low-refresh-rate screens.
Android has a debug tool that flashes colors when any composed layer changes. It's probably an easy optimization for them to not re-render when nothing changes.
Normally, your posts are very coherent, but this one flies off the rails. (Half joking: Did someone hack your account!?) I don't understand your rant here:
> With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave
I use KDE/GNU/Linux, and I don't see a lot of unnecessary animations. Even at work where I use Win11, it seems fine. "[M]ost applications being webapp": This is a pretty wild claim. Again, I don't think any apps that I use on Linux are webapps, and most at work (on Win11) are not.
The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Simply, this means each pixel can hold its state longer between refreshes. So, the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while making heat fighting reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
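As a rough sanity check on why clocking that whole chain down matters, here is the raw, uncompressed bit rate a panel implies; the 4K resolution and 24-bit depth are illustrative assumptions, not the panel in the article.

```python
# Raw pixel bandwidth through the refresh chain (framebuffer scan-out,
# link serialization, panel drivers). Illustrative 4K panel, no compression.
width, height, bits_per_pixel = 3840, 2160, 24

def link_gbps(refresh_hz):
    """Raw link bit rate in Gbit/s at a given refresh rate."""
    return width * height * bits_per_pixel * refresh_hz / 1e9

print(f"60 Hz: {link_gbps(60):.1f} Gbit/s")  # ~12 Gbit/s of toggling
print(f" 1 Hz: {link_gbps(1):.2f} Gbit/s")   # ~0.2 Gbit/s
```

Every one of those bits is a charge/discharge event somewhere in the chain, which is why a 60x rate drop is not free-to-ignore overhead.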
Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.
So, no, there is a meaningful difference in the nature of the circuits.
Xdamage isn’t a thing if you’re using a compositor for what it’s worth. It’s more expensive to try to incrementally render than to just render the entire scene (for a GPU anyway).
And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.
That's not true. I wrote a compositor based on xcompmgr, and damage was widely used there. It's true that it's basically pointless to do damage tracking for the final pass in GL, but damage was still useful to figure out which windows required new blurs and updated glows.
It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.
The frame buffer, at least the portion of the GPU responsible for reading the frame buffer and shipping the contents out over the port to the display, the communications cable to the display screen itself, and the display screen were still reading, transmitting, and refreshing every pixel of the display at 60hz (or more).
This LG display tech. claims to be able to turn that last portion's speed down to a 1Hz rate from whatever it usually is running at.
Really disappointing to only learn this after a decade, but on Linux, changing from 60 Hz to 40 Hz has decreased my power draw by 40% in the hour since reading this comment.
I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so circuitry power budget becomes a much larger fraction of overhead.
I interpreted that bit as E2E system uptime being up by 48%. Sounds more plausible to me, as there'd be fewer video frames that would need to be produced and pushed out.
This is an OLED display, so I don't think the control electronics are actually any less active. (They would be for LCD, which is where most of these low-refresh-rate optimizations make sense.)
The connection between the GPU and the display has been run length encoded (or better) since forever, since that reduces the amount of energy used to send the next frame to the display controller. Maybe by "1Hz" they mean they also only send diffs between frames? That'd be a bigger win than "1Hz" for most use cases.
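A toy sketch of the run-length idea (real display-link compression, where it exists at all, is VESA DSC rather than literal RLE): a mostly-static scanline collapses to a handful of runs, so sending it costs almost nothing.

```python
def rle_encode(pixels):
    """Run-length encode a flat sequence of pixel values as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return runs

# A mostly-black scanline (e.g. a status bar) collapses to three runs.
line = [0] * 1900 + [255] * 20 + [0] * 1920
print(rle_encode(line))  # [[0, 1900], [255, 20], [0, 1920]]
```

Sending per-frame diffs instead of full (even compressed) frames would win even more for typical static desktop content, which is the point being made above.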
But, to answer your question, the light emission and computation of the frames (which can be skipped for idle screen regions, regardless of frame rate) should dwarf the transmission cost of sending the frame from the GPU to the panel.
The more I think about this, the less sense it makes. (The next step in my analysis would involve computing the wattage requirements of the CPU, GPU, and light emission, then comparing that to the kWh of the laptop battery + advertised battery life.)
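That analysis might look like the sketch below. Every wattage here is an illustrative guess, not a measurement; real budgets vary widely by machine and workload.

```python
# Sanity-check sketch: compare an assumed component power budget against a
# typical laptop battery. All wattages below are illustrative guesses.
battery_wh = 55.0  # assumed battery capacity in watt-hours

budget_w = {
    "display emission/backlight": 2.0,
    "display control + link": 1.0,
    "SoC near-idle (CPU+GPU)": 2.0,
    "rest of platform": 1.5,
}
total_w = sum(budget_w.values())
print(f"total {total_w:.1f} W -> {battery_wh / total_w:.1f} h runtime")
```

Whether a refresh-rate drop can save anywhere near 48% then hinges entirely on what fraction of that budget the "control + link" slice really is.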
> The more I think about this, the less sense it makes
And yet, it’s the fundamental technology enabling always on phone and smartwatch displays
The intent of this is to reduce the time that the CPU, GPU, and display controller are in an active state (as well as small reductions in power of components in between those stages).
for small screen sizes and low information density displays, like a watch that updates every second this makes a lot of sense
it would make a lot of sense in situations where the average light generating energy is substantially smaller:
pretend you are a single pixel on a screen (laptop, TV) which emits photons in a large cone of steradians, of which a viewer's pupil makes up a tiny pencil ray; 99.99% of the light just misses an observer's pupils. in this case this technology seems to offer few benefits, since the energy consumed by the link (generating a clock and transmitting data over wires) is dwarfed by the energy consumed in generating all this light (which mostly misses human eye pupils)!
Now consider smart glasses / HUDs; the display designer knows the approximate position of the viewer's eyes. The optical train can be designed so that a significantly larger fraction of generated photons arrive at the retina. Indeed, XReal's or NReal's line of smart glasses consumes about 0.5 W! In such a scenario the link's energy consumption becomes a sizable proportion of the total; hence having a low-energy state that still presents content but updates less frequently makes sense.
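The geometry behind the "99.99% misses the pupil" claim can be checked roughly. Assuming the pixel emits uniformly into a hemisphere, a ~4 mm pupil at laptop viewing distance subtends a tiny solid angle (all numbers here are illustrative assumptions):

```python
import math

# Fraction of a pixel's emitted light that lands in one pupil, assuming
# uniform emission into a hemisphere (2*pi steradians). Illustrative numbers.
pupil_diameter_m = 0.004    # ~4 mm pupil
viewing_distance_m = 0.5    # laptop-ish viewing distance

pupil_area = math.pi * (pupil_diameter_m / 2) ** 2
pupil_solid_angle = pupil_area / viewing_distance_m ** 2  # small-angle approx
fraction = pupil_solid_angle / (2 * math.pi)
print(f"fraction into one pupil: {fraction:.1e}")  # ~8e-6, >99.99% misses
```

With a glasses optical train delivering a large fraction of photons to the retina instead, emission power collapses by orders of magnitude, and the fixed link/control power stops being negligible.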
One would have expected smart glasses to already outcompete smartphones and laptops, just on prolonged battery life; or conversely, splitting the difference, one could keep half of the energy saved (doubling battery life) while allocating the other half to more intensive computation (GPU, CPU, etc.).
Before OLED (and similar), most displays were lit with LEDs (behind or around the screen, through a diffuser, then through liquid crystals) which was indeed the dominant power draw... like 90% or so!
But the article is about an OLED display, so the pixels themselves are emitting light.
It doesn't. They take extreme use cases such as watching video until the battery depletes at maximum brightness where 90% of power consumption is the display. But in realistic use cases the fraction of power draw consumed by the display is much smaller when the CPU is actually doing things.
Phones and watches do that with LTPO OLED, which I don't believe exists at larger screen sizes, although I'm not sure why. This is supposed to be special because it isn't OLED, so it should be able to get brighter and not have to worry about burn-in.
LTPO has problems with uniformity of brightness, that get worse the larger the panels are. On a phone screen, this is usually not perceivable, but if you made a 27" screen out of it, most such screens would be visibly brighter in some corner or other.
What's the real-world battery life though? My mac gets 8 hours real world; 16 in benchmarks; 24 claimed by apple.
Assuming the xps has the same size battery, and this really reduces power consumption by 48%, I'd expect 16 hours real world, 32 in benchmarks and 48 in some workload Dell can cherry pick.
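For what it's worth, that doubling expectation is the right arithmetic if the 48% applies to total system power (the optimistic reading):

```python
# Runtime scales inversely with power: a 48% power cut means ~1.92x runtime.
scale = 1 / (1 - 0.48)
for hours in (8, 16, 24):
    print(f"{hours} h -> {hours * scale:.1f} h")
```

If the 48% only applies to the display subsystem, the overall gain is of course much smaller.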
Both my last two XPSes have had shit battery life. Maybe 3.5h when new and only 2h after a few months of use. They also experience a lot of thermal throttling (i7 12700h, 9750h) and newer updates have removed the option of undervolting which used to fix that.
Positive is that the battery life couldn't possibly get worse with newer ones.
Last I checked: the XPS was one of the few laptop product lines offering native Linux (Ubuntu) as an alternative default configuration option to order
It's how I got mine about 6-7 years back anyways, still works great (except the battery)
...never let Windows get its claws into the machine in the first place
Edit: to add, I realized over time that having a battery that lasts longer just can't seem to beat my older laptop experiences: being able to just swap an extra battery in and have full charge at will (without soldering and all that 'ish)
In that sense I feel that the future is coming full circle to modularity, swappability, repairability - to the point they're becoming my primary considerations for the next portable computing device I will need to acquire.
Apparently they stopped making the Developer Edition which came with Ubuntu in 2022-2023 (it was definitely cheaper, by 100-200 bucks or so, than the Windows version with the exact same hardware; I recall the Developer Edition OS discount very clearly)
Now the XPS line has fallen as well, as apparently even the SSD now gets soldered to the motherboard, no longer possible to service with basic tools really once it starts failing.
My old 2018-ish XPS has an M.2 slot and a battery that is relatively simple by modern standards to replace with some screwdrivers and careful handling (something I think is vital for a workhorse computer, as batteries 'decimate' in capacity within 2-3 years or so in my experience)
I don't even know what's left out there anymore among major makers...
when I have to look again, maybe framework... Been hearing about them for a bit now and they seem quite relevant to the discussion - haven't seen one live yet to be fair
While I concede that powerbanks may satisfy the proximal problem - literally making charging available on demand...
Consider that it does not in any way resolve the distal problem of having a 'portable computing device', which heavily compromises on the 'portable' aspect - by forcing a state of permanent battery anxiety without external life support (i.e. no power source - dead in minutes of intensive work)
The powerbank is a fine workaround to be fair, but as I see it: still a workaround at best. The ability to swap a battery without getting into things like soldering - allows for far more flexible functionality and longevity than a powerbank could.
That is without even mentioning the ultimate problem of parts sustainability and longevity. When you can swap individual components as they degrade, it's possible to use the rest of the machine for far longer than a degraded battery or a failing SSD would allow.
Powerbanks simply feel like treating symptoms, instead of rehabilitating the system itself (obviously still use them for phones and such of course)
OLED iPads don't have always-on displays because of burn-in, considering people certainly use them as photo frames, notification and time dashboards, kitchen recipe books, etc.
Less of a problem for iPhones, which are unlikely to stay in the same place for a week, plugged in and unused.
They don't buy it for this purpose. It just ends up like that for a lot of people I know, since it's a weird device between an iPhone and a MacBook that ends up not being used for much.
I'm just pointing out how quite a big part of Apple's consumer base uses these devices: buy the most expensive one, play with it for a few weeks, and then leave it as a kitchen tablet that is used occasionally. You know, every second housewife wants to be an artist, but very few actually use it for this beyond the first few weeks.
Providing this audience with always-on display is a sure way to have a lot of people unhappy with burned-in OLED screens.
Second, it is not the fault of the device that consumers are brain-dead, buying something they do not need and then whining about how the device is "useless". It sucks to suck
>M4 iPad Pro lacks always-on display despite OLED panel with variable refresh rate (2024):
Brightness, uniformity, colour accuracy, etc. It is hard, as we take more and more features for granted. There are also cost issues, which is why you only see them in smaller screens.
I'm not sure that there's really anything new here? 1Hz might be lower. Adoption might be not that good. But this might just be iteration on something that many folks have just not really taken good advantage of till now. There are perhaps significant display tech advancements to get the Hz low, without needing significant G-Sync-style screen buffers to support it.
One factor that might be interesting, I don't know if there's a partial refresh anywhere. Having something moving on the screen but everything else stable would be neat to optimize for. I often have a video going in part of a screen. But that doesn't mean the whole screen needs to redraw.
I think you're assuming that LCDs all have framebuffers, but this is not the case. A basic/cheap LCD does not store the state of its pixels anywhere. It electrically refreshes them as the signal comes in, much like a CRT. The pixels are blocking light instead of emitting it, but they will still fade out if left unrefreshed for long. So, the simple answer is, you can't get direct access to something when it doesn't even exist in the first place.
> HKC has announced a new laptop display panel that supports adaptive refresh across a 1 to 60Hz range, including a 1Hz mode for static content. HKC says the panel uses an Oxide (metal-oxide TFT) backplane and its low leakage characteristics to keep the image stable even at 1Hz.
Ok, that makes some amount of sense. The article claims this is an OLED display, and I haven't heard of significant power gains from low-refresh-rate OLED (since they have to signal the LED to stay on regardless of refresh rate).
However, do TFTs really use as much power as the rest of the laptop combined?
They're claiming 48% improvement, so the old TFT (without backlight) has to be equivalent to backlight + wifi + bluetooth + CPU + GPU + keyboard backlight + ...
Sorry, might be obvious to some, but is that rate applied to the whole screen or can certain parts be limited to 1Hz whilst others are at a higher rate?
The ability to vary it seems like it would be valuable as there are significant portions of a screen that remain fairly static for longer periods but equally there are sections that would need to change more often and would thus mess with the ability to stick to a low rate if it's a whole screen all-or-nothing scenario.
From what I understand, the laptop will reduce the refresh rate (of the entire display) to as low as 1Hz if what is being displayed effectively “allows” it.
I think Windows has a feature built in, on some adaptive-refresh-rate displays, to dynamically shift the frame rate down (to 30, on my screen) or up to the cap, depending on what's actually happening.
I remember playing with it a bit, and it would dynamically change to a high refresh rate as you moved the mouse, and then drop down as soon as the mouse cursor stopped moving.
I had issues with it sometimes being lower refresh rate even when there was motion on screen, so the frame rate swings were unfortunately noticeable. Motion would get smoother for all content whenever the mouse moved.
1 Hz is drastically fewer refreshes. I hope they have the "is this content static" measurement actually worked out to a degree where it's not noticeable.
Which would make me want the refresh rate to be user-configurable. I would not mind at all if the 1 Hz refresh rate caused parts of the page I don't care about, such as animated ads to stutter and become unwatchable. If given the choice between stuttering ads but longer battery life, or smoothly-animated ads with shorter battery life, I'd choose the unwatchable ads every time.
Ideally, I would be able to bind a keyboard shortcut to the refresh-rate switch, so that the software doesn't have to figure out that now I'm on Youtube so I actually want the higher refresh rate, but now I'm on a mostly-text page so I want the refresh rate to go back down to 1 Hz. If I can control that with a simple Fn+F11 combination or something, that would be the ideal situation.
Not that any laptop manufacturers are likely to see this comment... but you never know.
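On X11, at least, this is already scriptable within whatever rates the panel exposes. A sketch (config fragment, not tested here) assuming an `eDP-1` output name; the output name and supported rates are hardware-specific, so check `xrandr` output first, and whether anything as low as 1 Hz is exposed depends entirely on the panel and driver:

```shell
# Bind each of these to a hotkey (e.g. via your DE's shortcut settings).
xrandr --output eDP-1 --rate 60   # "I'm on YouTube" mode
xrandr --output eDP-1 --rate 1    # "mostly-text page" mode, if the mode exists
```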
I assume this will just be using Window's dynamic refresh rate feature, which you can turn on and off in the display settings, and when it's off you can set the refresh rate manually. I guess the question is whether they will let you set it as low as 1hz manually though.
With current LCD controllers but new drivers/firmware you could selectively refresh horizontal stripes of the screen at different rates if you wanted to.
I don't think you could divide vertically though.
Don't think anyone has done this yet. You could be the first.
I believe E-ink displays do this for faster updates for touch interactivity. Updating the whole display as the user writes on the touch screen would otherwise be too slow for E-ink.
Today it's mostly "all-or-nothing" at the panel level, but under the hood there's already a lot of cleverness trying to approximate the behavior you're describing
Anyone who has accidentally snapped the controller off a working LCD can tell you that the pixel capacitance keeps the colours approximately correct for about 10 seconds before it all becomes a murky shadowy mess...
So it makes sense you could cut the refresh time down to a second to save power...
Although one wonders if it's worth it when the backlight uses far more power than the control electronics...
> A 1Hz panel is almost, but not quite, on the level of an e-ink panel, which isn’t the prettiest to look at. LG’s panel also uses LED technology, the mainstream panel technology that’s being overtaken at the high end by OLED panels with essentially perfect contrast.
I'm guessing that for this to work you need to be able to selectively refresh parts of the screen at different rates? a 1Hz refresh rate would be rubbish just to follow the mouse cursor, so at least that part of the screen needs to refresh faster. However, it does make sense for the parts of the screen that are mostly static. Looking at my screen as I type this, the only part that needs a high-refresh rate is the text-box where I'm typing (I can type several keys per second so I wouldn't want a refresh rate of 1 Hz). However, the rest of the screen is not changing at all so a slow refresh is perfectly fine.
You're not moving your mouse 100% of the time. Probably less than 25% of the time. Probably using your keyboard less than 25% of the time. It doesn't need to degrade experience OR selectively refresh part of the screen (which it certainly doesn't).
Horrid website: forced cookies, invisible adverts (Mamma Mia, anyone?), and that thing where it’s a page of garbage links when you go back. I will never click a PC World URL again.
Sure, dropping toward 1 Hz could be huge. But the moment you scroll, watch video, or even have subtle UI animations, you're back in higher-refresh territory
A low refresh rate probably still requires the same display-side framebuffer as PSR.
With conventional PSR, I think the goal is to power off the link between the system framebuffer and the display controller and potentially power down the system framebuffer and GPU too. This may not be beneficial unless it can be left off long enough, and there may be substantial latency to fire it all back up. You do it around sleep modes where you are expecting a good long pause.
Targeting 1 Hz sounds like actually planning to clock down the link and the system framebuffer so they can sustain low bandwidth in a more steady-state fashion. Presumably you also want to clock down any app and GPU work to not waste time rendering screens nobody will see. This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
But why 1 Hz? Can't the panel just leave the pixels on the screen for an arbitrary length of time until something triggers a refresh? Only a small amount of my screen changes as I'm typing.
When PSR or adaptive refresh rate systems suspend or re-clock the link, this requires reengineering of the link and its controls. All of this evolved out of earlier display links, which evolved out of earlier display DACs for CRTs, which continuously scanned the system framebuffer to serialize pixel data into output signals. This scanning was synchronized to the current display mode and only changed timings when the display mode was set, often with a disruptive glitch and resynchronization period. Much of this design cruft is still there, including the whole idea of "sync to vblank".
When you have display persistence, you can imagine a very different architecture where you address screen regions and send update packets all the way to the screen. The screen in effect becomes a compositor. But then you may also want transactional boundaries, so do you end up wanting the screen's embedded buffers to also support double or triple buffering and a buffer-swap command? Or do you just want a sufficiently fast and coordinated "blank and refill" command that can send a whole screen update as a fast burst, and require the full buffer to be composited upstream of the display link?
This persistence and selective addressing is actually a special feature of the MIP screens embedded in watches etc. They have a link mode to address and update a small rectangular area of the framebuffer embedded in the screen. It sends a smaller packet of pixel data over the link, rather than sending the whole screen worth of pixels again. This requires different application and graphics driver structure to really support properly and with power efficiency benefits. I.e. you don't want to just set a smaller viewport and have the app continue to render into off-screen areas. You want it to focus on only rendering the smaller updated pixel area.
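A toy sketch of the host-side half of that idea: computing the minimal bounding rectangle of changed pixels so only that window needs to cross the link. The actual packet format is panel-specific and not shown; this is just the dirty-rect computation.

```python
def dirty_rect(prev, curr):
    """Bounding box (x0, y0, x1, y1) of changed pixels between two
    equal-sized 2D frames (lists of rows), or None if nothing changed."""
    changed = [(x, y)
               for y, (pr, cr) in enumerate(zip(prev, curr))
               for x, (a, b) in enumerate(zip(pr, cr)) if a != b]
    if not changed:
        return None  # nothing to send over the link this "frame"
    xs = [x for x, _ in changed]
    ys = [y for _, y in changed]
    return (min(xs), min(ys), max(xs), max(ys))

# Only a tiny clock region changed: send just that window, not the frame.
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[2][3] = curr[2][4] = 1
print(dirty_rect(prev, curr))  # (3, 2, 4, 2)
```

As noted above, the real efficiency win requires the app to render only this region in the first place, rather than diffing two fully rendered frames as this sketch does.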
> This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
I was under the impression that modern compositors operated on a callback basis where they send explicit requests for new frames only when they are needed.
There are multiple problems here, coming from opposite needs.
A compositor could request new frames when it needs them to composite, in order to reduce its own buffering. But how does it know it is needed? Only in a case like window management where you decided to "reveal" a previously hidden application output area. This is a like older "damage" signals to tell an X application to draw its content again.
But for power-saving, display-persistence scenarios, an application would be the one that knows it needs to update screen content. It isn't because of a compositor event demanding pixels, it is because something in the domain logic of the app decided its display area (or a small portion of it) needs to change.
In the middle, naive apps that were written assuming isochronous input/process/output event loops are never going to be power efficient in this regard. They keep re-drawing into a buffer whether the compositor needs it or not, and they keep re-drawing whether their display area is actually different or not. They are not structured around diffs between screen updates.
It takes a completely different app architecture and mindset to try to exploit the extreme efficiency realms here. Ideally, the app should be completely idle until an async event wakes it, causes it to change its internal state, and it determines that a very small screen output change should be conveyed back out to the display-side compositor. Ironically, it is the oldest display pipelines that worked this way with immediate-mode text or graphics drawing primitives, with some kind of targeted addressing mode to apply mutations to a persistent screen state model.
Think of a graphics desktop that only updates the seconds digits of an embedded clock every second, and the minutes digits every minute. And an open text messaging app only adds newly typed characters to the screen, rather than constantly re-rendering an entire text display canvas. But, if it re-flows the text and has to move existing characters around, it addresses a larger screen region to do so. All those other screen areas are not just showing static imagery, but actually having a lack of application CPU, GPU, framebuffer, and display link activities burning energy to maintain that static state.
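A minimal sketch of that structure, assuming a MIP-style link with rectangular partial updates (the `Display` class and `update_rect` method here are hypothetical stand-ins, not a real API):

```python
class Display:
    """Stand-in for a MIP-style screen link that supports updating
    a small rectangular area of its embedded framebuffer."""
    def __init__(self, w, h):
        self.fb = [[0] * w for _ in range(h)]
        self.bytes_sent = 0  # proxy for link energy spent

    def update_rect(self, x, y, pixels):
        # Only the changed rectangle crosses the link,
        # not the whole screen's worth of pixels.
        for dy, row in enumerate(pixels):
            for dx, p in enumerate(row):
                self.fb[y + dy][x + dx] = p
        self.bytes_sent += sum(len(r) for r in pixels)

disp = Display(240, 240)
# A watch face that redraws only its 20x10 seconds-digit region
# once per second, instead of pushing all 240x240 pixels:
seconds_digits = [[1] * 20 for _ in range(10)]
disp.update_rect(200, 220, seconds_digits)
print(disp.bytes_sent)  # 200 pixels sent, vs 57600 for a full frame
```

The app-side discipline is the hard part: something has to track which rectangle actually changed before anything like this pays off.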
I mean sure, you raise an interesting point that at low enough refresh rates application architectures and display protocols begin needing to explicitly account for that fact in order for the system as a whole to make use of the feature.
But the other side of things - the driver and compositor and etc supporting arbitrarily low frequencies - seems like it's already (largely?) solved in the real world. To your responsiveness point, I guess you wouldn't want to use such a scheme without a variable refresh rate. But that seems to be a standard feature in ~all new consumer electronics at this point. Redrawing the entire panel when you could have gotten away with only a small patch is unfortunate but certainly not the end of the world.
I wouldn't get a mini LED laptop for creative work. We have a mini LED TV, and manufacturers need to choose one of these two problems because of physical limitations:
- The LEDs behind a mostly dark region with a point source are bright enough that the point source has the correct brightness, so the surrounding black pixels glow. Benchmark sites call this "blooming" and ding displays for it, so new ones pick the other problem:
- The LEDs behind mostly dark regions with a point source are dimmed so the black pixels don't appear gray. This means that white-on-black text (like Linux terminals) renders strangely, with the left part of the line much brighter than the right (since it is next to the "$ ls" and "$" of the surrounding lines). Also, it means that white mouse pointers on black backgrounds render as dark gray.
For creative work, I'd pick pretty much any other monitor technology (with high color gamut, of course) over mini LED. However mini-LED is great if you have a TV that is in direct sunlight, since it can blast watts at the brightest parts of the screen without overheating.
Modern software regularly takes like 1 second to load anyways.
200ms is the minimum human reaction time, so adding 100ms would only add like 50% to the REPL user interaction. Something like 10Hz might be quite usable while minimally contributing to lag.
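Rough arithmetic on what a refresh rate adds to perceived latency (ignoring input and compositor pipeline delays):

```python
# Extra delay before a finished frame becomes visible: worst case is a
# full refresh period, average is half a period (assuming the frame
# completes at a uniformly random point within the period).
def refresh_latency_ms(hz):
    period_ms = 1000 / hz
    return period_ms, period_ms / 2  # (worst case, average)

print(refresh_latency_ms(10))  # (100.0, 50.0)
print(refresh_latency_ms(60))  # (16.67, 8.33) approximately
```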
The idea of having a 60Hz screen is nice, but in practice it turns out that display refresh rate is not the bottleneck for most software.
For sample-and-hold panel technologies like LCD and OLED, refresh is about updating the pixel state (color). That process takes place even when the pixel data remains unchanged between frames. However, the pixels still need to emit light between refreshes: for LCD that light comes from a backlight, but for OLED from the pixels themselves. The light emission is often regulated using PWM at a higher frequency than the refresh rate. PWM frequency affects power consumption as well: a higher PWM frequency is easier on the eyes, but also consumes more power.
OLED is fundamentally not sample and hold, because it is using PWM, right?
Ignoring switching costs, keeping a sample-and-hold LED at 0%, 50% and 100% brightness all cost zero energy. For an OLED, the costs are closer to linear in the duty cycle (again, ignoring switching costs, but those are happening much faster than the framerate for OLED, right?)
(Also, according to another comment, the panel manufacturer says this is TFT, not OLED, which makes a lot more sense.)
I don't believe LED-pixel displays use PWM. I would expect them to use bit planes: for each pixel transform the gamma-compressed intensity to the linear photon-proportional domain. Represent the linear intensity as a binary number. Start with the most significant bit, and all pixels with that bit get a current pulse, then for the next bitplane all the pixels having the 2nd bit set are turned on with half that current for the same duration, each progressive bitplane sending half as much current per pixel. After the least significant bitplane has been lit each pixel location has emitted a total number of photons proportional to what was requested in the linear domain.
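A sketch of that bit-plane scheme as described above (this reflects the commenter's expectation, not a confirmed driver design; the 2.2 gamma and 8-bit depth are illustrative assumptions):

```python
# Binary-coded (bit-plane) drive: convert the gamma-compressed value to
# the linear, photon-proportional domain; then each bit of the linear
# value gets a current pulse whose amplitude halves with significance.
def bitplane_schedule(gamma_value, bits=8, gamma=2.2, peak_current=1.0):
    linear = round((gamma_value / 255) ** gamma * (2**bits - 1))
    pulses = []
    for b in range(bits - 1, -1, -1):              # MSB first
        on = bool(linear & (1 << b))
        current = peak_current / 2 ** (bits - 1 - b)
        pulses.append((on, current))
    # Total charge (hence photons) is proportional to the linear value
    total = sum(c for on, c in pulses if on)
    return pulses, total

_, full = bitplane_schedule(255)
print(full)  # 255/128: all eight pulses on, summing to ~2x the MSB pulse
```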
PWM still counts as sample-and-hold, because it sustains the brightness throughout the duration of a frame, resulting in significant motion blur. The converse are impulse-driven displays like CRT and plasma.
LED backlights using PWM likewise don’t change the sample-and-hold nature of LCD panels.
My understanding is that PWM switching costs aren’t negligible, and that this contributes to why PWM frequencies are often fairly low.
This is just regurgitating the manufacturer's claim. I'll believe it when I see it. Most of a display's energy use goes to turning on the OLED/backlight. They're claiming that because our display flickers less, it's 48% more efficient now.
I once had an external monitor with a maximum refresh rate of 30 Hz, and mouse movements were noticeably sluggish. It was part of a multi-monitor setup, so it was very obvious as I moved the mouse between monitors.
I'm not sure if this LG display will have the same issue, but I won't be an early adopter.
> That will help save enormous amounts of power: up to 48 percent on a single charge,
Why does refresh rate have such a large impact on power consumption? I understand that the control electronics are 60x more active at 60 Hz than 1 Hz, but shouldn't the light emission itself be the dominant source of power consumption by far?
I used to be a display architect about 15 years back (for Qualcomm mirasol, et al), so my knowledge of the specifics / numbers is outdated. Sharing what I know.
High pixel density displays have disproportionately higher display refresh power (not just proportional to the total number of pixels as the column lines capacitances need to be driven again for writing each row of pixels). This was an important concern as high pixel densities were coming along.
Displays need fast refreshing not just because pixels would lose charge, but because a refresh can be visible or result in flicker. Some pixel technologies require flipping polarity on each refresh, but the curves are not exactly symmetric between polarities, and further, this can vary across the panel. A fast enough refresh hides the mismatch.
Since you are knowledgeable about this, do you have any idea what happened to Mirasol technology? I was fascinated by those colour e-paper-like displays, and disappointed when plans to manufacture it were shelved. Then I learnt Apple purchased it, but it looks more like a patent-padding purchase than one for tech development, as nothing has come out of it from Apple either. Is it in some way still being developed, or are parts of its research tech being used in display development?
Being a key technology architect for it (not the core inventor), I know all about it, and then some more!
I cannot however talk publicly about it. :-(
It has been a disappointment for me as well. I had worked on it for nearly eight years. The idea was so interesting: using thin-film interference for creating images is akin to shaping Newton's rings into arbitrary images, something which even Newton would not have imagined! The demos and comparisons we showed to various industry leaders, and sometimes publicly, were often instantly compelling. The people/engineers on the team were mostly the best I have ever worked with, and I still maintain a great connection with them. But unfortunately, there were problems (not saying how much was tech and how much was people) that were recognized by some but never got addressed in time. And a tech like it does not exist to date.
I do not think anything on it is being developed further.
The earliest of the patents would have expired by now.
Liquavista, Pixtronix, etc., were alternative display technologies that also ultimately didn't make the impact desired, AFAIK.
Meanwhile, LCDs developed high pixel densities (which put pressure on mirasol tech too), and Plasma got sidelined. E-ink displays have since made good progress, though, in my opinion, they are still far from the colors and speeds that mirasol had. And of course, OLED, quantum dots, ...
My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.
I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals, so it isn't always inherently dimmer than the environment. I can only imagine this being either:
1. some wild materials science stuff that manages interference
2. some wild materials science stuff that controls multi-photon fluorescence
3. some wild materials science stuff to fuse photoelectric and electroemissive functions in the same panel. i.e. not really passive but extremely low loss active system to double-convert the ambient light that can follow the power curve of available light
>> My fantasy display would be some kind of reflective-mode display that can passively show static images like e-ink, have partial updates like MIP LCD in wearables, response times like modern LCD and AMOLED, and "super-real" contrast/gain.
What about cost? :-) It is an important factor too, outside of the fantasy world, and can kill new display technologies. The latter often suffer from yield issues (dead pixels, etc.) during early phases of R&D, which can keep initial costs higher compared to already-matured technologies.
>> I.e. actually do wavelength conversion to not just reflect a narrow-pass filtered version of the ambient light, but convert that broad spectrum energy into the desired visuals
Reflecting a filtered version of the ambient light, if done efficiently, makes the display as bright as other natural/common objects around it. So it should be good enough for most purposes, even in a somewhat darker ambient with eyes adjusted.
It would not however be attention-grabbing by being brighter than those surrounding objects. So many users, often used to seeing brighter emissive displays, still do not pick those as a preference.
>> I can only imagine this being either:
>> ...
Another way to make it look brighter is to reflect more light towards the user's eyes while capturing it from broader directions. This would compromise on viewing angle (unless more fantasy tech is brought in), but I think this in itself takes the display to wow levels.
Well, the reflectivity of color MIP LCD is not very satisfactory. It is barely adequate, even for people like me who are fans. This is both because of the narrow-band RGB filtering and the inherent losses of the polarization-based switching method. Even the "white" state is discarding most polarizations of the ambient light, and then the darker colors are even blocking that.
My fantasy is having the reflectivity be at least as good as good white paper, and with deep contrast too.
It also needs to be brighter in practice than normal objects because, no matter what, it will have to overcome some glare from whatever protective glass and touch sensing layers there are over the actual display.
What's interesting about these newer 1Hz claims is that they're basically trying to sidestep the exact problems you mention.
Correct.
I myself have been privy to similar R&D going on for more than a decade.
> the column lines capacitances need to be driven again for writing each row of pixels
Not my field so please forgive a possibly obvious question: That seems true regardless of the pixel count (?), so for that process why wouldn't power also be proportional to the pixel count?
I notice I'm saying 'pixel count' and you are saying 'pixel density'; does it have something to do with their proximity to each other?
Total column line capacitance is impacted by the number of pixels hanging onto it as each transistor (going to the pixel capacitance) adds some parasitic capacitance of its own. Hope that answers your question. You are right in the sense that a part of the total column capacitance would depend on just the length and width of it, irrespective of the number of pixels hanging onto it.
I had back then developed what was perhaps the most sophisticated system-level model for display power, including refresh, illumination, etc., and it included all those terms for capacitance, a simplified transistor model, pixel model, etc.
I did not carefully distinguish pixel density vs. pixel count while writing my previous comments here, just to keep it simple. You can perhaps imagine that increasing display size without changing pixel count can lead to higher active pixel area percentage, which in turn would lead to better light generation/transmission/reflection efficiency. So it ultimately comes down to mathematical modeling, and the scaling laws / derivatives depend on the actual numbers chosen.
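In the spirit of that system-level model, a toy CV²f sketch of refresh power (all panel numbers here are invented for illustration, not real parameters):

```python
# Dynamic power of re-driving column lines: each of the `rows` row
# writes per frame can swing every column line's total capacitance,
# which includes parasitics from every pixel transistor hanging on it.
def refresh_power_watts(rows, cols, c_col_farads, v_swing, hz, activity=0.5):
    # activity = average fraction of column lines that actually toggle
    return activity * rows * cols * c_col_farads * v_swing**2 * hz

# Hypothetical 2880x1800 panel with 3 subpixel columns per pixel,
# 30 pF total per column line, 5 V swing:
p60 = refresh_power_watts(1800, 2880 * 3, 30e-12, 5.0, 60)
p1  = refresh_power_watts(1800, 2880 * 3, 30e-12, 5.0, 1)
print(p60, p1)  # ~0.35 W at 60 Hz vs ~0.006 W at 1 Hz: linear in rate
```

Real panels add illumination, driver ICs, and link power on top, but the sketch shows why the refresh term alone scales with both pixel count and rate.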
There are definitely a few reasons, but one of them is that you ask the GPU to do ~60x less work when you render 60x fewer frames.
PSR (panel self-refresh) lets you send a single frame from software and tell the display to keep using that.
You don’t need to render the same frame 60 times in software just to keep it visible on screen.
How often is that used? Is there a way to check?
With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapps with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating-point numbers behave, I imagine there's nearly always some animation somewhere, almost but not quite eased to a stop, making subtle color changes across some chunk of the screen: not enough to notice, but enough to change some pixel values several times per second.
I wonder what existing mitigations are at play to prevent redisplay churn? It probably wouldn't matter on Windows today, but it will matter with these low-refresh-rate screens.
Android has a debug tool that flashes colors when any composed layer changes. It's probably an easy optimization for them to not re-render when nothing changes.
Normally, your posts are very coherent, but this one flies on the rails. (Half joking: Did someone hack your account!?) I don't understand your rant here:
I use KDE/GNU/Linux, and I don't see a lot of unnecessary animations. Even at work, where I use Win11, it seems fine. "[M]ost applications being webapp": this is a pretty wild claim. Again, I don't think any apps that I use on Linux are webapps, and most at work (on Win11) are not.

Seriously? What is _this_ comment? TeMPOraL makes perfect sense.
LLMs learned that users have post histories? /s
Why? Surely copying the same pixels out sixty times doesn't take that much power?
The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Simply, this means each pixel can hold its state longer between refreshes. So, the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while making heat fighting reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
So it's a Sharp MIP scaled up? https://sharpdevices.com/memory-lcd/
Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.
So, no, there is a meaningful difference in the nature of the circuits.
Thanks. Great explanation.
Copying: Draw() is called 60 times a second.
It isn't for any reasonable UI stack. For instance, the xdamage X11 extension for this was released over 20 years ago. I doubt it was the first.
Xdamage isn’t a thing if you’re using a compositor for what it’s worth. It’s more expensive to try to incrementally render than to just render the entire scene (for a GPU anyway).
And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.
That's not true. I wrote a compositor based on xcompmgr, and there damage was widely used. It's true that it's basically pointless to do damage tracking for the final pass in GL, but damage was still useful to figure out which windows required new blurs and updated glows.
At the software level yes, but it seems nobody has taken the time to do this at the hardware level as well. This is LG's stab at it.
Apple has been doing this since they started having 'always-on' displays.
So has Samsung, but we're talking mobile devices with OLED displays, which is an entirely different universe both hardware and software-wise.
It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.
The frame buffer, at least the portion of the GPU responsible for reading the frame buffer and shipping the contents out over the port to the display, the communications cable to the display screen itself, and the display screen were still reading, transmitting, and refreshing every pixel of the display at 60hz (or more).
This LG display tech claims to be able to turn that last portion's speed down to a 1Hz rate from whatever it usually runs at.
What’s your mental model of what happens when a dirty region is updated and now we need to get that buffer onto the display?
You forget that all modern UI toolkits brag about who has the highest frame rate, instead of updating only what's changed and only when it changes.
Really disappointing to only learn this after a decade, but on Linux, changing from 60Hz to 40Hz decreased my power draw by 40% in the hour since reading this comment.
I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so circuitry power budget becomes a much larger fraction of overhead.
Ohh like property tax on a vacant building
Your GPU rendering 1 frame vs your GPU rendering 60 frames.
I interpreted that bit as end-to-end runtime on a charge being up by 48%. Sounds more plausible to me, as there'd be fewer video frames that would need to be produced and pushed out.
This is an OLED display, so I don't think the control electronics are actually any less active. (They would be for LCD, which is where most of these low-refresh-rate optimizations make sense.)
The connection between the GPU and the display has been run length encoded (or better) since forever, since that reduces the amount of energy used to send the next frame to the display controller. Maybe by "1Hz" they mean they also only send diffs between frames? That'd be a bigger win than "1Hz" for most use cases.
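As a toy illustration of why static or mostly-static scanlines are cheap to ship (real display links use more sophisticated schemes such as VESA DSC; this only shows the idea):

```python
# Minimal run-length encoder for a scanline of pixel values.
def rle(scanline):
    out = []
    run_val, run_len = scanline[0], 1
    for p in scanline[1:]:
        if p == run_val:
            run_len += 1
        else:
            out.append((run_val, run_len))
            run_val, run_len = p, 1
    out.append((run_val, run_len))
    return out

# A mostly-black scanline with one small white patch collapses to
# three (value, count) pairs instead of 1920 raw pixels:
line = [0] * 1000 + [255] * 20 + [0] * 900
print(rle(line))  # [(0, 1000), (255, 20), (0, 900)]
```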
But, to answer your question, the light emission and computation of the frames (which can be skipped for idle screen regions, regardless of frame rate) should dwarf the transmission cost of sending the frame from the GPU to the panel.
The more I think about this, the less sense it makes. (The next step in my analysis would involve computing the wattage requirements of the CPU, GPU, and light emission, then comparing that to the kWh of the laptop battery and the advertised battery life.)
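One step of that back-of-envelope, with purely hypothetical numbers (60 Wh battery, 10 h rated life):

```python
battery_wh = 60.0
claimed_hours = 10.0
avg_system_watts = battery_wh / claimed_hours  # 6 W average draw

# If "48% more on a single charge" means 48% longer runtime on the
# same battery, average power must fall by 1 - 1/1.48:
power_saving = 1 - 1 / 1.48                    # ~0.32
print(avg_system_watts * power_saving)         # ~1.95 W saved
```

So under these assumed numbers, the claim requires the refresh chain to account for roughly 2 W out of a 6 W system average, which is exactly what the wattage comparison above would test.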
Not OLED.
> LG Display is also preparing to begin mass production of a 1Hz OLED panel incorporating the same technology in 2027.
> This is an OLED display
The LG press release states that it's LCD/TFT.
https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
> The more I think about this, the less sense it makes
And yet, it’s the fundamental technology enabling always on phone and smartwatch displays
The intent of this is to reduce the time that the CPU, GPU, and display controller is in an active state (as well as small reductions in power of components in between those stages).
For small screen sizes and low-information-density displays, like a watch that updates every second, this makes a lot of sense.
It would make a lot of sense in situations where the average light-generating energy is substantially smaller:

Pretend you are a single pixel on a screen (laptop, TV) which emits photons into a large cone of steradians, of which a viewer's pupil makes up a tiny pencil ray; 99.99% of the light just misses the observer's pupils. In this case this technology seems to offer few benefits, since the energy consumed by the link (generating a clock and transmitting data over wires) is dwarfed by the energy consumed in generating all this light (which mostly misses human pupils)!
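The 99.99% figure checks out with a quick estimate (illustrative numbers: 2 mm pupil radius, 50 cm viewing distance, idealized Lambertian pixel):

```python
import math

pupil_radius = 0.002  # metres (2 mm pupil)
distance = 0.5        # metres (50 cm viewing distance)

# For a Lambertian emitter, the on-axis fraction of total emitted flux
# captured by a small aperture works out to aperture_area / (pi * d^2).
pupil_area = math.pi * pupil_radius**2
fraction = pupil_area / (math.pi * distance**2)
print(f"{fraction:.1e}")  # ~1.6e-05: well over 99.99% misses one eye
```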
Now consider smart glasses / HUDs: the display designer knows the approximate position of the viewer's eyes. The optical train can be designed so that a significantly larger fraction of generated photons arrive on the retina. Indeed, XReal or NReal's line of smart glasses consume about 0.5 W! In such a scenario the link's energy consumption becomes a sizable proportion of the total; hence having a low-energy state that still presents content but updates less frequently makes sense.
One would have expected smart glasses to already outcompete smartphones and laptops just by prolonged battery life. Or, conversely, splitting the difference: one could keep half of the energy saved (doubling battery life) while allocating the other half to more intensive computation (GPU, CPU, etc.).
Before OLED (and similar), most displays were lit with LEDs (behind or around the screen, through a diffuser, then through liquid crystals) which was indeed the dominant power draw... like 90% or so!
But the article is about an OLED display, so the pixels themselves are emitting light.
> But the article is about an OLED display
The article is about an LCD display, actually.
I just wish "we" wouldn't have discarded the option to use pure black for dark modes in favor of a seemingly ever-brightening blue-grey...
It doesn't. They take extreme use cases such as watching video until the battery depletes at maximum brightness where 90% of power consumption is the display. But in realistic use cases the fraction of power draw consumed by the display is much smaller when the CPU is actually doing things.
For whatever reason I keep catching my macbook on max brightness. Maybe not an unrealistic test.
Haven't phones, watches and tablets been using low refresh rates to enable battery improvements for a while?
The Apple Watch Series 5 (2019) has a refresh rate down to 1Hz.
M4 iPad Pro lacks always-on display despite OLED panel with variable refresh rate (2024):
https://9to5mac.com/2024/05/09/m4-ipad-pro-always-on-display...
Phones and watches do that with LTPO OLED, which I don't believe exists at larger screen sizes, although I'm not sure why. This is supposed to be special because it isn't OLED, so it should be able to get brighter and not have to worry about burn-in.
LTPO has problems with uniformity of brightness, that get worse the larger the panels are. On a phone screen, this is usually not perceivable, but if you made a 27" screen out of it, most such screens would be visibly brighter in some corner or other.
https://arstechnica.com/gadgets/2026/03/lg-display-starts-ma... is a better article but LG is light on details of their new proprietary display tech.
Dell needs to sell these XPS. The AI button doesn't do the trick, so battery life may do it.
What's the real-world battery life though? My mac gets 8 hours real world; 16 in benchmarks; 24 claimed by apple.
Assuming the xps has the same size battery, and this really reduces power consumption by 48%, I'd expect 16 hours real world, 32 in benchmarks and 48 in some workload Dell can cherry pick.
Both my last two XPSes have had shit battery life. Maybe 3.5h when new and only 2h after a few months of use. They also experience a lot of thermal throttling (i7 12700h, 9750h) and newer updates have removed the option of undervolting which used to fix that.
Positive is that the battery life couldn't possibly get worse with newer ones.
I have a December 2024 XPS 15 and I regularly get 7-8 hours out of a charge whilst doing a mixture of tasks. On Linux too, no less.
I put my MBP in low power mode when using the battery and I get easily 12-15 hours with my full dev environment running.
Dell has to deal with Windows, which cuts that in half with all the slop and spyware.
Last I checked: the XPS was one of the few laptop product lines offering native Linux (Ubuntu) as an alternative default configuration option to order
It's how I got mine about 6-7 years back anyway; still works great (except the battery)... never let Windows get its claws into the machine in the first place.
Edit: To add, I realized over time that having a battery that lasts longer just can't seem to beat my older laptop experiences: being able to just swap in an extra battery and have a full charge at will (without soldering and all that 'ish). In that sense, I feel the future is coming full circle to modularity, swappability, and repairability, to the point that they're becoming my primary considerations for the next portable computer I will need to acquire.
> Last I checked
I checked 10 seconds ago. The only models I can order in my country with linux are Pro Max and a Precision workstation.
If I pretend to be located in the US, an XPS 13 from 2024 becomes available at $200 more than the Windows variant, and with no OLED option.
What a weird marketing strategy from Dell...
Yeah... Took a moment to look it up now:
Apparently they stopped making the Developer Edition, which came with Ubuntu, in 2022-2023 (it was definitely cheaper, by 100-200 bucks or so, than the Windows version with the exact same hardware; I recall the Developer Edition OS discount very clearly).
Now the XPS line has fallen as well: apparently even the SSD now gets soldered to the motherboard, so it's no longer possible to service with basic tools once it starts failing. My old 2018-ish XPS has an M.2 slot and a battery that is relatively simple, by modern standards, to replace with some screwdrivers and careful handling (something I think is vital for a workhorse computer, as batteries 'decimate' in capacity within 2-3 years or so in my experience).
I don't even know what's left out there anymore among the major makers... when I have to look again, maybe Framework. I've been hearing about them for a bit now and they seem quite relevant to the discussion; haven't seen one live yet, to be fair.
Powerbanks fill that role well. We have USB-C PD now
While I concede that powerbanks may solve the proximal problem, literally making charging available on demand, they do not in any way resolve the distal problem: a 'portable computing device' that heavily compromises on the 'portable' aspect by forcing a state of permanent battery anxiety without external life support (i.e., no power source means dead within minutes of intensive work). The powerbank is a fine workaround, to be fair, but as I see it, still a workaround at best. The ability to swap a battery without getting into things like soldering allows for far more flexible functionality and longevity than a powerbank could.
That is without even mentioning the ultimate problem of parts sustainability and longevity. When you can swap individual components as they degrade, it's possible to use the rest of the machine for far longer than a degraded battery or a failing SSD would allow.
Powerbanks simply feel like treating symptoms, instead of rehabilitating the system itself (obviously still use them for phones and such of course)
OLED iPads don't have always-on because of burn-in, considering people certainly use them as photo frames, notification and time dashboards, kitchen recipe books, etc.
Less of a problem for iPhones, which are unlikely to stay in the same place for a week, plugged in and unused.
I don't think many people are spending $1k on an iPad Pro, the only iPad with OLED, to use as a picture frame.
They don't buy it for this purpose. It just ends up like that for a lot of people I know, since it's a weird device between an iPhone and a MacBook that ends up not being used for much.
It's a professional mobile artist bonanza; idk why you claim it isn't used much when this expensive device is more than earning its worth.
Yeah sure if you buy it as a toy it may not be used for much lol. Check your consumerism
I'm not saying anything about the device itself.

I'm just pointing out how quite a big part of Apple's consumer base uses these devices: buy the most expensive one, play with it for a few weeks, and then leave it as a kitchen tablet that is used occasionally. You know, every second housewife wants to be an artist, but very few actually use it for this beyond the first few weeks.
Providing this audience with always-on display is a sure way to have a lot of people unhappy with burned-in OLED screens.
Yeah, so a pretty niche use case. No need to attack others with snarky, childish comments just because you don't like the reality out there.
First of all I love snark
Second, it is not the fault of the device that consumers are brain-dead, buying something they do not need and then whining about how the device is "useless". It sucks to suck.
>M4 iPad Pro lacks always-on display despite OLED panel with variable refresh rate (2024):
Brightness, uniformity, colour accuracy, etc. It is hard, as we take more and more features for granted. There are also cost issues, which is why you only see them on smaller screens.
iPad Pro only goes down to 10 FPS. This may be the display of the upcoming MacBook Pro.
What LG is pitching here is basically bringing that 1Hz floor capability to large laptop panels
Yes but I’m unaware of larger ones.
Panel Self Refresh should largely just work, and I believe has been on laptops for a long long time. Here's Intel demo'ing it in 2011. https://www.theregister.com/2011/09/14/intel_demos_panel_sel...
I'm not sure that there's really anything new here? 1Hz might be lower. Adoption might not be that good. But this might just be iteration on something that many folks have not really taken good advantage of till now. There are perhaps significant display-tech advancements to get the Hz low without needing significant G-Sync-style screen buffers to support it.
One factor that might be interesting, I don't know if there's a partial refresh anywhere. Having something moving on the screen but everything else stable would be neat to optimize for. I often have a video going in part of a screen. But that doesn't mean the whole screen needs to redraw.
I’m not an expert here, but …
CRTs needed to be refreshed to keep the phosphors glowing. But all screens are now digital: why is there a refresh rate at all?
Can’t we memory-map the actual hardware bits behind each pixel and just draw directly (using PCIe or whatever)?
I think you're assuming that LCDs all have framebuffers, but this is not the case. A basic/cheap LCD does not store the state of its pixels anywhere. It electrically refreshes them as the signal comes in, much like a CRT. The pixels are blocking light instead of emitting it, but they will still fade out if left unrefreshed for long. So, the simple answer is, you can't get direct access to something when it doesn't even exist in the first place.
Probably patent licensing shenanigans kept it holed up for awhile.
> LG’s press release leaves several questions unanswered, including the source of the “Oxide” name...
> Source: https://www.pcworld.com/article/3096432 [2026-03-23]
---
> HKC has announced a new laptop display panel that supports adaptive refresh across a 1 to 60Hz range, including a 1Hz mode for static content. HKC says the panel uses an Oxide (metal-oxide TFT) backplane and its low leakage characteristics to keep the image stable even at 1Hz.
> Source: https://videocardz.com/newz/hkc-reveals-1hz-to-60hz-adaptive... [2025-12-29]
---
> History is always changing behind us, and the past changes a little every time we retell it. ~ Hilary Mantel
> Oxide (metal-oxide TFT)
Ok, that makes some amount of sense. The article claims this is an OLED display, and I haven't heard of significant power gains from low-refresh-rate OLED (since they have to signal the LED to stay on regardless of refresh rate).
However, do TFT's really use as much power as the rest of the laptop combined?
They're claiming 48% improvement, so the old TFT (without backlight) has to be equivalent to backlight + wifi + bluetooth + CPU + GPU + keyboard backlight + ...
The article says this is an LED panel and LG is working toward an OLED version.
Sorry, might be obvious to some, but is that rate applied to the whole screen or can certain parts be limited to 1Hz whilst others are at a higher rate?
The ability to vary it seems like it would be valuable as there are significant portions of a screen that remain fairly static for longer periods but equally there are sections that would need to change more often and would thus mess with the ability to stick to a low rate if it's a whole screen all-or-nothing scenario.
From what I understand, the laptop will reduce the refresh rate (of the entire display) to as low as 1Hz if what is being displayed effectively “allows” it.
For example:
- reading an article with intermittent scrolling
- typing with periodic breaks
I think windows has a feature built in on some adaptive refresh rate displays to dynamically shift the frame rate down (to 30, on my screen) or up to the cap, depending on what’s actually happening.
I remember playing with it a bit, and it would dynamically change to a high refresh rate as you moved the mouse, and then drop down as soon as the mouse cursor stopped moving.
I had issues with it sometimes being lower refresh rate even when there was motion on screen, so the frame rate swings were unfortunately noticeable. Motion would get smoother for all content whenever the mouse moved.
1hz is drastically fewer refreshes. I hope they have the “is this content static” measurement actually worked out to a degree where it’s not noticeable.
Who “decides” the frame rate? Does the gpu keep sending data and the monitor checks to determine when pixels change?
Probably the display board, anything else would be subject to OS and GPU driver support and it would never work anywhere.
Got it. Thanks!
Articles have animated ads, though.
On such an article it would not go down to 1Hz. It's checking if the image is changing or not.
Which would make me want the refresh rate to be user-configurable. I would not mind at all if the 1 Hz refresh rate caused parts of the page I don't care about, such as animated ads to stutter and become unwatchable. If given the choice between stuttering ads but longer battery life, or smoothly-animated ads with shorter battery life, I'd choose the unwatchable ads every time.
Ideally, I would be able to bind a keyboard shortcut to the refresh-rate switch, so that the software doesn't have to figure out that now I'm on Youtube so I actually want the higher refresh rate, but now I'm on a mostly-text page so I want the refresh rate to go back down to 1 Hz. If I can control that with a simple Fn+F11 combination or something, that would be the ideal situation.
Not that any laptop manufacturers are likely to see this comment... but you never know.
I assume this will just be using Windows' dynamic refresh rate feature, which you can turn on and off in the display settings, and when it's off you can set the refresh rate manually. I guess the question is whether they will let you set it as low as 1Hz manually though.
It would help making the ad less distracting, in some cases.
Run uBlock Origin and you will have few (and in most cases, none) animated ads.
not with an adblocker
Ad supported content industry: "Gee, we just can't figure out why anyone would use an ad blocker!"
With current LCD controllers but new drivers/firmware you could selectively refresh horizontal stripes of the screen at different rates if you wanted to.
I don't think you could divide vertically though.
Don't think anyone has done this yet. You could be the first.
I believe E-ink displays do this for faster updates for touch interactivity. Updating the whole display as the user writes on the touch screen would otherwise be too slow for e-ink.
Today it's mostly "all-or-nothing" at the panel level, but under the hood there's already a lot of cleverness trying to approximate the behavior you're describing.
Anyone who has accidentally snapped the controller off a working LCD can tell you that the pixel capacitance keeps the colours approximately correct for about 10 seconds before it all becomes a murky shadowy mess...
So it makes sense you could cut the refresh time down to a second to save power...
Although one wonders if it's worth it when the backlight uses far more power than the control electronics...
It's for OLED screens, so there's no backlight, but also no persistence.
It's an LCD display.
Are you sure? Article says:
> A 1Hz panel is almost, but not quite, on the level of an e-ink panel, which isn’t the prettiest to look at. LG’s panel also uses LED technology, the mainstream panel technology that’s being overtaken at the high end by OLED panels with essentially perfect contrast.
These are self emissive pixels.
Edit: apparently not? Article says OLED with this tech will come in 2027; seems this panel is LCD.
Article also says "LG’s panel also uses LED technology"
> A 1Hz panel is almost, but not quite, on the level of an e-ink panel, which isn’t the prettiest to look at.
Level of what? Power consumption? Dude, e-ink takes 0 power between refreshes.
And e-ink is pretty?
It just proves the author knows nothing about either technology.
I'm guessing that for this to work you need to be able to selectively refresh parts of the screen at different rates? a 1Hz refresh rate would be rubbish just to follow the mouse cursor, so at least that part of the screen needs to refresh faster. However, it does make sense for the parts of the screen that are mostly static. Looking at my screen as I type this, the only part that needs a high-refresh rate is the text-box where I'm typing (I can type several keys per second so I wouldn't want a refresh rate of 1 Hz). However, the rest of the screen is not changing at all so a slow refresh is perfectly fine.
You're not moving your mouse 100% of the time. Probably less than 25% of the time. Probably using your keyboard less than 25% of the time. It doesn't need to degrade experience OR selectively refresh part of the screen (which it certainly doesn't).
Horrid website: forced cookies, invisible adverts (Mamma Mia, anyone?), and that thing where it’s a page of garbage links when you go back. I will never click a PC World URL again.
It’s truly unusable. What a mess the web has become.
Just activate Reader Mode immediately.
The real unanswered question is how much of this is the panel itself and how much is baked into Windows.
Saving battery is nice, but I'm not leaving Linux for that misery any time soon
As soon as I saw this announced, I wondered if this is why we haven’t seen OLED MacBook Pro yet.
Apple already uses similar tech on the phones and watches.
Still waiting on e-ink laptops. This just seems like a no-brainer.
What these variable refresh panels are trying to do is kind of the "best of both worlds"
Sure dropping toward 1Hz could be huge. But the moment you scroll, watch video, or even have subtle UI animations, you're back in higher refresh territory
How is this a but? This is exactly what you want: the screen refreshes only when new content appears, or once a second.
Is this materially different from panel self refresh?
A low refresh rate probably still requires the same display-side framebuffer as PSR.
With conventional PSR, I think the goal is to power off the link between the system framebuffer and the display controller and potentially power down the system framebuffer and GPU too. This may not be beneficial unless it can be left off long enough, and there may be substantial latency to fire it all back up. You do it around sleep modes where you are expecting a good long pause.
Targeting 1 Hz sounds like actually planning to clock down the link and the system framebuffer so they can sustain low bandwidth in a more steady-state fashion. Presumably you also want to clock down any app and GPU work to not waste time rendering screens nobody will see. This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
But why 1hz? Can’t the panel just leave the pixels on the screen for an arbitrary length of time until something triggers refresh? Only a small amount of my screen changes as I’m typing.
When PSR or adaptive refresh rate systems suspend or re-clock the link, this requires reengineering of the link and its controls. All of this evolved out of earlier display links, which evolved out of earlier display DACs for CRTs, which continuously scanned the system framebuffer to serialize pixel data into output signals. This scanning was synchronized to the current display mode and only changed timings when the display mode was set, often with a disruptive glitch and resynchronization period. Much of this design cruft is still there, including the whole idea of "sync to vblank".
When you have display persistence, you can imagine a very different architecture where you address screen regions and send update packets all the way to the screen. The screen in effect becomes a compositor. But then you may also want transactional boundaries, so do you end up wanting the screen's embedded buffers to also support double or triple buffering and a buffer-swap command? Or do you just want a sufficiently fast and coordinated "blank and refill" command that can send a whole screen update as a fast burst, and require the full buffer to be composited upstream of the display link?
This persistence and selective addressing is actually a special feature of the MIP screens embedded in watches etc. They have a link mode to address and update a small rectangular area of the framebuffer embedded in the screen. It sends a smaller packet of pixel data over the link, rather than sending the whole screen worth of pixels again. This requires different application and graphics driver structure to really support properly and with power efficiency benefits. I.e. you don't want to just set a smaller viewport and have the app continue to render into off-screen areas. You want it to focus on only rendering the smaller updated pixel area.
> This seems just as challenging, i.e. having a "sync to vblank" that can adapt all the way down to 1 Hz?
I was under the impression that modern compositors operated on a callback basis where they send explicit requests for new frames only when they are needed.
There are multiple problems here, coming from opposite needs.
A compositor could request new frames when it needs them to composite, in order to reduce its own buffering. But how does it know it is needed? Only in a case like window management where you decided to "reveal" a previously hidden application output area. This is like the older "damage" signals that tell an X application to draw its content again.
But for power-saving, display-persistence scenarios, an application would be the one that knows it needs to update screen content. It isn't because of a compositor event demanding pixels, it is because something in the domain logic of the app decided its display area (or a small portion of it) needs to change.
In the middle, naive apps that were written assuming isochronous input/process/output event loops are never going to be power efficient in this regard. They keep re-drawing into a buffer whether the compositor needs it or not, and they keep re-drawing whether their display area is actually different or not. They are not structured around diffs between screen updates.
It takes a completely different app architecture and mindset to try to exploit the extreme efficiency realms here. Ideally, the app should be completely idle until an async event wakes it, causes it to change its internal state, and it determines that a very small screen output change should be conveyed back out to the display-side compositor. Ironically, it is the oldest display pipelines that worked this way with immediate-mode text or graphics drawing primitives, with some kind of targeted addressing mode to apply mutations to a persistent screen state model.
Think of a graphics desktop that only updates the seconds digits of an embedded clock every second, and the minutes digits every minute. And an open text messaging app only adds newly typed characters to the screen, rather than constantly re-rendering an entire text display canvas. But, if it re-flows the text and has to move existing characters around, it addresses a larger screen region to do so. All those other screen areas are not just showing static imagery, but actually having a lack of application CPU, GPU, framebuffer, and display link activities burning energy to maintain that static state.
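A minimal sketch of that update model (all names and structure hypothetical, just to illustrate the idea): the app stays idle until an event changes its state, then emits only the small damage regions that actually changed, rather than re-rendering a whole frame:

```python
import time

class Clock:
    """Hypothetical clock widget that reports only the regions that changed."""
    def __init__(self):
        self.last_minute = None

    def tick(self, now):
        """Return (region, new_content) damage updates for this tick."""
        damage = [("seconds_digits", now.tm_sec)]   # changes every second
        if now.tm_min != self.last_minute:          # changes once a minute
            damage.append(("minutes_digits", now.tm_min))
            self.last_minute = now.tm_min
        return damage

clock = Clock()
updates = clock.tick(time.localtime())  # first tick damages both regions;
# subsequent ticks within the same minute only damage the seconds digits
```

Everything outside the returned regions stays untouched on the persistent display, with no CPU, GPU, or link activity spent maintaining it.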
I mean sure, you raise an interesting point that at low enough refresh rates application architectures and display protocols begin needing to explicitly account for that fact in order for the system as a whole to make use of the feature.
But the other side of things - the driver, compositor, etc. supporting arbitrarily low frequencies - seems like it's already (largely?) solved in the real world. To your responsiveness point, I guess you wouldn't want to use such a scheme without a variable refresh rate. But that seems to be a standard feature in ~all new consumer electronics at this point. Redrawing the entire panel when you could have gotten away with only a small patch is unfortunate but certainly not the end of the world.
Today I learned laptops come with backlit vs. edge-lit panels, and they have different energy consumption.
There are also mini LED laptops for creative work. A few more things to check before buying a new laptop.
I wouldn't get a mini LED laptop for creative work. We have a mini LED TV, and manufacturers need to choose one of these two problems because of physical limitations:
- The LEDs for a mostly dark region with a point source are too bright so the point source is the correct brightness. Benchmark sites call this "blooming" and ding displays for it, so new ones pick the other problem:
- The LEDs for mostly dark regions with a point source are too dim so the black pixels don't appear gray. This means that white on black text (like linux terminals) render strangely, with the left part of the line much brighter than the right (since it is next to the "$ ls" and "$" of the surrounding lines). Also, it means that white mouse pointers on black backgrounds render as dark gray.
For creative work, I'd pick pretty much any other monitor technology (with high color gamut, of course) over mini LED. However mini-LED is great if you have a TV that is in direct sunlight, since it can blast watts at the brightest parts of the screen without overheating.
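A toy sketch of the zone-level tradeoff described above (the strategies and numbers are illustrative, not how any real controller works): each backlight zone gets a single LED level, so a zone mixing black pixels with a bright point source must pick one of the two problems:

```python
def zone_level(pixels, strategy):
    """Pick one backlight level for a local-dimming zone (hypothetical strategies)."""
    if strategy == "match_peak":     # highlight correct, but blacks glow (blooming)
        return max(pixels)
    if strategy == "match_average":  # blacks darker, but the highlight is crushed
        return sum(pixels) / len(pixels)
    raise ValueError(strategy)

zone = [0, 0, 0, 255]  # mostly black zone with one bright point source
print("match_peak:", zone_level(zone, "match_peak"))        # blooming around the point
print("match_average:", zone_level(zone, "match_average"))  # dim, gray highlight
```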
Tried to open this page on my mobile, good grief the changing advert spam overload kills the reading experience.
Firefox Android + ublock origin. There's ads on the internet? Wouldn't know.
Very weird to see people on hackernews of all places complain about ads on the internet. We solved this like 15 years ago.
Hacker news attracts all sorts of curious people, including luddites like myself! ^_^
Modern software regularly takes like 1 second to load anyways. 200ms is the minimum human reaction time, so adding 100ms would only add like 50% to the REPL user interaction. Something like 10Hz might be quite usable while minimally contributing to lag.
The idea of having a 60Hz screen is nice, but in practice it turns out that display refresh rate is not the bottleneck for most software.
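Rough numbers behind this, assuming the worst case where a screen update has to wait one full refresh interval on top of the ~200 ms reaction time:

```python
REACTION_MS = 200  # rough human reaction time

for hz in (60, 10, 1):
    frame_ms = 1000 / hz           # worst-case wait for the next refresh
    total = REACTION_MS + frame_ms
    print(f"{hz:>3} Hz: +{frame_ms:6.1f} ms -> {total:6.1f} ms perceived")
```

At 10 Hz the worst-case added wait is 100 ms, i.e. about half the reaction time again; at 1 Hz it is a full second, which is why 1 Hz only makes sense for genuinely static content.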
Apple introduced variable refresh rate back in 2015. That’s over a decade ago, I’m sure there’s some new tech involved here, but quite the omission.
And if Apple introduced it a decade ago, then it's at least five years older than that.
What's new here is the 1 Hz minimum.
Apple might have convinced some gullible customers that this was something new.
But to the rest of the world variable refresh rate existed for years by then. As is with most Apple "inventions".
In this case the patent goes back to 1982: https://patents.google.com/patent/US4511892A/en
Apple doesn't manufacture panels, they buy from others. I wonder how Apple can claim they have this feature.
Stroke CRT displays been able to do variable refresh rate since like the 80s, quite the omission there buddy.
What's the chance this will even work on Linux with GNOME?
Perhaps it can do 50Hz, which may be beneficial for emulating PAL systems.
You can use CRU (custom resolution utility) to add 50Hz to most screens.
Ostensibly any display capable of VRR should be able to operate anywhere within its range.
You don't need VRR for this, but there are some step functions of usefulness:
24Hz - now you can correctly play movies.
30Hz - NTSC (deinterlaced) including TV shows + video game emulators.
50Hz - (24 * 2 = 50 in Hollywood. Go look it up!) Now you can correctly play PAL and movies.
120Hz - Can play frame-accurate movies and NTSC (interlaced or not). Screw Europe because the judder is basically unnoticeable at 120Hz.
144Hz - Can play movies + pwn n00bs or something.
150Hz - Unobtanium but would play NTSC (deinterlaced), PAL and movies with frame level accuracy.
240Hz - Not sure why this is a thing, TBH. (300 would make sense...)
240 = 2 x 120, or 4 x 60 (or 8 x 30)
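A quick way to see which panel rates handle common content frame rates without judder (judder-free playback needs each source frame held for a whole number of refreshes, i.e. the panel rate must be an integer multiple of the content rate):

```python
content_fps = [24, 25, 30, 50, 60]
panel_hz = [24, 30, 50, 60, 120, 144, 150, 240]

for hz in panel_hz:
    # judder-free when the panel rate is an integer multiple of the content rate
    ok = [fps for fps in content_fps if hz % fps == 0]
    print(f"{hz:>3} Hz plays cleanly: {ok}")
```

For instance 120 Hz covers 24/30/60 but not 25/50, which is the PAL complaint above; 240 Hz adds nothing over 120 on this axis.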
So if a pixel is not refreshed, it doesn't use any power?
For sample-and-hold panel technologies like LCD and OLED, refresh is about updating the pixel state (color). There is a process that takes place for that even when the pixel data remains unchanged between frames. However, the pixels still need to emit light between refreshes, which for LCD is a backlight but for OLED are the pixels themselves. The light emission is often regulated using PWM at a higher frequency than the refresh rate. PWM frequency affects power consumption as well. Higher PWM frequency is better for the eyes, but also consumes more power.
OLED is fundamentally not sample and hold, because it is using PWM, right?
Ignoring switching costs, keeping a sample-and-hold LED at 0%, 50% and 100% brightness all cost zero energy. For an OLED, the costs are closer to linear in the duty cycle (again, ignoring switching costs, but those are happening much faster than the framerate for OLED, right?)
(Also, according to another comment, the panel manufacturer says this is TFT, not OLED, which makes a lot more sense.)
I don't believe LED-pixel displays use PWM. I would expect them to use bit planes: for each pixel, transform the gamma-compressed intensity to the linear, photon-proportional domain and represent that linear intensity as a binary number. Start with the most significant bit: all pixels with that bit set get a current pulse. For the next bitplane, all pixels having the 2nd bit set are turned on with half that current for the same duration, and each successive bitplane sends half as much current per pixel. After the least significant bitplane has been lit, each pixel location has emitted a total number of photons proportional to what was requested in the linear domain.
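A sketch of that bit-plane decomposition (illustrative only, not a real driver): plane k is driven with current proportional to 2^k, so summing the light from the lit planes reconstructs the linear intensity exactly:

```python
def bit_planes(intensity, bits=8):
    """(weight, on/off) pairs from MSB to LSB; weight ~ relative drive current."""
    return [(1 << k, (intensity >> k) & 1) for k in range(bits - 1, -1, -1)]

def emitted_light(intensity, bits=8):
    # Each lit plane emits light proportional to its weight; the sum
    # reconstructs the requested linear intensity.
    return sum(w * on for w, on in bit_planes(intensity, bits))

# The reconstruction is exact for every 8-bit linear intensity value.
assert all(emitted_light(v) == v for v in range(256))
```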
So for an 8bit color display you have 24 lines of various currents going across each row (or column) of pixels?
PWM still counts as sample-and-hold, because it sustains the brightness throughout the duration of a frame, resulting in significant motion blur. The converse are impulse-driven displays like CRT and plasma.
LED backlights using PWM likewise don’t change the sample-and-hold nature of LCD panels.
My understanding is that PWM switching costs aren’t negligible, and that this contributes to why PWM frequencies are often fairly low.
E-ink displays can do this. That's why they're used in ereaders. The display in TFA OTOH emits light, so definitely not.
It does, especially with LCDs like this, where the backlight is the primary driver of the power consumption of the panel.
I'm not even sure how they got their 48% figure. Sounds like a whole-system measurement, maybe that's the trick.
If the screen is only refreshing once per second, less energy is used to refresh the screen. The pixel uses the same amount of power.
I was not under the impression that sending some control signals took that much power.
Maybe not, but doing it once a second instead of 60 times a second is a pretty massive savings.
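A toy power model (numbers entirely made up) showing how large the refresh-related slice has to be for a 60Hz-to-1Hz drop to yield big panel-level savings:

```python
def panel_power(hz, emit_mw=800, refresh_mw_at_60=400):
    """Made-up numbers: constant emission power + rate-proportional refresh power."""
    return emit_mw + refresh_mw_at_60 * (hz / 60)

p60, p1 = panel_power(60), panel_power(1)
savings = 1 - p1 / p60
print(f"60 Hz: {p60:.0f} mW, 1 Hz: {p1:.0f} mW, panel saving {savings:.0%}")
```

With these assumed numbers (refresh is a third of panel power), dropping to 1 Hz saves about a third; a 48% figure would need the refresh component to dominate even more, or to be a whole-system measurement that also idles the GPU and link.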
You have to compute the new frame too. I would assume that is where most of the power use is.
This is just regurgitating the manufacturer's claim. I'll believe it when I see it. Most of display energy use is to turn on the OLED/backlight. They're claiming that because their display flickers less, it's 48% more efficient now.
Make a new phone with this please.
imagine what it will do to neo!
I once had an external monitor with a maximum refresh rate of 30 Hz, and mouse movements were noticeably sluggish. It was part of a multi-monitor setup, so it was very obvious as I moved the mouse between monitors.
I'm not sure if this LG display will have the same issue, but I won't be an early adopter.
Read the article.
The display has a refresh rate of 120hz when needed. The low refresh rate is for battery savings when there is a static image.
Variable refresh rate for power savings is a feature that other manufacturers already have (apple for one). So you might already be an early adopter.