A new Wayland protocol is in the works that should support screen cutout information out of the box: https://phosh.mobi/posts/xdg-cutouts/ Hopefully this will be extended to include color information whenever applicable, so that "hiding" the screen cutout (by coloring the surrounding area deep black) can also be a standard feature and maybe even be active by default.
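Out of curiosity, here is a rough client-side sketch of what consuming such a protocol could look like. The registry plumbing below is the real libwayland API, but the interface name ("zxdg_cutout_manager_v1") and everything about it is a placeholder I made up, since the protocol is still a draft:

    /* Hypothetical sketch: detect whether the compositor advertises a
     * cutout protocol. Only the registry mechanics are real libwayland;
     * the "zxdg_cutout_manager_v1" name is invented for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <wayland-client.h>

    static void global_add(void *data, struct wl_registry *reg,
                           uint32_t name, const char *iface, uint32_t ver) {
        /* A real client would bind the scanner-generated interface here
         * and listen for cutout geometry (and, ideally, color) events. */
        if (strcmp(iface, "zxdg_cutout_manager_v1") == 0)
            printf("compositor advertises cutouts (global %u, v%u)\n", name, ver);
    }

    static void global_remove(void *data, struct wl_registry *reg, uint32_t name) {}

    static const struct wl_registry_listener listener = {
        .global = global_add,
        .global_remove = global_remove,
    };

    int main(void) {
        struct wl_display *dpy = wl_display_connect(NULL);
        if (!dpy) return 1;
        struct wl_registry *reg = wl_display_get_registry(dpy);
        wl_registry_add_listener(reg, &listener, NULL);
        wl_display_roundtrip(dpy); /* delivers the initial burst of globals */
        wl_display_disconnect(dpy);
        return 0;
    }

(Build with cc cutout.c -lwayland-client.)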
You can't be serious. Wayland is the opposite of modular, and the concept of an extensible protocol only creates fragmentation.
Every compositor needs to implement the giant core spec, or, realistically, rely on a shared library to implement it for them. Then every compositor can propose and implement arbitrary protocols of their own, which all client applications are then expected to support.
It's insanity. This thing is nearly two decades old, and I still have basic clipboard issues[1]. This esoteric cutouts feature has no chance of seeing stable real-world use for at least a decade.
[1]: https://bugs.kde.org/show_bug.cgi?id=466041
Shh... you're not supposed to mention these things, lest you be downvoted to death.
I also have tremendous issues with Plasma. Things such as graphics glitching in the alt+tab task switcher or Firefox choking the whole system when opening a single 4k PNG image. This is pre-alpha software... So back to X11 it is. Try again in another decade or two.
Each controller and subcomponent on the motherboard needs a driver that correctly puts it into low power and sleep states to get battery savings.
Most of those components are proprietary and don't use the standard drivers available in Linux kernel.
So someone needs to go and reverse engineer them, upstream the drivers, and pray that Apple doesn't change them in the next revision (which they did), or the whole process needs to start again.
In other words: get an actually Linux-supported laptop for Linux.
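Whether those drivers actually do their job is visible through the kernel's runtime PM sysfs ABI, by the way. A minimal sketch that lists PCI devices and their power state (standard sysfs files; on Apple Silicon most peripherals are platform devices under /sys/bus/platform instead, so treat the path as an assumption):

    /* List each PCI device's runtime-PM status using the standard sysfs
     * ABI. Devices that never leave "active" (or report "unsupported")
     * are the ones costing you battery. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *base = "/sys/bus/pci/devices";
        DIR *dir = opendir(base);
        if (!dir) { perror(base); return 1; }
        struct dirent *de;
        while ((de = readdir(dir)) != NULL) {
            if (de->d_name[0] == '.') continue;
            char path[512], status[32] = "?";
            snprintf(path, sizeof path, "%s/%s/power/runtime_status",
                     base, de->d_name);
            FILE *f = fopen(path, "r");
            if (f) {
                if (fgets(status, sizeof status, f))
                    status[strcspn(status, "\n")] = '\0';
                fclose(f);
            }
            printf("%-14s %s\n", de->d_name, status);
        }
        closedir(dir);
        return 0;
    }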
That's an admirable goal, but, depending on the hardware, it can run into that pesky thing called reality.
It's getting very tiresome to hear complaints about things that don't work on Linux, only to find that they're trying to run it on hardware that's poorly supported, and that's something they could have figured out by doing a little research beforehand.
Sometimes old hardware just isn't going to be well-supported by any OS. (Though, of course, with Linux, older hardware is more likely to be supported than bleeding-edge kit.)
This is very true. I've been asked by lots of people "how do I start with Linux" and, despite being a 99.9% Linux user for everything, every day, my advice was always:
1. Use VirtualBox. Seriously, it won't look cool, but it will 100% work after maybe 5 mins mucking around with installing guest additions. Also snapshots. Also no messing with WiFi drivers or graphics card drivers or such.
2. Get a used beaten down old Thinkpad that people on Reddit confirm to be working with Linux without any drivers. Then play there. If it breaks, reinstall.
3. If the above hasn't put you off yet, THEN dual boot.
Also, if you don't care about GUI, then use the best blessing Microsoft ever created - WSL, and look no further.
This doesn't match my experience. My previous three laptops (two AMD Lenovo Thinkpads, one Intel Sony VAIO) had essentially the same battery life running Linux as running Windows.
The new project leadership team has prioritized upstreaming the existing work over reverse engineering on newer systems.
> Our priority is kernel upstreaming. Our downstream Linux tree contains over 1000 patches required for Apple Silicon that are not yet in upstream Linux. The upstream kernel moves fast, requiring us to constantly rebase our changes on top of upstream while battling merge conflicts and regressions. Janne, Neal, and marcan have rebased our tree for years, but it is laborious with so many patches. Before adding more, we need to reduce our patch stack to remain sustainable long-term.
https://asahilinux.org/2025/02/passing-the-torch/
For instance, in this month's progress report:
> Last time, we announced that the core SMC driver had finally been merged upstream after three long years. Following that success, we have started the process of merging the SMC’s subdevice drivers which integrate all of the SMC’s functionality into the various kernel subsystems. The hwmon driver has already been accepted for 6.19, meaning that the myriad voltage, current, temperature and power sensors controlled by the SMC will be readable using the standard hwmon interfaces. The SMC is also responsible for reading and setting the RTC, and the driver for this function has also been merged for 6.19! The only SMC subdevices left to merge is the driver for the power button and lid switch, which is still on the mailing list, and the battery/power supply management driver, which currently needs some tweaking to deal with changes in the SMC firmware in macOS 26.
> Also finally making it upstream are the changes required to support USB3 via the USB-C ports. This too has been a long process, with our approach needing to change significantly from what we had originally developed downstream.
https://asahilinux.org/2025/12/progress-report-6-18/
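The "standard hwmon interfaces" mentioned above are just sysfs files, so once the driver lands no special tooling is needed. A minimal sketch that prints each hwmon chip and its first temperature sensor (this assumes the common temp1_input layout, which not every driver exposes):

    /* Walk /sys/class/hwmon and print each chip's name plus temp1_input,
     * which the hwmon ABI defines in millidegrees Celsius. Chips without
     * a temp1_input are silently skipped. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        const char *base = "/sys/class/hwmon";
        DIR *dir = opendir(base);
        if (!dir) { perror(base); return 1; }
        struct dirent *de;
        while ((de = readdir(dir)) != NULL) {
            if (de->d_name[0] == '.') continue;
            char path[512], name[64] = "?";
            long milli_c;
            snprintf(path, sizeof path, "%s/%s/name", base, de->d_name);
            FILE *f = fopen(path, "r");
            if (f) { fscanf(f, "%63s", name); fclose(f); }
            snprintf(path, sizeof path, "%s/%s/temp1_input", base, de->d_name);
            f = fopen(path, "r");
            if (f) {
                if (fscanf(f, "%ld", &milli_c) == 1)
                    printf("%s (%s): %.1f C\n", de->d_name, name, milli_c / 1000.0);
                fclose(f);
            }
        }
        closedir(dir);
        return 0;
    }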
Asahi is all reverse engineering. It’s nothing short of a miracle what has already been accomplished, despite, not because of, Apple.
That said, some of the prominent developers have left the project. As long as Apple keeps hoarding their designs, it’s going to be a struggle, even more so now.
If you care about FOSS operating systems or freedom over your own hardware there isn’t a reason to choose Apple.
To be clear, the work the asahi folks are doing is incredible. I’m ashamed to say sometimes their documentation is better than the internal stuff.
I’ve heard it’s mostly because there wasn’t an M3 Mac mini, which is a much easier target for CI since it isn’t a portable. Also, there have been a ton of hardware changes internally between M2 and M3. M4 is a similar leap. More coprocessors, more security features, etc.
For example, PPL was replaced by SPTM and all the exclave magic.
https://randomaugustine.medium.com/on-apple-exclaves-d683a2c...
As always, opinions are my own.
I strongly support a project's stance that you shouldn't ask when it will be done. But the time between the M1 launch and a good experience was less than the time since the M3 launched, so I would love to know what is involved.
Very little progress made this year after high-profile departures (Hector Martin, project lead; Asahi Lina and Alyssa Rosenzweig, GPU gurus). Alyssa's departure isn't reflected on Asahi's website yet, but it is in her blog. I believe she also left Valve, which I think was sponsoring some aspects of the Asahi project. So when people say "Asahi hasn't seen any setbacks", be sure to ask them who has stepped in to make up for these losses in both talent and sponsorship.
Without official support, the Asahi team needs to RE a lot of stuff. I’d expect it to lag behind by a couple of generations at least.
I blame Apple for pushing out new models every year. I don’t get why it does that. An M1 is perfectly fine after a few years, but Apple treats it like an iPhone. I think one new model every 2-3 years is good enough.
The M1 is indeed quite adequate for most, but each generation has brought substantial boosts in single-threaded and multi-threaded performance and, with the M5 generation in particular, in GPU-bound tasks. These advancements are required to keep pace with the industry and, in a few aspects, stay ahead of competitors; plus, there exist high-end users whose workloads greatly benefit from these performance improvements.
I agree. But Apple doesn’t sell new M1 chip laptops anymore AFAIK. There are some refurbished ones but most likely I need to go into a random store to find one. I only saw M4 and M5 laptops online.
That’s why I don’t like it as a consumer. If they kept producing the M1 and M2, I’d assume we could get better prices because the total quantity would be much larger. Sure, it is probably better for Apple to move forward quickly though.
In the US, Walmart is still selling the M1 MacBook Air new, for $599 (and has been discounted to $549 or better at times, such as Black Friday).
In general, I don't think it's reasonable to worry that Apple's products aren't thoroughly achieving economies of scale. The less expensive consumer-oriented products are extremely popular, various components are shared across product lines (eg. the same chip being used in Macs and iPads) and across multiple generations (except for the SoC itself, obviously), and Apple rather famously has a well-run supply chain.
From a strategic perspective, it seems likely that Apple's long history of annual iteration on their processors in the iPhone, and their now well-established pattern of updating the Mac chips less often but still frequently, is part of how Apple's chips have been so successful. Annual(ish) chip updates with small incremental improvements compound over the years. Compare Apple's past decade of chip progress against Intel's troubled past decade of infrequent technology updates (when you look past the incrementing of the branding), uneven improvements, and some outright regressions in important performance metrics.
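The compounding point is easy to quantify. A toy calculation (the 15% yearly gain is a made-up illustrative rate, not a measured Apple figure):

    /* Compound a hypothetical 15% per-year improvement over a decade.
     * Small increments roughly double performance in five years and
     * quadruple it in ten. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double yearly_gain = 0.15; /* assumed, for illustration */
        for (int year = 1; year <= 10; year++)
            printf("year %2d: %.2fx\n", year, pow(1.0 + yearly_gain, year));
        return 0;
    }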
> That’s why I don’t like it as a consumer. If they kept producing the M1 and M2, I’d assume we could get better prices because the total quantity would be much larger.
Why would this be true? An M5 MacBook Air today costs the same as an M1 MacBook Air cost in 2020 or whenever they released it, and is substantially more performant. Your dollar per performance is already better.
If they kept selling the same old stuff, production would be spread across multiple different nodes and the pricing would be inherently worse.
If you want the latest and greatest you can get it. If an M1 is fine you can get a great deal on one and they’re still great machines and supported by Apple.
The author mentions he paid $750 for a MacBook Air M2 with 16GB, while on Amazon an M4 Air with 16GB is usually $750-800. I get that the M3/M4 can't boot Asahi yet, but still.
I really wanted this to work, and it WAS remarkably good, but palm rejection on the (ginormous) Apple trackpad didn't work at all, rendering the whole thing unusable if you ever typed anything.
That was a month ago; this article is a year old. I'd love to be wrong, but I don't think this problem has been solved.
40% battery for 4 hours of real work is better than pretty much any Linux-supported laptop I've ever used.
For a lot of people the point is to extend the life of their already-purchased hardware.
If your vendor is hostile like Apple, it will be hard to keep it working.
2. If your priority is system lifespan, you are already using OEM macOS.
In every thread about Linux, someone inevitably says “it gave new life to my [older computer model].” We’ve all seen it countless times.
https://lore.kernel.org/asahi/20251215-macsmc-subdevs-v6-4-0...
Stop buying Apple laptops to run Linux.
https://rosenzweig.io/blog/asahi-gpu-part-n.html
I've got a few ideas