> aims to remove: Most AI features, Copilot, Shopping features, ...
I grew up on DOS, and my first browser was IE3. My first tech book as a kid was for HTML[1], and I was in absolute awe at what you could make with all the tags, especially interactive form controls.
I remember Firefox being revolutionary for simply having tabs. Every time a new Visual Basic (starting with DOS) release came out, I was excited at the new standardized UI controls we had available.
I remember when Tweetie for iPhone OS came out and invented pull-down refresh that literally every app and mobile OS uses now.
Are those days permanently gone? The days when actual UI/UX innovation was a thing?
[1] Can someone help me find this book? I've been looking for years. It used the Mosaic browser.
I feel like wishing for UI innovation is using the Monkey's Paw. My web experience feels far too innovative and not consistent enough. I go to the Internet to read and do business, not to explore the labyrinth of concepts UI designers feel I should want. Take me back to standards, shortcuts, and consistency.
The labyrinth of ways to interact with the temporal path between pages is a cluster: history, bookmark, tab, window, tab groups.
There are many different reasons to have a tab, bookmark, or history entry. They don't all mean the same thing. Even something as simple as comparison shopping could have a completely different workflow of sorting and bucketing the results, including marking items as "leading candidate", "candidate", "no", "no but". Contextualizing why I am leaving something open vs closing it is information stored ONLY in my head that would be useful to have stored elsewhere.
Think about when you use the back button vs the close-tab button. What does the difference between those two concepts mean to you? When do you choose to open a new tab vs just click the link? There is much to be explored and innovated here. People have tried radical redesigns; I haven't seen anything stick yet.
If you expect the browser to help you manage your various workflows beyond generic containers (tabs, tab groups), then you become tied into the browser's way of doing things. Are you sure you want that?
I'm not saying your hopes are bad, exactly. I'm interested in what such workflows might look like. Maybe there _is_ a good UX for a web shopping assistant. I have an inkling you could cobble something interesting together quite fast with an agentic browser and a note-taking webapp. But I do worry that such an app will become yet another way for its owner to surveil their users in some of the more accurate and intimate areas of their lives. Careful what you wish for, I reckon.
In the meantime, what's so hard about curating a Notepad/Notes/Obsidian/Org mode file, or Trello/Notion board to help you manage your projects?
Yes! I don't want a car with an "innovative" way of steering. I don't want a huge amount of creativity to go into how my light switches work. I don't want shoes that "reinvent" walking for me (whatever the marketing tagline might say).
Some stuff has been solved. A massive number of annoyances in my daily life are due to people un-solving problems that had more or less standardized solutions, because of perverse economic incentives.
99.5% agree, because I would love to try SAAB's drive-by-wire concept from 1992: https://www.saabplanet.com/saab-9000-drive-by-wire-1992/
The reason this was only a research project and never went into mass production was regulatory stuff, IIRC?
(Most EU countries still require, to this day, a "physical connection between steering wheel and wheels" in their traffic regulations.)
> Yes! I don't want a car with an "innovative" way of steering.
You might, but you'll never really know.
I mean, steering wheels themselves were once novel inventions. Before those there were "tillers" (essentially a rod with a handle)[0], and before those: reins, to pull the front in the direction you want.
[0]: https://en.wikipedia.org/wiki/Benz_Patent-Motorwagen
I highly doubt there's a steering input device so superior to the current wheel shape that it's worth throwing out the existing standard. Yes, at one point how steering should work (or how you should navigate the Web) was uncertain, but eventually everyone settled on something that worked well enough that it was no longer worthwhile to mess with it.
Although, one thought I had is that there's nothing wrong with experimenting with non-standard interfaces as long as you still have the option to just buy, say, a Toyota with a standard steering wheel instead of 3D Moebius Steering or whatever. The problem is when the biggest manufacturers keep forcing changes by top-down worldwide fiat, forcing customers to either grin and bear it or quit driving (or using the Web) entirely.
I sympathise with the frustration, but I think the issue isn't innovation itself: it's that we've lost the ability to distinguish between solving actual problems and just making things different.
Take mobile interfaces. When touchscreens arrived, we genuinely needed new patterns. A mouse pointer paradigm on a 3.5" screen with fat fingers simply doesn't work. Swipe gestures, pull-down menus, bottom navigation—these emerged because the constraints demanded it, not because someone thought "wouldn't it be novel if..."
The problem now is that innovation has become cargo-culted. Companies innovate because they think they should, not because they've identified a genuine problem. Every app wants its own navigation paradigm, its own gesture language, its own idea of where the back button lives. That's not innovation, that's just noise.
However, I'd have to push back on the car analogy: steering wheels were an innovation over tillers, and a crucial one. Tillers gave you poor mechanical advantage and required constant two-handed attention. The steering wheel solved real problems: better control, one-handed operation, more space for passengers. It succeeded because it was genuinely better, and then it standardised because there was no reason to keep experimenting.
The web needs more of that approach: innovate when there's a genuine problem, then standardise when you've found something that works. The issue isn't innovation, it's the perverse incentive to differentiate for its own sake.
You need to be careful here, because we have a real tendency to get stuck in local maxima with technology. For instance, the QWERTY keyboard layout exists to prevent typewriter keys from jamming, but we're stuck with it because it's the "standardized solution" and you can't really buy a non-QWERTY keyboard without getting into the enthusiast market.
I do agree changing things for the sake of change isn't a good thing, but we should also be afraid of being stuck in a rut
I agree with you, but I'm completely aware that the point you're making is the same point that's causing the problem.
"Stuck in a rut" is a matter of perspective. A good marketer can make even the most established best practice be perceived as a "rut", that's the first step of selling someone something: convince them they have a problem.
It's easy to get a non-QWERTY keyboard. I'm typing on a split ortholinear one now. I'm sure we agree it would not be productive for society if 99% of regular QWERTY keyboards deviated a little in search of that new innovation that will turn their company into the next Xerox or Hoover or Google. People need some stability to learn how to make the most of new features.
Technology evolves in cycles: there's a boom of innovation and mass adoption which inevitably levels out into stabilisation and maturity. It's probably time for browser vendors to accept it's time to transition into stability and maturity. The cost of not doing that is that things like adblockers, NoScript, justthebrowser, etc. will gain popularity and remove any anti-consumer innovations they try. Maybe they'll get to a position where they realise their "innovative" features are being disabled by so many users that it makes sense to shift dev spending to maintenance and improvement of existing features, instead of "innovation".
> For instance, the QWERTY keyboard layout exists to prevent typewriter keys from jamming, but we're stuck with it because it's the "standardized solution" and you can't really buy a non-QWERTY keyboard without getting into the enthusiast market.
So, we are "stuck" with something that apparently seems to work fine for most people, and when it doesn't there is an option to also use something else?
If you mean the default German keyboard layout then, yes, putting backslashes, braces and brackets behind AltGr makes it sub-optimal in my book. Thankfully what's printed on the keys is not that important, so you too can have a QWERTY keyboard if you want.
As someone who makes my own keyboard firmware, 100% agree. For most people, typing speed isn't a bottleneck. There is a whole community of people who type faster than 250 wpm on custom, chording-enabled keyboards. The tradeoff is that it takes years to relearn how to type. It's the same as being a stenographer at that point. It's not worth it for most people.
Even if there was a new layout that did suddenly allow everyone to type twice as fast, what would we get with that? Maybe twice as many social media posts, but nothing actually useful.
One doesn't need to be a scientist: just look at your own hands and fingers to see that they are not skewed to the left. An ortholinear keyboard would be objectively better, even with the same QWERTY keymap, but we don't produce those for the masses, for a variety of reasons. Same with many other ideas.
If I recall correctly, QWERTY was designed to minimize jamming. The myth is that it was designed to slow people down.
Whether it does slow people down, as a side effect, is not as well established since, as another person pointed out, typing speed isn't the bottleneck for most people. Learning the layout and figuring out what to write is. On top of that, most of the claims for faster layouts come from marketing materials. It doesn't mean they are wrong, but there is a vested interest.
If there was a demonstrably much faster input method for most users, I suspect it would have been adopted long ago.
It definitely feels like it is gone. Of course I'm largely talking about the applications that I use, e.g. MS Word which is still using the searchless 1980s character map and has a crazy esoteric add-on installation process. It's hilariously bad when we consider the half-screen UI which obscures a considerable amount of the ribbon.
The UX is also awful.
But I think this is a compounding problem that spans generations of applications. Consider the page convention — a great deal of the written content we publish, at a societal level, will be digital-only, so why are we still defaulting to paper document formats? Why is it so fucking hard to set a picture in place?
And it's that kind of ossification and familiar demand that reinforces the continuum that we see, I think. And when a company does get creative and sees some breakthrough success it is constrained to nascency before it gets swallowed by conglomerate interests and strangled.
And Google's alternative ecosystem has all of these parallels. It's crazy to see these monolithic companies floundering like this. That's what I don't understand.
Kinda yeah, kinda no. Big-thinking drastic UI experiences are usually shit. But small, thoughtful touches made with care can still make a big difference between a website that just delivers the data you need and one that's pleasant to interact with.
There's a similar amateurs-do-too-much effect with typography and design. I studied typography for four semesters in college, as well as creative writing. The best lessons I learned were:
In writing, show, don't tell.
In typography, use the type to clarify the text - the typography itself should be transparent and only lead to greater immersion, never take the reader out of the text.
Good UI follows those same principles. Good UX is the UI you don't notice.
> invented pull-down refresh that literally every app and mobile OS uses now
I'm forced to use WhatsApp for a local group, and for some reason, when in the group chat, when I pull up to ensure that I see the latest message, that stupid app opens an audio-recording thingy at the bottom as if I wanted to send an audio note to the group.
Who designed that? Has that person been fired?
Also, I wish that on Windows "windows" weren't able to provide their own chrome and remove the title bar. Add some things to it yes, but fully replace it? No thank you.
> Can someone help me find this book? I've been looking for years. It used the Mosaic browser.
Would it happen to be HTML Manual of Style: Clear, Concise Reference for Hypertext Markup Language by Larry Aronson? [1]
From the description:
> This book introduces HTML, the program language used to create World-Wide Web "pages", so that users of Mosaic and other Web browsers can access data. Forty to 50 new "pages" are being added to the WWW every day and this will be the first book out on the subject.
Forty to fifty new "pages" per day! </Dr. Evil air quotes>
[1]: https://welib.org/md5/d456fbbef6aee150706c6a507a031593
But it would be funny if it's this: https://archive.org/details/teachyourselfweb00lema/page/n9/m...
- https://www.goodreads.com/book/show/11177063-creating-cool-w...
- https://www.goodreads.com/book/show/1097095.HTML_for_Dummies...
Why would it be funny though? Am I missing something?
> Are those days permanently gone? The days when actual UI/UX innovation was a thing?
I think "yes" and "a bit", in that order. The early days of the web and mobile, where everything was new, are gone. In those days, there was no established pattern for standard UX. Designers had to innovate.
It makes sense that we have a lot less innovation now. There's probably room for a lot more than we see, but not for the level that was there in the early days of the web.
Only speaking for myself, but I have "front end exhaustion". Text based sites like this are the only ones I spend any time on anymore.
There's no reason to "learn" a UI or use shortcuts on most sites, because they change everything around every few months.
I see people reminiscing about tabs in firefox, well today a majority of the top websites don't even allow you to open links in new tabs! The links aren't even real links anymore, and everything's a webapp. ( and by top websites, I mean social media, not the top sites used by the HN crowd. Sites like YT, FB, IG, and TT ).
I try to interact with the "UI" of websites as little as possible these days. I use RSS readers for as much as possible. Any time I get a popup on any site, I get mad. I don't care about news updates, software updates, or offers. Anything that pops up at me, or moves around before I can click it, looks like a scam to me. Even if it's "legitimate". The modern web feels like an arcade game that's trying to waste my time.
> The days when actual UI/UX innovation was a thing?
There is more than enough of it. Now it is, of course, AI agents. Before that, Material Design was quite innovative. Interestingly, with the rise of search engines and later LLMs, we are getting back to the command line! It is not the scary black window where you type magic incantations, it is a less scary text field where you type in natural language, but fundamentally, it works like a command line.
Is it a good thing? For me, it is a mixed bag. I miss traditional desktop UIs (pre-Windows 8), but I like search-based UIs on the desktop, and I am not a fan of AI agents: too slow and unpredictable, and that's before privacy considerations. When it is not killing performance, I find Material Design to be pretty good on mobile, but terrible on the desktop. That there is innovation doesn't mean it is all good.
> Are those days permanently gone? The days when actual UI/UX innovation was a thing?
I mostly agree with your sentiment. But I think there is still some work being done, for example the Arc and Zen browsers. I never used Arc because it is closed source, but it sure looked beautiful. And Zen I tested, but it seemed laggy. I think I might give it another go to see if some of the performance issues have been fixed.
> Are those days permanently gone? The days when actual UI/UX innovation was a thing?
No. You just need to look outside of desktop computing, and computing in general.
For example, I'm getting into CAD and 3D printing. Learning it reminds me of when my father learned to program in the late '80s, or of my grandfather telling me about how he got his Model A up to 50 mph.
Remember: Desktop computers and the web are ultimately tools for a purpose, and that purpose isn't always "nerd toy." We (the nerds) need to find and invent our toys every generation or so.
Fun fact: Opera had tab functionality before Firefox. In fact, a little-known browser from the 90s called InternetWorks is thought to be the first that had them.
I was an Opera user. They were the innovators in the browser space back in the day. Eventually it just felt too bloated, and sadly now they are essentially another Chromium fork.
Yes. Coming from DOS to today, all the UI/UX that could have been created has been created. What we have now is a loop of attempts to refresh what exists, but it's hard, mainly because it's now everywhere and it has reached maturity.
As an example, the "X" to close and the left arrow for back won't be replaced before a long time, just like we still have a floppy to represent save.
Cars have tried to refresh their UI/UX but they failed and are now reverting to knobs and buttons.
It seems that VisionOS is a place where innovation could come but it's not really a success.
Moreover, designers keep trying to justify their own jobs by changing fully functional interfaces, and then claiming post-hoc that the new UIs are better because they are better.
Designers decided that scrollbars that shrink to super-thin columns when not in use were better. Maybe... but often it results in shrunken scrollbars that require extra work to accurately hover over and expand.
Designers decided that gray text on gray backgrounds was easier to read, and there was even a study to "prove" it... which resulted in idiots picking poor contrast choices of gray-on-gray, without understanding the limits of this idea.
I will say that the current push for accessibility is forcing some of these "innovations" back onto the junk heap where they belong. I was annoyed the first time an accessibility review complained about the contrast of my color choices on a form once... but once I got over my ego, I have to admit they were right; the higher-contrast colors are easier to read.
Honestly, I could endlessly and vehemently express my frustration to any designer who finds this "cool".
/* rant */
Those designers never had to scroll through a long, long scrollable section of a page to reach the end, only to sadly discover that the "End" key doesn't work, because of course the browser goes to the end of the page, not the end of the scrollable section.
And of course, the scrollbar is 2 pixels wide (I took a screenshot to measure it) and it's only visible if I put my mouse in the section.
And of course, it's right next to the scrollbar that the dev decided to put the action icons for each item in the scrollable section.
1 pixel left: the popup to delete the item opens; 1 pixel right: the scrollbar.
And of course, if I increase the zoom in my browser, everything grows, except the scrollbar.
I can have icons the size of my fist on a 27" screen but those scrollbars stay thinner than a strand of uncooked spaghetti.
/* end of rant */
I remember what it was like before tabs, when there was that Multi Document Interface (something like that) instead, so you had the main parent window but then each page was its own window within it that you could resize, minimise, maximise…
MDI was rightfully seen as a complete failure, but there was also SDI, where each open thing is a separate window. I don't know how we got from MDI in office apps being completely terrible, to MDI in browsers being the accepted norm.
Actual MDI was so much worse than browser tabs: with tabs, unrelated tabs can be merged into the same window or split apart into their own windows, instead of floating on top of an awkward background.
The question is why aren't they a feature of the window manager instead of the application. We should be able to have windows with tabs from different applications.
Tabbed MDI is effectively just a better interface to SDI (for most situations)
Actual MDI applications feel so dated. It made more sense when there wasn't a unified task bar kinda thing (which when you think of it, is kinda like tabs as well)
Well, websites and documents are not the same thing, so it makes sense that a paradigm that works for one doesn't necessarily work for the other. I do find web-based document editors very annoying to use when they are in the same window as other tabs - at least web browser MDIs allow you to effortlessly separate tabs into a new window these days.
Good example because Liquid Glass is obviously preparing for the next paradigm shift in computing which will actually require/open up a lot of innovation on the UI front again.
Apple has the unfortunate burden of needing to shepherd millions of developers over to this new paradigm (AR) before it really exists, and so is shoving Liquid Glass onto devices that don't really benefit from it.
But in practice people are generally not happy about lots of new experimentation going on. By definition, most of the results suck. In retrospect we get to stand in awe of those that survived the evolutionary battle and say "wow browser tabs" and "wow pull to refresh" and forget the millions of other bad ideas that we tried.
> Good example because Liquid Glass is obviously preparing for the next paradigm shift in computing which will actually require/open up a lot of innovation on the UI front again.
Bruh, I just want to be able to read the text on my phone.
Yeah: most experiments fail and even the ones that ultimately succeed have rough edges.
That's my point about people swooning about the days of UI experimentation. There's a reason we don't do it once we figure out good solutions to problems (experimentation is hard and mostly bad).
Vista/Aero 2.0 was purely for aesthetics. Liquid Glass is obviously to enable UIs overlaid on top of uncontrolled content (i.e. camera input from the real world, or be used through fully transparent displays).
Apple really has to bite the bullet somehow here if they want to get everyone over to what they see as the next computing paradigm.
Much like the transparent glass tablets in sci-fi movies, this looks pretty cool but I think it makes text hard to read and gets old immediately. Is it really a compelling new paradigm?
I think if I had a really improved version of Apple Vision I would still want non-transparent windows that are clean and easy to read, not floating holograms with glass-like distortion.
All important questions to answer and problems to solve.
It would be interesting if someone had a way to throw a couple hundred thousand designers and developers into an environment where they have to find solutions, so we could get a head start before the relevant hardware goes fully mass-market...
I already have a physical keyboard! So what will a touchscreen do for me?
Turns out that interaction shift actually enabled a lot.
IMO any individual (like you or I) are unlikely to immediately conjure up every possible high-value idea that AR makes possible.
Not saying those ideas necessarily exist (though I suspect they do), just that your lack of imagination isn't evidence against them existing and being discoverable in the next 10-20 years.
I went through the same (or at least very similar) experience. I loved that.
New apps were announced in blogs, and people downloaded them to try them out. I remember downloading Opera, using it for a few days or weeks, and then going back to Firefox.
Can we stop innovating on UI for existing problems?
The standard affordances for most well-known problems are long settled. Unless you're solving an entirely new class of problem, maybe you don't need to reinvent a large number of wheels, again. We're all tired of the triangular wheels coming out.
Which makes it funny that the request for UI innovation is prefixed with a quote that amounts to "but what if browsers were permanently frozen ca. 2012?". Mind, I can sympathize with some of the thoughts behind the request, even if I disagree - but you can't ask for a stop in new features & problem classes to be accompanied by continued UI innovation.
That is, as my art teacher used to say, "intellectual wankery in the disguise of creativity".
> Are those days permanently gone? The days when actual UI/UX innovation was a thing?
I don't think these are permanently gone, but the corporations failed us, and also the "not for profit" fakers such as Mozilla.
We need a new web - one developed by the people, for the people. Whenever corporations jump in, they try to skew things to their favour, which almost always means in disfavour of the people.
Except in the early days of smartphones, people pushed back against pull-to-refresh[0][1][2]. Android devs were confused about why it was a thing. It's a design with zero discoverability - how do you know what would happen when you pull down? Perhaps the app would show a search bar. Or pinned posts if it's a forum board. Or ask you to review the app? How do you know pulling down is a gesture at all?
The only reason pull-to-refresh got accepted is that it came so early that the UX of smartphone apps wasn't well established, before pull-to-search or pull-to-whatever had a chance.
[0] https://web.archive.org/web/20201204045158/https://www.fastc...
[1] https://web.archive.org/web/20120331181045/http://android.cy...
[2] https://www.reddit.com/r/androiddev/comments/vbt6d/pull_to_r...
Discoverability is more than simply visual cues. Seeing other people do it counts.
> nihilistically
It's quite nihilistic to think history doesn't exist and things were born as they are currently.
I had a look at what it actually does in the Firefox settings and all it seems to do is to disable one AI feature flag, change the default search engine, and then set a few other flags that are changes that you may or may not want to make, unrelated to AI. Not sure you want to run a 3rd party shell script just to do that…
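If it really is just a handful of prefs, you can flip them yourself, per profile, without running anyone's script. A rough sketch of the user.js route (no sudo needed) - the pref name below is my guess at the kind of flag involved, so confirm it in about:config and look up your profile folder in about:profiles first:

    # Append a pref override to user.js in your own Firefox profile (no root needed).
    # The profile path is a placeholder - find the real one via about:profiles.
    PROFILE="$HOME/.mozilla/firefox/xxxxxxxx.default-release"
    printf '%s\n' \
      '// assumed pref name for the AI chatbot sidebar - verify in about:config' \
      'user_pref("browser.ml.chat.enabled", false);' \
      >> "$PROFILE/user.js"
    # Add any other browser.ml.* flags you find the same way, then restart Firefox;
    # user.js is re-applied over prefs.js on every startup.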
A few weeks ago I noticed some mysterious app was killing my (poor) internet connection, downloading a large file.
It was Chrome, downloading a multi-GB file without any sort of UI hint that it was doing so. A generative AI file.
Is this why Chrome uses so much RAM? They've just been pushing up the memory usage in preparation for this day, hoping I wouldn't notice the extra software now running on my (old, outdated) system?
Does it also remove Firefox's translation models that use the local CPU? I find that feature very useful; it totally obliterated my dependence on Chrome's translate features. The models are surprisingly good, especially for languages like English, Spanish and German.
I can see the use of LLMs and machine learning tools like TTS, translators and grammar checkers being integrated into the browser, but only if they rely on local models or, better, as in Firefox's case, CPU-optimized local models.
It explicitly doesn't, though they don't explain why not. It's not an on-device/off-device distinction, because it disables Firefox's automatic tab groups too.
A lot of anti-AI backlash seems to exempt machine translation, which as far as I can tell is just because it's been around for so long that people are comfortable with it and don't see it as new or AI-y, which imho spells doom for a lot of this- in ten years automatic tab groups will seem just as natural and non-intrusive as machine translation.
It's not mere familiarity. Machine translation is immediately useful to me. I was going to pull up google translate anyway; keeping it local to my device improves both convenience and privacy.
A local LLM that I explicitly bring up to ask a question and dismiss (ie no CPU or RAM usage) when I'm done consulting it is nice. A piece of software I'm using interrupting what I'm doing to ask me a useless and annoying question or to make an unsolicited change to my workspace leaves me thinking about permanently uninstalling it.
I will never want automatic tab groups or automatic anything else. I don't even want an "integrated" desktop environment - I use i3 to get away from that. I hate all the useless bullshit half baked features that are constantly shoved in my face.
If the modern web was compatible with it I'd use a text based browser for 90% of what I do online. And if that were the case I'd still welcome a built in machine translation feature because it's an incredibly useful tool.
Firefox's translation does pop up by default to interrupt and ask if you want to translate a page that it has detected is in another language. We're just more used to that, and it's a more reliable signal than most that you probably want to run a tool.
It's still relatively new in FF and I don't think I've seen anyone complaining about it annoying them with popups, even though it absolutely does throw up an interrupting overlay, especially on mobile.
Every single thing for the past 10 years has had (opt-out, which most people didn't opt out of) telemetry, and that correlates with a decline in quality, not improvement.
- Use of analytics tends to replace user trials/interviews entirely, trading away rich signals for weaker ones
- Analytics can be used to justify otherwise unpopular or ill-advised changes
- When combined with certain changes (e.g. making features harder to access), the numbers can be “steered” in a particular direction to favor a particular outcome and better enable the last point (“Looks like nobody’s using that thing we hid behind an obscure feature flag! Guess we’re safe to remove it entirely now!”).
In theory telemetry/analytics have strong potential for improving software quality, but more often than not they’re just massaged and misused by product managers bent on pushing the software a particular direction.
How intrusive is AI in a browser that you feel you need another browser that advertises no AI? Is it a privacy thing? For me in Edge, it is completely out of the way.
Suggesting bash/curl'ing to get a 12-line JSON file is just... not great. We've seen a shitload of developer accounts getting compromised (with all the supply chain attacks) and developer accounts turning evil.
Also, there's absolutely zero need for sudo to put a JSON config file for Firefox on Linux.
You're basically bash/curl'ing the kitchen sink, with all the security risks that entails, executing a shell script as root (which may or may not be malicious now or at some point in the future), just to...
Put a 12-line JSON file in a user's Firefox config folder.
Way to go, my "fremen" brothers [1].
[1] the "fremen" in Dune, as those who adore the Shai-Hulud
Administrator access or sudo is required because the configuration paths (C:\Program Files\Mozilla Firefox on Windows, /etc/firefox/policies/ on Linux) are protected. The browser guides explain the manual install and uninstall process for anyone who doesn't trust the script.
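For what it's worth, the manual route on Linux is only a couple of commands; sudo comes into it purely because /etc/firefox/policies/ is root-owned. A minimal sketch - the policy block and pref name are illustrative assumptions, not the project's actual file, and about:policies will show what Firefox really loaded:

    # Create the system-wide policies directory and drop in a minimal file.
    sudo mkdir -p /etc/firefox/policies
    printf '%s\n' \
      '{' \
      '  "policies": {' \
      '    "Preferences": {' \
      '      "browser.ml.chat.enabled": { "Value": false, "Status": "locked" }' \
      '    }' \
      '  }' \
      '}' | sudo tee /etc/firefox/policies/policies.json >/dev/null
    # Check about:policies to confirm it was read.
    # Uninstall = delete the file: sudo rm /etc/firefox/policies/policies.json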
It'll be good to just use the browser again, so I will def be trying this out. But I can't help but feel that for simple dumb questions it's a lot easier to just ask AI bots instead of searching in a web browser. Does this just depend on the context? For example, most recently I wanted to know how many miles a pair of running shoes would last. AI can answer this instantly (hooray, instant gratification) and googling something like this would take longer. And of course this is why they shove this stuff into the browser.
I guess then, the browser and AI just serve different purposes now?
I noticed that Safari is not mentioned - is it because is not relevant on Desktop or because it didn't go through the same enshittification process as the other two major browsers?
Probably both? I did find its omission spoke loudly. I use it every day on desktop. The only enshittification I have to worry about is Alan Dye's hit-and-run crimes against usability.
Half of the webapps maybe. Actual websites don't have a reason to use any of these features and most don't (except for fonts maybe, but removing those doesn't prevent the website from working).
It's silly to treat this like a totalizing partisan issue where everything must be clearly "pro-ai" or "anti-ai".
Browsers are currently incentivised to add a bunch of new features outside their traditional role. Some people prefer to keep the browser's role simple. It's not ideological and it's not "hating".
This niche will get smaller over time. The key hurdle right now is that most "AI" is just LLMs. People currently prefer to go to a website or open a dedicated application for AI inference. As better integrations with other workflows are made and people see them, the resistance will weaken.
Microsoft shoving LLMs into literally everything, including Notepad, is what people are currently hating, because it isn't quite ready.
Not sure if QWERTY is a great example.
Sometimes good enough is just good enough
Is my digital life at a natural end now?
Even if the jamming story is true (or is it a myth, by any chance?), it does not mean that alternatives are better at, say, typing speed.
Also, I despise Telegram (just as much as X), because in Germany both are rotten to the core in terms of user base, worse than WhatsApp.
Signal or Threema would be great, and I voted for Signal, but the majority uses WhatsApp.
I used to use Telegram, but ever since Covid and the whackos that found their "truth" over there I say no thank you.
To an extent, yes. The ecosystem has matured. The things that work have been discovered, the things that don't have been discarded.
I think it'll take another big leap in hardware form factor (Apple Vision being an example of an attempt at it) for us to see meaningful UI changes.
I think "yes" and "a bit", in that order. The early days of the web and mobile, where everything was new, are gone. In those days, there was no established pattern for standard UX. Designers had to innovate.
It makes sense that we have a lot less innovation now. There's probably room for a lot more than we see, but not for the level that was there in the early days of the web.
There's no reason to "learn" a UI or use shortcuts on most sites, because they change everything around every few months.
I see people reminiscing about tabs in firefox, well today a majority of the top websites don't even allow you to open links in new tabs! The links aren't even real links anymore, and everything's a webapp. ( and by top websites, I mean social media, not the top sites used by the HN crowd. Sites like YT, FB, IG, and TT ).
I try to interact with the "UI" of websites as little as possible these days. I use RSS readers for as much as possible. Any time I get a popup on any site, I get mad. I don't care about news updates, software updates, or offers. Anything that pops up at me, or moves around before I can click it, looks like a scam to me. Even if it's "legitimate". The modern web feels like an arcade game that's trying to waste my time.
There is more than enough of it. Now it is, of course, AI agents. Before that, Material Design was quite innovative. Interestingly, with the raise of search engines and later LLMs, we are getting back to the command line! It is not the scary black window where you type magic incantations, it is a less scary text field where you type in natural language, but fundamentally, it works like a command line.
It is a good thing? For me, it is a mixed bag, I miss traditional desktop UIs (pre-Windows 8), but I like search-based UIs on the Desktop, an I am not a fan of AI agents: too slow an unpredictable, and that's before privacy considerations. When it is not killing performance, I find Material Design to be pretty good on mobile, but terrible on the desktop. That there is innovation doesn't mean it is all good.
I agree mostly with your sentiment. But I still think there is still some work being done. For example the Arc and Zen Browsers. I never used Arc because it is closed source. But it sure looked beautiful. And Zen I tested, but it seemed laggy. I think I might give it another go to see if some of the performance issues have been fixed.
No. You just need to look outside of desktop computing, and computing in general.
For example, I'm getting into CAD and 3d printing. Learning it reminds me of when my father learned to program in the late '80s, or when my grandfather telling me about how he got his Model A up to 50 mph.
Remember: Desktop computers and the web are ultimately tools for a purpose, and that purpose isn't always "nerd toy." We (the nerds) need to find and invent our toys every generation or so.
Chrome's What's New seems like half AI stuff and half UI features for people who have tons of tabs.
That MDI arrangement was like the AOL browser, come to think of it.
Tabs in Firefox were such an unfamiliar thing.
It's still a thing but it went off the rails, see Apple and their latest no-contrast UI.
> Yeah: most experiments fail and even the ones that ultimately succeed have rough edges.
Vista / Aero 2.0 already did Liquid Glass. At least they had the decency to ship a "turn this shit off" toggle that actually worked.
Oh wait, I have them all off. So what will AR do for me?
Discoverability is more than simply visual cues. Seeing other people do it counts.
Firefox's local translation model is much more efficient on system resources than the larger LLMs downloaded by browsers for other tasks.
The need for this is mainly on work machines that are locked down; if admin mode is necessary then it's DOA...
A local MITM proxy that doesn't require elevated rights and which filters out everything unwanted, starting with ads, would be nice I think.
Is there a way to persist the file even after updates?
And you might as well just fork chromium for that purpose.
https://en.wikipedia.org/wiki/Chromium_(web_browser)#Free_an...
Google and others really ruined the web.
I also today tried Qwant and for the first time, in a long while, the results Qwant delivered were objectively better than from Google Search. What the heck is Google doing?
Inflating stock prices.
Why not? All things being equal, a non-AI solution is better. "It is the current hyped thing" should bring some downward correction.
And of all things to hate, AI hate is harmless and at least partially justified.