The last paragraph is interesting: "Overall I think we're going to see a much higher quality of software, ironically around the same level than before 2000 when the net became usable by everyone to download fixes. When the software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays since updates are easy to distribute."
Was software made before 2000 better? And, if so, was it because of better testing or lower complexity?
People have rose-colored glasses on when they say this.
Programs didn't auto-save and regularly crashed. It was extremely common to hear someone talk about losing hours of work. Computers regularly blue-screened at random. Device drivers weren't isolated from the kernel, so you could easily buy a dongle or something that single-handedly destabilized your system. Viruses regularly brought the white-collar economy to its knees. Computer games that were just starting to come online and be collaborative didn't do any validation of what the client sent them (this is sometimes true now, but it was the rule back then).
Bad old days indeed!
It's amazing that the world has largely forgotten the terror of losing entire documents forever. It happened to me. It happened to everyone. And this is the only comment I've seen so far here to even mention this.
I was a developer at Microsoft in the 90s (Visual Studio (Boston) and Windows teams). I won't claim that software back then was "better," but what is definitely true is that we had to think about everything at a much lower level.
For example, you had to know which Win32 functions caused ring-3 -> ring-0 transitions because those transitions could be incredibly costly. You couldn't just "find the right function" and move on. You had to find the right function that wouldn't bring your app (and entire system) to its knees.
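Concretely, the kind of thing this meant in practice - a minimal sketch (mine, not from any real product) of batching user-mode work so you pay the ring-3 -> ring-0 toll once instead of thousands of times:

    /* Minimal sketch (illustrative only): each WriteFile call crosses
       from user mode (ring 3) into the kernel (ring 0), so per-byte
       writes pay that toll 4096 times, while the buffered version
       pays it once for the same data. */
    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        HANDLE out = CreateFileA("demo.txt", GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        DWORD written;
        char buf[4096];
        int i;

        if (out == INVALID_HANDLE_VALUE)
            return 1;

        /* Slow: 4096 separate ring-3 -> ring-0 transitions. */
        for (i = 0; i < 4096; i++)
            WriteFile(out, "x", 1, &written, NULL);

        /* Fast: one transition for the same 4096 bytes. */
        memset(buf, 'x', sizeof buf);
        WriteFile(out, buf, (DWORD)sizeof buf, &written, NULL);

        CloseHandle(out);
        return 0;
    }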
I specifically remember hating my life whenever we ran into a KiUserExceptionDispatcher [0] issue, because even something as simple as an exception could kill your app's performance.
Additionally, we didn't get to just patch flaws as they arose. We either had to send out patches on floppy disks, post them to BBSs, or even send them to PC Magazine.
It was the best of times, it was the worst of times.

[0]: https://doar-e.github.io/blog/2013/10/12/having-a-look-at-th...
There was a point in time when Windows wasn't constantly BSODing and Microsoft's primary objectives weren't telemetry and slop coding.
Best/better because yes, QA actually existed and was important for many companies - QA could "stop ship" before the final master was pressed if they found something "game breaking" (hehe, it was usually games). If you search around on folklore.org or other historical sites you can find examples of this - programmers working all night with the shipping manager hovering over them, ready to grab the disk/disc and run to the warehouse.

HOWEVER, updates did exist - both because of bugs and features, and because programmers weren't perfect (or weren't spending space-shuttle levels of effort making "perfect code" - even Voyager can get updates, iirc). Look at DooM for an example - released on BBSs, and there were various versions even then, and that's 1994 or so?

But it was the "worst" in that the frameworks and code were simply not as advanced as today - you had to know quite a bit about how everything worked, even as a simple CRUD developer. Lots of protections we take for granted (even in "lower level" languages like C) simply didn't exist. Security issues abounded, but people didn't care much because everything was local (who cares if you can r00t your own box) - and 2000 was when the Internet was really starting to take off, everything was beginning to be "online", and issues were being found left and right.
This was the big thing. There were tons of bugs. Not really bugs but vulnerabilities. Nothing a normal user doing normal things would encounter, but subtle ways the program could be broken. But it didn't matter nearly as much, because every computer was an island, and most people didn't try to break their own computer. If something caused a crash, you just learned "don't do that."
Even so, we did have viruses that were spread by sharing floppy disks.
That's a really big part of it - bugs were ways that the program wouldn't do what the user wanted - and often workarounds existed (don't do that, it'll crash).
Nowadays those bugs still exist, but the vast majority of bugs are security issues - things you have to fix because others will exploit them if you don't.
It appeared better because there were fewer features and more time to develop and test. But it's also a lot of nostalgia: everything moved slower, the world was smaller, and there was a lower standard; people usually remember the later versions of a program, or never even encountered the earlier versions. Without the internet and everyone bitching about every little detail, the general awareness was also different, not as toxic as today.
At the time of release, yes. They had to ensure the software worked before printing CDs and floppies. Nowadays they release buggy versions that users essentially test for them.
Also in terms of security, there was generally a much smaller potential attack surface and those surfaces were harder to reach because we were much less constantly connected.
I wouldn't go that far. As soon as you went online all bets were off.
In the 90s we had java applets, then flash, browsers would open local html files and read/write from c:, people were used to exchanging .exe files all the time and they'd open them without scrutiny (or warnings) and so on. It was not a good time for security.
Then dial-up was so finicky that you could literally disconnect someone by sending them a ping packet. Then came winXP, and blaster and its variants and all hell broke loose. Pre SP2 you could install a fresh version of XP and have it pwned inside 10 minutes if it was connected to a network.
Servers weren't any better, ssh exploits were all over the place (even The Matrix featured a real ssh exploit) and so on...
The only difference was that "the scene" was more about the thrill, the boasting, and learning and less about making a buck out of it. You'd see "x was here" or "owned by xxx" in page "defaces", instead of encrypting everything and asking for a reward.
Except that when you did connect Windows to anything, it was hacked in less than 30 seconds (the user ignored the "apply these updates first, and then connect ..." advice because they wanted some keyboard driver. Hacked, whoops, gotta waste time doing a wipe and reinstall. This was back when many places had no firewalls). IRIX would fall over and die if you pointed a somewhat aggressive nmap at it - some buggy daemon listening by default on TCP/0, iirc. There was code in ISC DHCPD saying "windows is buggy, but we work around it with this here kluge..." And so on, and so on.
Software has gotten drastically more secure than it was in 2000. It's hard to comprehend how bad the security picture was in 2000. This very much, extremely includes Linux.
But there was much less awareness of buffer overflows and none of the countermeasures that are widespread today. It was almost defining of the Win95 era that applications (e.g. Word) frequently crashed because of improper and unsafe memory management.
I remember when opening a webpage and getting hacked seemed more likely. Adobe Flash and Java had more vulnerabilities and weaker sandboxes (if any) than JavaScript.
It is hard to say which of the two is the reason; more likely both, i.e. lower complexity enabled more exhaustive testing.

In any case, some of the software from before 2000 was definitely better than today's, i.e. it behaved as if absolutely foolproof: nothing you could do would cause a crash, corrupted data, or any other kind of unpredictable behavior.

However, the computers to which most people had access at that time had only single-threaded CPUs. Even if you used a preemptive multitasking operating system and a heavily multi-threaded application, executing it on a single-threaded CPU was unlikely to expose the subtle race-condition bugs that would have been exposed on a multi-core CPU.

While nowadays there exists no standard operating system that I fully trust to never fail under any circumstance, unlike before 2003, I wonder whether this is caused by the better quality of the older programs or by the fact that it is much harder to implement concurrency correctly on systems with hardware parallelism.
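A minimal sketch of the kind of latent race in question (names and counts are illustrative): on a single-core CPU the lost updates below are rare enough to hide for years; on multiple cores they show up nearly every run.

    /* Illustrative only: a textbook data race.  `counter++` compiles
       to load/increment/store; two threads can interleave and lose
       updates.  On the single-core machines of the era, the window
       was tiny, so code like this could pass every test. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;               /* shared, no lock: a data race */

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                     /* not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld (expected 2000000)\n", counter);   /* usually less on multi-core */
        return 0;
    }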
Just think of 8- and 16-bit console games. Those cartridges were expensive, so just how sure did they have to be that a game was bug-free before making millions of them?
Before 2000, fixing a bug the user would notice was expensive - you had to mail them a new disk/CD. As such, a lot more effort was put into testing software to ensure there were no bugs users would notice.

However, before 2000 (really 1995) the internet was not a thing for most people. There were a few viruses around, but they had a really hard time propagating (they still managed, but compared to today it was much harder). Nobody worried about someone entering something too long in various fields - it did happen, but if you made your buffers "large" (say 100 bytes), most forms didn't have to worry about checking for overflow because nobody would type that much anyway. Note the assumption that a human was typing things on a keyboard into fields to create the buffer overflow. Thus a large portion of modern attacks weren't an issue - we are much better at checking buffer sizes now than then; they knew back then that they should, but often got away with being lazy and not doing it. If a vulnerability exists but is never exploited, do you care? Thus whether today is better is debatable.
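A sketch of the pattern described above (function names and the 100-byte figure are illustrative):

    /* Illustrative only: the era's "large enough" fixed buffer with
       no length check, next to the bounded version.  A keyboard user
       never hits the limit; a program feeding the same field
       overflows it trivially. */
    #include <stdio.h>
    #include <string.h>

    void read_name_1990s(const char *input)
    {
        char name[100];
        strcpy(name, input);    /* no bounds check: overflows if input >= 100 chars */
        printf("hello, %s\n", name);
    }

    void read_name_checked(const char *input)
    {
        char name[100];
        strncpy(name, input, sizeof name - 1);   /* bounded copy */
        name[sizeof name - 1] = '\0';            /* strncpy may not terminate */
        printf("hello, %s\n", name);
    }

    int main(void)
    {
        read_name_checked("Ada");
        return 0;
    }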
In the 1990s the US had encryption export laws; if you wanted to protect data, it was often impossible. Modern AES didn't even exist until 2001 - instead we had DES (or, when you cared, triple DES, which was pretty good even by today's standards) - but you were not allowed to use it in a lot of places. I remember the company I worked for at the time developed their own encryption algorithm for export, with the marketing(!) saying something like "We think it is good, but it hasn't been examined nearly as well as DES, so you should only use it if you legally can't use DES".
As an end user, though, software was generally better. Programs rarely had bugs anyone would notice. This came at the expense of a lot more testing, and features took longer to develop. Even back then it was a known trade-off, and some software was known to be better than others because of the effort the company put into making it work before release. High-risk software (medical) is still developed with a lot of extra testing and effort today.

As for the second part - software back then was plenty complex. Sure, today things are more complex, but I don't think that is the issue. In fact, in some ways things were more complex because extra effort was put into optimization (200 MHz CPUs were the top-end expensive servers; most people only had around 90 MHz, and more than one core was something only nerds knew was possible, and most of them didn't have it). As such, a lot of effort was put into complex algorithms that were faster at the expense of being hard to maintain. Today we have better optimizers and faster CPUs, so we don't write as much complex code trying to get performance.
Literally the moment everyone got on the internet, pretty much every computer program and operating system in the world was besieged by viruses and security flaws, so no.
It was a simpler time. Not better. Not worse. Programs still had bugs, but they weren't sloppy UI bugs, they were logic bugs and memory leaks. If software was better back then, we'd still be using it!
Yes. The incentives for writing reliable, robust code were much higher. The internet existed so you could, in theory, get a patch out for people to download - but a sizeable part of any user base might have limited access, so would require something physical shipped to them (a floppy or CD). Making sure that your code worked and worked well at time of shipping was important. Large corporate customers were not going to appreciate having to distribute an update across their tens of thousands of machines.
No. The world wasn't as connected as it is today, which meant that the attack surface to reasonably consider was much smaller. A lot of the issues that we had back then were due to designs and implementations that assumed a closed system overall - but often allowed very open interoperability between components (programs or machines) within the system. For example, Outlook was automatable, so that it could be part of larger systems and send mail in an automated way. This makes sense within an individual organisation's "system", but isn't wise at a global level. Email worms ran rampant until Microsoft was forced to reduce that functionality via patches, which were costly for their customers to apply. It damaged their reputation considerably.
An extreme version of this openness was SQL Slammer - a worm which attacked SQL Servers and development machines. Imagine that - enough organisations had their SQL Servers or developer machines directly accessible that an actual worm could thrive on a relational database system. Which is mind-boggling to think about these days, but it really happened - see https://en.wikipedia.org/wiki/SQL_Slammer for details.
I wouldn't say that the evidence points to software being better in the way that we would think of "better" today. I'd say that the environment it had to exist in was simpler, and that the costs of shipping & updating were higher - so it made more sense to spend time creating robust software. Also nobody was thinking about the possible misuse or abuse of their software except in very limited ways. These days we have to protect against much more ingenious use & abuse of programs.
Furthermore today patching is quick and easy (by historical comparison), and a company might even be offering its own hosted solution, which makes the cost of patching very low for them. In such an environment it can seem more reasonable to focus on shipping features quickly over shipping robust code slowly. I'd argue that's a mistake, but a lot of software development managers disagree with me, and their pay packet often depends on that view, so they're not going to change their minds any time soon.
In a way this is best viewed as the third age of computing. The first was the mainframe age - centralised computer usage, with controlled access and oversight, so mistakes were costly but could be quickly recovered from. The second was the desktop PC age - distributed computer usage, with less access control, so mistakes were often less costly but recovering from them was potentially very expensive. The third is the cloud & device age, with a mix of centralised and distributed computer use, a mix of access control, and potentially much lower costs of recovery. In this third age if you make the wrong decisions on what to prioritise (robustness vs speed of shipping), it can be the worst of both the previous ages. But it doesn't have to be.
I hope that makes sense, and is a useful perspective for you.
>people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"
The problem is that the very same tools, I expect, are behind the supply chain attacks that seem to be particularly notorious recently. No matter where you turn, there's an edge to cut you on that one.
“Reversing was already mostly a speed-bump even for entry-level teams, who lift binaries into IR or decompile them all the way back to source. Agents can do this too, but they can also reason directly from assembly. If you want a problem better suited to LLMs than bug hunting, program translation is a good place to start.”
Huh. Direct debugging, in assembly. At that point, why not jump down to machine code?
For the purposes of debugging, assembly is machine code, just with some nice constructs to make it easier to read. Transpiling between assembly and machine code is mostly a find-and-replace exercise, not like the advanced reasoning involved in proper compilation.
Disassembled code is basically machine code; without recreating the macros that make assembly "high level", you're as close to machine code as you're going to get unless you're trying to exploit the CPU itself.
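To illustrate the "find-and-replace" point, a toy sketch (entirely illustrative; real assemblers also encode operands and addressing modes, but still by table lookup rather than semantic reasoning):

    /* Toy "assembler": for operand-free instructions, the mapping to
       machine code is literally a table lookup. */
    #include <stdio.h>
    #include <string.h>

    static const struct {
        const char    *mnemonic;
        unsigned char  bytes[2];
        int            len;
    } table[] = {
        { "nop",     { 0x90 },       1 },  /* x86: no-op       */
        { "ret",     { 0xC3 },       1 },  /* x86: near return */
        { "syscall", { 0x0F, 0x05 }, 2 },  /* x86-64: syscall  */
    };

    int main(void)
    {
        const char *src[] = { "nop", "syscall", "ret" };
        for (size_t i = 0; i < 3; i++)
            for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
                if (strcmp(src[i], table[j].mnemonic) == 0)
                    for (int k = 0; k < table[j].len; k++)
                        printf("%02X ", (unsigned)table[j].bytes[k]);
        printf("\n");   /* prints: 90 0F 05 C3 */
        return 0;
    }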
> I don't know how long this pace will last. I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog
Hopefully these same tools will also help catch security bugs at the point they're written. Maybe one day we'll reach a point where the discovery of new, live vulnerabilities is extremely rare?
So we now have a new code base in an undefined language which still has memory bugs.

This is progress.
Around 70% of security vulnerabilities are about memory safety and only exist because software is written in C and C++. Because most vulnerabilities are in newly written code, Google has found that simply starting writing new code in Rust (rather than trying to rewrite existing codebases) quickly brings the number of found vulnerabilities down drastically.
You can't just write Rust in a part of the codebase that's all C/C++. Tools for checking the newly written C/C++ code for issues will still be valuable for a very long time.
You actually can? A Rust-written function that exports a C ABI and calls C ABI functions interops just fine with C. Of course that's all unsafe (unless you're doing pure value-based programming and not calling any foreign code), so you don't get much of a safety gain at the single-function level.
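For the curious, a minimal sketch of what that interop looks like from the C side (the name rust_add and the build commands are my own illustration):

    /* main.c -- the C side.  The Rust side would be roughly:
           #[no_mangle]
           pub extern "C" fn rust_add(a: i32, b: i32) -> i32 { a + b }
       Build sketch: rustc --crate-type=staticlib add.rs
                     cc main.c libadd.a   (plus platform libs) */
    #include <stdio.h>
    #include <stdint.h>

    int32_t rust_add(int32_t a, int32_t b);   /* exported by the Rust crate */

    int main(void)
    {
        /* An ordinary C call: Rust's safety guarantees hold inside the
           function, while the boundary itself is as unchecked as any C. */
        printf("2 + 3 = %d\n", rust_add(2, 3));
        return 0;
    }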
No, this is false. For Rust codebases that aren't doing high-performance data structures, C interop, or bare-metal stuff, it's typical to write no unsafe code at all. I'm not sure who told you otherwise, but they have no idea what they're talking about.
Good developers only write unsafe rust when there is good reason to. There are a lot of bad developers that add unsafe anytime they don't understand a Rust error, and then don't take it out when that doesn't fix the problem (hopefully just a minority, but I've seen it).
It's the classic "misunderstanding": UB or buggy unsafe code could in theory corrupt any part of your running application (which is technically true), and this gets interpreted to mean that any codebase with at least one instance of UB or buggy unsafe code (which is ~100% of codebases) is safety-wise equivalent to a codebase with zero safety checks - as if all the safety checks were obviously complete lies and therefore pointless time-wasters.
Which obviously isn't how it works in practice, just like how C doesn't delete all the files on your computer when your program contains any form of signed integer overflow, even though it technically could as that is totally allowed according to the language spec.
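For a concrete example of the C analogy (illustrative snippet; the exact behavior depends on compiler and flags):

    /* Signed overflow is UB in C.  At -O2, GCC and Clang typically
       fold `x + 1 > x` to 1 ("overflow can't happen") so this returns
       0 even for INT_MAX; unoptimized builds usually wrap and return
       1.  Same source, two behaviors, both allowed by the spec -- and
       neither deletes your files. */
    #include <limits.h>
    #include <stdio.h>

    static int overflows_when_incremented(int x)
    {
        return !(x + 1 > x);   /* UB when x == INT_MAX */
    }

    int main(void)
    {
        printf("%d\n", overflows_when_incremented(INT_MAX));
        return 0;
    }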
If you're talking about Rust codebases, I'm pretty sure that writing sound unsafe code is at least feasible. It's not easy, and it should be avoided if at all possible, but saying that 100% of those codebases are unsound is pessimistic.
One feasible approach is to use "storytelling" as described here: https://www.ralfj.de/blog/2026/03/13/inline-asm.html That's talking about inline assembly, but in principle any other unsafe feature could be similarly modeled.
I'd be very curious to know what class of vulnerability these tend to be (buffer overrun, use-after-free, mis-set execute permissions?), and whether, armed with that knowledge, a deterministic tool could reliably find or prevent all such vulnerabilities. Can linters find these? Perhaps fuzzing? If the code were written in a more modern language, is it still likely that these bugs would have happened?
Why don't we just PageRank GitHub contributors? Merged PRs approved by other quality contributors improve rank. New PRs get tagged by a bot with the rank of the submitter. Add more scoring features (account age? employer?) as desired.
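A back-of-envelope sketch of the idea (everything here - the graph, damping factor, and scores - is hypothetical):

    /* Hypothetical sketch: contributors are nodes, an edge i -> j
       means "i approved one of j's merged PRs", and reputation flows
       along approvals via plain PageRank. */
    #include <stdio.h>

    #define N        4      /* contributors              */
    #define ITERS    50     /* power-iteration rounds    */
    #define DAMPING  0.85   /* standard PageRank damping */

    static const int approved[N][N] = {   /* approved[i][j]: i approved j */
        { 0, 1, 1, 0 },
        { 0, 0, 1, 0 },
        { 1, 0, 0, 1 },
        { 0, 0, 1, 0 },
    };

    int main(void)
    {
        double rank[N], next[N];
        for (int i = 0; i < N; i++)
            rank[i] = 1.0 / N;

        for (int it = 0; it < ITERS; it++) {
            for (int j = 0; j < N; j++)
                next[j] = (1.0 - DAMPING) / N;
            for (int i = 0; i < N; i++) {
                int out = 0;
                for (int j = 0; j < N; j++)
                    out += approved[i][j];
                if (out == 0)
                    continue;             /* dangling node: ignored in this sketch */
                for (int j = 0; j < N; j++)
                    if (approved[i][j])
                        next[j] += DAMPING * rank[i] / out;
            }
            for (int j = 0; j < N; j++)
                rank[j] = next[j];
        }

        for (int j = 0; j < N; j++)
            printf("contributor %d: %.3f\n", j, rank[j]);   /* a bot could tag new PRs with this */
        return 0;
    }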
Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...

Interesting that it's been higher than forecast since 2023. Personally I'd expect that trend to continue, given that LLMs increase both bugs written and bugs discovered.
It's interesting to hear from people directly in the thick of it that these bug reports are apparently gaining value and are no longer just slop. Maybe there is hope for a world where AI helps create bug free software and doesn't just overload maintainers.
An AI enthusiast having a breathless and predictive position on the future of the technology? No way! It's almost like Wall Street is about to sour on the whole stack and there is a concerted effort to artificially push these views into the conversation to get people on board.
Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.
https://www.anthropic.com/news/mozilla-firefox-security
Anyone who says anything good about AI must be an AI shill from the start, not someone who is genuinely observing reality or had their mind changed, don't you know?
Sort of a tautology to just assert that someone saying good things about AI is an AI enthusiast and therefore their opinion should be dismissed. He also happens to have been a kernel maintainer; his experience as he's describing it should count for something.
"On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us."
Is there a reason you’ve copy pasted the first paragraph from the link? It doesn’t add anything to the discussion, and also doesn’t help as a tl;dr because it’s literally the first paragraph. Genuine question!
The actual title is pretty unclear ("Significant Raise of Reports" of what?), so I considered replacing it by some of this excerpt, but HN rules say not to editorialize titles. Hence I put it into the `text` field, which I thought would be the body, but actually just gets posted as a comment.
Reports being written faster than bugs being created? Better quality software than before the 2000s?
Oh my sweet summer child.
This is some seriously delusional cope from someone who drank the entire jug of kool-aid.
I'd love to be proven wrong, but the current trajectory is plain as day from current outcomes. Everything is getting worse, everyone is getting overwhelmed, we are under attack even more, the attacks are getting substantially more sophisticated, and the blast radius is much bigger.