Slop is not necessarily the future

(greptile.com)

166 points | by dakshgupta 9 hours ago

56 comments

  • seamossfet 6 hours ago
    I find most developers fall into one of two camps:

    1. You treat your code as a means to an end to make a product for a user.

    2. You treat the code itself as your craft, with the product being a vector for your craft.

    The people who typically have the most negative things to say about AI fall into camp #2, where AI is automating a large part of what they considered their art, while enabling the people in camp #1 to iterate on their product faster.

    Personally, I fall into the first camp.

    No one has ever made a purchasing decision based on how good your code is.

    The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.

    With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries that other devs depend on, etc.).

    I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.

    • nocman 5 hours ago
      > No one has ever made a purchasing decision based on how good your code is.

      absolutely false.

      > The general public does not care about anything other than the capabilities and limitations of your product.

      also false.

      People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports, and fix problems quickly).

      The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.

      You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.

      • latexr 4 hours ago
        > You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.

        Exactly, thank you for putting it like that.

        So far it’s been my observation that only people who think like the OP frame the situation in those terms. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, they’re just different, neither is better”, they lend legitimacy to their position.

        For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.

        • doug_durham 3 hours ago
          I agree with OP's distinction. However, just because you see software as a means to an end doesn't mean that quality and craft are unimportant to you. You can see the "craft" oriented folks as being obsessed with the form of their software. A "craft" oriented engineer might rewrite a perfectly functioning piece of software to make it what they perceive to be "easier to reason about". I consider most software rewrites to be borderline malpractice.
          • jimbokun 2 minutes ago
            With bad code it’s often almost impossible to improve the functionality or correctness or performance of the code, without first rewriting parts of it.
          • deltaburnt 1 hour ago
            I think the kind of surface-level rewrites that people rag on are pretty rare, at least in my experience. Realistically, code that's impossible to understand, underdocumented, and lacking in proper abstractions is also deficient code. Even if you've ensured that the code is "good enough", you will likely hit a bug or feature request that is hindered by the poor structure and understanding of the code.

            It's totally fine to say "the code works, that area is stable, let's not mess with that code". I make those kinds of tradeoffs on a near daily basis. But let's be real, "perfectly functioning code" is an ill defined, moving target. What looks like perfectly functioning code to a sibling team or a PM, could be a massive liability to someone who actually knows the code.

            But then again I'm writing OS and performance critical code. A 1 in 1 million bug is easier to ignore in a throwaway log viewer website.

        • bit-anarchist 2 hours ago
          That example doesn't work well. All regulations come at the cost of freedom, and every freedom comes at the cost of regulations. While it isn't a strict binary (either 100% freedom or 100% regulation), enacting regulations does interfere with freedom. So this isn't just framing; it demonstrates a relationship between the two concepts, which may become relevant further down in the discussion, if it hasn't already.
          • layer8 1 hour ago
            Regulation can cause freedom to be balanced differently between parties. For example, regulating smartphone manufacturers can result in more freedom for users. It’s not true that regulation necessarily reduces freedom overall (to the extent that that can even be graded). Just like rights, freedoms aren’t absolute, and one’s freedom often impinges on someone else’s freedom.
      • jimbokun 5 minutes ago
        I would add that for software that has scaled to the point that it needs to run in a distributed fashion, code quality will show up very quickly in dollars and cents.

        High latencies, outages, memory leaks, and security vulnerabilities will be seen in your AWS bill or whatever hardware or service you deploy your software to. If your code isn’t clear enough to understand what it’s really doing, you have no chance at preventing or addressing the above.

      • mbesto 4 hours ago
        > obviously created by people who care deeply about the quality of the product they produce

        This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.

        > There is no substitute for high quality work.

        You're right because there really isn't a consistent definition of what "high quality" software work looks like.

        • MrRadar 3 hours ago
          > This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.

          Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.

          • tokioyoyo 34 minutes ago
            Have you seen large consumer products’ codebases?… Companies like Google are the rare exceptions when it comes to code gatekeeping and quality.
          • mbesto 3 hours ago
            Which might be true, but is totally irrelevant to the OP's comment.
            • MrRadar 3 hours ago
              Users care about quality, even if the people buying the software do not. You can't just say "well the market doesn't care about quality" when the market incentives are broken for a particular type of software. When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software (see Windows in the consumer OS market, which is perceptibly losing share to macOS and Linux following a sustained decline in quality over the last several years).
              • mbesto 33 minutes ago
                > "well the market doesn't care about quality"

                You literally just told me the market doesn't care about quality. I don't get what point you're trying to make?

                > When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software

                Right, but this magical market you're talking about doesn't exist. That's my point.

        • Terr_ 3 hours ago
          > You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product.

          What about caring and being depressed because quality comes from systems rather than (just) individuals?

        • tkiolp4 3 hours ago
          SAP, Salesforce, Booking.com… all awful products. We use them because monopolies.
          • cdrnsf 3 hours ago
            I couldn't book travel at a previous company because my address included a `.`, which passed their validation. Awful, awful software. I wouldn't expect slop code to improve it.
      • ttamslam 5 hours ago
        > People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports, and fix problems quickly).

        I would classify all of those as "capabilities and limitations of your product"

        I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.

        Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.

        • westoncb 4 hours ago
          > but you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.

          The reality of software products is that they are in nearly all cases developed and maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for single moments in time, and yet software development typically extends through the entire period during which a product has users.

          > I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)

          The above is also why these properties you've mentioned shouldn't be considered merely aesthetic: the software's likelihood of having tractable bugs and manageable performance concerns, or of adapting quickly to the demands of its users and the changing ecosystem it's embedded in, is affected by matters of abstraction selection, code organization, and documentation.

        • arijun 4 hours ago
          But those aesthetics stem from that need for fewer bugs, performance, maintainability. Identifying/defining code smell comes from experience of what does and doesn’t work.

          > I wouldn't care so long as I wasn't expected to maintain it.

          But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).

        • Xirdus 4 hours ago
          Good abstractions translate directly into how quickly the devs can fix bugs and add new features.
        • skydhash 4 hours ago
          But the thing is that someone has to maintain it. And while beautiful code is not the same as correct code, the first is impactful in getting the second and keeping it.

          And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.

      • delbronski 3 hours ago
        Garbage software that is slow as a dog has been winning. While we’ve been obsessing over our craft and arguing about what makes software beautiful, slow crappy software has taken over the world.

        Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.

        • girvo 1 hour ago
          > then code quality just doesn’t really matter so much in the age of AI

          Except at scale it really does, because garbage in, garbage out. The crappier the code you feed the current models, and the more confusing and leaky the abstractions, the more bugs the AI will generate.

        • allajfjwbwkwja 2 hours ago
          There's a lot of space between web dev and medical equipment. I've never met a user that loved how often their work was lost or how long it took to implement feature requests amid all the ship-it-quick duct tape.
        • layer8 1 hour ago
          The question is whether it has to be that way. Developers who are against slop don’t believe that the current state of software is the best possible world.
      • steveBK123 5 hours ago
        > The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.

        Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.

        • ttamslam 5 hours ago
          > but not contemplating that it's going to be out in the wild for 10 years either way

          I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.

          • coldtea 4 hours ago
            >I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.

            And in almost all of those cases, they'd be wrong.

            • nostrademons 2 hours ago
              I think I calculated the half-life of my code written during my first stint at Google (15 years ago) as 1 year. Within 1 year, half of the code I'd written was deprecated, deleted, or replaced, and it continued to decay exponentially like that throughout my 6-year tenure there.

              Interestingly, I still have some code in the codebase, which I guess makes sense because I submitted about 680K LOC (note: not all hand-authored, there was a lot of output from automated tools in that) and 2^15 is 32768, so I'd expect to have about 20 lines left, which is actually surprisingly close to accurate (I didn't precisely count, but a quick glance at what I recognized suggested about 200 non-deprecated lines remain in prod). It is not at all the code that I thought would still be there 15 years later, or that I was most proud of. The most durable change appears to be renaming some attributes in a custom templating language that is now deeply embedded in the Search stack, as well as some C++ code that handles how various search options are selected and persisted between queries.
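
              The decay arithmetic above can be sketched as a quick check (a toy exponential-decay model with a fixed half-life; the function name is illustrative, not from the comment):

```python
# Simple exponential decay: lines of code remaining after t years,
# given a fixed half-life (as described in the comment above).
def surviving_lines(initial_loc: int, years: float, half_life_years: float = 1.0) -> float:
    """Expected lines still in prod after `years`, with the given half-life."""
    return initial_loc * 0.5 ** (years / half_life_years)

# 680K LOC submitted, 15 years elapsed, 1-year half-life:
print(round(surviving_lines(680_000, 15)))  # prints 21, in line with the "about 20 lines" estimate
```

              For what it's worth, a slightly longer half-life of ~1.3 years would predict roughly the ~200 lines actually observed, so the model is quite sensitive to that parameter.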

              I think this both proves and disproves the original point. Most of your code is temporary. You have no idea which parts of your code are temporary. It's probably not the parts that you wish were temporary, which will almost certainly be made permanent.

          • benoau 5 hours ago
            In my experience the code will, but by year 5 nobody is left who worked on it from inception, and by year 10 nobody knows anybody who did, and during that time it reaches a stage where nobody will ever feel any sense of ownership or care about the code in its entirety again.
            • contextfree 4 hours ago
              I come into work and work on a 20 year old codebase every day, working on slowly modernizing it while preserving the good parts. In my experience, and I've been experimenting with both a lot, LLM-based tools are far worse at this than they are at starting new greenfield projects.
              • ryandrake 4 hours ago
                This conversation shows how diverse the field is!

                When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!

              • aplomb1026 2 hours ago
                [dead]
          • steveBK123 5 hours ago
            It's important to know which type of repo/project you are in and hire/code accordingly.

            I've seen mismatch in each direction..

            • AlotOfReading 4 hours ago
              How can you possibly know which type of repo you're in ahead of time? My experience is that "temporary" code frequently becomes permanent and I've also been on the other side of those decisions 40 years later.
            • skydhash 4 hours ago
              Unless you’re producing demos for sales presentation (internally or externally), it’s always worth it to produce something good. Bad code will quickly slow you down and it will be a never ending parade of bug tickets.
              • steveBK123 3 hours ago
                indeed, being on-call cleanses many developers of slopulist habits
                • abelitoo 26 minutes ago
                  That depends on how quick the feedback loop is for your decisions. If it takes weeks or months to find the impact of your changes, or worse, if you're insulated somehow from those changes, you may not be pushed toward improving the quality of your code.
                • gjadi 2 hours ago
                  It depends on their sleep habits, work-life requirements, and compensation when they need to be on-call.

                  When you get a fatter check because your code breaks, the incentives are not in favor of good code.

      • strogonoff 5 hours ago
        If a product looks pretty and seems to work great at first experience, but is really an unmaintainable mess under the hood, has an unvetted dependency graph, has a poorly thought through architecture that no one understands, perhaps is unsustainable due to a flawed business model, etc., to me it simply suffers from bad design[0], which will be felt sooner or later. If I know this—which is, admittedly, sometimes hard to know (especially in case of software products compared to physical artifacts)—I would, given alternatives, make the choice to not be a customer.

        In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.

        [0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.

      • Ekaros 4 hours ago
        Demos might be nice and flashy. But eventually you actually have to have a generally working product. Too many issues and too many annoyances, and eventually even the users of enterprise software will be heard. Especially so if there is some actual loss of money or data that is not corrected very fast.

        In the end, software is a means to an end. And if you do not get to the end because the software is crap, it will be replaced, hopefully by someone else.

        • tokioyoyo 33 minutes ago
          The argument is “it’s not crap if it works and does what it’s supposed to do relatively well, and it doesn’t matter how it’s written under the hood”.
      • singpolyma3 1 hour ago
        There's a big gap, though, between bug-free, performant software and craft.
      • EFreethought 5 hours ago
        >> No one has ever made a purchasing decision based on how good your code is.

        > absolutely false.

        Actually, you are both correct.

        Nobody makes a purchasing decision based on code quality.

        But they may later regret a purchasing decision based on code quality.

      • oxag3n 3 hours ago
        I treat the code I produce as my craft and I appreciate I can afford it and enjoy the output.

        I know engineers who aren't that lucky and struggle in "enterprise" software development, where slop was a feature for decades - the people making decisions won't use the software (their low-paid employees will), and software monstrosities need a hell of a lot of support, which sometimes brings more revenue than the original purchase.

      • NoMoreNicksLeft 4 hours ago
        The history of technology is filled with examples where, between two competing analogous products, the inferior one wins. It does not matter if it is only slightly inferior or extraordinarily inferior; either way it wins out. It's often difficult to come up with counter-examples. Why is this? Economic pressure. "Inferior" costs less. Sometimes the savings are passed on to the customer... they choose the inferior. Other times the greedy corporate types keep all of it (and win simply because they outmarket the competitor). It does not matter.

        If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.

      • zer00eyz 18 minutes ago
        > People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly

        The assumption that people CARE about your product is the most Silicon Valley, Hacker News, forgot what the world out side of tech looks like thing ever.

        People CARE about their software as much as they CARE about their bank, or a new finance product... People excited over software is more of a historic footnote than anything real people think in 2026.

        The vast majority of Software is one of two things:

        A) a tool

        B) a source of entertainment

        As a tool it either needs to provide value or it's something that is shoved on you by work.

        The user experience of your average mobile game today is fucking awful. People put up with a massive amount of garbage for a trickle of fun. So much of the web looks like a mid-90's Hong Kong back alleyway: blinking ads, videos screaming at you, and someone trying to steal your wallet. And the majority of things people are forced to use for work... well... Show me someone who is excited about their ERP or CMS or network drive... Show me someone who thinks that anything to do with Salesforce is something to be excited over.

        > The general public does not care about anything other than the capabilities and limitations of your product.

        A segment of our industry is screaming about the security of OpenClaw. People don't care (because we have also made a mockery of security as well) - they are using it as a tool that will deliver them a solution. It strips away all the arcana that made people think we were wizards and writes the damn spells for them. It's a dumpster fire, and people are thrilled about it and what it's delivering them. And that's software not made by you or I.

      • fooker 4 hours ago
        > The longer your product exists the more important the quality of the code will be

        Having worked on many, many old and important code bases: the code quality is absolutely trash.

      • paxys 5 hours ago
        And yet somehow the shittiest buggiest software ends up being the most popular.

        Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.

        • applfanboysbgon 4 hours ago
          You have it backwards. Excellent software becomes popular, and then becomes enshittified later once it already has users. Often there is a monopoly/network effect that allows them to degrade the quality of their software once they already have users, because the value in their offering becomes tied to how many people are using it, so even a technically superior newcomer won't be able to displace it (eg. Youtube is dogshit now but all of the content creators are there, and all of the viewers are there, so content creators won't create content for a better platform with no viewers and viewers won't visit a better platform with no content).

          If your goal is to break into the market with software that is dogshit from day 1, you're just going to be one of millions of people failing at their get-rich-quick scheme.

        • thfuran 5 hours ago
          I don’t think this search will really reveal speed of execution and feature set rewarded over quality either.
      • yabutlivnWoods 3 hours ago
        If code is craft and minimalism is hip then why ruby, and python, and go and... when it's electrical state in machines?

        That's the minimalism that's been lost.

        That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.

        Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?

        Optimization can be (and will be) engineered into the machine through power regulation.

        There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.

        Give me an etch a sketch to shape the geometry of. Not another syntax art parser.

    • iamcalledrob 6 hours ago
      Sloppy technical design ends up manifesting in bugs, experiential jank, and instability.

      There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.

      Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.

      I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.

      • Benjammer 5 hours ago
        > the quality really does matter.

        If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?

        Or were you thinking about "matter" in some other sense than business/product success?

        • tabwidth 1 hour ago
          How long does that take though? Technical debt from sloppy code doesn't show up in the product until way later. By the time users notice, the team is already three features deep and can't back out.
          • tokioyoyo 29 minutes ago
            All these arguments somehow disregard that we’ve all been adding technical debt left and right, every other day, to every single codebase in existence. Humans also write sloppy code.
        • iamcalledrob 4 hours ago
          Yes, I think the market will enforce this. A bit. Eventually. But the time horizon is long, and crummy software with a strong business moat can out-compete great software.

          Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.

        • FridgeSeal 3 hours ago
          A lot of software is forced upon people against their will, and purchased by people who will never use it.

          This obscures things in favour of the “quality/performance doesn’t matter” argument.

          I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.

        • bloppe 5 hours ago
          Yes, both the article and GP are making that exact point about it mattering from a customer's perspective.
        • SpicyLemonZest 5 hours ago
          Even if you're confident you can stop your own company from shipping terrible products, I worry the trend is broad enough and hard enough to audit that the market will enforce it by pulling back on all purchases of such software. If gamers learn that new multiplayer games are just always laggy these days, or CTOs learn that new databases are always less reliable, it's not so easy to convince them that your product is different than the rest.
        • theossuary 5 hours ago
          Yes, there's every reason to believe the market will weed out the AI slop. The problem is, just like with stocks, the market can stay irrational longer than you can stay solvent. While we all wait for executives to learn that code rigor matters, we still have bills to pay. After a year when they start trying to hire people to clean up their mess, we'll be the ones having to shovel a whole new level of shit; and the choice will be between that and starving.

          As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.

    • Darmani 5 hours ago
      No-one comes out of the womb caring about code quality. People learn to care about the craft precisely because internal quality -- cohesion, modularity, robustness -- leads to external quality (correctness, speed, evolvability).

People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and who also have the maturity to do a little more thinking today, so that next week they can make better decisions without thinking, and so that they don't get called at 4 AM the night after launch for emergency debugging of an issue that really should have been impossible if the system was properly designed.

      > No one has ever made a purchasing decision based on how good your code is.

      Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.

      But also, this is just not true. Code quality is a factor in lots of purchasing decisions.

When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code is often part of due diligence.

      It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.

      • mkehrt 2 hours ago
> People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and who also have the maturity to do a little more thinking today, so that next week they can make better decisions without thinking, and so that they don't get called at 4 AM the night after launch for emergency debugging of an issue that really should have been impossible if the system was properly designed.

Speak for yourself. This is exactly the GP's point. Some people care more about the craft of code than the output. I personally find writing good code to be what motivates me. Obviously it's a spectrum; shipping is good too. But it's not why I get up in the morning.

        • Darmani 2 hours ago
Okay, I admit I went too hard on that one to fight against the OP. I too get extreme pleasure from condensing a 300-line function into 30 lines of simplicity, even when practical considerations (it has a clean interface and no one's even needed to open this file in 2 years) would dictate that elegance there is not helpful.
      • doug_durham 2 hours ago
Code quality is a side effect of caring. The most important part of product design is caring at all levels. However, it's caring about the external details that matters most. Coding language is largely a function of the population of good coders in your area. Code evolvability is almost entirely subjective.
    • ambicapter 5 hours ago
      This is like when people decided that everyone was either "introvert" or "extrovert" and then everyone started making decisions about how to live their life based on this extremely reductive dichotomy.

      There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.

      • renewiltord 5 hours ago
        I’m an introvert. I make sure that all my “welcome to the company” presentations are in green. I am also an extrovert in that I add more green than required.
    • clan 6 hours ago
      I respect your opinion and especially your honesty.

      And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.

I have gotten too old for ship-now, ask-questions-later projects.

      • jstanley 6 hours ago
        I'm in camp 1 too. I've maintained projects developed with that mindset. It's fine! Your job is to make the thing work, not take on its quality as part of your personal identity.

        If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.

        I think camp 2 would rather see one beautiful thing than ten useful things.

        • couchand 5 hours ago
          I think camp 1 would rather see ten useless things than one useful thing.
        • ambicapter 5 hours ago
          > At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.

          Ah yes, if you aren't shitting code out the door as fast as possible, you're probably not shipping anything at all.

          • Nevermark 5 hours ago
            That isn't a fair reading.
            • jplusequalt 4 hours ago
              Neither is the original assertion. There are thousands of examples of exceptionally well crafted code bases that are used by many. I would posit the Linux kernel as an example, which is arguably the most used piece of software in the world.
              • Nevermark 3 hours ago
                > [...] one beautiful thing than ten useful things

                They didn't say beautiful/crafted things were not necessary.

                They were critiquing viewpoints that all code needs to be.

                Even if we (for humorous purposes) took their 1 in 10 ratio as a deadly serious cap on crafting, 10% of projects being "exceptionally well crafted code" would be a wonderful world. I would take 1% high craft to 99% useful! (Not disjointly of course.)

        • Imustaskforhelp 5 hours ago
I think I fall into camp 1.5 (neither camp 1 nor camp 2): I can see value in prototyping (with AI) and sometimes make quick scripts when I need them, but long term I would like to grow with an idea and build something genuinely nice from those prototypes, even writing the code manually. Personally, I've found AI codebases a hassle to manage, with many bugs, especially in important places (@iamcalledrob's message here sums it up brilliantly as well).

          > I think camp 2 would rather see one beautiful thing than ten useful things.

Both "beautiful" and "useful" are subjective (imo). Steve Jobs bringing calligraphy to computer fonts could be considered a thing of beauty, derived from his personal relationship with calligraphy, but it's also a really useful thing.

          It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).

Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.

(If you measure usefulness by profit earned, through a purely capitalistic lens, what happens is that you might do layoffs and degrade customer service to hit that measure, which ultimately reduces the usefulness. Profit is a very lousy measure of usefulness, in my opinion. We all need profit, but doing everything solely for profit also feels a bit greedy to me.)

        • xienze 1 hour ago
          > If it's harder to work with, it's harder to work with, it's not the end of the world.

          Yeah it just takes longer and makes you miserable in the process. No biggie!

          • tokioyoyo 26 minutes ago
            We will still work ~8ish hours that day, and time will pass anyways.
    • davnicwil 3 hours ago
      I keep seeing this idea repeated, but I don't accept the dichotomy between those who care about 'crafting code' and those who care about 'building products' as though they are opposite points on a spectrum.

      To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.

      I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.

    • bloppe 6 hours ago
      I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.

      I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.

    • alecbz 5 hours ago
      Craft, in coding or anything else, exists for a reason. It can bleed over into vain frivolity, but craft helps keep the quality of things high.

      Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.

      This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.

    • DesaiAshu 56 minutes ago
We crossed a threshold in the past couple of months where I'm not sure I'd agree that "libraries other devs depend on" is the right cutoff for LLMs. In many cases, LLMs will write more concise, faster, and more accurate code. Not by one-shotting, but through a deeper search of the possibility space, not dissimilar to how Stockfish and other pre-LLM search algorithms worked.

      Modern harnesses are systems built with LLMs as one of many building blocks (incl. regex, test suites, linters). If it can be measured and verified, there's a good chance LLMs will optimize it

This is not a new concept. Humans stopped writing "artful" assembly many years ago, because Lattner and others made it much more efficient to rely on LLVM than to hand-optimize assembly.

      It's also been demonstrated in other domains within Google (4x4 matmul, silicon photonics, protein folding)

Interface-heavy apps are not purely about an objective function; they are about feel, comfort, and usability, and those apps will benefit heavily from humans. But subcomponents of these apps (e.g. an algorithm to route packets efficiently) can often be solved better (somewhat objectively) by LLM-based solvers or other forms of RL.

However, writing assembly for the sake of art sounds rather interesting in 2026. Many of my favorite musicians and DJs are driving a resurgence in vinyl to help balance the computed future, and I think that's a great thing.

    • qaid 5 hours ago
      Professionally, I've always been in camp #2. The quality of your code at least partially represents you in the eyes of your peers. I imagine this is rapidly changing, but the fact will always remain that readable code that you can reason about is objectively better.

      For personal projects, I've been in both camps:

      For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.

      But for developing a product to release, you want to be able to jump back in even if it's years later.

      That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...

    • Rapzid 11 minutes ago
      I find there are two types of people.

      People who think developers fall into one of two camps.

      And people worth listening to.

    • Nevermark 5 hours ago
      > it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.

      It mystifies me when people don't intuit this.

      For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.

I have shipped beautifully honed, high-craft code. Right alongside jank that fell into the "Well, it seems to work now" and "Don't touch anything behind this in-project API" categories.

      There are very good reasons and situations for both approaches, even in one project.

    • roland35 6 hours ago
That's true, but I think there's a gray area in between. As things scale up in one way or another, having high quality is important for both #1 and #2. It's hard to extend software that was designed poorly.

The question where experience comes in is knowing when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front-end devs; it's just a skill I don't have.)

    • farmeroy 5 hours ago
      I think developers fall into two camps:

      1. you care about shipping working, tested code that solves a specific business/user problem

      2. you care about closing tickets that were assigned to you

      • singpolyma3 16 minutes ago
        Some of us also care about craft and code quality instead
      • leptons 51 minutes ago
I think developers fall into way more than two "camps".
    • voidfunc 3 hours ago
It's the end of the era where the plucky code crafter gets to have a seat at the table of production. Those skills are going to become less and less useful going forward. Industry is going to stop hiring those types.

      The future of software looks a lot more like factory production lines with a small group of architect-tier engineers working on a design with product management and then feeding it into the factory for prototyping and production.

If you're not at the late-senior or principal stage of your career by now, there is basically no future for you in this industry. Lower-end roles will continue to be reduced. People who can build and maintain the factory and understand its outputs will be the remaining high-value software talent.

    • smusamashah 1 hour ago
I don't think 1 and 2 are that clear-cut. We recently started using Codex at work. I never thought it would be able to do anything close to what it has been doing for me in our legacy codebase.

But I am not a fan of the code it writes most of the time. I want my code to read and behave a certain way. I cannot submit that code, even if it works, if I can't explain it or just don't like it. I then iterate over that code myself, or ask the AI, until it has a shape I agree with.

For my personal side projects I don't care as much what the code looks like, as long as it works correctly and is easily modifiable. But for work, it remains my responsibility no matter which tool was used.

    • steveBK123 5 hours ago
I think type-1 vs. type-2 dev requirements also depend on the lifecycle / scale of your project, not just on whether it's a library / framework / mission-critical software.

If you aren't sure whether your idea is even gonna work, whether you have PMF, or whether the company will be around next year... then yeah, speed over quality all day long.

On the other hand, I've never done the startup thing myself, and tend to work on software projects with 10-20 year lifecycles. When code-velocity maximalism leads to outages, excess compute cost, and reputational issues... good code matters again.

Re: "No one has ever made a purchasing decision based on how good your code is." Sonos could very well go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app-change debacle. They lost over a decade of built-up goodwill.

Apple is going through this lately with the last couple of major OS releases across platforms, and with whatever is going on with their AI. This despite having incredible hardware.

    • ryandrake 4 hours ago
      Trying to describe craftsmanship always brings me back to the Steve Jobs quote:

      “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”

      • mikkupikku 3 hours ago
Steve Jobs didn't really know anything about cabinetry, because using plywood / MDF in places where it won't be seen but which would benefit from dimensional stability is absolutely common, and there's no reason it shouldn't be.
        • ryandrake 3 hours ago
          You might be missing the point of the quote. He's not talking about whether it is commonly done. He's talking about the mentality that justifies sacrificing the aesthetic and artistry, in favor of things like "dimensional stability". He is giving a reason why it shouldn't be done, and people can disagree about whether the reason is a good one.
          • mikkupikku 3 hours ago
            He's saying that carpenters who make nice things wouldn't use a material like that, but they objectively do and this doesn't sacrifice the aesthetic or artistry.

            He specifically calls out the use of the material in a place it wouldn't be seen as something that the carpenter making beautiful cabinets wouldn't do, but that's actually something they absolutely do. He's trying to argue some other point by way of this analogy, but his analogy is wrong because he doesn't know anything about it. It would be like if I was trying to argue my point by way of car analogy but the example I chose was to say that car enthusiasts insist on genuine OEM parts, I'm only exposing myself as somebody who doesn't understand car people.

            • tehnub 52 minutes ago
              There are plenty of furniture makers that use all hardwood construction. It's just that they charge like $10K USD for a cabinet
    • pm 4 hours ago
      It is possible to exist in both camps. The quality of the process affects the quality of the product, and the quality of your thought affects the quality of the process. It's a cycle of continual learning, and from that perspective, thought, process and product are indivisible.

Treating code as a means to an end doesn't guarantee success for your product any more than treating code as a craft does.

      • Rapzid 3 minutes ago
        It's not possible for two-camp believers to conceive of two-camp dwellers. That would be tantamount to a third, potentially superior camp.

        The two-camp construct is a tool to establish the believer as a member of the supreme one camp group; apart from the lesser campers. Their entire identity and self worth is built around one-camp membership.

    • amelius 4 hours ago
      > With that said, I do have respect for people in the latter camp.

      Well, you certainly should. Those people made AI based coding a possibility in the first place.

    • culi 2 hours ago
      Personally I fall into the first camp and have quite a lot of criticisms of AI-usage. The parts of my job that were the easiest are now done by AI and the parts of my job that were the worst have exploded and are most of what I do now.

      Code quality isn't just a fetish. It has real implications for security and the final product.

      I've also found that unmaintainable codebases aren't just hard to maintain for humans. LLMs seem to struggle with them just as much

    • Waterluvian 4 hours ago
      It’s much more complex. Part of your value as an engineer is having a good feel for balancing the trade offs on your code.

      Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.

    • allknowingfrog 5 hours ago
      I think "make a product" is the important point of disagreement here. AI can generate code that users are willing to pay for, but for how long? The debate is around the long-term impact of these short-term gains. Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates. These are ends of a spectrum and we're all on it somewhere.

      You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.

      • zozbot234 2 hours ago
        > Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates.

        AI can help you make well-engineered code, but you have to ask for it because it's not what it will do by default. Prompt it with "Figure out how this crappy piece of code really works and document it in depth. Propose viable refactorings that could improve code quality" and it will do a much better job than the usual vibe-coded result.

    • ppqqrr 3 hours ago
      i've learned in two years of unemployed hacking that there is a camp 3: you treat code like an evolving life form, requiring both ingenious introspective craft and utilitarian grit to survive and thrive. a product is a trajectory of software in time, and software is a trajectory of code in time; this is no different from how our genetic code determines our physical existence, which then shapes our lives. there is no spectrum between code vs product; if you want to remain relevant in software design, you must see them as layers of a larger whole (and there are many, many more layers once you go beyond the binary of code VS product).
    • singpolyma3 1 hour ago
Can I be both? When I wish, I can take great pride in the craft of a particular piece of code.

      But also I know when to put up and make the damn thing work.

    • hn_acc1 1 hour ago
      I'm in neither camp. I write code to make our product work, which we can then sell as a unit to the end user.

      If I do a bad job, I get a bunch of bug reports, I get called out for writing bugs, etc. We've been pushed to use AI, and it's hurt more than it's helped with our code base.

    • sornaensis 3 hours ago
      It all depends on the tools. AI will surely give a competitive advantage to people working with better languages and tooling, right? Because they can tell the AI to write code and tests in a way that quashes bugs before they can even occur.

      And then they can ship those products much faster than before, because human hours aren't being eaten up writing out all of these abstractions and tests.

      The better tooling will let the AI iterate faster and catch errors earlier in the loop.

      Right?

    • giancarlostoro 4 hours ago
      > The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.

I'm weird: I'm part of camp 2, but I think AI can be used to really craft some interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. Myself and others have realized that the best practices for humans are also the best practices for getting AI models to fit your code.

    • Thanemate 4 hours ago
How about the type of developer who comes up with statistics and made-up "camps", as if enjoying the craft itself necessarily means you can't also enjoy the fact that what you made is useful enough that people choose your product precisely because you're obsessed with doing good work?

      John Carmack has talked about it in a podcast a few years ago, and he's the closest popular programmer that I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would matter if Doom and Quake weren't fun games.

    • fasterik 4 hours ago
      I think this is a false dichotomy. If you're passionate about your craft, you will make a higher quality product. The real division is between those who measure the success of a project in:

      - revenue/man-hour, features shipped/man-hour, etc.

      - ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.

      People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.

    • shinycode 3 hours ago
That's nonsense. Every piece of software people pay for, or use as a professional tool, should be carefully crafted. Would you buy a house/car/anything, or value it, knowing that the people who built it don't care about their craft as long as you end up holding it and paying for it? Or maybe you produce something cheap and worthless.
    • CodeMage 5 hours ago
      > No one has ever made a purchasing decision based on how good your code is.

      There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.

      The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.

      > Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.

      Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.

      So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.

Someone on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector -- it's not just about how fast you're going, but also in what direction.

      I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.

    • Swizec 6 hours ago
      > The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.

      I am in both camps. Always have been.

Code janitors are about to be in high demand. We've always been pretty popular with leadership, and it's gonna get even more important.

Treat code design and architecture as the thing that lets your slop cannons (90% of engineers even pre-AI) move fast without breaking things.

      My output is org velocity.

      • joshmarlow 5 hours ago
> Treat code design and architecture as the thing that lets your slop cannons (90% of engineers even pre-AI) move fast without breaking things

        I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.

        It's about that solid foundation - and of course lots of tests on the other side.

        • Swizec 3 hours ago
> I'm currently of the opinion that humans should be laser focused on the data model

Yes. Good programmers talk about data structures; bad programmers talk about code.

        • JambalayaJimbo 5 hours ago
          How do you even converge on the right data model without refining code? Elegant code and elegant data model are the exact same thing!
          • bitwize 5 hours ago
            It's called "systems analysis". Programmers are generally pretty terrible at it because it requires holistic, big-picture thinking. But it used to take up the bulk of the design activity for a new enterprise system.
      • benatkin 5 hours ago
        I agree and I like how you describe it. The phrase from Django, "perfectionists with deadlines", also resonates with me.
      • acedTrex 6 hours ago
        > My output is org velocity.

Amen. Slow and steady, and the feature flywheel just keeps getting faster.

      • seamossfet 6 hours ago
        >slop cannons

        I am stealing that phrase haha

    • peacebeard 4 hours ago
      I think this is a false dichotomy. Maybe there is some theoretical developer who cares about their craft only due to platonic idealism, but most developers who care about their craft want their code to be correct, fast, maintainable, usable, etc. in ways that do indeed benefit its users. At worst, misalignment in priorities can come into play, but it's much more subtle than developers either caring or not caring about craft.
    • dirkc 4 hours ago
      I view every single line of code as a liability; the best solution is to avoid writing any code at all. Does that put me into group 1 or group 2?
    • mememememememo 3 hours ago
      Yes, great, so we sell shit. No one buys a ticket because of how safe the 737 MAX is, or buys a post office franchise based on how good the software Fujitsu sold the Post Office is, but fuck it, let's take some pride in ourselves and try to ship quality work.
    • throwatdem12311 4 hours ago
      > No one has ever made a purchasing decision based on how good your code is

      Because the ones that sell crappy code don’t sell to people that can tell the difference.

      You think I’d pay for Jira or Confluence if it wasn’t foisted upon me by a manager that has got it in with the Atlassian sales rep?

      I don’t even need to see Atlassian’s source code to know it’s sh*t.

    • yobbo 4 hours ago
      > No one has ever made a purchasing decision based on how good your code is.

      I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.

    • mamami 2 hours ago
      This type of thinking is exactly how you end up with 50MB webpages, and it's core to the general sloppification of software.
    • solid_fuel 4 hours ago
      It's easy to write off critics of this slop development as just "caring about the wrong thing", but that is couched in incorrect assumptions. This is the unfortunately common mistake of confusing taking responsibility with some sort of "caring about the code" in an artistic sense. I can certainly appreciate the artistry of well-written code, but I care about having a solid and maintainable code-base because I am accountable for what the code I write does.

      Perhaps this is an antiquated concept which has fallen out of favor in Silicon Valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and harm.

      The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.

    • quantummagic 5 hours ago
      This is a very useful insight. It nicely identifies part of the reason for the stark bifurcation of opinion on AI. Unfortunately, many of the comments below it are emotional and dismissive, pointing out its explanatory limitations, rather than considering its useful, probative value.
      • solid_fuel 4 hours ago
        I find most home inspectors fall into one of two camps:

        1. You treat the house as a means to an end to make a living space for a person.

        2. You treat the building construction itself as your craft, with the house being a vector for your craft.

        The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.

        Personally, I fall into the first camp.

        No one has ever made a purchasing decision based on how good the pipes inside the walls are.

        The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.

        With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).

        I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.

        • Gigachad 3 hours ago
          The general public does not know how to identify or care about the pipes in the walls. They do care when the pipes burst and cause tens of thousands of dollars of damage. That's why they hire someone with a keen eye to act on their behalf.
          • solid_fuel 2 hours ago
            The general public does not know how to identify or care about good code. They do care when their data gets leaked or their computer gets hacked or their phone gets ransomware. That’s why they hire software engineers, who are supposed to care about the quality of the code they ship.
        • tehnub 36 minutes ago
          Brilliant

          >Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.

          lol

      • bakugo 5 hours ago
        "I have an opinion and everyone on the planet agrees with me, if you disagree, you don't matter" is not a useful insight, and is, in fact, far more emotional and dismissive than any of the replies to it.
        • quantummagic 4 hours ago
          That's a horribly broken misrepresentation of what was said in the original post. If that's what you took away from it, you're not reading carefully or critically.
          • emp17344 2 hours ago
            That is, in fact, how it comes across. You’re labeling perceived opponents as “emotional” and “dismissive”.
          • ratrace 3 hours ago
            [dead]
    • ramsaybolton 1 hour ago
      Okay tell this to game devs
    • rolandhvar 35 minutes ago
      I mean just look at UV.

      Did that guy make it because Rust, and because he's passionate about that sort of thing? Probably.

      But it's fucking fast. So did he sell out to OpenAI? Of course he did.

      And thusly, both camps.

    • meheleventyone 5 hours ago
      You realize you're essentially building a false dichotomy? I work in video games, where code really is a means to an end, but I still see that authorship is important even if that code is uuuuugly, it being the expression of the game itself. From that perspective I'm worried about neither craft nor product but about my ability to express myself through code as the game behaves. Although if you really must have only two categories, I'd be in camp one.

      As such AI is a net negative as it would be in writing a novel or making any other kind of art.

    • yakattak 5 hours ago
      > No one has ever made a purchasing decision based on how good your code is.

      If you have buggy software, people don't use it if there are alternatives. They don't care about the code, but hard-to-maintain, buggy code will eventually translate to users trying other products.

    • JambalayaJimbo 5 hours ago
      You need both to be a great software engineer. The "means to an end" people will happily slop out PRs and let the "craft" people worry about it.
    • coldtea 4 hours ago
      >No one has ever made a purchasing decision based on how good your code is.

      That, however, is what makes for stable systems, deeply knowledgeable engineers, and a structural basis for the future.

      If all you care about is getting money for your product slop, it's not different than late night marketed crap, or fast fashion...

    • squidsoup 4 hours ago
      false dichotomy - you need to care about both.
    • coffeefirst 4 hours ago
      This is absolutely false. The purpose of craft is a make a good product.

      I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.

      Craft isn’t a fussy thing.

    • theredsix 4 hours ago
      With AI you actually don't need to choose anymore. Well laid out abstractions actually make AI generate code faster and more accurately. Spending the time in camp 2 to design well and then using AI like camp 1 gives you the best of both worlds.
    • tclancy 5 hours ago
      Now you done it! Yeah, one of the difficult things is being able to see both sides. At the end of the day, I happen to write code because that's how I can best accomplish the things I need to do with the minimum of effort. While I do take pride in elegance and quality of code, it is always a means to an end. When I start gold plating, I try to remind myself of the adage I learned in a marketing class: No one ever needed a drill, they needed the ability to make holes.

      It is strange, but not really upsetting to me, that I am not particularly anal about the code Claude is generating for me anymore but that could also be a function of how low stakes the projects are or the fact nothing has exploded yet.

    • joe_the_user 4 hours ago
      I was just using an app that competes with Airbnb. That the app's code is extraordinarily unreliable was a significant factor in my interactions with others on the app; in particular, I gradually realized I couldn't be sure messages were delivered or data was up to date.

      That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.

      So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.

    • jmyeet 4 hours ago
      I'm going to re-characterize your categorization:

      1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and

      2. The people that do.

      Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.

      > No one has ever made a purchasing decision based on how good your code is.

      This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.

      AI is going to absolutely fail any kind of due diligence.

      There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].

      How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.

      [1]: https://arstechnica.com/tech-policy/2024/02/air-canada-must-...

    • somewhereoutth 4 hours ago
      3. You use the act of writing code to think about a given problem, and by so doing not only produce better code but also gain a deeper understanding of the problem itself - in combination, a better product all around.
    • pron 5 hours ago
      I generally fall into the first camp, too, but the code that AI produces is problematic because it's code that will stop working in an unrecoverable way after some number of changes. That's what happened in the Anthropic C compiler experiment (they ended up with a codebase that wasn't working and couldn't be fixed), and that's what happens once every 3-5 changes I see Codex making in my own codebase. I think, if I had let that code in, the project would have been destroyed in another 10 or so changes, in the sense that it would be impossible to fix a bug without creating another. We're not talking style or elegance here. We're talking ticking time bombs.

      I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.

      I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it. At that point, it's a coin toss of whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that. Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place that nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble. But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
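      A minimal toy sketch of that failure mode (hypothetical names, not from any real codebase): once a single insertion path breaks the sorted invariant, a perfectly correct bisect-based search starts missing elements that are actually present. "Fixing the search" with a linear scan would make the symptom go away while leaving the broken insert in place for every other consumer of the list.

```python
import bisect

# Invariant: `prices` is always kept sorted, so lookups may use bisect.
prices: list[int] = []

def insert_price(p: int) -> None:
    bisect.insort(prices, p)   # correct: preserves the sorted invariant

def insert_price_buggy(p: int) -> None:
    prices.append(p)           # the "wrong place" insert described above

def has_price(p: int) -> bool:
    # Relies on the sorted invariant.
    i = bisect.bisect_left(prices, p)
    return i < len(prices) and prices[i] == p

insert_price(10)
insert_price(30)
insert_price_buggy(20)         # prices is now [10, 30, 20]: invariant broken
print(has_price(20))           # False: the element is present but unfindable
```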

      So it sounds to me that the people you think are in the first camp not only just care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).

      • zozbot234 2 hours ago
        > The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list.

        Invariants must be documented as part of defining the data or program module, and ideally they should be restated at any place they're being relied upon. If you fail to do so, that's a major failure of modularity and it's completely foreseeable that you'll have trouble evolving that code.
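        One cheap way to "restate the invariant where it's relied upon" (a sketch, not a prescription; the function name is made up) is a runtime assertion at the consumer, which turns a silent wrong answer into a loud failure:

```python
import bisect

def find_sorted(xs: list[int], x: int) -> bool:
    # Restate the documented invariant at the point of reliance:
    # xs must be sorted ascending, or the bisect search is meaningless.
    assert all(a <= b for a, b in zip(xs, xs[1:])), "sorted invariant violated"
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

print(find_sorted([10, 20, 30], 20))  # True
# find_sorted([10, 30, 20], 20) raises AssertionError instead of
# silently returning False.
```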

        • pron 2 hours ago
          Right, except even when the invariants are documented agents get into trouble. Virtually every week I see the agent write strange code with multiple paths. It knows that the invariant _should_ hold, but it still writes a workaround for cases it doesn't. Something I see even more frequently is where the agent knows a certain exception shouldn't occur, but it does, so half the time it will choose to investigate and half the time it says, oh well, and catches the exception. In fact, it's worse. Sometimes it catches exceptions that shouldn't occur proactively as part of its "success at all costs" drive, and all these contingency plans it builds into the code make it very hard (even for the agent) to figure out why things go wrong.

          Most importantly, this isn't hypothetical. We see that agents write programs that after some number of changes just collapse because they don't converge. They don't transition well between layers of abstractions, so they build contingencies into multiple layers, and the result is that after some time the codebase is just broken beyond repair and no changes can be made without breaking something (and because of all the contingencies, reproducing the breakage can be hard). This is why agents don't succeed in building even something as simple as a workable C compiler even with a full spec and thousands of human-written tests.

          If the agents could code well, no one would be complaining. People complain because agent code becomes structurally unsound over time, and then it's only a matter of time until it collapses. Every fix and change you make without super careful supervision has a high chance of weakening the structure.

          • zozbot234 1 hour ago
            Agents don't really know the whole codebase when they're writing the code; their context is way too small for that, and trying to grow the context window doesn't really work well (most of it gets ignored). So they're always working piecemeal, and these failures are entirely expected unless the codebase is rigorously built for modularity and the agent is told to work "in the small" and keep to the existing constraints.
            • pron 4 minutes ago
              > Agents don't really know the whole codebase when they're writing the code

              Neither do people, yet people manage to write software that they can evolve over a long time, and agents have yet to do that. I think it's because people can move back and forth between levels of abstraction, and they know when it's best to do it, but agents seem to have a really hard time doing that.

              On the other hand, agents are very good at debugging complex bugs that span many parts of the codebase, and they manage to do that even with their limited context (which isn't so limited compared to humans). They're just not smart enough to write stable code yet.

      • skydhash 4 hours ago
        It can be especially bad if the architecture is layered, with each layer having its own invariants. In a music player, for example, you may have the concept of a queue in the domain layer, but in the UI layer you may have additional constraints that don't relate to it. Then the agent decides to fix a bug in the UI layer because the description reads like a UI bug, while it's in fact a queue bug.
        • mikkupikku 3 hours ago
          Shit like this is why you really have to read the plans instead of blindly accepting them. The bots are naturally lazy and will take short cuts whenever they think you won't notice.
    • haolez 1 hour ago
      Option 1 is a PM.
    • logicchains 5 hours ago
      It's perfectly possible to write very clean code with AI, it just takes a lot more time and prompting.
      • Gigachad 3 hours ago
        Easier to just write it yourself.
    • ggregoire 4 hours ago
      3. Coding is fun, prompting not so much
    • DetroitThrow 2 hours ago
      >No one has ever made a purchasing decision based on how good your code is.

      I got my company to switch from GitHub to GitLab after repeated outages. I've always moved companies away from using GCP or Azure because of their reliability problems.

      This is a really funny comment.

    • keybored 2 hours ago
      This is a Venus vs. Mars developer trope at this point.

      > The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.

      That’s fine for people to argue those things.

      My criticisms of AI are mainly

      1. The principle of the GenAI approach

      2. Political

      The first point is about how stupid the GenAI approach is (I could link to the arguments). But I have left open the door for pure results, i.e. demonstrating that it (despite my belief) works in practice. So this is not about craftsmanship.

      I’ve previously commented that I would respect a more principled approach even though it takes my craft.[1]

      > Personally, I fall into the first camp.

      Of course you do. Because...

      > No one has ever made a purchasing decision based on how good your code is.

      In these dichotomies the author typically puts himself forward as the pragmatist and the other side as the ones who care about things that are just irrelevant to the purchasing decision or whatever.

      But the AI haters have made real arguments against AI, against the people behind AI, and so on. It’s not a matter of vibes. So maybe respond to those arguments? We don’t need another armchair lesson in psychological inclinations.

      Be a pragmatist for all I care. But beware of the bloodless pragmatist who only sees what is, essentially, instant product gratification and not what comes after, or from the sides, or from below.

      [1] https://news.ycombinator.com/item?id=47358696

    • bigstrat2003 5 hours ago
      > No one has ever made a purchasing decision based on how good your code is. The general public does not care about anything other than the capabilities and limitations of your product.

      The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.

      • efromvt 1 hour ago
        I think this is exactly the point though (maybe more the point of the link than of this comment) - a sufficiently good product by all external quality metrics is fine even if the code is written on one line in a giant file or some other monstrosity. As long as one black box behaves the same way as another in all dimensions, they are competitive. You can argue that internal details often point to an external deficiency, but if they don’t, then there is no competitive pressure.
    • Pxtl 3 hours ago
      As developers we have a unique advantage over everybody else dealing with the way AIgen is revolutionizing careers:

      Everybody else dealing with AIgen is suffering the AI spitting out the end product. Like if we asked the AI to generate the compiled binary instead of the source.

      Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.

      But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.

      In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?

    • waterTanuki 46 minutes ago
      > No one has ever made a purchasing decision based on how good your code is.

      RollerCoaster Tycoon.

      > The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.

      People care how fast you're able to ship updates, new features, and bugfixes. If you're working with a pile of vibe-coded spaghetti slop it's going to take longer to deliver these.

    • imiric 5 hours ago
      While creating good software is as much of an art as it is a science, this is not why the craft is important. It is because people who pay attention to detail and put care into their work undoubtedly create better products. This is true in all industries, not just in IT.

      The question is how much does the market value this, and how much it should value it.

      For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.

      But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.

      To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.

      So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.

    • packetlost 6 hours ago
      I agree on the software dev camps.

      > The general public does not care about anything other than the capabilities and limitations of your product.

      It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.

      It's as it always has been, balancing quality and features is... well, a balance and matters.

      • seamossfet 6 hours ago
        The public doesn't care about the code itself, they absolutely care about the quality and experience of using the software.

        But you can have an extremely well designed product that functions flawlessly from the perspective of the user even though, under the hood, it's all spaghetti code.

        My point was that consuming software as a user of the product can be quite different from the experience of writing that software.

        Facebook is a great example of this: there's some gnarly old spaghetti code under the hood just from the years of legacy, but it's largely invisible to the user and their experience of the product.

        I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.

        • blackbear_ 6 hours ago
          There are other players in the game: the business and the market.

          Good code makes it easier for the business to move fast and stay ahead of the competition while reducing expenses for doing so.

          • WarmWash 4 hours ago
            That's true, but Excel '98 would still cover probably 80% of users' use cases.

            A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether it be to justify a job or to justify charging customers $$ per month to "keep current".

        • packetlost 6 hours ago
          That's fair!

          > Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.

          I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.

          But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.

          If it's truly all spaghetti underneath, the ability to make changes nosedives.

        • almostdeadguy 5 hours ago
          I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, but that would actually be an indicator to me that people are using these things responsibly.

          My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits, and the less you will understand the outputs, or even whether they built the "correct thing", which in many cases is something you can only crystallize an understanding of by doing the thing.

          So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you will inevitably be responsible for a major outage or quality issue. Moreover, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".

          • seamossfet 5 hours ago
            >But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue.

            I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.

            There's no free lunch here, the workflow is just different.

        • slopinthebag 5 hours ago
          Facebook.com is a monstrosity though, and their mobile apps as well are slow and often broken. And the younger generations are using other networks, Facebook is in trouble.
    • dbshapco 1 hour ago
      I feel like any argument that begins by asserting a dichotomy is almost certain to be circular and will proceed as if the dichotomy were a fact rather than an unproven hypothesis.

      I don't believe there is a dichotomy, or even a spectrum of developers, but a complex landscape. Of course, that is also a bald assertion, but it is a weaker claim, and no less valid than the original assertion.

      That said, independent of assertions about developer classification, in my experience there is a clear connection between the quality of the software and the quality of the product, and I've often seen evidence of poor quality software compromising the product and user experience. Poor quality leaks out. Remember BSOD? Maybe not.

      I've become hesitant to unleash coding agents simply because the code base ends up looking like the victim of drive-by coding, littered with curious lambda functions, poor encapsulation, etc. The only thing I use coding agents for is exploratory and throwaway code, like one off scripts. I love coding agents for all the ancillary work, I protect the critical path like mamma bear her cubs.

      Coding agents make all the second order work easier so I have more bandwidth to focus on the critical parts. Again, software is a landscape, but at least for my work I can't abdicate parts to a coding agent and "works" is an inadequate standard. I need bullet-proof and unfailingly correct.

      Token generation definitely produces a certain stream-of-consciousness, Kerouac-as-programmer style. As long as I don't ever have to maintain or modify the code myself, am not concerned about cost control (especially in cloud environments where I am billed by compute cycles), I am fine with quick and dirty and done. I sigh when I see what should be a six line change in my head balloon to 300 lines of generated code, revert, and write the six lines myself. Would take longer to write the prompt to get the coding agent to fix it than fix it myself. It would grind away for several minutes and burn up an astonishing number of tokens for simple fixes.

      Anything linguistic the coding agents do well. Want to rename a variable in 300 different source files? I mean, it is overkill to be running a 200B parameter model to avoid writing the sed script I might write otherwise, but who am I to turn my nose up at my work being subsidized by investors? I don't think that economic model will go on forever.
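      For reference, the sed script alluded to can be about two lines (GNU grep/sed assumed; `oldName`, `newName`, and the file layout are placeholders):

```shell
# Demo tree (hypothetical): one file containing the target identifier.
mkdir -p src && printf 'oldName = 1\nfooldNameBar = 2\n' > src/a.py

# Whole-word rename across the tree; only rewrites files that match.
grep -rl --include='*.py' '\boldName\b' src/ \
  | xargs -r sed -i 's/\boldName\b/newName/g'

grep -n 'Name' src/a.py  # 1:newName = 1  /  2:fooldNameBar = 2
```

      The `\b` word boundaries leave identifiers like `fooldNameBar` untouched, which is most of what the 200B-parameter model buys over a naive search-and-replace.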

      Any higher abstraction is being cargo-culted from language. This is where LLMs are weakest, because they don't understand abstraction or encapsulation, only the artifacts as expressed in language.

      Outside of exploratory and throwaway code, I use inline prompting to precisely target and scope changes, and then identify the cleanup and refactoring required to bring the code to acceptable quality. Although I do a lot of cleanup by hand as well. Rather than tell the coding agent that a lambda function wrapping a one liner that is used in one place in the code is dumb, I'll just remove the lambda myself. The coding agent can't adopt and generalize lessons from code review comments the way a human software engineer can -- I am forced to burn tokens every single time to get it to dial back its insane love affair with lambda functions. Again, not a big deal while costs remain subsidized.
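      To illustrate the pattern I keep deleting by hand (names invented for the example):

```python
raw_names = ["  Alice ", "BOB", " carol"]

# What the agent tends to emit: a named lambda wrapping a one-liner,
# called exactly once a couple of lines later.
normalize = lambda s: s.strip().lower()
cleaned_agent = [normalize(n) for n in raw_names]

# What review keeps asking for: just inline the expression.
cleaned = [n.strip().lower() for n in raw_names]
```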

      Operations and maintenance overhead in the type of software I've written through my career dominates over programming cost. Telecom, aerospace, e-commerce, etc. Systems are long lived. Outages are expensive. Regulatory compliance is a large factor. I've worked in shops with 70% cost overhead in operations. A $50K a month cloud compute bill can be reduced to $15K. There's usually some low hanging fruit and poor quality software doesn't account for all of this, but it is a significant fraction. Like a poorly written termination condition in a container that essentially was a busy wait burning thousands of dollars a month doing nothing (true story).
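      For the curious, the busy-wait shape (hypothetical code, not the actual incident) next to the fix:

```python
import threading

stop = threading.Event()

def busy_wait():
    # The expensive version: spins in a tight loop, pegging a CPU core
    # (and, in the cloud, the billing meter) while doing nothing useful.
    while not stop.is_set():
        pass

def blocking_wait():
    # The cheap version: parks the thread until signalled,
    # consuming essentially zero CPU while it waits.
    stop.wait()
```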

      I am currently writing a trading system, and can't afford to hallucinate a bunch of bad trades. Like the developer landscape, the software landscape is complex and not uniform. So I will concede there are probably many types of software outside of my own experience that can be implemented largely by coding agents. Low consequence. Marginal operational overhead.

      I might assert that coding agents' forte is autogenerating technical debt, but then I am just being a wag. Less waggishly, I would say that use of coding agents is subject to engineering judgement, like any tool. But who is going to read that headline or give it a billion-dollar valuation?

    • ModernMech 6 hours ago
      > You treat your code as a means to an end to make a product for a user.

      It isn’t that though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.

      If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.

      That's why AI slop is so prevalent: the people putting it out there don't care about the quality of their output or how it's used by people, as long as it juices their favorite metrics (views, likes, subscribes, ad revenue, whatever). Products and users are not in scope.

      • seamossfet 6 hours ago
        Yeah, I'm not trying to defend slop.

        I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.

        With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.

        I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way, y'know?

    • slopinthebag 5 hours ago
      This is just cope to avoid feeling any shame for shipping slop to users.
    • BoorishBears 3 hours ago
      I think some people are misunderstanding your point.

      Yes, some people left to their own devices would take twice as long to ship a product half as buggy only to find out the team that shipped early has taken a massive lead on distribution and now half the product needs to be reworked to catch up.

      And some people left to their own devices will also ship a buggy mess way too early to a massive number of people and end up with zero traction or validation out of it, because the bugs weren't letting users properly experience the core experience.

      So we've established no one is entirely right, no one is entirely wrong; it's yin/yang, and really both sides should ideally exist in each developer in a dynamic balance that changes based on the situation.

      -

      But there's also a 3rd camp that's the intersection of these: You want to make products that are so good or so advanced *, that embracing the craft aspect of coding is inherent to actually achieving the goal.

      That's a frontend where the actual product is well outside typical CRUD app forms + dashboard and you start getting into advanced WebGL work, or complex non-standard UI state that most LLMs start to choke on.

      Or needing to do things quicker than the "default" (not even naive) approach allows for UX reasons. I ran into this using Needleman-Wunsch to identify UI elements on return visits to a site without an LLM request adding latency: to me that's the "crafty" part of engineering serving an actual user need. It's a completely different experience getting near instant feedback vs the default today of making another LLM request.
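      For anyone curious, the core of Needleman-Wunsch is a small dynamic program; a minimal score-only sketch (scoring values arbitrary here, alignment traceback omitted):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between sequences a and b."""
    # DP row for aligning the empty prefix of a against prefixes of b.
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]  # aligning a[:i] against the empty prefix of b
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]
```

      Running this locally is what makes the matching feel instant; there's no LLM round trip to amortize.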

      And it's this 3rd camp's feedback on LLM development that people in the 1st camp wrongly dismiss as coming from the 2nd, craft-maxxed group. For some use cases, slop is actually terminal.

      Intentionally contrived example, but if you're building a Linear competitor and you vibecode a CRDT setup that works well enough, but has some core decisions that mean it'll never be fast enough to feel instant and frontend tricks are hiding that, but now users are moving faster than the data and creating conflicts with their own actions and...

      You backed yourself into a wall that you don't discover until it's too late. It's only hypervigilance and strong taste/opinion at every layer of building that kind of product that works.

      LLMs struggle with that kind of work right now and what's worrying is, the biggest flaw (a low floor in terms of output quality) doesn't seem to be improving. Opus 4.6 will still try to dynamically import random statements mid function. GPT 5.3 tried to satisfy a typechecker by writing a BFS across an untyped object instead of just updating the type definitions.

      RL seems to be driving the floor lower actually as the failure modes become more and more unpredictable compared to even GPT 3.5 which would not even be "creative enough" to do some of these things. It feels like we need a bigger breakthrough than we've seen in the last 1-2 years to actually get to the point where it can do that "Type 3" work.

      * good/advanced to enable product-led growth, not good/advanced for the sake of it

    • throwaway613746 2 hours ago
      [dead]
  • Animats 6 hours ago
    Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.

    We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.

    AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.

    Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?

    "The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.

    • switchbak 56 minutes ago
      I think we're heading for a real crisis here. We've got an imperfect system of constraints and bottlenecks, and we've just eliminated one of the main bottlenecks - the speed at which we can add new code. This just puts so much more strain on the rest of the system, I think the industry is going to have a quick lesson on the non-linear costs of software complexity.

      I'm glad to see that the author of the article is putting an emphasis on simplicity here, especially given the nature of their business. Those that fully embrace the "code doesn't matter" approach are in for a world of hurt.

      Long-term, I expect there will be more tooling and model advancements to help us in this regard - and there will certainly be a big economic incentive for that soon. But in the meantime it feels like a dam has been breached and we're just waiting for the real effects to become manifest.

    • 9dev 1 hour ago
      While I also view this development critically, why do you assume AI will be unable to fix the issues eventually?
  • reese_john 6 hours ago

      Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
      We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works
    • kalaksi 3 hours ago
      And then everyone disagrees what counts as luxury in software.
    • imiric 5 hours ago
      That's an interesting story, but not a great analogy for software.

      If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.

      There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.

      The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.

  • dang 4 hours ago
    The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!

    (Confession: "good code will still win" was my suggestion - IIRC they originally had "Is AI slop the future?". You win some, you lose some.)

    • Fraterkes 3 hours ago
      Never considered that the mods of this site are literally discussing with the people heading yc companies how to game their hn-titles for better interaction. How naive I am.
      • dang 2 hours ago
        We discuss these things all the time with people who email us whether they head yc companies or not.
  • ramsaybolton 1 hour ago
    What I would like to add is that coding in the flow state is underestimated. When your brain just clicks with every change and variable, it's just different AND more efficient than doing it with AI.
  • socalgal2 6 hours ago
    When has this ever been true?

    Did the best processor win? No, x86 is trash.

    Did the best computer language win? No (not that you can pick a best).

    The same is true pretty much everywhere else outside computers, with rare exceptions.

  • stephc_int13 4 hours ago
    The economic angle is not as clear cut as the authors seem to think.

    There is an abundance of mediocre and even awful code in products that are not failing because of it.

    The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebase.

    • wreath 4 hours ago
      Even with enough money, you may not be able to attract/keep talented engineers who are willing to put up with such a work environment (the codebase itself, and probably the culture that led to its state) and who want to ship well built/designed software but are slowed down by the mess.
    • woeirua 2 hours ago
      The most successful software in a field is typically NOT the best software. The authors of the article live in a world that does not exist. Clean code lost, many years ago.
  • mrbombastic 2 hours ago
    These economic incentives for good code would also apply to code before LLMs, no? And we have had plenty of shit code that stayed shit for a long time. I find this idea that economic incentives will necessarily drive the outcomes you desire to be akin to a religious belief for some people.
    • recursivecaveat 1 hour ago
      Yeah you could say the same thing about teleporting to Paris, or infinitely flavorful bananas. Just because the market would reward something doesn't mean it will come to be. There may be too many tradeoffs, or technical limits, or just that the future is 200 years away so you won't be around for it anyways.
  • simianwords 6 hours ago
    People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.

    We are at the point where a single class can be dirty but the API of the classes should be clean. There’s no point reviewing the internals of a class anymore. I’m more or less sure that they would work as intended.

    The next step is that of a microservice itself. The API of that microservice should be clean, but the internals may be whatever. We are 10% of the way there.

    • yobbo 4 hours ago
      "The only reason people disagree with me is because they are emotionally deficient."
      • alt187 2 hours ago
        As opposed to me, who is perfectly rational.
    • dcchambers 6 hours ago
      Does performance not matter?

      What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
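      Concretely, think of membership checks against a sorted list; both versions below are "correct", but only one exploits the invariant (illustrative code):

```python
import bisect

def contains_linear(sorted_vals, x):
    # O(n): what might get emitted if nothing notices the input is sorted.
    for v in sorted_vals:
        if v == x:
            return True
    return False

def contains_binary(sorted_vals, x):
    # O(log n): binary search over the same sorted data.
    i = bisect.bisect_left(sorted_vals, x)
    return i < len(sorted_vals) and sorted_vals[i] == x
```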

      • inetknght 6 minutes ago
        If it's not tested, it's not Engineered.

        Test what you care about. If you care about performance, then test your performance. Otherwise performance doesn't matter.

      • keeda 4 hours ago
        > Does performance not matter?

        No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.

        If it does not move any metrics that execs care about, it doesn't matter.

        The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted).

        More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."

      • NitpickLawyer 6 hours ago
        > Does performance not matter?

        Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.

        • simianwords 5 hours ago
          Also, most performance optimisations exist at the microservice architecture level, or at the db and IO level.
      • paxys 5 hours ago
        As it stands today the average engineer is much more likely to ship an unoptimized algorithm than an AI.
      • simianwords 6 hours ago
        In most cases, no. The bottleneck is usually IO.
  • pagecalm 6 hours ago
    Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.

    But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.

    • antdke 4 hours ago
      Yup - In the end, it’s still just a tool that adheres to the steering (or lack thereof) of the user.
  • xnx 7 hours ago
    If "good code" == "useful code", then yes.

    People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.

    • gizmo686 5 hours ago
      Engineers don't build the cheapest bridge that just barely won't fail. They build the cheapest bridge that satisfies thousands of pages of regulatory requirements maintained and enforced by dozens of different government entities. Those regulations range from safety, to aesthetic, to environmental, to economic, to arcane.

      Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.

      • SchemaLoad 59 minutes ago
        I don't think that's true. Engineers would largely want to build the best bridge, costs be damned. But they would end up undercut by anyone who cuts corners, so the only companies getting contracts would be the ones who cut the most corners. Even if no one wants to build bridges that collapse, avoiding it would be impossible without some counter-force of laws and accountability.
      • CodeMage 5 hours ago
        > Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse.

        I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.

        Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.

        • chuckadams 4 hours ago
          Civil engineers are licensed and carry insurance. When software developers have similar requirements, then I'll call them engineers. In some fields like avionics, the certification regime is a good proxy for licensing -- I think we could extend the "engineer" title to those developers too.

          Such a world still has room for unlicensed developers too -- I'd certainly be among them.

          • CodeMage 1 hour ago
            > Such a world still has room for unlicensed developers too -- I'd certainly be among them.

            Sign me up. When I started programming as a 7 year old kid, it wasn't because I dreamed of spending my days on endless meetings and documents. But hey, 40 years later, I'm working as a senior "engineer" and with that comes a heavy emphasis on project management.

            Sure, you're expected to know how to solve interesting technical challenges, but that's more of a nice-to-have. It's nowhere near important as being able to make a project look successful despite the fact that the middle management convinced the senior "leadership" to do that project out of sheer ambition and without bringing on board the people who actually talk to the users, so now you're stuck without clear requirements, without a clear way to measure success, and with accumulating tech debt gumming up the works while your boss works with various "stakeholders" to "pivot" over and over so he doesn't have to go to the senior leadership to explain why we're delaying launch again.

            And what I'm describing is one of the best places I've ever worked at across more than 25 years of my professional career. Hell, I'm lucky that senior "engineer" is what they call a "terminal" position here, i.e. I'm allowed to settle in it without having to work towards a promotion. From what I've been told, there are places where you have to get to be a staff engineer or they'll eventually let you go.

            I don't know about anyone else, but I find the whole situation fucking insane.

    • siriusastrebe 7 hours ago
      What would happen if we made bridges to last as long as possible, to withstand natural disasters and require minimal maintenance?

      What if we built things that are meant to last? Would the world be better for it?

      • fiedzia 6 hours ago
        > What if we built things that are meant to last? Would the world be better for it?

        You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.

        • siriusastrebe 6 hours ago
          Once the good-enough bridge deteriorates and we have to spend more money maintaining or replacing it, don't we end up just spending the same? Just now we're left with a crappy bridge.

          • SchemaLoad 57 minutes ago
            "Good enough" bridges still last 50+ years. We could design a bridge to last 200 years but we won't even know if the design we have today will even be needed in 200 years. Maybe by then we all use trains in underground tunnels.
          • cm11 6 hours ago
            Certainly, "enough" is doing a lot of work and things get blurry, but I think "good enough" is meant to capture some of that. Over building is also a problem. It isn't strictly true that building longer lived things is cheaper over time either, it obviously depends on the specific things getting compared. And if you go 100 years rather than 25 years, you'll have fewer chances to adjust and optimize for changes to the context, new technology, changing goals, or more efficient (including cost saving) methods.

            Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.

          • Ekaros 4 hours ago
            It might very well be that building and maintaining a bridge for 100 years costs three or four times as much as building and maintaining one that lasts 50 years. Unless demolition costs rival the cost of the bridge itself, replacing the bridge every 50 years is cheaper in the long run.

            On the whole it is an entirely reasonable optimisation problem: what is the best lifespan of a single bridge over the desired total lifespan?

          • pixl97 6 hours ago
            Depends how much the infrastructure and needs around it changes.
          • nisegami 6 hours ago
            But we also got roads and hospitals.
      • GarnetFloride 6 hours ago
        Look up Roman concrete. There are 2000 year old bridges and aqueducts still in use.

        We only recently figured out how to reproduce Roman concrete.

        We’d have more but a lot were blown up during WWII.

        • bombela 6 hours ago
          There is nothing special about Roman concrete compared to modern concrete. Modern concrete is much better.

          The difference is that they didn't have rebar. And so they built gravity stable structures. Heavy and costly as fuck.

          A modern steel and concrete structure is much lighter and much cheaper to produce.

          It does mean a modern structure doesn't last as long, but also the Roman stuff we see is what survived the test of time, not what crumbled.

          • throw-qqqqq 5 hours ago
            > There is nothing special about roman concrete compared to moderns concrete. Modern concrete is much better

            Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.

            However, that comes at the cost of being much less strong; it also sets much slower and requires rare ingredients. Roman concrete also doesn't play nice with steel reinforcement.

            https://en.wikipedia.org/wiki/Roman_concrete

          • Topgamer7 5 hours ago
            I think you are incorrect. Compared to modern concrete, roman concrete was more poorly cured at the time of pouring. So when it began to weather and crack, un-cured concrete would mix with water and cure. Thus it was somewhat self healing.

            Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.

          • darkwater 6 hours ago
            We have modern architecture crumbling already less than 100 years after it has been built. I know engineering is about tradeoffs but we should also acknowledge that, as a society, we are so much used to put direct economic cost as the main and sometimes only metric.
            • bluGill 3 hours ago
              You would be very unhappy if you had to live in a house as built 100 years ago. Back then electric lights were rare, and even if you had them the wiring wasn't up to running modern life. My house is only 50 years old and it shows signs of the major remodel 30 years ago, and there are still a lot of things that a newer house would do differently that I sometimes miss.
      • recursive 7 hours ago
        Devil's advocate here. Maybe we'd all forget how to build bridges in the next thousand years, after bridging all the bridg-able spans.
      • DeathArrow 7 hours ago
        What if instead of one bridge we build three, so more people can cross the river?
        • siriusastrebe 6 hours ago
          And if your one bridge survived as long as, or longer than three bridges?
          • pixl97 6 hours ago
            Then you still have traffic issues and no one is happy.
    • blast 7 hours ago
      > the cheapest bridge that just barely won't fail

      That can't be right. What about safety factors?

      • StevenWaterman 7 hours ago
        Safety factors exist because without them, bridges fall down
        • pklausler 6 hours ago
          The free market ensures that bridges stay up, because the bridge-makers don't want to get sued by people who have died in bridge collapses.
          • tredre3 3 hours ago
            That is definitely not the free market at play. It's the legislative body at play.

            Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.

            That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.

          • quentindanjou 6 hours ago
            This only works when the barrier to entry to sue is low enough, and when the law is applied impartially, without corruption, with sanctions meaningful enough (potentially company-ending) to discourage them.

            The moment you remove one of these factors, the free market becomes dangerous for the people living in it.

        • irishcoffee 6 hours ago
          That isn't how safety factors work... The person you're responding to is correct. I encourage you to look it up!
          • StevenWaterman 3 hours ago
            Safety factors account for uncertainty: uncertainty in the quality of materials, of workmanship, of unaccounted-for sources of error; uncertainty in whether the maximum load in the spec will actually be respected.

            Without a safety factor, that uncertainty means that, some of the time, some of your bridges will fall down.

    • nzeid 5 hours ago
      I'd describe that as passable engineering.

      Good engineering is building the strongest bridge within budget and time.

    • bitwize 5 hours ago
      Um, ackshually, real civil/structural engineers—at least, those in the global north—design bridges, roads, and buildings with huge tolerances (multiple times the expected loads) because unexpected shit happens and you don't want to suffer catastrophic failure when conditions are just outside of your typical use case and have a Tacoma Narrows Bridge type situation on your hands.
      • kube-system 5 hours ago
        We might be arguing semantics, but safety margins aren't considered 'overbuilding' but part of the bare minimum requirements for a bridge to stand. They aren't there "just in case" they are there because it is known for a fact that bridges in the real world will experience degradation and overloading.

        If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.

  • jumploops 4 hours ago
    “John Ousterhout [..] argues that good code is:

    - Simple and easy to understand

    - Easy to modify”

    In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:

    > Simple is robust

    I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.

    Agentic tools will happily build anything you want, the key is knowing what you want!

    • jfreds 3 hours ago
      My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.

      Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.

      “Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
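      A toy version of the tradeoff (hypothetical exporter, both styles):

```python
from abc import ABC, abstractmethod

# Simple: one branch per variant. Fine at two variants, smelly at five.
def export_simple(fmt, rows):
    if fmt == "csv":
        return "\n".join(",".join(map(str, r)) for r in rows)
    elif fmt == "tsv":
        return "\n".join("\t".join(map(str, r)) for r in rows)
    raise ValueError(fmt)

# Abstracted: more ceremony up front, but the 3rd variant becomes a new
# class instead of another clause in a growing function.
class Exporter(ABC):
    @abstractmethod
    def export(self, rows): ...

class DelimitedExporter(Exporter):
    def __init__(self, sep):
        self.sep = sep

    def export(self, rows):
        return "\n".join(self.sep.join(map(str, r)) for r in rows)
```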

      • miningape 17 minutes ago
        I think what matters more than the abstract class vs if statement dichotomy, is how well something maps the problem domain/data structures and flows.

        Sure, maybe it's fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running head first into edge cases - whereas if you're modelling the problem in a good way it comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.

    • mememememememo 3 hours ago
      Yes. Which is why "I generated X lines of code" and "I used a billion tokens this month" sound stupid to me.

      Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!

      • sph 3 hours ago
        People use stupid metrics like those because more useful ones, like "productivity" or "robustness" are pretty much impossible to objectively measure.
  • chromacity 3 hours ago
    > I want to argue that AI models will write good code because of economic incentives.

    The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.

    None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.

  • clawfund 2 hours ago
    The token efficiency argument only holds for teams paying per API call. Cursor, Copilot, and most tools developers actually use are flat subscription. On a flat monthly fee, there's no economic pressure toward brevity in generated code.
    • foltik 2 hours ago
      Apparently there’s no economic pressure toward brevity in LLM generated slop comments either.

      And no, they’re not flat subscriptions. Use more tokens and your quota is gone faster.

      • gpm 2 hours ago
        Not true with github copilot. Cost is per prompt no matter how many tokens the prompt uses. Which can vary by 2 or 3 orders of magnitude...
  • fnoef 5 hours ago
    I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when the big cheap retail stores came. I bet they said "people value quality" and so on, but in the end, outside of a group of people with principles, everyone else floods their home with H&M and crap from Temu.

    So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.

    Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.

    • zozbot234 3 hours ago
      Artisanal crafts are alive and well. It turns out that some people actually prefer handmade stuff to the mass-produced kind, and there's plenty enough of them for a viable market, at least for the highest-quality producers. The real losers are those who make stuff of only barely-acceptable quality: they have no edge over what's mass produced, their middling skills lose value and they're forced to exit the sector.
    • mattmanser 4 hours ago
      For a lot of companies their entire income entirely depends on their uptime.

      Might be fine if your HR software isn't approving holiday requests, but if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.

  • pizzly 5 hours ago
    The current iteration of models doesn't write clean code by itself, but future ones will. The problem, in my view, is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
  • vb-8448 7 hours ago
    Good code wasn't winning even before the ai slop era!

    The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".

    To date nothing changed yet, I bet it won't change even in the future.

    • deathanatos 5 hours ago
      & I have thus far made a large portion of my living off of fixing bad code "later".

      … but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".

      So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.

    • briantakita 7 hours ago
      I was told by an exec...once a company or technology implements something and gets mindshare, the community (including companies) moves on.

      Competition is essentially dead for that segment given there is always outward growth.

      With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.

      • esafak 7 hours ago
        If that were true first movers would always win. Hotmail came before Gmail. Yahoo came before Google. Myspace came before Facebook. Et cetera. Of course it is best to avoid competition by creating a new (sub)category but category kings can change.
    • ares623 4 hours ago
      The irony is that "good" code and good documentation have top priority now in most orgs. For decades the best developers have been screaming about good code and documentation but leadership couldn't give a fuck. But now that their favorite nepobaby is here, now it's the most important thing all of a sudden.
    • mettamage 6 hours ago
      I disagree, Electron showed the world that good code can be magnetic

      ... I'll see myself out

      • vb-8448 5 hours ago
        Is this a joke? (I'm not in the electron/js world and I don't get it)
  • titzer 3 hours ago
    > Markets will not reward slop in coding, in the long-term.

    Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about markets are going to reward.

    But what exactly is "good code" (presumably the opposite of slop)?

    I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.

    I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.

    • bluGill 3 hours ago
      Markets have always rewarded popularity in the short term. In the long term, though, they have always rewarded quality.
  • DanHulton 2 hours ago
    We've been here before. Outsourcing of coding was really big for a while, until the reality of that situation caught up with those who practiced it - if you were saving a bundle on outsourcing your coding work, you were only saving money _now._ Down the line, you'd have to pay extra for someone competent to re-implement the work with an eye to quality.

    (Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)

    "Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.

    (And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)

  • ramsaybolton 1 hour ago
    What I would like to add
  • skybrian 4 hours ago
    I think for this to work you need some kind of complexity budget. AIs are good at optimizing, but you need to give them the right goals.
  • dzonga 1 hour ago
    > lines of code per developer grew.

    when you're making (crafting) software, if the lines are going up for equivalent functionality, it means you're cooking up bullshit.

    the whole premise of the software arts (engineering) is that you do MORE with LESS.

    engineering is not science, and neither is art. creativity is needed, rules of thumb are to be followed.

  • t43562 3 hours ago
    i.e. no matter what, the answer is always AI. If it isn't good now, it will be, so... AI. Don't forget to take your soma pills if anything isn't perfect.
  • keeda 3 hours ago
    I'm optimistic that AI will actually increase the proportion of good code in the future.

    1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture is (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.

    2. Code is now very cheap. And more tests actually lead to better results from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...

    3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.

    Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
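    As a back-of-the-envelope illustration of point 3 (every number here is a hypothetical assumption, not real pricing):

```python
def context_cost_usd(loc, tokens_per_line=10.0, usd_per_mtok=3.0,
                     reads_per_task=1):
    """Rough dollar cost of loading `loc` lines of code into an LLM
    context window, at an assumed tokens-per-line and an assumed price
    per million input tokens."""
    tokens = loc * tokens_per_line * reads_per_task
    return tokens / 1_000_000 * usd_per_mtok

# A tangled 50k-line module re-read across 200 tasks, vs. the 5k-line
# cleanly-factored slice an agent would actually need:
messy = context_cost_usd(50_000, reads_per_task=200)  # $300
clean = context_cost_usd(5_000, reads_per_task=200)   # $30
```

    Even with toy numbers, the tech-debt line item is 10x, which is exactly the kind of $$$ figure that is easy to put in front of an exec.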

  • jillesvangurp 4 hours ago
    Getting AI tools to produce better code is not that hard, if you know how to do things right yourself. Basically, all you need to do is ask. Ask it to follow SOLID principles. Ask it to stick to guard rails. Ask it to eliminate code duplication, write tests, harden code bases, etc. It will do it. Most people just don't know what to ask for. Or even that they should be asking for it. Or how to ask for it. A lot of people getting messy results need to look in the mirror.

    I'm using AI for coding just like everybody else. More or less exclusively since a few months. It's sometimes frustrating to get things done the right way but mostly I get the job done. I've been coding since the nineties. So, I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell it to fix it and then adjust skills and guard rails to prevent it going off the rails.

    AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.

  • ezekg 6 hours ago
    The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
  • personality1 6 hours ago
    I wish I could write beautiful good code, every part of me wants it, but I'm forced to deliver as fast as I can.
  • beloch 4 hours ago
    "AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."

    ---------

    This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make it so will succeed. LLMs might never become proficient at producing "good" code, for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.

    Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.

    • Ericson2314 4 hours ago
      The economic force is that LLMs themselves are worse at maintaining slop than good code.

      Everything fundamental that makes good code easier for humans to maintain also makes it easier for LLMs to maintain. Full stop.

  • eagerpace 1 hour ago
    Slop code is like an early neural net: the path of least resistance. Except we can see it and compare it to the work it's replacing, whereas a neural net is opaque. As models write more code, the path, and the concern for how they get there, will approach zero.
  • woeirua 2 hours ago
    None of this is true. Pretty much all JavaScript code is slop because the language is god awful. It’s so bad that we spent the last 20 years trying to code around the severe limitations of the language itself. Despite everyone knowing that JS sucks, no one has been able to displace it. Slop wins. Typically because of first mover advantage.
    • icedchai 1 hour ago
      30 years. I remember when Netscape 2.0 was released.
    • skillissue33 2 hours ago
      Everyone who sucks at JavaScript says this. Same with CSS lol
      • woeirua 1 hour ago
        The language itself is bad. Even the creator admitted it.
  • shevy-java 1 hour ago
    It seems the big corporations committed to it already.

    Microslop is the future.

  • muskstinks 6 hours ago
    ... for now.

    And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.

    We are, thanks to LLMs, now able to codify humans, and while it's not clear how fast this is happening, I no longer believe that my skills are unique.

    A small hobby application cost me $11 over the weekend and took me 3h to 'build', while I would probably have needed 2-3 days for it.

    And we are still limited by resources and normal human progress. Claude teams are still experimental. Things like gastown or orchestrator architectures/structures are not that established and consume quite a lot of tokens.

    We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).

    I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't, or don't understand good code.

    Even stupid things like the GTA 5 Online (or was it RDR2?) startup code weren't fixed for ages (there was some algorithmic complexity in loading a config file which took ages, until someone outside Rockstar found it and Rockstar fixed it).

    We also have plenty of code where it doesn't matter as long as it works. Offline apps, scripts, research scripts, etc.
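    For reference, the GTA Online bug alluded to above was famously an accidentally-quadratic parse of a large JSON file (each sscanf call did a hidden strlen over the remaining buffer). A minimal sketch of that failure mode, with hypothetical names:

```python
def parse_slow(buf):
    """Accidentally quadratic: every iteration re-touches the whole
    remaining buffer (like sscanf's hidden strlen), so n tokens cost
    O(n^2) work overall."""
    items, pos = [], 0
    while pos < len(buf):
        rest = buf[pos:]               # O(n) copy on every iteration
        token = rest.partition(",")[0]
        items.append(token)
        pos += len(token) + 1
    return items

def parse_fast(buf):
    """Single pass over the buffer: O(n) total."""
    return buf.split(",")
```

    Both return the same result; only the second one scales, and a fix of roughly that shape is what an outside player eventually handed Rockstar.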

  • Tiberium 5 hours ago
    Electron won even in the pre-LLM era, I sure wonder why.
  • 1970-01-01 2 hours ago
    Here's the crux of it: if you want fast and cheap today, you have to choose AI. There is no cheaper, faster option. The article completely fails to mention this very damning fact about the state of the art. Slop is the future if you need it now and don't want to spend money.
  • rbbydotdev 6 hours ago
    The wrinkle here is what exactly “win” means
  • vicchenai 2 hours ago
    the distinction between slop and good AI-assisted code really comes down to who's reviewing it. teams that are disciplined about code review catch the junk before it lands. teams that let AI output fly straight to prod are gonna have a bad time eventually. it's less about the AI and more about engineering culture around it
  • cineticdaffodil 1 hour ago
    Slop is a chance for everyone with the old culture still alive to rapidly take over endeavours that have been taken over by utopists who would deprecate their company.
  • Ancalagon 4 hours ago
    Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
  • ahussain 6 hours ago
    My prediction is that we'll start to see a whole new layer of abstraction to help us write high quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, in-built feedback loops etc.

    The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.

  • sublinear 6 hours ago
    > economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall

    Economic forces are completely irrelevant to the code quality of AI.

    > I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies

    Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.

    AI is a rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else, because investors completely misunderstand how software development works and are still in denial about it.

    Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.

  • sergiotapia 4 hours ago
    The slop debt will always come to collect.

    A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.

    The toll has come to collect and now a lot of real production users are looking at alternatives.

    The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.

  • yshamrei 7 hours ago
    good code does not earn money =)
  • RcouF1uZ4gsC 6 hours ago
    The existence and ubiquity of bash scripts make me doubt this.
  • darenr 5 hours ago
    Everyone's talking about AI, but let's posit that today's coding models are as good as an SDE on the performance/experience distribution, maybe in the lower quartile. Can we also posit that this will improve, and that over time the coding models will equal and then better the median software engineer? It's not like SDEs aren't also churning out poor quality code: "it worked for me", "what tests?", "O(what?)", etc. We've all worked with them.

    The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (and people in other domains: lawyers, doctors, etc). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.

    Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).

    Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans is just wishful thinking. Let's admit it, we don't know how it will play out. There's people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of MacOS and every AI model, just on principle! I understand the fear, anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels, things are changing, AI is the plough, the railway, the transistor...

    I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini). AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.

    As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.

  • howoldareyou 1 hour ago
    Coming from the same founder who (famously) required 7-day work weeks (no days off ever) lol

    Is that still the future or nah?

  • seniorThrowaway 7 hours ago
    this submission is basically an ad
  • babyfounder 1 hour ago
    Coming from the same child founder who makes his employees work 24/7 while he jacks off to hentai all day
  • ycisrigged 2 hours ago
    This website and entire startup ecosystem is a low IQ circle jerk that shouldn’t be taken seriously
  • 7e 7 hours ago
    None of this is true. Models will soon scale to several million tokens of context. That, combined with the accumulated experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster, with more features. It's smarter. Like a jet fighter, the more complex it is, the more capable it is.