Why we don’t use AI

(yarnspinner.dev)

71 points | by parisidau 2 hours ago

16 comments

  • barishnamazov 1 hour ago
    I struggle with the "good guys vs bad guys" framing here.

    Is a small indie dev "dodgy" if they use AI to unblock a tricky C# problem so they can actually finish their game? Yarn Spinner seems to conflate "Enterprise Scale Replacement" (firing 500 support staff) with "assistive tooling" (a solo dev using GenAI for texture variants).

    By drawing such a hard line, they might be signaling virtue to their base, but they are also ignoring the nuance that AI -- like the spellcheckers and compilers before it -- can be a force multiplier for the very creatives they want to protect.

    Personally, I do agree that there are many problems with the companies behind major LLMs today, as well as with big tech C-levels who don't understand why AI can't replace engineers. But this post, however nicely it's written, doesn't frame the problem correctly in my mind.

    • GaryBluto 5 minutes ago
      > I struggle with the "good guys vs bad guys" framing here.

      It's because generative AI has become part of the "culture wars" and is therefore black and white to lots of people.

    • johnthedebs 24 minutes ago
      I think the only people they're calling "dodgy" are the ones offering these AI tools, and not the people using them.
  • rendang 1 hour ago
    One could probably think of dozens of reasonable arguments for avoiding LLM use, but this one is awful. If LLMs actually make it possible to get more work done with fewer people, aka "firing people", that would be wonderful for humankind. If you disagree and prefer getting less work done with more people, you are welcome to forgo tractors, dishwashers, the steam engine, and all the rest.
    • UqWBcuFx6NV4r 55 minutes ago
      Yeah. This has been an interesting cultural shift, especially with “the kids”. I’ve had at least a few people passionately tell me that using (non-generative, non-LLM) AI to assist with social network content moderation is unethical, because it takes away jobs from people. Mind you, these are jobs in which people are exposed to CSAM, gore, etc., a fact that does not disabuse people of this view. There are certainly some sensible arguments against using “AI” for content moderation. This is not one of them.

      It’s really intriguing how an increasingly popular view of what’s “ethical” amounts to: anything that doesn’t stand in the way of the ‘proletariat’ getting their bag, plus anything that protects content creators’ intellectual property rights, with no real interest in the greater good.

      Such a dramatic shift from the music piracy generation a mere decade or two ago.

      It’s especially intriguing as a non-American.

      Again, as you say, there are many sensible arguments against AI, but for some people those really take a backseat to “they took our jerbs!”

      • majormajor 28 minutes ago
        I can't speak to outside the US, but here companies have gotten much more worker-hostile in the last 30 years and generally the economy has not delivered a bunch of wonderful new jobs to replace the ones that the information age already eliminated (let alone the ones that people are trying to eliminate now). A lot of new job growth is in relatively lower-paying and lower-stability roles.

        Forty years ago I would've had a personal secretary for my engineering job, and most likely a private office. Now I get to manage more things myself in addition to being expected to be online 24x7 - so I'm not even convinced that eliminating those jobs improved things for the people who now get to self-serve instead of being more directly assisted.

      • andrepd 50 minutes ago
        "They took our jerbs" is a perfectly valid argument for people who face ruin without a jerb.

        Capitalism is neither prepared nor willing to retrain people, drastically shorten the workweek, or bring about a UBI sourced from the value of the commons. So indeed, if the promises of AI hold true, a catastrophe is incoming. Fortunately for us, the promises of AI CEOs are unlikely to be true.

    • barishnamazov 50 minutes ago
      I personally wish for the time when AI can replace everything I can do (at least in my current field). I'm not sure how exactly I'll feel about it then, but it'd be a technological advancement I'd enjoy seeing in my lifetime.

    One question, perhaps, is: even if AI can do everything I can do (i.e., has the skills for it), will it do everything I do? I'm sure there are many people in the world with a superset of my skills, yet I bet there are some things only I am doing, and I don't think a really smart AI will change that.

    • TeMPOraL 58 minutes ago
      Yup. The problem was never with the technology replacing work; it was always with the social aspect of deploying it, which ends up pulling the rug out from under people whose livelihoods depend on exchanging labor for money.

      The Luddites didn't destroy automatic looms because they hated technology; they did it because losing their jobs and seeing their whole occupation disappear ruined their lives and the lives of their families.

      The thing to fix isn't automation; it's keeping automation from destroying people's lives at scale.

    • andrepd 54 minutes ago
      "Would" being the operative word here. In a capitalist economy with wage labour, it is a catastrophe.
  • 9JollyOtter 49 minutes ago
    > This comment pops up a few times, often from programmers. Unfortunately, because of how messy the term AI now is, the same concerns still apply. Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.

    What other people and companies do because I happen to use something correctly (as an assistive technology) is not my responsibility. If someone misuses it or enforces its use in a dysfunctional work environment, that is their doing and not mine.

    If a workplace is this dysfunctional, there are likely many other issues already making people miserable. AI isn't the root cause; the workplace culture that existed before AI is.

  • pico303 4 minutes ago
    I’d love to work for a company like this, but when you said, “by the time we finished our doctorates,” I knew you were way out of my league.
  • goobert 33 minutes ago
    It's like saying we won't use compilers because they put all the people who would manually punch cards out of a job.
  • ta9000 18 minutes ago
    Here’s a paragraph summary of the story, written by Sonnet 4.5:

    The Yarn Spinner team explains they don’t use AI in their game development tool despite having academic and professional backgrounds in machine learning—they’ve written books on it and gave talks about ML in games. Their position shifted around 2020 when they observed AI companies pivoting from interesting technical applications toward generative tools explicitly designed to replace workers or extract more output without additional hiring. They argue that firing people has become AI’s primary value proposition, with any other benefits being incidental. Rather than adopt technology for its own sake (“tool-driven development”), they focus on whether features genuinely help developers make better games. While they acknowledge numerous other AI problems exist and may revisit ML techniques if the industry changes, they currently refuse to use, integrate, or normalize AI tools because doing so would financially and socially support companies whose business model centers on eliminating jobs during a period when unemployment can be life-threatening.

  • journal 30 minutes ago
    programming is just turning calories/energy into text. some of you are just not that efficient at it, some of you produce garbage when you do. it's only been three years, there is still low hanging fruit on the new branches.
  • sandinmyjoints 1 hour ago
    I wonder if the title of this post will someday be a certification?
  • localhoster 1 hour ago
    I wish. I just witnessed an engineer on our (small) team push a 4k-line change to prod in the middle of the night. His message was: "lets merge and check it after". AI can help a good team become better, but it will surely make bad teams worse.

    Sorry friends, I think I'm gonna quit and take up farming :$

    • cmcaleer 42 minutes ago
      I don’t really see how this is an AI issue. We use AI all the time for code generation, but if you put this on my desk with specific instructions to be light on review, and it’s not a joke, I’m probably checking to see if you’re still on probation, because that’s an attitude that’s incompatible with making good software.

      People with this kind of attitude existed long before AI and will continue to exist.

      • sheeh 1 minute ago
        Making good software isn’t what matters in most workplaces - making software that works (even if you have taped over the cracks) is.

        It’s always been this way in toxic workplaces - LLMs amplify this.

    • deepsun 1 hour ago
      Try complying with an infosec standard. Typically, one of the many compliance controls is "every change must be reviewed and approved by another person". So no one can push on their own.

      I know folks tend to frown on security compliance, but if you honestly implement and maintain most of the controls in there, not just to get a certificate, it really makes a lot of sense and improves security, clarity, and risk posture.
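
      For concreteness, here's a minimal sketch of that control on a GitHub-hosted repo, using the branch-protection REST endpoint; the org/repo names and token are placeholders, not anything from this thread:

        # Minimal sketch: require one approving PR review on main,
        # so no single person can push a change alone.
        import requests  # third-party HTTP client

        OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # placeholders
        URL = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"

        payload = {
            # The API expects all four keys; None (JSON null) disables a control.
            "required_status_checks": None,
            "enforce_admins": True,  # admins can't bypass the rule either
            "required_pull_request_reviews": {
                "required_approving_review_count": 1,  # another person must approve
                "dismiss_stale_reviews": True,  # new commits invalidate old approvals
            },
            "restrictions": None,
        }

        resp = requests.put(
            URL,
            json=payload,
            headers={
                "Authorization": "Bearer <token>",  # PAT with repo admin rights
                "Accept": "application/vnd.github+json",
            },
        )
        resp.raise_for_status()  # raises if GitHub rejected the request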

    • UqWBcuFx6NV4r 54 minutes ago
      If I could at all help it, I would simply not work somewhere with that sort of engineering culture. Huge red flag.
    • risyachka 1 hour ago
      One should not be able to push to prod on their own, especially in the middle of the night. Unless it's a critical fix.
      • rhplus 1 hour ago
        > Unless its a critical fix

        The bar for human approval and testing should be even higher for critical fixes.

        • madeofpalk 58 minutes ago
          Exactly. Wake someone up to review.
      • amelius 1 hour ago
        Who cares, AI has lowered the bar. If AI can produce rubbish 20+% of the time, so can we.
    • sheeh 48 minutes ago
      There’s a weird thing going on - I can see value in using LLMs to put something together so you can see it, rather than investing the time to do it properly up front.

      But to just copy, paste and move on… terrible.

  • Vaslo 5 minutes ago
    This article is silly. Your employees are using AI to get shit done whether you like it or not. They are just being sneaky about it.
  • ktallett 57 minutes ago
    I didn't either, but I have now realised that yes, it uses a lot of energy, and yes, it can be a total waste of energy - but if you are doing good with it, then it is worth the cost.

    I have decided I will only use AI in ways that actually benefit society - say, lower-energy apps for eink devices.

  • deadbabe 58 minutes ago
    “You’ll get left behind if you don’t learn AI”

    Left behind where? We all live in the same world, anyone can pick up AI at any moment, it’s not hard, an idiot can do it (and they do).

    If you’re not willing to risk being “left behind”, you won’t be able to spot the next rising trend quickly enough and jump on it, you’ll be too distracted by the current shiny thing.

    • nomel 49 minutes ago
      I read it as "compared to others, in the current context".

      If you take some percent longer to finish some code because you want that code to maintain some level of "purity", you'll finish slower than others. If this is a creative context, you'll spend more time on boilerplate than on interesting stuff. If this is a profit-driven context, you'll make less money, and have less money for staff. Etc.

      > If you’re not willing to risk being “left behind”...

      I think this is orthogonal. Some tools increase productivity. Using a tool doesn't blind a competent person... they just have another tool under their belt to use if they personally find it valuable.

  • djaouen 1 hour ago
    If the people developing and supporting chat- and c0rn-bots are anxious about the future, that is the correct emotion to be feeling.
  • dacox 56 minutes ago
    The reflexive and moralistic anti-AI takes are starting to get more annoying than the actual AI slop
    • nomel 42 minutes ago
      I suspect a subpopulation of software development is going to become a bit religious for a short while, split into the "morally pure anti-AI" camp and those who are busy using software as a means to an end to solve some real-world problem. I think the tools will eventually be embraced, out of necessity, as they become more practically useful (being somewhere around "somewhat useful" right now).

      As a result, I think we'll eventually see a mean shift from rewarding those that are "technically competent" more towards those that are "practically creative" (I assume the high end technical competence will always be safe).

    • terminatornet 26 minutes ago
      No they aren't.
    • sodapopcan 51 minutes ago
      Getting annoying, yes, but not more annoying than the actual slop - that's ridiculous.
  • terminatornet 1 hour ago
    Didn't know Yarn Spinner was used to make so many cool games. Norco is a favorite from the last few years.

    The anti-AI stance just makes 'em even cooler.
