Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.
It's the same as saying that using genAI will make you a bad artist. In the sense that putting hands to medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.
However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.
Even before LLMs, successful and capable were orthogonal traits for most programmers. We had people who made millions churning out a CRUD website over a few months, and others who could build game engines but were stuck in underpaid contracting roles.
The problem I have with a lot of these "oh I've heard it all before"-type posts is that some of what you heard is true. Yes, IDEs did make for some bad programmers. Yes, scripting languages have made for some bad programmers. Yes, some other shortcuts have made for bad programmers.
They haven't destroyed everyone, but there definitely are sets of people who used the crutches and never got past them. And not just in a "well, they never needed anything more" way - they're worse programmers than they should or could have been.
> "C makes you a bad programmer. Real men code in assembler."
So, who is it that supposedly said that? Not K&R (obviously). Not Niklaus Wirth. Not Stroustrup. Not even Dijkstra (Algol 60), and he loved writing acerbic remarks about how much the average developer sucked. I don't recall Ken Thompson, Fred Brooks (OS/360), Cutler, or any other OS architect having said anything like that either. Who in that era, with any kind of credibility, said that?
The "Real Men Don't Use Pascal" essay was a humorous shitpost that didn't reflect any kind of prevailing opinion.
C doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
Intellisense/autocomplete doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
I get what you're saying but let's be real: 99.99999% of modern software development is done with constant internet connectivity and is effectively impossible without it. Whether that's pulling external packages or just looking up the name of an API in the standard library. Yeah, you could grep docs, or have a shelf full of "The C++ Programming Language Reference" books like we did in the 90s, but c'mon.
I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.
High-level languages like JS or Python have a lot of bad design / suboptimal code... as does plenty of Java code in many places.
Some bad Java code (it only takes a SQL SELECT in a loop) can easily perform a thousand times worse than a clean Python implementation of the same thing.
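As an illustration, here is a minimal sketch of that SELECT-in-a-loop anti-pattern versus a single batched query, using Python and sqlite3 (the tables, columns, and helper names are made up for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    """)

    def totals_n_plus_one(user_ids):
        # Anti-pattern: one database round trip per user; cost grows with len(user_ids).
        return {
            uid: conn.execute(
                "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?", (uid,)
            ).fetchone()[0]
            for uid in user_ids
        }

    def totals_batched(user_ids):
        # One query; let the database do the grouping.
        placeholders = ",".join("?" * len(user_ids))
        rows = conn.execute(
            "SELECT user_id, SUM(total) FROM orders "
            "WHERE user_id IN (%s) GROUP BY user_id" % placeholders,
            list(user_ids),
        ).fetchall()
        return dict(rows)

Whether the loop lives in Java or Python, the per-query round trip is what dominates, which is why a slow language with one query can beat a fast language with a thousand.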
As said above, C was once a high-level programming language, and still is in some places.
I do not code in Python / Go / JS that much these days, but what made me a not-so-bad developer is my understanding of computing mechanisms (why and how to use memory instead of disk, how to arrange code so the CPU can use its cache efficiently...).
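A minimal sketch of the cache point, assuming NumPy is available: summing a C-ordered array row by row walks contiguous memory, while summing it column by column strides across it, and the gap is easy to measure:

    import time
    import numpy as np

    a = np.random.rand(4000, 4000)  # C-ordered: each row is contiguous in memory

    def timed(fn):
        start = time.perf_counter()
        fn()
        return time.perf_counter() - start

    # Row-wise traversal: sequential, cache-friendly reads.
    row_s = timed(lambda: [a[i, :].sum() for i in range(a.shape[0])])

    # Column-wise traversal: every element is 4000 floats away from the previous one.
    col_s = timed(lambda: [a[:, j].sum() for j in range(a.shape[1])])

    print(f"rows: {row_s:.3f}s  columns: {col_s:.3f}s")

Same data, same amount of arithmetic; only the access pattern differs.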
As said in many posts, code quality, even for vibe-coded stuff, depends more on what was prompted and on how much effort goes into keeping the PR diff human-readable, if you want maintainable and efficient software at the end of the day.
Yet senior devs often spend more time reviewing code than actually writing it. Vibe coding ultimately feels the same to me at the moment.
I still love to write some code by hand, but I'm starting to feel less and less productive with this approach, while at the same time feeling I haven't really lost my skills to do so.
I think I really feel, and effectively am, more efficient at delivering things at an appropriate quality level for my customers now that I have agentic coding skills under my belt.
I'll always remember a lab we had in university where we hand-wrote machine code to do blinkenlights, and used an array of toggle switches to enter each byte into memory by hand.
All of this is true, but all of the examples that came before were deterministic, so once you understood the abstraction, you still understood the whole thing.
The examples are semantic shifts. Assembler → C wasn't just a syntax swap (functions are semantic units, types are meaning, compilation is optimization reasoning, etc.). "Rename this symbol safely across a project" is a semantic transformation. And of course, autocomplete is semantic. AI represents a difference in degree, but not kind. Like the examples cited by the parent, AI further moves us from lower-level semantics to higher-level semantics.
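To make the "rename is semantic" point concrete, here is a minimal sketch in Python (the snippet and names are invented): a textual find-and-replace corrupts unrelated identifiers, while an AST pass only touches real references to the name (a real rename tool also reasons about scope, which this skips):

    import ast

    source = "total = 0\nsubtotal = 1\ndef add(total, x):\n    return total + x\n"

    # Textual rename: also mangles "subtotal" into "subamount".
    naive = source.replace("total", "amount")

    class Renamer(ast.NodeTransformer):
        """Rename only genuine references to a name, not substrings of other names."""
        def __init__(self, old, new):
            self.old, self.new = old, new
        def visit_Name(self, node):
            if node.id == self.old:
                node.id = self.new
            return node
        def visit_arg(self, node):
            if node.arg == self.old:
                node.arg = self.new
            return node

    tree = Renamer("total", "amount").visit(ast.parse(source))
    print(ast.unparse(tree))  # requires Python 3.9+; "subtotal" survives untouched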
I'm just enjoying the last few years of this career. Let me have fun!
Joking aside, we have to understand that this is the way software is being created and this tool is going to be the tool most trivial software (which most of us make) will be created with.
I feel like the industry is telling me: Adopt or become irrelevant.
Meh, I am also old enough to have experienced what the GP post mentioned, and I also remember that when Visual Basic 6 was released, a similar sentiment appeared:
Suddenly, every 13-year-old cousin could implement apps for their uncle's dental office, laboratory, parts-shop billing, tourism office management, etc. Some people also believed that software developers would become irrelevant within a couple of years.
For me as an old programmer, I am having A BLAST using these tools. I have used enough tools (TurboBasic, Rational Rose (model-based development, ha!), NetBeans, Eclipse, VB6, Borland C++ Builder) to be able to identify their limits and work with them.
> It's probably fine--unless you care about self-improvement or taking pride in your work.
I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time
And you/we will be replaced by an AI that will solve the business problem (the day they get good enough to actually do that, which may or may not happen, but... who knows?)
After all, you can go and be a goat herder right now, and yet you are presumably not doing this.
Nothing is stopping you being a goat herder - the place that is paying you for solving business problems will continue just fine if you leave, after all. Your presence there is not required.
Herding goats doesn't solve the interesting technical problem I'm trying to solve.
Point is: if that problem is solvable without me, that's the win condition for everyone. Then I go herd goats (and have this nifty tool that helps me spec out an optimal goat fence while I'm at it).
> Point is: if that problem is solvable without me, that's the win condition for everyone.
The problem is solvable without you. I don't even need to know what the problem actually is, because the odds of you being one of the handful of people in the world who are so critical that the world notices their passing are so low that I have a better chance of winning a lottery jackpot than of you being some critical piece of some solution.
I unironically have a 5 year plan to get out of tech and into something more “real”.
I want to work on something that helps actual humans, not these “business problems”.
There are probably two ways to see the future of LLMs / AI: they are either going to have the capability to replace all white-collar work, or they are not.
If you think they are going to replace us, then you can either surrender or fight back, and personally I read all these anti-AI posts as fighting back, to help people realize we might be digging our own grave.
If, OTOH, you see AI as a force-multiplier tool that's never going to completely replace a human developer then yes, probably the smartest thing to do is to learn how to master this new tool, but at the same time keep in mind the side effects it might bring, like atrophy.
My personal goal has been to dig that grave ever since I could hold a shovel.
We've always been in the business of replacing humans in the 3-D's space (dirty, dangerous, dull... And to be clear. data manipulation for its own sake is dull). If we make AI that replaces 90% of what I do at my desk every day... We did it. We realized the dream from the old Tom Swift novels where he comes up with an idea for an invention and hands the idea off to his computer to extrapolate it, or the ship's computer in Star Trek acting like a perfect engineering and analytical assistant to take fuzzy asks from humans and turn them into useful output.
The problem is that this time, we're creating a competing intelligence that in theory could replace all work, AND, that competing intelligence is ultimately owned/controlled by a few dozen very rich guys.
I love to code - like, fun code; solving a relatively small, concrete problem with code feels rewarding to me. Writing business code, on the other hand? Not really.
I do, however, love solving business problems. This is what I am hired for. I speak to VPs/managers to improve their day to day. I come up with feasible solutions and translate them into code.
If AI could actually code, like really code (not "here is some code, it may or may not work, go read the documentation to figure out why it doesn't"), I would just go and focus on creating affordable software solutions for small/medium businesses.
This is kind of like gardening/farming: before the industrial revolution, most crops required a huge workforce; these days, with all the equipment and advancements, a single farmer can do a lot on their own with a small staff. People still "hand" garden for pleasure, but without using the new tech they wouldn't be able to compete on a big scale.
I know many fear AI, but it is progress and it will never stop. I do think many devs are intelligent and will be able to evolve in the workplace.
Are you a consultant? Because otherwise there’s a thing called a “career ladder”, and you are very much being paid to self-improve. And if you don’t, that’s going to feature prominently in your next promotion review.
I agree; I was always annoyed in projects where these kids thought they were still in school and spun up incredible levels of over-abstraction that led to some really horrible security problems.
> I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time
So, this "solve business problems" is some temporary[1] gig for you?[2]
------------------------------
[1] I'm reminded of the anti-union people who are merely temporarily embarrassed millionaires.
[2] Skills atrophy. Maybe you won't need the atrophied skill in the future, but how sure are you that this is the case? The eventual outcome?
For me AI is really powerful autocomplete. Like you said, I wrote the abstraction years ago. Writing the abstraction again now is not required.
A time and place may come where the AI are so powerful I’m not needed. That time is not right now.
I have used Rider for years at this point and it automatically handles most imports. It’s not AI, but it's one of the things I just don't need to even think about.
Maybe everyone is using these agentic tools super heavily, and it’s way different for them, but I just use AI to do all the boring stuff, then I read it and tweak it. It just accelerates my process by 2-5x since I don’t have to implement boring and tedious things like reading or writing a csv file, so I can spend all my coding time on the actually important parts, the novel parts.
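For what it's worth, the CSV case really is a few lines of standard-library boilerplate; a minimal sketch, with made-up file and column names:

    import csv

    def read_rows(path):
        # One dict per row, keyed by the header line.
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def write_rows(path, rows, fieldnames):
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

    write_rows("scores.csv", [{"name": "ada", "score": 3}, {"name": "bob", "score": 5}],
               ["name", "score"])
    print(read_rows("scores.csv"))

Boring and tedious to type, and exactly the kind of thing worth delegating and then skimming.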
I don’t commit 1,000 lines when I don’t know how they work.
If people are just not coding anymore and trusting AI to do everything, I agree, they’re going to hit a wall hard once the complexity of their non-architected Frankenstein project hits a certain level. And they’ll be paying for a ton of tokens to spin the AI’s wheels trying to fix it.
I've also heard similar arguments about "Using stackoverflow instead of RTFM makes you a bad programmer."
These things are all tradeoffs. A junior engineer who goes to the manual every time is something I encourage, but if they go exclusively to the manual every time, they are going to be slower and produce code that is more disjoint and harder to maintain than their peers who have taken advantage of other people's insights into the things the manuals don't say.
It has long been understood that programming is more about reading code than writing code. I don't see any issue with having LLMs write code. The real issue arises when you stop bothering to read all the code that the LLM writes.
> The real issue arises when you stop bothering to read all the code that the LLM writes.
Fluency in reading will disappear if you aren't writing enough. And for the pipeline from junior to senior, if the juniors don't write as much as we wrote when young, they are never going to develop the fluency to read.
Coding agents are going to become better and used everywhere, why train for the artisanal coding style of 2010 when you are closer to 2030? What you need to know is how to break complex projects in small parts, improve testing, organize work and the typical agent problems and capabilities. In the future no employer is going to have the patience for you to code manually.
As a 25+ year veteran programmer who's been mostly unimpressed with the quality of AI-generated code -
I've still learned from it. Just read each line it generates carefully. Read the API references of unfamiliar functions or language features it uses. You'll learn things.
You'll also see a lot of stupidity, overcomplication, outdated or incorrect API calls, etc.
I have always found it way easier to write code than to understand code written by someone else. I use Claude for research and architectural discussions, but only allow it to present code snippets, not to change any files. I treat those the same way I treat code from Stack Overflow and manually adapt them to the present coding guidelines and my aesthetics. Not a recipe for 10x, but it gets road blocks out of the way quickly.
I guess yours might have been intended to be a facetious comment, but a quick google for designer weaving shows up a UK company as the first hit for me that sells their work for approximately $1500 per square foot.
If the demand for this work is high, maybe the individual workers aren't earning $100k per year, but the owner of the company who presumably was/is a weaver might well be earning that much.
What the loom has done is made the repeatable mass production of items cheap and convenient. What used to be a very expensive purchase is now available to more people at a significantly cheaper price, so probably the profits of the companies making them are about the same or higher, just on a higher volume.
It hasn't entirely removed the market for high end quality weaving, although it probably has reduced the number of people buying high-end bespoke items if they can buy a "good enough" item much cheaper.
But having said that, I don't think weavers were on the inflation-adjusted equivalent of 100k before the loom either. They may have been skilled artisans, but that doesn't mean the majority were paid multiples above an average wage.
The current price bubble in programming salaries is based on those high salaries being worth paying for a company that can leverage that person to produce software that earns the company significantly more than the salary, coupled with the historic demand for good programmers exceeding supply.
I'm sure that even if the bulk of programming jobs disappear because people can produce "good enough" software for their purposes using AI, there will always be a requirement for highly skilled specialists to do what AI can't, or from companies that want a higher confidence that the code is correct/maintainable than AI can provide.
Is it just me, or does anyone else use AI not just to write code, but to learn? Since I've been using Claude I've learned a lot about Rust by having it build things for me, then working with that code. I've never been a front end guy, but I had it write a Chrome plugin for me, then I used that code to learn how it works. It's not a black box to me, but I don't need to look up some CSS stuff I've never used. I can prompt Claude to write it, look at it, and go "Huh, that's how it works". Better than researching it myself: I can see an example of exactly how it's done, then I learn from that.
I'm doing a lot of new things I never would have done before. Yes, I could have googled APIs and read tutorials, but I learn best by doing, and AI helps me learn a lot faster.
I second this. It's like having a second brain with domain expertise in pretty much anything I could want to ask questions of. And while factual assertions may still be problematic (hallucinations), I can very quickly run code and see if it does what I want or not. I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.
A competent developer should be able to read the code, spot any defects in “decency”, and fix them (or indeed, explain as you would to a junior dev how you want it fixed and let AI fix it). And of course they should have tests that should be able to categorically prove that the code does what it is supposed to do.
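As a sketch of what that gate can look like, assuming pytest and a hypothetical AI-generated slugify helper in a module called mylib:

    # test_slugify.py -- run with: pytest test_slugify.py
    import pytest
    from mylib import slugify  # hypothetical helper under review

    @pytest.mark.parametrize("raw, expected", [
        ("Hello World", "hello-world"),
        ("  spaces   everywhere ", "spaces-everywhere"),
        ("MiXeD--Case__input", "mixed-case-input"),
    ])
    def test_slugify(raw, expected):
        # The table above is the spec; the generated code either meets it or it doesn't.
        assert slugify(raw) == expected

The tests pin down the behaviour you asked for, regardless of who or what wrote the implementation.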
Me too! I got into ESP32s and sensors thanks to AI. I wouldn't have had time or energy after stressful work all day but thanks to them I can get firmware written for my projects. Along the way I'm also learning how the firmware has to be written and finding issues with what the AI wrote and correcting them.
If people aren't learning from AI it's their fault. Yeah AI makes stuff up and hallucinates and can be wrong but how is that different than a distracted senior dev? AI is available to me 24/7 to answer my questions in minutes or seconds where half the time when I message people I have to wait 30-60min for a response.
People just need to approach things intelligently and actually learn along the way. You can easily get to the point where you're thinking more clearly about a problem than the AI writing your code pretty quickly if you just pay attention and do the research you need to understand what's happening. They're not as factual as a textbook but they don't need to be to give you the space to ask the right questions and they'll frequently provide sources (though I'd heavily recommend checking them. Sometimes the sources are a joke)
I do agree this is where AI shines. If you need a quick rehash of something that's been done a zillion times before or a quick integration between two known good components, AI's great.
But the skills you describe are still skills, reading and researching and doing your own fact finding are still important to practice and be good at. Those things only get more important in situations off the beaten path, where AI doesn't always give you trustworthy answers or do trustworthy work.
I'm still going to nurture some of these skills. If I'm trying to learn, I'll stick to using AI only when I'm truly stuck or no longer having fun.
I am using AI to learn EVERYTHING. Spanish, code, everything. Honestly, the largest acceleration I am getting is in research towards design docs (which then get used for implementation).
I'm curious how the Spanish is going! Have you used any interesting methods, or are you just kind of talking to it and asking it questions about Spanish?
Absolutely. It's a tireless rubik's cube. One that you can rotate endlessly to digest new material. It doesn't sigh heavily or not have the mental bandwidth to answer. Yes, it should not be trusted with high precision information but the world can get by quite well on vibes.
I have definitely had Claude make recommendations that gave me structural insight into the code that I didn't have on my own, and I integrated that insight.
People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.
Compared to what though? I have ended up with needlessly convoluted solutions when learning something the old-fashioned way before. Then over time, as I learn more, I improve my approach.
Not everyone has access to an expert that will guide them to the most efficient way to do something.
With either form of learning though, critical thinking is required.
While I agree with much of the sentiment, I believe we will reach a point where the amount of code, and likely its complexity (due to having been written by AI), will require AI to work with and maintain it.
> I love to write code. I very much do not love to read, review, and generate feedback on other people's code. I understand it's a good skill to develop and can be instrumental in helping to shape less experienced colleagues into more productive collaborators, but I still hate it.
Same.
Writing code is easy.
Reading code is very very hard.
I actually find that a disturbing assumption. I've learned a lot from reading other people's code, seeing how they were thinking and spotting errors, both the good and the bad. I believe that in order to actually write good code, it's important to understand the context of the task, which basically requires a lot of code reading; that reading is also sometimes quite enjoyable when you have competent authors. Reading code is an essential part of the game. If you cannot do it, you'll just create huge balls of mud, with or without AI usage.
Though using AI will speedrun the mud, so yeah, there is an argument for not using it.
What an odd thing for them to put in the article. This is an example of AI generated code making someone a better programmer (by improving their ability to read code and give feedback). So it contradicts the title.
They could rename it "Using AI Generated Code Makes Programming Less Fun, for Me", that would be more honest.
The problem for programmers is (as a group) they tend to dislike the parts of their job that are hardest to replace with AI and love the stuff that is easiest for machines to copy. It turns out meetings, customer support, documentation, tests, and QA are core parts of being a good engineer.
>It's probably fine--unless you care about self-improvement or taking pride in your work.
I did, for a very long time. Then I realized that it's just work, and I'd like to spend my life minimizing the amount of that I do and maximizing the things I do want to do. Code gen has completely changed the equation for workaday folks. Maybe that will make us obsolete, and fall out of practice. But I tend to think the best software engineers are the laziest ones who don't try to be clever. Maybe not the best programmers per se, but I know whose codebase I'd rather inherit.
I get your points here; I've had a similar discussion with my VP of Engineering. His argument is that I'm not hired to write `if` statements, I'm hired to solve problems. If AI can solve it faster, that's what he cares about at the end of the day.
However, I agree there's a different category here under the idea of 'craft'. I don't have a good way to express this. It's not that I'm writing these 'if' statements in a particular way; it's how the whole system is structured, that I understand every single line, and that the code is an expression of my clarity about the system.
I believe there's a split between these two, and both are focused on different problems. Again, I don't want to label, but if I *had to* I would say one side is business focused. Here's the thing though - your end customers don't give a fuck if it's built with AI or crafted by hand.
The other side is the craftsmanship, and I don't know how to express this to make sense.
I'm looking for a good way to express this - feeling? Reality? Practice?
IDK, but I do understand your side of it; however, I don't think many companies will give a shit.
If they can go to market in 2 weeks vs 2 months, you know what they'll choose.
I think, for those of us who have been in this industry for 20 years, AI isn't going to magically make me lose everything I learned.
However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.
A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.
Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Rob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.
So this author loves the easy part (writing code), hates the hard part (reading and reviewing), and lacks so much self awareness that he is going to lecture people on skill atrophy?
If you want to be an artist, be an artist, that's fine; just don't confuse artistry with engineering.
I write art code for myself, I engineer code professionally.
The author wraps with a false dichotomy that uses emotionally loaded language at the end: "You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It". I mean, what? Why can't I think it's quickly becoming a new era _and_ not trust corporations? Why does the author take that idea off the table? Is this logic or rhetoric? Who is this author trying to convince?
SWE life has always had smatterings of weird gatekeeping, self-identities wrapped up in external tooling or paradigms, fragile egos, general misanthropy, post-hoc rationalization, etc., but... man, watching the progression of the crash-outs these last few years has been wild.
In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.
In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.
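For the day-job half of that, the machinery is not much code. A minimal sketch of a migration runner, assuming sqlite and a hypothetical migrations/ directory of files named like 001_create_users.sql:

    import sqlite3
    from pathlib import Path

    def migrate(db_path, migrations_dir="migrations"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
        current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0

        for path in sorted(Path(migrations_dir).glob("*.sql")):
            version = int(path.name.split("_", 1)[0])  # leading number orders the migrations
            if version <= current:
                continue  # already applied
            conn.executescript(path.read_text())
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            conn.commit()

The hobby projects get the elaborate, fragile alternatives instead.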
Oh no, AI-generated code will make me a bad programmer? Thank God I’ve been hand-crafting my 500 line regex monstrosities for 20 years—clearly the gold standard. Next you’ll tell me copy pasting Stack Overflow turns me into a cargo cultist. Wake up: bad programmers existed since punch cards; AI just speeds up the Darwinian cull. Use it to boilerplate the boring bits, then actually grok what it spits out. Or keep Luddite-posting while the rest of us ship.
It's honestly a phenomenal time to be a developer that doesn't use AI tooling. It's easier now than ever to differentiate yourself from increasingly knowledge-less devs who can only recite buzzwords but can't actually create, maintain, and improve remotely complex systems.
Yes, taking the bus to work will make me a worse runner than jogging there. Sometimes, I just want to get to a place.
Secondly, I'm not convinced the best way to learn to be a good programmer is just to do a whole project from 0 to 100. Intentional practice is a thing.
Having someone else write the code is about as far from intentional practice as can be.
I do think the “becoming dependent on your replacement” point is somewhat weak. Once AI is as good as the best human at programming (which I think could still be many years away), the conversation is moot.
Yep, this is the only analogy that makes sense. And if, like in the taxi situation, you are the owner of the taxi license, then you win because you keep making money but now you don't have to drive. But if OTOH you are just driving for a salary, bad news, you need to find another job now. Maybe if you are a very good driver, good looking and with good manners, some rich guy can hire you as his personal driver but otherwise...
Agreed mostly, especially in terms of efficiency. I have, however, been seeing more people recently with a built in dependency on their IDEs to solve their problems.
Using a compiler will also make you much worse at writing assembly code. Doesn’t bother me at all. Haven’t written any assembly since the 20th century.
Exactly 2 years ago I remember people calling AI stochastic parrots with no actual intellectual capability, and people on HN weren’t remotely worried that AI would take over their jobs.
I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self-assured these people were; this was like 70 percent of HN at the time).
First AI is clearly not a stochastic parrot and second it hasn’t taken our jobs yet but we can all see that potential up ahead.
Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.
I think it’s clear. Everyone’s skills will atrophy. This is the future. I fully expect that in the coming decades the generation after zoomers will have never coded without the assistance of AI, and they will have an even harder time finding jobs in software.
Also: because the change happened so fast you see tons of pockets of people who aren’t caught up yet. People who don’t realize that the above is the overarching reality. You’ll know you’re one of these people if AI hasn’t basically taken over your workplace and you and your coworkers aren’t going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.
About a year ago, another commenter said this in response to the question "Ask HN: SWEs how do you future-proof your career in light of LLMs?":
> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."
Even a year ago that seemed like a ridiculous thing to say. LLM's have made one thing very clear to me: A massive percentage of developers derive their sense of self worth from how smart coding makes them feel.
Yes. If one thing is universal among people, it is that they can’t fully accept reality at face value if that reality violates their identity.
What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding process will take longer than it takes AI to outrun all of us.
What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager to AI, a prompt wizard or some leadership role… but even this is a target for replacement by AI.
AI doesn't need or care about "high quality" code in the same ways we define it. It needs to understand the system so that it can evolve it to meet evolving requirements. It's not bound by tech debt in the same way humans are.
That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.
Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.
Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.
No it very clearly is. Even still today, it is obvious that it has zero understanding of anything and it's just parroting training data arranged in different ways.
No. Many of the answers it produces can only be attributed to intelligence. Not all, but many can be. We can prove that these answers are not parroted.
As for “understanding” we can only infer this from input and output. We can’t actually know if it “understands” because we don’t actually know how these things work and in addition to that, we don’t have a formal definition of what “understanding” is.
Yeah -- stochastic just implies a probabilistic method. It's just that when you include enough parameters your probabilities start to match the actual space of acceptable results really really well. In other words, we started to throw memory at the problem and the results got better. But it doesn't change the fundamentals of the approach.
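A toy illustration of what "stochastic" buys you here, with a made-up two-token model; real LLMs do the same kind of sampling over a learned distribution, just with vastly more parameters and context:

    import random

    # Hypothetical next-token probabilities conditioned on the previous token.
    model = {
        "the":  {"cat": 0.5, "dog": 0.3, "code": 0.2},
        "cat":  {"sat": 0.6, "ran": 0.4},
        "dog":  {"sat": 0.2, "ran": 0.8},
        "code": {"ran": 1.0},
        "sat":  {".": 1.0},
        "ran":  {".": 1.0},
    }

    def generate(start, max_tokens=10):
        tokens = [start]
        while tokens[-1] in model and len(tokens) < max_tokens:
            dist = model[tokens[-1]]
            # Stochastic step: sample the next token in proportion to its probability.
            tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
        return " ".join(tokens)

    print(generate("the"))  # e.g. "the dog ran ." -- different runs, different outputs

With more parameters, the distribution hugs the space of acceptable continuations more tightly, but each step is still a weighted draw.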
In my experience, it's not that the term itself is incorrect, but more that people use it as a bludgeon to end conversations about the technology, rather than doing what should happen: inviting nuance about how it can be utilized and its pitfalls.
Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.
From what we do know about LLMs, it is not trivial pattern matching; by the very definition of machine learning, the output is newly formulated information, not copied from the training data.
Well, it's taken the blame for job cuts that are really due to the broad growth slowdown since the COVID fiscal and monetary stimulus was stopped and replaced with monetary tightening, and then most recently the economy was hit with the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise (and many of the companies involved, and even more of their big investors, are in incestuous investment relationships with AI companies, so "blaming" AI for the cuts is also a form of self-serving promotion).
But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.
This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.
From my POV you are out of touch with the ground truth reality of AI and that’s ok because it has all changed so fast. Everything in the universe is math based and in theory even your brain can be fully modelled by mathematics… it’s a pointless quote.
The ground truth reality is that nobody and I mean nobody understands how LLMs work. This isn’t me making shit up, if you know transformers, if you know the industry and if you even listened to the people behind the technology who make these things… they all say we don’t know how AI works.
But we do know some things. We know it’s not a stochastic parrot because, in addition to the failures, we’ve seen plenty of successes on extremely complicated problems that are too non-trivial for anything other than an actual intelligence to solve.
In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.
There’s tons more where that came from. Like I said lots of people are out of touch because the landscape is changing so fast.
What is baffling to me is that not only are you unaware of what I’m saying, but you also think what I’m saying is batshit insane, despite the fact that the people at the center of it all who are creating these things SAY the same thing. Maybe it’s just terminology… understanding how to build an LLM is not the same as understanding why it works or how it works.
Either way I can literally provide tons and tons more of evidence to the contrary if you’re still not getting it: We do not understand how LLMs work.
Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I’m saying, along with explaining transformers to you.
That's a CEO of an AI company saying his product is really superintelligent and dangerous and nobody knows how it works and if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.
Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.
Geoffrey Hinton, a father of AI, quit his job at Google to warn people about AI. What’s his motivation? Altruism.
Man it’s not even about people saying things. If you knew how transformers and LLMs work you would know even for the most basic model we do not understand how they work.
I mean at a minimum I understand how they work, even if you don't. So the claim that "nobody and I mean nobody understands how LLMs work" is verifiably false.
Did you not look at the evidence I posted? It’s not about you or I it’s about humanity. I have two on the ground people who are central to AI saying humanity doesn’t understand AI.
If you say you understand LLMs then my claim is then that you are lying. Nobody understands these things and people core to building these things are in absolute agreement with me.
I build LLMs for a living, btw. So it’s not just other experts saying these things.. I know what I’m talking about on a fundamental level.
I’ve seen it solve a complex domain specific problem and build a basis of code in 10 minutes what took a year for a human to do. And it did it better.
I’ve also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick and choose one of the pieces of evidence that is convenient to my world view? What should I do? Actually, why don’t you tell me what you ended up doing?
Why does it even matter if it is a stochastic parrot? And who's to say that humans aren't also?
Imagine the empire state building was just completed, and you had a man yelling at the construction workers: "PFFT that's just a bunch of steel and bricks"
Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...
Sam Altman is the modern day PT Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.
> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers. Are they right? Are you right? We don't really know, do we...
The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.
It's not that the money can predict what is correct, it's that it can tell us where people's values lie.
Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.
Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.
However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.
C makes you a bad programmer. Real men code in assembler.
IDEs make you a bad programmer. Real men code in a text editor.
Intellisense / autocomplete makes you a bad programmer. Real men RTFM.
Round and round we go...
It’s like a carpenter talking about IKEA saying “I remember when I got an electric sander, it’s the same thing”.
I am mid career now.
AI is different.
Now I'm just telling AI what to do.
I have buy-in from a former co-worker with whom I remained in touch over the years, so there will be at least two of us working the fields.
They aren't going to willingly spread the wealth.
I'm trying to think of any examples of someone who said that "a generation ago" at all, let alone anyone who wasn't regarded as a fringe crackpot.
Anybody know any weavers making > 100k a year?
Sometimes.
I've refactored the sloppiest slop with AI in days with zero regressions. If I did it manually it could have taken months.
I know plenty of 50-something developers out of work because they stuck to their old ways and the tech world left them behind.
However I agree there's a different category here under the idea of 'craft'. I don't have a good way to express this. It's not that I'm writing these 'if' statements in a particular way, it's how the whole system is structured and I understand every single line and it's an expression of my clarity of the system in code.
I believe there a split between these two and both are focusing on different problems. Again I don't want to label, but if I *had to* I would say one side is business focused. Here's the thing though - your end customers don't give a fuck if it's built with AI or crafted by hand.
The other side is the craftsmanship, and I don't know how to express this to make sense.
I'm looking for a good way to express this - feeling? Reality? Practice?
IDK, but I do understand your side of it; However, I don't think many companies will give a shit.
If they can go to market in 2 weeks vs 2 month's you know what they'll choose.
However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.
A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.
Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Bob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.
People care if their software works. They don’t care how beautiful the code is.
AI can churn out 25 drafts faster than 99% of devs can get their boilerplate setup for the first time.
The new skill is fitting all that output into deployable code, which, if you're experienced in shipping software, is not hard to get the model to do.
If you want to be an artist, be an artist; that's fine. Just don't confuse artistry with engineering.
I write art code for myself, I engineer code professionally.
The author wraps with a false dichotomy that uses emotionally loaded language at the end: "You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It". I mean, what? Why can't I think it's quickly becoming a new era _and_ not trust corporations? Why does the author take that idea off the table? Is this logic or rhetoric? Who is this author trying to convince?
SWE life has always had smatterings of weird gatekeeping, self-identities wrapped up in external tooling or paradigms, fragile egos, general misanthropy, post-hoc rationalization, etc., but... man, watching the progression of the crash-outs these last few years has been wild.
In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.
In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.
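For anyone who hasn't had to deal with them, here's a rough sketch of the kind of thing I mean, in Python with sqlite3; the "users" table, the "nickname" column, and the version numbers are all invented for illustration:

    import sqlite3

    # Forward-only migration runner (illustrative sketch only): record a schema
    # version and apply each step the database hasn't seen yet.
    def migrate(conn: sqlite3.Connection) -> None:
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        current = conn.execute(
            "SELECT COALESCE(MAX(version), 0) FROM schema_version"
        ).fetchone()[0]
        if current < 1:
            conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
            conn.execute("INSERT INTO schema_version VALUES (1)")
        if current < 2:
            conn.execute("ALTER TABLE users ADD COLUMN nickname TEXT")
            conn.execute("INSERT INTO schema_version VALUES (2)")
        conn.commit()

    migrate(sqlite3.connect(":memory:"))

Even at this toy scale you can see the ceremony I'm happy to skip at home.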
Yes, taking the bus to work will make me a worse runner than jogging there. Sometimes, I just want to get to a place.
Secondly, I'm not convinced the best way to learn to be a good programmer is just to do a whole project from 0 to 100. Intentional practice is a thing.
I do think the “becoming dependent on your replacement” point is somewhat weak. Once AI is as good as the best human at programming (which I think could still be many years away), the conversation is moot.
I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self-assured these people were; this was like 70 percent of HN at the time).
First, AI is clearly not a stochastic parrot, and second, it hasn't taken our jobs yet, but we can all see that potential up ahead.
Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.
I think it’s clear. Everyone’s skills will atrophy. This is the future. I fully expect in the coming decades that the generation after zoomers have never coded ever without the assistance of AI and they will have an even harder time finding jobs in software.
Also: because the change happened so fast, you see tons of pockets of people who aren't caught up yet. People who don't realize that the above is the overarching reality. You'll know you're one of these people if AI hasn't basically taken over your workplace and you and your coworkers aren't going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.
> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."
Even a year ago that seemed like a ridiculous thing to say. LLMs have made one thing very clear to me: a massive percentage of developers derive their sense of self-worth from how smart coding makes them feel.
What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding process will take longer than the rate at which AI is outrunning all of us.
What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager to AI, a prompt wizard or some leadership role… but even this is a target for replacement by AI.
That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.
Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.
No it very clearly is. Even still today, it is obvious that it has zero understanding of anything and it's just parroting training data arranged in different ways.
As for “understanding” we can only infer this from input and output. We can’t actually know if it “understands” because we don’t actually know how these things work and in addition to that, we don’t have a formal definition of what “understanding” is.
From what we do know about LLMs, we know it is not trivial pattern matching: by the very definition of machine learning, the output is original information, not copied from the training data.
Fascinating.
It also has already taken junior jobs. The market is hard for them.
Correction: it has been a convenient excuse for large tech companies to cut junior jobs after ridiculous hiring sprees during COVID/ZIRP.
Well, it's taken the blame for job cutting that is really due to the broad growth slowdown since COVID-era fiscal and monetary stimulus was stopped and replaced with monetary tightening, and most recently due to the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise, and many of the companies involved (and even more of their big investors) are in incestuous investment relationships with AI companies, so "blaming" AI for the cuts is also a form of self-serving promotion.
This quote is so telling. I’m going to be straight with you and this is my opinion so you’re free to disagree.
From my POV you are out of touch with the ground-truth reality of AI, and that's OK, because it has all changed so fast. Everything in the universe is math-based, and in theory even your brain can be fully modelled by mathematics… it's a pointless quote.
The ground-truth reality is that nobody, and I mean nobody, understands how LLMs work. This isn't me making shit up: if you know transformers, if you know the industry, and if you've even listened to the people behind the technology who make these things… they all say we don't know how AI works.
But we do know some things. We know it's not a stochastic parrot because, in addition to the failures, we've seen plenty of successes on extremely complicated problems that are too nontrivial for anything other than an actual intelligence to solve.
In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.
This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.
You can boil LLMs down to "next token predictor". But that's like boiling down the human brain to "synapses firing".
The point that OP is making, I think, is that we don't understand how "next token prediction" leads to the emergent complexity we see.
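To make the "next token predictor" framing concrete, here's a toy sketch in plain Python: a hand-rolled bigram counter over a made-up corpus. It shares only the outer loop with a real LLM (pick a next token, append it, repeat); everything interesting about an actual model happens inside the step this toy reduces to a lookup table.

    from collections import Counter, defaultdict

    # Toy "next token predictor": count which word follows which in a tiny
    # corpus, then repeatedly emit the most common successor of the last word.
    # Purely illustrative; nothing here resembles a transformer.
    corpus = "the cat sat on the mat and the cat slept".split()
    successors = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev][nxt] += 1

    token = "the"
    output = [token]
    for _ in range(5):
        if not successors[token]:
            break
        token = successors[token].most_common(1)[0][0]
        output.append(token)

    print(" ".join(output))  # prints something like: the cat sat on the cat

The disagreement in this thread is entirely about how much more is going on inside that next-token step in a real model.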
It seems clear you don't want to have a good faith discussion.
It's you claiming that we understand how LLM's work, while the researchers who built them say that we ultimately don't.
There’s tons more where that came from. Like I said lots of people are out of touch because the landscape is changing so fast.
What is baffling to me is that not only are you unaware of what I'm saying, but you also think what I'm saying is batshit insane despite the fact that the people at the center of it all, who are creating these things, SAY the same thing. Maybe it's just terminology… understanding how to build an LLM is not the same as understanding why it works or how it works.
Either way I can literally provide tons and tons more of evidence to the contrary if you’re still not getting it: We do not understand how LLMs work.
Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I'm saying, along with explaining transformers to you.
Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.
Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v
Geoffrey Hinton, the "godfather of AI", quit his job at Google to warn people about AI. What's his motivation? Altruism.
Man, it's not even about people saying things. If you knew how transformers and LLMs are built, you would know that even for the most basic model we do not understand how they work.
If you say you understand LLMs then my claim is then that you are lying. Nobody understands these things and people core to building these things are in absolute agreement with me.
I build LLMs for a living, btw. So it’s not just other experts saying these things.. I know what I’m talking about on a fundamental level.
It will just spew over-confident sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.
It's a fancy regurgitation machine that will go completely off the rails when it steps outside of its training area. That's it.
I've also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick and choose whichever piece of evidence is convenient to my worldview? What should I do? Actually, why don't you tell me what you ended up doing?
Imagine the empire state building was just completed, and you had a man yelling at the construction workers: "PFFT that's just a bunch of steel and bricks"
The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.
Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.
Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.
However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.
There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.