A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
If you don't have a rigid, external state machine governing the workflow, you have to brute-force reliability. That codebase bloat is likely 90% defensive programming: frustration-driven regexes, context sanitizers, tool-retry loops, and state rollbacks, just to stop the agent from drifting or silently breaking things.
The visual map is great, but from an architectural perspective, we're still herding cats with massive code volume instead of actually governing the agents at the system level.
We propped the entire economy up on it. Just look at the S&P top 10. Actually, even the top 50 holdings.
If it doesn't deliver on the promise we have bigger problems than "oh no the code is insecure". We went from "I think this will work" to "this has to work because if it doesn't we have one of those 'you owe the bank a billion dollars' situations"
I find it really strange that there is so much negative commentary on the _code_, but so little commentary on the core architecture.
My takeaway from looking at the tool list is that they got the fundamental architecture right - try to create a very simple and general set of tools on the client-side (e.g. read file, output rich text, etc) so that the server can innovate rapidly without revving the client (and also so that if, say, the source code leaks, none of the secret sauce does).
Overall, when I see this I think they are focused on the right issues, and I think their tool list looks pretty simple/elegant/general. I picture the server team constantly thinking - we have these client-side tools/APIs, how can we use them optimally? How can we get more out of them. That is where the secret sauce lives.
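To make that concrete, here's a rough sketch of what such a thin, general client-side tool layer could look like (tool names and signatures are my own guesses for illustration, not lifted from the leak):

```python
# Hypothetical sketch of a thin client-side tool layer: the client only knows
# how to execute a few generic primitives; everything clever stays server-side.
from pathlib import Path

def read_file(path: str, max_bytes: int = 64_000) -> str:
    """Return (truncated) file contents for the server/model to reason about."""
    return Path(path).read_text(errors="replace")[:max_bytes]

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def output_rich_text(markdown: str) -> str:
    """Render whatever the server sends; the client adds no logic of its own."""
    print(markdown)
    return "ok"

# The dispatch table is the entire client-side "API surface".
TOOLS = {
    "read_file": read_file,
    "write_file": write_file,
    "output_rich_text": output_rich_text,
}

def run_tool(name: str, args: dict) -> str:
    return TOOLS[name](**args)
```

Keep that surface small and stable, and the server side can keep changing how it uses it without ever shipping a new client.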
The tools were mostly already known, no? (I wish they had a "present" tool which allowed the model to copy-paste from files/context/etc., showing the user some content without forcing it through the model)
It’s not surprising. There has been quite a bit of industrial research into how to get mere apes to behave deterministically when building huge software control systems, and they are an unruly bunch, I assure you.
It's hard to tell how much it says about difficulty of harnessing vs how much it says about difficulty of maintaining a clean and not bloated codebase when coding with AI.
Why not both? AI writes bloated spaghetti by default. The control plane needs to be human-written and rigid -> at least until the state machine is solid enough to dogfood itself. Then you can safely let the AI enhance the harness from within the sandbox.
Kinda depends how much of it is vibe coded. It could easily be 5x larger than it needs to be just because the LLM felt like it if they've not been careful.
Claude folks proudly claim to have Claude effectively writing itself. The CEO claims it will read an issue and automatically write a fix, tests, commit and submit a PR for it.
Bingo. And them 'being careful' is exactly what bloats it to 500k lines. It's a ton of on-the-fly prompt engineering, context sanitizers, and probabilistic guardrails just to keep the vibes in check.
Herding cats is treating the LLM's context window as your state machine. You're constantly prompt-engineering it to remember the rules, hoping it doesn't hallucinate or silently drop constraints over a long session.
System-level governance means the LLM is completely stripped of orchestration rights. It becomes a stateless, untrusted function. The state lives in a rigid, external database (like SQLite). The database dictates the workflow, hands the LLM a highly constrained task, and runs external validation on the output before the state is ever allowed to advance. The LLM cannot unilaterally decide a task is done.
I got so frustrated with the former while working on a complex project that I paused it to build a CLI to enforce the latter. Planning to drop a Show HN for it later today, actually.
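To make the distinction concrete, here's a minimal sketch of the "governance above the model" idea (the table layout, task, and validation step are placeholder stand-ins, not my actual CLI):

```python
# Minimal sketch: the DB owns the workflow, the LLM is a stateless untrusted
# function, and state only advances after an external check it can't influence.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, prompt TEXT, status TEXT)")
db.execute("INSERT INTO tasks (prompt, status) VALUES ('write a slug() helper', 'pending')")
db.commit()

def call_llm(prompt: str) -> str:
    # Stand-in for any model call; its output is never trusted directly.
    return "def slug(s): return s.lower().replace(' ', '-')"

def validate(output: str) -> bool:
    # External validation: here just "does it parse?"; in reality tests, linters, etc.
    try:
        compile(output, "<llm-output>", "exec")
        return True
    except SyntaxError:
        return False

pending = db.execute("SELECT id, prompt FROM tasks WHERE status = 'pending'").fetchall()
for task_id, prompt in pending:
    output = call_llm(prompt)                               # one constrained task
    new_status = "done" if validate(output) else "rejected"
    db.execute("UPDATE tasks SET status = ? WHERE id = ?", (new_status, task_id))
    db.commit()                                             # only the harness advances state
    print(task_id, new_status)
```

The point is that the model never gets to mark its own homework: the harness decides what runs next and whether "done" is actually done.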
>A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare. Right now, they're great for prompting simple sites/platforms but they break at large enterprise repos.
Is that the case? I'm pretty sure Claude Code is one of the most massively successful pieces of software made in the last decade. I don't know how that proves your point. Will this codebase become unmanageable eventually? Maybe, but literally every agent harness out there is just copying their lead at this point.
Claude code is a massively successful generator, I use it all the time, but it's not a governance layer.
The fact that the industry is copying a 500k-line harness is the problem. We're automating security vulnerabilities at scale because people are trying to put the guardrails inside the probabilistic code instead of strictly above it.
Standardizing on half a million lines of defensive spaghetti is a huge liability.
> A 500k line codebase for an agent CLI proves one thing: making a probabilistic LLM behave deterministically is a massive state-management nightmare.
Considering what the entire system ends up being capable of, 500k lines is about 0.001% of what I would have expected something like that to require 10 years ago.
You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
> You can combine that with all the training and inference code, and at the end of the day, a system that literally writes code ends up being smaller than the LibreOffice codebase.
You really need to compare it to the model weights though. That’s the “code”.
... what are you even talking about? "The system that literally writes code" has a few hundred trillion parameters. How is this smaller than LibreOffice?
If writing concise architectural analysis without the fluff makes me an AI, I'll take the compliment. But no - just a tired Architect who has spent way too many hours staring at broken agent state loops haha.
Author here. I built this in a few hours after the Claude Code leak.
I've been working on my own coding agent setup for a while. I mostly use pi [0] because it's minimal and easy to extend. When the leak happened, I wanted to study how Anthropic structured things: the tool system, how the agent loop flows, and so on. A 500K line codebase is a lot to navigate, so I mapped it visually to give myself a quick reference I could come back to while adapting ideas into my own harness and workflow.
I'm actively updating the site based on feedback from this thread. If anything looks off, or you find something I missed, lmk.
[0] https://pi.dev/
I’m using pi and cc locally in a docker container connected to a local llama.cpp so the whole agentic loop is 100% offline.
I had used pi and cc to analyze the unpacked cc to compare their design, architecture and implementation.
I guess your site was also coded with pi and it is very impressive. Wonderful if you can do a visualization for pi vs cc as well. My local models might not be powerful enough.
This is nice, I really like the style/tone/cadence.
The only suggestion/nit I have is that you could add some kind of asterisk or hover helper to the part when you talk about 'Anthropic's message format', as it did make me want to come here and point out how it's ackchually OpenAI's format and is very common.
Only because I figure if this was my first time learning about all this stuff I think I'd appreciate a deep dive into the format or the v1 api as one of the optional next steps.
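For anyone seeing it for the first time, the OpenAI-style shape that most providers converged on looks roughly like this (abbreviated, from memory):

```python
# Roughly the OpenAI-style chat format: a flat list of role-tagged messages,
# with tool calls and tool results threaded through the conversation.
conversation = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "What's in README.md?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "read_file", "arguments": '{"path": "README.md"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "# My project\n..."},
    {"role": "assistant", "content": "It's a short project readme."},
]
```

Anthropic's native API nests tool use inside content blocks instead, but the overall pattern - role-tagged messages with tool calls and results woven through - is essentially the same.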
I know it seems counter-intuitive but are there any agent harnesses that aren’t written with AI? All these half-a-million-LoC codebases seem insane to me when I run my business on a full-stack web application that’s like 50k lines of code and my MVP was like 10k. These are just TUIs that call a model endpoint with some shell-out commands. These things have only been around for a number of months; half a million LoC is crazy to me.
> These are just TUIs that call a model endpoint with some shell-out commands.
Claude Code CLI is actually horrible: it's a full headless browser rendering that's then converted in real-time to text to show in the terminal. And that fact leaks to the user: when the model outputs ASCII, the converter will happily convert it to Unicode (no later than yesterday there was a TFA complaining about Unicode characters breaking Unix pipes / parsers expecting ASCII commands).
It's ultra annoying during debugging sessions (that is, when you're not in a full agentic loop where it YOLOs a solution): you can't easily cut/paste from the CLI because the output you get is not what the model actually output.
Mega, mega, mega annoying.
What should be something simple becomes a Rube Goldberg machine that, of course, fucks up something fundamental: converting the model's characters to something else is just pathetically bad.
Anyone from Anthropic reading? Get your shit together: if you keep this "headless browser rendering converted to text", at least do not fucking modify the characters.
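For people who haven't hit it, this is the class of bug (illustrative only, not Claude Code's actual converter):

```python
# "Prettifying" ASCII into typographic Unicode breaks ASCII-only consumers.
def prettify(text: str) -> str:
    return (text.replace("--", "\u2014")     # "--"  -> em dash
                .replace('"', "\u201c"))     # '"'   -> curly quote

cmd = prettify('grep --color "TODO" src/main.py')
print(cmd)  # grep —color “TODO“ src/main.py  (looks almost right, isn't)

try:
    cmd.encode("ascii")
except UnicodeEncodeError as err:
    print("an ASCII-expecting pipe or parser chokes here:", err)
```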
Isn't it a simple REPL with some tools and integrations, written in a very high level language? How the hell is it so big? Is it because it's vibecoded and LLMs strive for bloat, or is it meaningful complexity?
> Claude Code's 500k LOC doesn't seem out of the ordinary.
Aren't all the other products also vibe-coded? "All vibe-coded products look like this" doesn't really seem to answer the question "Why is it so damn large?"
It's a REPL that calls out to a blackbox/endpoint for data, and does basic parsing and matching of state with specific actions.
I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
1. If the bulk of those lines implement specific and simple actions, why is it so large compared to other software that implements single actions (coreutils, etc)
2. If the actions constitute only a small part of the codebase, wtf is the rest of it doing?
>> I feel the bulk of those lines should be actions that are performed. Either this is correct or this is not:
> You're complaining about vibe coding while also complaining about how you "feel" about the code. Do you see the irony in that?
Where did I complain about how I feel about the actual code? I have feelings, negative ones, about the size of the code given the simple functionality it has, but I have no feelings on the code because I did not look at the code.
Bad by whose definition? They work really well in my experience. They aren't perfect but the amount of hand holding has gone down dramatically and you can fix any glaring problems with a code review at the end. I work on a multimillion line code base which does not use any popular frameworks and it does a great job. I may be benefiting from the fact that the codebase is open source and all models have obviously been trained on it.
I haven't seen the scrolling glitch in months, where previously it was happening multiple times a day. Also haven't seen anyone complain about it in quite some time. Pretty sure they have resolved that.
Most of their issues have been solved a long time ago, with 1000x less code. It is depressing at this point. I really had no clue IT was in the shitters this much. I knew it was theatrical but I had no idea that it was by this much.
yeah, it's honestly full of vibe fixes to vibe hacks with no overarching design. some great little empirical observations though! i think the only clever bit relative to my own designs is just tracking time since last cache hit to check TTL. idk why i hadn't thought of that, but makes perfect sense
I don't know if you're mindlessly repeating the HN trope that JS/TypeScript/Electron is bad and that all bloat can easily be prevented, but if you're truly interested in answers to your questions: RTFA.
Other notable agents' LOC: Codex (Rust) ~519K, Gemini (TS) ~445K, OpenCode (TS) ~254K, Pi (TS) ~113K. Pi's modular structure makes it simple to see where most of the code is: core, unified API, coding agent CLI, and TUI have ~3K, ~35K, ~60K, and ~15K LOC respectively. Interestingly, the just-uploaded claw-code Rust version is currently at only 28K.
edit: Claude is actually (TS) 395K. So Gemini is more bloated. Codex is arguable since it is written in a lower-level language.
Just check the leaked code yourself. Two biggest areas seem to be the `utils` module, which is a kitchen sink that covers a lot of functionality from sandboxing, git support, sessions, etc, and `components` module, which contains the react ui. You could certainly build a cli agent with much smaller codebase, with leaner ui code without react, but probably not with this truckload of functionality.
Software doesn’t end at the 20k loc proof of concept though.
What every developer learns during their “psh i could build that” weekendware attempt is that there is infinite polish to be had, and that their 20k loc PoC was <1% of the work.
That said, doesn't TFA show you what they use their loc for?
I guess because you see 3D stuff in a 3D game instead of text, people assume that it must be the most complex thing in software? Or because you solve hard math problems in 3D, those functions are gonna be the most loc?
It's a completely different domain, e.g. very different integration surface area and abstractions.
Claude Code's source is dumped online so there's probably a more concrete analysis to be had than "that sounds like too many loc".
It is a different domain but that wasn’t your argument. Your argument was that someone was comparing it to a POC when in fact they were comparing to a finished product.
Also a AAA game (with the engine) with physics, networking, and rendering code is up there in terms of the most complex pieces of software.
They just claimed that you can build a 3D game in 500k loc, thus Claude Code shouldn't use so many loc. They/you didn't render the argument for that.
For example, without looking at the code, the superstition also works in the opposite direction: Claude Code is an interface to using AI to do any computer task while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc. That's equally unsatisfying.
You'd have to be more concrete than "sounds like a lot".
> Claude Code is an interface to using AI to do any computer task
Claude Code is quite literally a wrapper around a few APIs. At one point it needed 68GB of RAM to run and requires 11ms to "lay a scene graph" to display a few hundred characters on screen. All links here: https://news.ycombinator.com/item?id=47598488
> while a 3D game just lets you shoot some bad guys, so surely the 3D game must be done in fewer loc.
Take the loadInitialMessage function: It's encumbered with real world incremental requirements. You can see exactly the bolted-on conditionals where they added features like --teleport, --fork-session, etc.
The runHeadlessStreaming function is a more extreme version of that where a bunch of incremental, lateral subsystems are wired together, not an example of superfluous loc.
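If it helps to picture what "bolted-on conditionals" means without quoting the leaked file, the shape is roughly this (flag names taken from the comment above; the store and function body are invented):

```python
# Invented illustration of the accretion pattern, NOT the leaked source:
# each shipped flag bolts one more early-return branch onto the entry point.
class SessionStore:
    """Stub; the real thing would read sessions from disk or a server."""
    def load_remote(self, key): return {"teleported_to": key}
    def fork(self, key): return {"forked_from": key}

def load_initial_message(args: dict, store: SessionStore):
    if args.get("teleport"):        # --teleport (flag name from the comment above)
        return store.load_remote(args["teleport"])
    if args.get("fork_session"):    # --fork-session (likewise)
        return store.fork(args["fork_session"])
    return None                     # no flags: start a fresh session

print(load_initial_message({"fork_session": "abc123"}, SessionStore()))
```

Ugly in aggregate, but each branch was presumably a real feature somebody shipped.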
Comments like these remind me of the football spectators that shout "Even I could have scored that one" when they see a failed attempt.
Sure. You could have. But you're not the one playing football in the Champions League.
There were many roads that could have gotten you to the Champions League. But now you're in no position to judge the people who got there in the end and how they did it.
I don't think this is warranted given that the comment you're criticising is simply expressing an opinion explicitly solicited by the comment it's responding to.
It’s more like “Player A is better than Player B” coming from a professional player in a smaller league who is certainly qualified to have that opinion.
> Sure. You could have. But you're not the one playing football in the Champions League.
The only reason people are using Claude Code is because it's the only way to use their (heavily subsidized) subscription plans. People who are okay with using and paying for their APIs often opt for other, better tools.
Also, analogies don't work here. We know for a fact that Claude Code is a bloated mess that these "champions league-level engineers" can't fix. They literally talk about it themselves: https://news.ycombinator.com/item?id=47598488 (they had to bring in actual Champions League engineers from bun to fix some of their mess).
Honest question: Why does it matter? They got the product shipped and got millions of paying customers and totally revolutionized their business and our industry.
Engineers using LOC as a measure of quality is the inverse of managers using LOC as a measure of productivity.
More code means more entropy, more room for bugs, harder to find issues, more time to fix, more attack surface, more memory used, more duplication, more inconsistencies... I bet you at some point we'll get someone reporting how AI performance deteriorates as the code base grows, and some blog post about how their team improved the success of their AI by trimming the code base down to less than 100k LOC or something like that.
The principles of good software don't suddenly vanish just because now it's a machine writing the code instead of a human, they still have to deal with the issues humans have for more than half a century. The history of programming is new developers coming up with a new paradigm, then rediscovering all the issues that the previous generation had figured out before them.
The history of programming is also each generation writing far less performant code than the one before it. The history of programming is each generation bemoaning the abstractions, waste and lack of performance of the code of the next generation.
It turns out that there is a tradeoff in code between velocity and quality that smart businesses consider relative to hardware cost/quality. The businesses that are outcompeting others are rarely those who have the highest quality code, but rather those that are shipping quickly at a quality level that is satisfactory for current hardware.
> far less performant code than the one before it.
That worked because of rapid advancements in CPU performance. We’ve left that era.
It’s about more than performance. Code is and always has been a liability. Even with agents, you start seeing massive slowdowns with code base size.
It’s why I can nearly one shot a simple game for my kid in 20 minutes with Claude, but using it at work on our massive legacy codebase is only marginally faster than doing it by hand.
You asked why the size of the code matters, I gave you the answer. If you want to ramble about the non technical aspects of software development talk to someone else, I'm not interested.
I asked a rhetorical question to get the reader to think about a topic. I was not looking for a rote recitation of a well-known textbook answer. Maybe you should not be on the comment section of an engineering website if you find discussion so offensive.
Among the hundreds of thousands of lines of code that Anthropic produced was one that leaked the source code. It is likely to be a config file, not part of the Claude Code software itself, but it is still something to track.
The more lines of code you have the more likely there is for one of them to be wrong and go unnoticed. It results in bugs, vulnerabilities,... and leaks.
The reason it’s not useful as a measure of productivity is because it’s a measure of complexity (not directly, but it’s correlated). But it tells you nothing about whether that complexity was necessary for the functionality it provides.
But given that we know the functionality of Claude Code, we can guess how much complexity should be required. We could also be wrong.
>Why does it matter?
If there’s massively more code than there needs to be that does matter to the end user because it’s harder to maintain and has more surface area for bugs and security problems. Even with agents.
It will be exactly that. But that is a 'them' problem. I can look at it and go 'that looks like a bad idea', but they are the ones who have to live with it.
At some point someone will probably take their LLM code and point it back at the LLM and say 'hey, let's refactor this so it uses less code and is easier to read but does the same thing' and let it churn.
One project I worked on, I saw one engineer delete 20k lines of code in one day. He replaced it with a few lines of stored procedure. That 20k lines of code was in production for years. No one wanted to do anything with it, but it was a crucial part of the way the thing worked. It just takes someone going 'hey, this isn't right' and sitting down to fix it.
When software requires 68 GB of RAM to run, or when they spend a week not being able to find a bug that causes multiple people to immediately run out of tokens, it's not a "them" problem.
For the animations specifically, it's using Motion (fka Framer Motion) Javascript library. If you describe some animations from the site to an LLM and ask it to use Framer motion, you get very similar results. The creator likely just prompted for a while until they were happy with the outcome.
Yup, strange to see people still don’t understand that LLMs massively speed up coding greenfield pet projects. Anytime you see a new web app, it's better to assume AI use rather than not these days.
I'm not familiar enough with this animation library to answer that. Someone could be very used to this type of website and just copy paste things they've done before.
Well, I assume this is all just generated with Claude Code, right? Whether there is much back and forth with the LLM is a valid question and nothing wrong with generating websites (I do it too for some side projects). Claude loves generating websites with a particular style of serif font. We also saw this with https://tboteproject.com/timeline/ and I've just generally seen it from various designs that coworkers have spit out over months using Claude defaults.
I guess I just find it weird because all the signals are messed up so whenever I see these sorts of layouts, I feel like I'm looking at the average where I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation because it presents as high quality but this may be hiding someone who is not actually aware of what they're presenting in any depth" as the author may have just shoved in a prompt and let it spin.
There's actually a similarly designed website (font weights, font styles etc) here in New Zealand (https://nzoilwatch.com/) where at a glance, it might seem like some overloaded professional-backed thing but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.
I would have way less of an issue if people just put their names by things and disclosed their LLM usage (which again, is fine) rather than giving the potentially false impression to unequipped people that the information presented is actually as accurate and trustworthy as the polish would suggest.
We do need "hard effortful careful work" to keep planes flying, electrical grids running and medical devices safe. It's very relevant but very undervalued by our current economy.
That was the leaked code and now it's just some random dude's harness, btw. He swapped it out. Did a sloppy find-and-replace for "claude" and made it claw.
I was talking to one of the people who works on a big agentic coding tool. If I recall correctly, he was talking about how they use the tool to build the tool. I was complaining that all of the websites/frontends I make look pretty weak, and I'm amazed they get much slicker-looking UIs with the same tool. He showed me that one way they do it is by having an extensive UI library of components/graphics/whatever, and also mentioned that the folks building their UIs know how to prompt/use the tool because it's backed by years of UI development knowledge & superior resources. I realized I didn't have any of that, and it actually made me feel better.
Last week, while I was struggling to go from vague prompt to an OMG-it's-so-nice-looking web app, I remembered that example above and decided to create my own component library, which I did in a couple days: https://www.substrateui.dev/. I was actually super happy that I was able to accomplish that, and then I realized I wanted to better understand the content that I had vibe coded into existence. So now I'm recreating that design system step by step w/ Claude Code, filling in gaps in my knowledge & learning a bit about colors, typography, CSS, blah blah blah. It's actually a lot of fun because I'm able to explore all of the concepts and learn enough to build a front end that doesn't suck & is good enough for my use case, without getting stuck for days trying to center a stupid div by hand or playing whack-a-mole, fixing something and breaking something else, while trying to clean up AI slop.
I was referencing https://www.neobrutalism.dev/ and https://www.retroui.dev/ and slopped my way through it. A lot of it was just asking Claude Code "is this a proper design system?", then I kept doing that until it didn't have anything useful to add. Now I'm using that as the template for understanding such things in more detail.
The people who don’t know how to use an LLM to make them more productive, or are scared it’s going to take their job, are louder than the people who are making good use of them to make them more productive.
That just seems to be human nature unfortunately - the complainers are always louder.
As someone currently "making good use of" generative AI while simultaneously being painfully aware of its shortcomings, I think the overall discourse is a bit more nuanced. Bucketing folks into simple "for" and "against" GenAI camps does nothing to cover the vast spectrum in between, making your take ultimately built on a false dichotomy. Further implying those camps fall on the lines of those "in the know" of AI vs "those in denial/scared of" is patronizing at best, and I've grown tired of this oversimplification parroted out every time the topic of LLM systems come up.
Those within well informed, technical circles will fall somewhere in between the for/against labels, myself included.
The GenAI hype cycle is finally starting to collapse as the general population starts to realize that these systems aren't the panacea for "everything" after all. They provide enormous utility in some domains like coding, but even then there are massive tradeoffs, footguns and the usual horse blinder ills that come with every hype cycle. I just hope we stop having to "learn the hard way" with respect to undisciplined use of current-gen LLM systems writ large, and cooler heads prevail sooner rather than later.
What? We must have different internets, I agree in general, but the "AI is the second coming" crowd is louder than standing next to a jet on takeoff. I'm in the "AI is making me more productive but a worse developer" crowd, don't know what I count as.
You got shuttled into one bubble and the previous commenter into another advertising / news bubble. It's incredible how different the media experience is for people in different media bubbles.
I mean, tools change, but I'd be happy to hear if any tool can create that by just saying "create Claude Code Unpack with nice graphics" or some other single prompt. It likely was an iterative process and it would be lovely if more people started sharing that, because the process itself is also very interesting.
I've created a Chinese-characters learning website and it took me typing 1/3 of LotR to get there [1]. I would have typed like 1% of that writing code directly. It is a different process, but it still needs some direction.
I think it is accurate. Where are the autonomous AI who beat the creator to the punch? When we write "Hello, World!" in C and compile it with `gcc`, do we give credit to every contributor to GNU? AI is a tool that thus far only humans are capable of using with the unique inspiration. Will this change in the future? Certainly. But is it the case now? I think my questions imply some reasonable objections.
I like the Claude desktop interface. The color scheme, presentation, fonts, etc. Is there a CSS I can find for the desktop version - I assume it's using some kind of web rendering engine and CSS is part of it.
I guess they really do eat their own dogfood and vibe code their way through it without care for technical debt? In a way, it’s a good challenge, but it’s fairly painful to watch the current state of the project (which is about a year old now, so it should be in prime shape).
> is about a year old now, so it should be in prime shape
A 1yo project may be in good shape if written by just one dev, maybe a few. But if you have many devs, I can guarantee it will be messy and buggy. If anything, at 1yo it is probably still full of bugs because not enough time has elapsed for people to run into them.
It's only 510k LoC; at ~100 lines of code a day, this code base would take 23 engineers a year to write (rough arithmetic at the end of this comment). That's at 220 working days a year, somewhere civilized.
And I'm sure we all know that when working on a greenfield project you can produce a lot more LoC per day than maintaining a legacy one.
Given that vibe code is significantly more verbose, you're probably talking about ~15 engineers worth of code?
I know that's all silly numbers, but this is just attempting to give people some context here, this isn't a massive code base. I've not read a lot of it, so maybe it's better than the verbose code I see Claude put out sometimes.
The previous poster was making out that in a year the code base would be a mess if people had done it.
This is a two-pizza team sized project, so it's not a project that the code quality would inevitably spiral out of control due to communication problems.
A single senior architect COULD have kept the code quality under control.
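For anyone who wants to sanity-check those back-of-the-envelope numbers:

```python
# Sanity check of the figures above.
loc = 510_000
loc_per_day = 100
working_days = 220                                 # "somewhere civilized"

print(loc / (loc_per_day * working_days))          # ~23.2 engineer-years

# If AI-assisted code is assumed ~50% more verbose, the equivalent effort shrinks:
print(loc / (loc_per_day * 1.5 * working_days))    # ~15.5 -> the "~15 engineers" figure
```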
Put yourself in their shoes; either the quality of Claude's coding continues to improve or else their business is probably doomed if it stagnates, so for them it makes sense to punt technical debt to the future when more capable versions of their models will be able to better fix it.
This is why I personally don't take technical debt arguments about how LLM maintained code bases deteriorate with size/age seriously; it presumes that at some point I'll give up with the LLM and be left with a mess to clean up by hand, but that's not going to happen, future maintenance is to be left to LLMs and if that isn't possible for some reason then the project is as good as dead anyway. When you start a project with a LLM the plan should be to see it through with LLMs, planning to have unaided humans take over maintenance at some point is a mistake.
Doesn't this contradict the popular wisdom that "what's good for a human engineer is good for an LLM"? e.g. documentation, separation of concerns, organized files, DRY.
I find LLMs very useful and capable, but in my experience they definitely perform worse when things are unorganized. Maintenance isn't just aesthetics, it's a direct input to correctness.
Maybe a little. I don't hold fast to that popular wisdom, e.g. I think comments are not always a net positive for LLMs. With respect to technical debt, how much debt is too much debt before it gums up the works and arrests forward progress on the software? It probably depends on the individual programmer. LLMs do seem to have a higher tolerance for technical debt than myself personally at least.
I am more worried that we are moving toward creating black boxes and this might turn software "development" into a field as confused as philosophy and dialectics.
Which makes for an interesting thought / discussion; code is written to be read by humans first, executed by computers second. What would code look like if it was written to be read by LLMs? The way they work now (or, how they're trained) is on human language and code, but there might be a style that's better for LLMs. Whatever metric of "better" you may use.
Just a thought experiment, I very much doubt I'm the first one to think of it. It's probably in the same line of "why doesn't an LLM just write assembly directly"
LLMs read and write human-code because humans have been reading and writing human-code. The sample size of assembly problems is, in my estimate, too small for LLMs to efficiently read and write it for common use cases.
I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.
If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.
But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?
But StarCraft training was not done by mimicking human strategies - it was pure RL with a reward function shaped around winning, which allowed it to develop non-human and eventually super-human strategies (such as worker oversaturation).
The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).
> It's probably in the same line of "why doesn't an LLM just write assembly directly"
My suspicion is that the "language" part of LLMs means they tend to prefer languages which are closer to human languages than assembly and benefit from much of the same abstractions and tooling (hence the recent acquisition of bun and astral).
The problem with that is that assembly isn't portable, and x86 isn't as dominant as it once was, so then you've got arm and x86(_64). But you could target the LLVM machine if you wanted.
Yes but my point was that they seem to explicitly not care about code quality and/or the insane amount of bloat, and seem to just want the LLM to be able to deal with it.
I've heard somewhere that they have roughly 100% code churn every few months, so yes, they unfortunately don't care about code quality. It's a shame, because it's still the best coding agent, in my experience.
> they unfortunately don't care about code quality.
> It's a shame, because it's still the best coding agent, in my experience.
If it is the best, and if it delivers the value users are asking for, then why would they have an incentive to make further $$$ investments to make it of a "higher" quality if the value this difference could make is not substantial or hurts the ROI?
On many projects I found this "higher quality" not only failed to deliver more substantial value, but actually hurt the project's ability to deliver the value that matters.
Maybe we are, after all, entering the era of SWE where all this bike-shedding is gone, and the only engineers who will be able to survive in it will be the ones capable of delivering the actual value (IME very few per project).
Is this why they ran into a bug with people hitting usage limits even on very short sessions and had to cease all communications for over a day after a week of gaslighting users because they couldn't find the root cause in the "quality doesn't matter" code base?
Or is that why they had to buy bun, with actual engineers, to work on Claude Code to reduce memory peaks from 68 GB (yes, 68 gigabytes) to a "measly" 1.7? Because code quality doesn't matter?
Or that a year later they still cannot figure out how to render anything in the terminal without flickering?
The only reason people use Claude Code is because it's the only way to use Anthropic's heavily subsidized subscription. You get banned if you use it through other, better, tools.
"Windows is the world's most popular desktop consumer OS. Microsoft are doing everything right, and should never ever change. Who are we to criticise them"
Yes, but as I said, it’s in a way the ultimate form of dogfooding: ideally they’ll be able to get the LLM smart enough to keep the codebase working well long-term.
Now whether that’s actually possible is a second topic.
Just finished looking at Ink here... the frontend world has no shame. Love the gloating about 40x less RAM as if that amount of memory for a text REPL even approaches defensible. "CC built CC" is not the flex people seem to suggest it is.
Appreciate the effort, but this is very basic and nothing you need the source code to understand. I was expecting a deep dive into what specific decisions they made, not just how a loop of tool calls works.
I found it a useful overview. My primary question about the client source was - is there any secret sauce in it? Based on this site, the answer is no, the client is quite simple/dumb, and all the secret sauce resides on the server/in the model.
I particularly valued the tool list. People in these comments are complaining about how bad the code is, but I found the client-side tools that the model actually uses to be pretty clean/general.
My takeaway was more that at a very basic level they know what they are doing - keep the client general, so that you can innovate on the server side without revving the client as much.
I don't know why people obsess and spend so much time on this codebase. It isn't (and never was) alien technology. It's just mediocre TypeScript generated by an LLM.
Thanks to Claude Code, we got such a beautifully polished and dazzling website that gives a complete introduction to itself the very moment the leak happened :)
There's this weird thing about AI generated content where it has the perfect presentation but conveys very little.
For example the whole animation on this website, what does it say beyond that you make a request to backend and get a response that may have some tool call?
We've moved from "move fast and break things" to "hallucinate fast and patch later." It's the inevitable side effect of using AI to curate AI-written codebases.
That's fair. The site isn't meant to be a deep technical dive; it's more of a visual, high-level guide to what I've curated while exploring the codebase with AI assistance. A 500k LoC codebase is just too much to sift through in a short amount of time.
I agree with you and I'm generally an AI "defender" when people superficially dismiss AI capabilities, but this is a more subtle point.
If you prompt with little raw material and little actual specification of what you want to see in the end, eg you just say make a detailed breakdown dashboard-like site that analyzes this codebase, the result will have this uncanny character.
I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea about how the world works, and then things turned out to be less flashy, less movie-like, less clear-cut, less-impressive-to-a-teenage-boy than I had thought.
If you know the concept of "stupid man's idea of a smart man", I'd say AI made stuff (with little iteration) gives this outward appearance of a smart man from the Reddit-midwit-cinematic-universe. It's like how guns in movies sound more like guns than real guns. It's hyperreality.
Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.
It looks more like your naive, currently imagined idea of what you think you need vs what you'd actually need. It's like prompting your ideal girlfriend into AI avatar existence. I'm sure she will fit your ideal thought and imagination much better, but your actual life would need the actual thing.
This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model towards adopting a certain archetypal fiction character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying that it doesn't accomplish the task. But it seems that there are more sloppy personas, and that experts can more easily avoid summoning them by giving context that reflects mundane reality, unlike a novice or an expert who gives little context. Otherwise the AI persona will be summoned from the Reddit midwit movie.
I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.
Kairos and auto-dream are more interesting than anything in the agent loop section. Memory consolidation between sessions is the actual unsolved problem. The rest is just plumbing tbh
A year ago I wouldn't have guessed a TUI could be a competitive advantage. But "harness engineering" became a thing, and it turns out the agent wrapper — tool orchestration, context management, permission flows — is where real product value lives. Not as much as the models themselves, but more than most people expected. This leak is a painful reminder of that.
I think it's good that it's out there, and I wonder why Anthropic have been keeping it closed source; clearly they can't possibly think that the CC source code is a competitive advantage...?
Agents in general are easy to make, and trivial to make for yourself especially, and the result will be much better than what any of the big providers can make for you.
`pi` with whatever commands/extensions you want to make for yourself is better than CC if you really don't want to go through the trouble of making your own thing.
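To back up the "trivial to make for yourself" claim, the core of an agent really is just a loop. A sketch with a stubbed-out model call (no particular provider API assumed):

```python
# Bare-bones agent loop: ask the model, run whatever tool it requests,
# feed the result back, stop when it answers in plain text.
import subprocess

def call_model(messages: list[dict]) -> dict:
    # Stand-in for a chat-completion call; here it asks for `ls` once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "bash", "args": {"cmd": "ls"}}
    return {"text": "Done - see the listing above."}

def run_tool(name: str, args: dict) -> str:
    if name == "bash":
        return subprocess.run(args["cmd"], shell=True, capture_output=True, text=True).stdout
    return f"unknown tool: {name}"

messages = [{"role": "user", "content": "What files are here?"}]
while True:
    reply = call_model(messages)
    if "text" in reply:
        print(reply["text"])
        break
    result = run_tool(reply["tool"], reply["args"])
    print(result)
    messages.append({"role": "tool", "content": result})
```

Everything past that - context management, permissions, retries, UI - is where the line counts come from.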
If you think this is not a competitive advantage then you're missing the point. LLMs aren't so good that they work through bad abstractions, and pretty much everyone has bad abstractions. CC is the one that invented some of the best abstractions (though not the first). I think they're the first ones who nailed subagents well. There's a lot to learn from them, and while I'm learning a lot from their source code, my heart bleeds that this happened to them.
Sincerely, someone running a team building similar things for analytics.
All of the above at exactly the token cost that it requires for you.
Anything general is always going to be worse for specific use cases, and agents from these big providers are very general. They'll spend tons of tokens doing things that you might not need, including spend extra tokens on supporting MCP, etc., when you might not even need that.
I feel the same way. Given it's AI-written, looking at the code isn't even worth it to me. I would rather read a blog post about how they develop it day to day.
I doubt there is anything special about the transformer code the frontier labs use. The only thing proprietary in it are probably the infrastructure-specific optimizations for very large scale distributed training and some GPU kernel tricks. The real moat is the training data, especially the RLHF/finetuning data and verifiable reward environments, and the GPU clusters of course.
The open source models are quite close, and they'd probably be just as good with the equivalent amount of compute/data the frontier labs have access to.
However, I assume that usage data could be increasingly valuable as well. That will likely help the big commercial cloud models to maintain a head start for general use.
Really nice visualisation of this, makes understanding the flow at a high level pretty clear. Also the tool system and command catalog, particularly the gated ones, are super interesting.
/stickers:
Displays earned achievement stickers for milestones like first commit, 100 tool calls, or marathon sessions. Stickers are stored in the user profile and rendered as ASCII art in the terminal.
That is not what it does at all - it takes you to a stickermule website.
What is the motivation for someone to put out junk like this?
The animated explanation at the top is also way too fast at 1x, almost impossible to follow; that immediately hinted at the author not fully reading/experiencing the result before publishing this.
Many people seem to believe that Claude Code has some sort of secret sauce in the agent itself for some reason.
I have no idea why because in my experience Claude Code and the same models inside of Cursor behave almost identically. I think all the secret sauce is in the RLHF.
519K lines of code for something that uses the baseline *nix tools for pretty much everything important - how do they even manage to bloat it this much? I mean, I know how technically, but it's still depressing.
Can't they ask CC to make it good, instead of asking it to make it bigger?
I mean, I get it: vibe-coded software deserves vibe-coded coverage. But I would at least appreciate it if the main part of it, the animation, went at a speed that at least makes it possible to follow along and didn't glitch out with elements randomly disappearing in Firefox...
It's on the front page because it looks really cool. You can complain about it being vibe coded, but it still looks good. If you ask Claude to allow the user to slow down the animation, it can do that quite easily, that's just not a problem caused by vibe coding. And I'm on FF and didn't notice anything glitching out.
What exactly is shitty here? A program I use for hours every day to do the job previously done by N human beings, without many bugs, seems to have code that's seemingly messy but still clearly works.
Thanks, I'll use this for teaching next week (on what not to do). BashTool.ts :D But, in general, I guess it just shows yet again that the emperor has no clothes.
Can you expand on this?
My experience is they require excessive steering but do not “break”
Like that drunk uncle that takes half an hour and 20,000 words to tell you a 500-word story.
It boggles the mind, really.
https://github.com/badlogic/pi-mono/tree/main/packages/codin...
By "just" wrapping a browser engine.
I know xkcd 1053, but come on.
Drop an em dash or a bullet point and they go into spasms.
Thanks for the hard work!
For starters, CC's TUI is React-based.
- Opencode (anomalyco/opencode) is about 670k LOC
- Codex (openai/codex) is about 720k LOC
- Gemini (google-gemini/gemini-cli) is about 570k LOC
Claude Code's 500k LOC doesn't seem out of the ordinary.
I think a lot of the people praising Claude & co are on Macs.
I'm not saying that this is necessarily too much, I'm genuinely asking if this is a bloat or if it's justified.
I doubt it needs to be more than 20-50kloc.
You can create a full 3D game with a custom 3D engine in 500k lines. What the hell is Claude Code doing?
Shouldn't interfaces be smaller than the implementation?
Yes, most games should be done in fewer loc
This file is exactly what I'm talking about.
Or you can, but whatever.
The only reason people are using Claude Code is because it's the only way to use their (heavily subsidized) subscription plans. People who are okay with using and paying for their APIs often opt out for other, better, tools.
Also, analogies don't work. As we know for a fact that Claude Code is a bloated mess that these "champions league-level engineers" can't fix. They literally talk about it themselves: https://news.ycombinator.com/item?id=47598488 (they had to bring in actual Champions League engineers from bun to fix some of their mess).
Engineers using LOC as a measure of quality is the inverse of managers using LOC as a measure of productivity.
The principles of good software don't suddenly vanish just because now it's a machine writing the code instead of a human, they still have to deal with the issues humans have for more than half a century. The history of programming is new developers coming up with a new paradigm, then rediscovering all the issues that the previous generation had figured out before them.
It turns out that there is a tradeoff in code between velocity and quality that smart businesses consider relative to hardware cost/quality. The businesses that are outcompeting others are rarely those who have the highest quality code, but rather those that are shipping quickly at a quality level that is satisfactory for current hardware.
That worked because of rapid advancements in CPU performance. We’ve left that era.
It’s about more than performance. Code is and always has been a liability. Even with agents, you start seeing massive slowdowns with code base size.
It’s why I can nearly one shot a simple game for my kid in 20 minutes with Claude, but using it at work on our massive legacy codebase is only marginally faster than doing it by hand.
The more lines of code you have, the more likely it is for one of them to be wrong and go unnoticed. That results in bugs, vulnerabilities, and leaks.
But given that we know the functionality of Claude Code, we can guess how much complexity should be required. We could also be wrong.
>Why does it matter?
If there’s massively more code than there needs to be that does matter to the end user because it’s harder to maintain and has more surface area for bugs and security problems. Even with agents.
Because it's unmaintainable slop that they themselves don't know how to fix when something happens? https://news.ycombinator.com/item?id=47598488
At some point someone will probably take their LLM-written code, point the LLM back at it, say "hey, let's refactor this so it uses less code and is easier to read but does the same thing", and let it churn.
On one project I worked on, I saw an engineer delete 20k lines of code in a day. He replaced it with a few lines of stored procedure. That 20k lines of code had been in production for years. No one wanted to touch it, but it was a crucial part of the way the thing worked. It just takes someone going "hey, this isn't right" and sitting down to fix it.
When software requires 68 GB of RAM to run, or when they spend a week unable to find a bug that causes multiple people to immediately run out of tokens, it's not a "them" problem.
I love your implementation.
Here was my first stab:
https://news.ycombinator.com/item?id=47595140
https://brandonrc.github.io/journey-through-claude-code/
I guess I just find it weird because all the signals are messed up. Whenever I see these sorts of layouts, I feel like I'm looking at the average, and I don't think "gorgeous and interesting" at all. Instead, I'm forced to think "I should be skeptical of this based on the presentation, because it presents as high quality but may be hiding someone who isn't actually aware of what they're presenting in any depth", since the author may have just shoved in a prompt and let it spin.
There's actually a similarly designed website (font weights, font styles etc) here in New Zealand (https://nzoilwatch.com/) where at a glance, it might seem like some overloaded professional-backed thing but instead it's just some guy who may or may not know anything about oil at all, yet people are linking it around the place like some sort of authoritative resource.
I would have way less of an issue if people just put their names by things and disclosed their LLM usage (which again, is fine) rather than giving the potentially false impression to unequipped people that the information presented is actually as accurate and trustworthy as the polish would suggest.
I'm serious. The hype chasing clearly matters.
things like this: https://github.com/instructkr/claw-code I mean ok, serious people put in years of effort for 100 of those stars ...
It's continually wild how extremely irrelevant hard, effortful, careful work is.
I think that's the game. Get up, look at the headlines, figure out how you can exploit them with vibe coding, do some hyphy project and repeat.
Maybe some lobster themed bullshit between openclaw and the claudecode leak.
I'm not being a cynic here, I'm just telling you what I'm going to do tomorrow.
It's sloppy work
Does not matter. Sloppiness is unimportant
Personally, I don't think I will be putting any such disclaimers or disclosures on my work, unless I deem it relevant to the functionality.
Last week I was struggling to go from a vague prompt to an OMG-it's-so-nice-looking web app. I remembered that example above and decided to create my own component library, which I did in a couple of days: https://www.substrateui.dev/. I was actually super happy that I was able to accomplish that, and then I realized I wanted to better understand the content that I had vibe coded into existence. So now I'm recreating that design system step by step w/ Claude Code, filling in gaps in my knowledge & learning a bit about colors, typography, CSS, blah blah blah. It's actually a lot of fun because I'm able to explore all of the concepts and learn enough to build a front end that doesn't suck & is good enough for my use case, without getting stuck for days trying to center a stupid div by hand or playing whack-a-mole (fix something, break something else) when trying to clean up AI slop.
Content resizing, needing to juggle a speed knob to read, and the overall presentation make it feel like Edward Tufte-flavored nightmare fuel.
That just seems to be human nature unfortunately - the complainers are always louder.
Those within well informed, technical circles will fall somewhere in between the for/against labels, myself included.
The GenAI hype cycle is finally starting to collapse as the general population starts to realize that these systems aren't the panacea for "everything" after all. They provide enormous utility in some domains like coding, but even then there are massive tradeoffs, footguns and the usual horse blinder ills that come with every hype cycle. I just hope we stop having to "learn the hard way" with respect to undisciplined use of current-gen LLM systems writ large, and cooler heads prevail sooner rather than later.
I've created a Chinese character learning website, and it took me typing the equivalent of 1/3 of LotR to get there[1]. I would have typed like 1% of that writing the code directly. It is a different process, but it still needs some direction.
1. https://hanzirama.com/making-of
https://ccprompts.info
A 1yo project may be in good shape if written by just one dev, maybe a few. But if you have many devs, I can guarantee it will be messy and buggy. If anything, at 1yo it is probably still full of bugs because not enough time has elapsed for people to run into them.
And I'm sure we all know that when working on a greenfield project you can produce a lot more LoC per day than maintaining a legacy one.
Given that vibe code is significantly more verbose, you're probably talking about ~15 engineers' worth of code?
I know that's all silly numbers, but this is just attempting to give people some context here, this isn't a massive code base. I've not read a lot of it, so maybe it's better than the verbose code I see Claude put out sometimes.
This is a two-pizza-team-sized project, so it's not a project where the code quality would inevitably spiral out of control due to communication problems.
A single senior architect COULD have kept the code quality under control.
This is why I personally don't take seriously the technical debt arguments about how LLM-maintained code bases deteriorate with size/age; it presumes that at some point I'll give up with the LLM and be left with a mess to clean up by hand, but that's not going to happen: future maintenance will be left to LLMs, and if that isn't possible for some reason, then the project is as good as dead anyway. When you start a project with an LLM, the plan should be to see it through with LLMs; planning to have unaided humans take over maintenance at some point is a mistake.
I find LLMs very useful and capable, but in my experience they definitely perform worse when things are unorganized. Maintenance isn't just aesthetics, it's a direct input to correctness.
Just a thought experiment, I very much doubt I'm the first one to think of it. It's probably in the same line of "why doesn't an LLM just write assembly directly"
I liken it to the problem of applying machine learning to hard video games (e.g. Starcraft). When trained to mimic human strategies, it can be extremely effective, but machine learning will not discover broadly effective strategies on a reasonable timescale.
If you convert "human strategies" to "human theory, programming languages, and design patterns", perhaps the point will be clear.
But: could the ouroboric cycle of LLM use decay the common strategies and design patterns we use into inexplicable blobs of assembly? Can LLMs improve at programming if humans do not advance the theory or invent new languages, patterns, etc?
The current training loop for coding is RL as well - so a departure from human coding patterns is not unexpected (even if departure from human coding structure is unexpected, as that would require development of a new coding language).
My suspicion is that the "language" part of LLMs means they tend to prefer languages which are closer to human languages than assembly and benefit from much of the same abstractions and tooling (hence the recent acquisition of bun and astral).
> It's a shame, because it's still the best coding agent, in my experience.
If it is the best, and if it delivers the value users are asking for, then why would they have an incentive to make further $$$ investments in "higher" quality, if the difference isn't substantial or actively hurts the ROI?
On many projects I found that this "higher quality" not only failed to deliver more substantial value, it actually hurt the project's ability to deliver the value that matters.
Maybe we are, after all, entering the era of SWE where all this bike-shedding is gone and the only engineers who will be able to survive in it are the ones capable of delivering actual value (IME very few per project).
Or is that why they had to buy bun, with actual engineers, to work on Claude Code and reduce memory peaks from 68 GB (yes, 68 gigabytes) to a "measly" 1.7? Because code quality doesn't matter?
Or that a year later they still cannot figure out how to render anything in the terminal without flickering?
The only reason people use Claude Code is because it's the only way to use Anthropic's heavily subsidized subscription. You get banned if you use it through other, better, tools.
Meanwhile I apparently need to change my perspective about this: https://news.ycombinator.com/item?id=47598488
Now whether that’s actually possible is a second topic.
That's how you get "oh, this TUI API wrapper needs 68 GB of RAM" https://x.com/jarredsumner/status/2026497606575398987 or "we need 16 ms to lay out a few hundred characters on screen, that's why it's a small game engine": https://x.com/trq212/status/2014051501786931427
I particularly valued the tool list. People in these comments are complaining about how bad the code is, but I found the client-side tools that the model actually uses to be pretty clean/general.
My takeaway was more that at a very basic level they know what they are doing - keep the client general, so that you can innovate on the server side without revving the client as much.
> This deployment is temporarily paused
https://web.archive.org/web/20260331105051/https://www.cclea...
BTW, that's why you should use your own infrastructure and not depend on Vercel
Also I definitely want a Claude Code spirit animal
(Yes, I know I can turn it off. I have.)
“Complete thyself.”
And I want an octopus. Who orchestrates octopuses.
Here is another one that goes in depth as well: www.markdown.engineering, for anyone who wants to dig deeper.
For example the whole animation on this website, what does it say beyond that you make a request to backend and get a response that may have some tool call?
If you prompt with little raw material and little actual specification of what you want to see in the end, e.g. you just say "make a detailed, dashboard-like breakdown site that analyzes this codebase", the result will have this uncanny character.
I'd describe it as a kind of "fanfic". It (and now I'm not just talking about this website but my overall impression of this phenomenon) reminds me a bit of how, when I was 15 or so, I had an idea of how the world works, and then things turned out to be less flashy, less movie-like, less clear-cut, less impressive-to-a-teenage-boy than I had thought.
If you know the concept of "stupid man's idea of a smart man", I'd say AI made stuff (with little iteration) gives this outward appearance of a smart man from the Reddit-midwit-cinematic-universe. It's like how guns in movies sound more like guns than real guns. It's hyperreality.
Again this is less about the capabilities of AI and it's more connected to the people-pleasing nature of it. It's like you prompt it for some epic dinner and it heaps you up some hmmm epic bacon with bacon yeah (referring to the hivemind-meme). Or BigMac on the poster vs the tray, and the poster one is a model made with different components that are more photogenic. It's a simulacrum.
It looks more like the thing you naively imagine you need right now vs. what you'd actually need. It's like prompting your ideal girlfriend into AI-avatar existence: I'm sure she will fit your ideal thought and imagination much better, but your actual life would need the actual thing.
This relates to the Persona thing that Anthropic has been exploring: each prompt guides the model towards adopting a certain archetypal fictional character as its persona, and there are certain attraction basins that get reinforced with post-training. And in the computer world, simulated action can easily be turned into real action with harnesses and tools, so I'm not saying that it doesn't accomplish the task. But it seems that there are sloppier personas, and it seems that experts can more easily avoid summoning them by giving the model context that reflects mundane reality, compared to a novice, or to an expert who gives little context. Otherwise the AI persona will be summoned from the Reddit midwit movie.
I'm not fully clear about all this, but I think we have a lot to figure out around how to use and judge the output of AI in a productive workflow. I don't think it will go away ever, but will need some trimming at the edges for sure.
I use it all day and love it. Don't get me wrong. But it's a terminal-based app that talks to an LLM and calls local functions. Ooookay…
Agents in general are easy to make, and trivial to make for yourself especially, and the result will be much better than what any of the big providers can make for you.
`pi` with whatever commands/extensions you want to make for yourself is better than CC if you really don't want to go through the trouble of making your own thing.
Sincerely, someone running a team building similar things for analytics.
Curious, as I haven't gotten around to writing my own agent yet.
Anything general is always going to be worse for specific use cases, and agents from these big providers are very general. They'll spend tons of tokens doing things that you might not need, including spend extra tokens on supporting MCP, etc., when you might not even need that.
But you can do a lot of interesting things on top of this. I highly recommend writing an agent and hooking it up to a local model.
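For the curious commenter above, a minimal sketch of what such an agent loop can look like, assuming Node 18+ and a local OpenAI-compatible endpoint (Ollama, llama.cpp, etc.); the URL, model name, and the single read_file tool are placeholders rather than recommendations.

    // Minimal agent loop against a local, OpenAI-compatible endpoint.
    // Sketch only: endpoint URL, model name, and the lone read_file tool
    // are placeholders. Assumes Node 18+ for the global fetch.
    import { readFile } from "node:fs/promises";

    const ENDPOINT = "http://localhost:11434/v1/chat/completions"; // e.g. Ollama
    const MODEL = "qwen2.5-coder"; // placeholder local model name

    const tools = [{
      type: "function",
      function: {
        name: "read_file",
        description: "Read a UTF-8 text file from the working directory",
        parameters: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    }];

    async function runAgent(task: string): Promise<string> {
      const messages: any[] = [{ role: "user", content: task }];

      // Keep going until the model answers without requesting a tool.
      for (let step = 0; step < 10; step++) {
        const res = await fetch(ENDPOINT, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: MODEL, messages, tools }),
        });
        const data: any = await res.json();
        const msg = data.choices[0].message;
        messages.push(msg);

        if (!msg.tool_calls?.length) return msg.content; // model is done

        // Execute each requested tool call and feed the result back.
        for (const call of msg.tool_calls) {
          const { path } = JSON.parse(call.function.arguments);
          const output = await readFile(path, "utf8").catch((e) => String(e));
          messages.push({ role: "tool", tool_call_id: call.id, content: output });
        }
      }
      return "gave up after 10 steps";
    }

    runAgent("Summarize what package.json declares.").then(console.log);

The interesting part is how little of this is the loop itself; everything you'd add on top (your own tools, your own context rules) is exactly the per-use-case specialization a general product can't give you.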
I looked at the leaked code expecting some "secret sauce", but honestly didn't find anything interesting.
I don't get the hype around Claude Code. There's nothing new or unique. The real strength is the models.
The open source models are quite close, and they'd probably be just as good with the equivalent amount of compute/data the frontier labs have access to.
However, I assume that usage data could be increasingly valuable as well. That will likely help the big commercial cloud models to maintain a head start for general use.
The utils directory should only contain truly generic, business-agnostic utilities (such as date retrieval, simple string manipulation, etc.).
We can see that the vibe-coded output is not what a professional engineer would write. This may be down to how the engineers are using the vibe coding tool.
0 - https://github.com/zackautocracy/claude-code/blob/main/src/u...
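To make the utils/ distinction above concrete, a tiny hypothetical contrast; both functions are invented for illustration and are not taken from the linked file.

    // Belongs in utils/: generic, business-agnostic string helper.
    export function truncate(s: string, max: number): string {
      return s.length <= max ? s : s.slice(0, Math.max(0, max - 1)) + "…";
    }

    // Does not belong in utils/: it knows about sessions, plans, and billing,
    // so it should live next to the feature that owns those concepts.
    export function formatSessionBillingLine(tokensUsed: number, plan: string): string {
      return `${plan}: ${tokensUsed} tokens used this session`;
    }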
It looks really interesting.
First command I looked at:
That is not what it does at all - it takes you to a stickermule website. What is the motivation for someone to put out junk like this?
Getting something with a link to their GitHub onto the frontpage of HN. Because form matters much more in this world than substance.
The animated explanation at the top is also way too fast at 1x, almost impossible to follow; that immediately hinted at the author not fully reading/experiencing the result before publishing this.
It's inappropriate to label a free side project 'junk' or 'slop' even if it contains major errors.
Particularly when there's a disclaimer about possible inaccuracies on the page.
I have no idea why because in my experience Claude Code and the same models inside of Cursor behave almost identically. I think all the secret sauce is in the RLHF.
How is this on the front page?
I do Shift+Ctrl+F
- find nothing
- still manage to fill entire pages
- somehow have a similar structure
- are boring as fuck
At least this one is 3/4, the previous one had BINGO.
War flashbacks to genshin
The fact that now every agent designer knows what was already built is a huge shot of steroids to their codebase!
In all seriousness. I think you're supposed to run these in some kind of sandbox.
Which emperor, specifically?