Imo Cursor did have the first-mover advantage by making the first well-known AI coding agent IDE. But I can't help but think they have no realistic path forward.
As someone who is a huge IDE fan, I vastly prefer the Codex CLI experience to having that built into my IDE, which I customize for my general purposes. The fact that it's a fork of VSCode (or whatever) means I'll never use it. I wonder if they bet wrong.
But that's just usability and preference. When the SOTA model makers give out tokens for substantially less than public API cost, how in the world is Cursor going to stay competitive? The moat just isn't there (in fact, I'd argue it's non-existent).
Yeah, hard disagree on that one. Based on recent surveys, 80-90% of developers globally use IDEs over CLIs for their day-to-day work.
I was pretty worried about Cursor's business until they launched their Composer 1 model, which is fine-tuned to work amazingly well in their IDE. It's significantly faster than using any other model, and it's clearly tuned for the type of work people use Cursor for. They are also clearly charging a premium for it and making a healthy margin on it, but for how fast and good it is, it's totally worth it.
Composer 1, plus eventually building an AI-native version of GitHub with Graphite: that's a serious business, and to me a much clearer picture of how Cursor gets to serious profitability than the AI labs have.
OP isn't saying to do all of your work in the terminal; they're saying they prefer CLI-based LLM interfaces. You can have your IDE running alongside it just fine, and the CLIs can often present the changes as diffs in the IDEs too.
What I don't understand is why people would go all in on one IDE/editor and refuse to make plugins for others. Whether you prefer the CLI or the integrated experience, only offering it on VSCode (and a shitty version of it, at that) is just stupid.
As someone who uses Cursor, I don't understand why anyone would use CLI AI coding tools as opposed to tools integrated into the IDE. There's so much more flexibility and integration; I feel like I would be much less productive otherwise. And I say this as someone who is fluent in vim in the shell.
Now, would I prefer to use VS Code with an extension instead? Yes, in a perfect world. But Cursor makes a better, more cohesive overall product through their vertical integration, and I made the jump (it's easy to migrate) and can't go back.
If these AI companies had 100x dev output, why would you acquire a company? Why not just show screenshots to your agent and get it to implement everything?
Is it market share? Because I don't know who has a bigger user base than Cursor.
The claims are clearly exaggerated, or, as you say, we'd have AI companies pumping out new AI-focused IDEs left and right with crazy features. Yet they're all VS Code forks that roughly do the same shit.
A VS Code fork with AI, like 10 other competitors doing the same (including Microsoft with Copilot), plus MCPs, VS Code's limitations, and other IDEs catching up. What do these AI VS Code forks have going for them? Why would I use one?
I'm really used to my Graphite workflow and I can't imagine going without it anymore. An acquisition like this is normally not good news for the product.
Heard on the worry, but I can confirm Graphite isn’t going anywhere. We're doubling down on building the best workflow, now with more resourcing than ever before!
Supermaven said the same thing when they were acquired by Cursor and then EOLed a year later. Honestly, it makes sense to me that Cursor would shut down products it acquires - I just dislike pretending that something else is happening.
We are a 70-person team bringing in significant revenue through our product, with widespread usage at massive companies like Shopify, Robinhood, etc. This is a MUCH MUCH MUCH different story than Supermaven (which I used myself and was sad to see go), which was a tiny team with a super-early product when they got acquired.
Everyone is staying on to keep making the Graphite product great. We're all excited to have these resources behind us!
There is literally nothing anyone can say to convince me any product or person is safe during an acquisition. Time and time again it's proven to just not be true. Some manager/product owner/VP/C-suite type will eventually have the final say, and I trust none of them to actually care about the product they're building or the community that uses it.
I’m working on something in a similar direction and would appreciate feedback from people who’ve built or operated this kind of thing at scale.
The idea is to hook into Bitbucket PR webhooks so that whenever a PR is raised on any repo, Jenkins spins up an isolated job that acts as an automated code reviewer. That job would pull the base branch and the feature branch, compute the diff, and use that as input for an AI-based review step. The prompt would ask the reviewer to behave like a senior engineer or architect, follow common industry review standards, and return structured feedback - explicitly separating must-have issues from nice-to-have improvements.
The output would be generated as markdown and posted back to the PR, either as a comment or some attached artifact, so it’s visible alongside human review. The intent isn’t to replace human reviewers, but to catch obvious issues early and reduce review load.
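For concreteness, here's roughly the shape I have in mind for the core of that Jenkins job. This is a minimal sketch, not a settled design: the Bitbucket Cloud comment endpoint, the OpenAI client, the env var names, and the model choice are all placeholder assumptions.

```python
# Sketch of the review step: diff the branches, ask a model for a review,
# post the markdown back to the PR. Env vars and the model are placeholders.
import os
import subprocess

import requests
from openai import OpenAI

def pr_diff(base: str, head: str) -> str:
    # Triple-dot diff: only the changes the feature branch introduces.
    return subprocess.run(
        ["git", "diff", f"origin/{base}...origin/{head}"],
        capture_output=True, text=True, check=True,
    ).stdout

def review(diff: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a senior engineer reviewing a pull request. "
                "Return markdown with two sections: 'Must fix' and 'Nice to have'.")},
            {"role": "user", "content": diff},
        ],
    )
    return resp.choices[0].message.content

def post_comment(workspace: str, repo: str, pr_id: int, body: str) -> None:
    # Bitbucket Cloud's PR comment endpoint; Server/Data Center differs.
    resp = requests.post(
        f"https://api.bitbucket.org/2.0/repositories/{workspace}/{repo}"
        f"/pullrequests/{pr_id}/comments",
        json={"content": {"raw": body}},
        auth=(os.environ["BB_USER"], os.environ["BB_APP_PASSWORD"]),
    )
    resp.raise_for_status()

if __name__ == "__main__":
    diff = pr_diff(os.environ["BASE_BRANCH"], os.environ["HEAD_BRANCH"])
    post_comment(os.environ["BB_WORKSPACE"], os.environ["BB_REPO"],
                 int(os.environ["PR_ID"]), review(diff))
```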
What I’m unsure about is whether diff-only context is actually sufficient for meaningful reviews, or if this becomes misleading without deeper repo and architectural awareness. I’m also concerned about failure modes - for example, noisy or overconfident comments, review fatigue, or teams starting to trust automated feedback more than they should.
If you’ve tried something like this with Bitbucket/Jenkins, or think this is fundamentally a bad idea, I’d really like to hear why. I’m especially interested in practical lessons.
At $DAYJOB, there's an internal version of this, which I think just uses Claude Code (or similar) under the hood on a checked out copy of the PR.
Then it can run `git diff` to get the diff, like you mentioned, but also query surrounding context, build stuff, run random stuff like `bazel query` to identify dependency chains, etc.
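If you wanted to replicate that context-gathering, it might look something like the sketch below. This is my guess at the approach, not their actual tool, and the Bazel target is made up:

```python
# Sketch: gather the diff plus reverse-dependency context before prompting.
import subprocess

def out(*cmd: str) -> str:
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

diff = out("git", "diff", "origin/main...HEAD")
# rdeps() walks Bazel's reverse dependency graph: everything in the repo
# that depends on the (made-up) target this PR touches.
dependents = out("bazel", "query", "rdeps(//..., //payments/api:server)")
prompt = f"Diff:\n{diff}\nTargets that depend on the changed code:\n{dependents}"
```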
They've put a ton of work into tuning it, and it shows: the signal-to-noise ratio is excellent. I can't think of a single time it's left a comment on a PR that wasn't a legitimate issue.
We have coding agents heavily coupled with many aspects of the company's R&D cycle. About 1k devs.
Yes, you definitely need the project's context to have valuable generations. Different teams here have different context and model steering, according to their needs.
For example, specific aspects of the company's architecture are supplied in the context, while much of the rest (architecture, codebases, internal docs, quarterly goals) is available via RAG.
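As a hand-wavy sketch of how that per-team composition can look (every name here is made up, not our actual setup):

```python
# Hypothetical per-team prompt assembly: a small always-included architecture
# note plus RAG-retrieved snippets; `retrieve` stands in for your vector store.
from typing import Callable

TEAM_CONTEXT = (
    "Payments service: event-sourced, Kafka between services, one Postgres "
    "per service. Never add synchronous cross-service calls in request paths."
)

def build_review_prompt(diff: str, retrieve: Callable[..., list[str]]) -> str:
    # Pull the internal docs / code snippets most relevant to this diff.
    snippets = retrieve(query=diff, top_k=5)
    return (
        "You are a senior engineer reviewing a pull request.\n"
        f"Team architecture notes:\n{TEAM_CONTEXT}\n"
        "Relevant internal context:\n" + "\n---\n".join(snippets) + "\n"
        f"Diff under review:\n{diff}\n"
        "Separate must-fix issues from nice-to-haves."
    )
```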
It can become noisy and create more needless review work. Also, only experts in their field find value in the generations. If a junior relies on it blindly, the result is subpar and doesn't work.
I have such fond memories of Graphite's simplicity. Simply hit the server with a metric and value and BOOM it's on the chart, no dependencies, no fuss.
Yeah, "haven't most of the users already moved to Prometheus?" was my first reaction.
Turns out that the name's been re-used by some sort of slop code review system. Smells like a feature rather than a product, so I guess they were lucky to be acquired while the market's still frothy.
Are there thoughts on getting to something more like a "single window dev workflow"? The code editing and reviewing experiences are very disjoint, generally speaking.
My other question is whether stacked PRs are the endpoint of presenting changes or a waypoint to a bigger vision. I can't shake the idea that presenting changes as diffs in filesystem order is suboptimal compared to presenting them as stories of what changed and why. Almost like literate programming.
Stacked PRs are a really natural fit for vibe coding workflows; they help turn illegible 10k+ line PRs into manageable chunks that you can review independently. (Not affiliated with Cursor or Graphite.)
Well, Graphite solves the problem of how to keep your stack of GitHub pull requests in sync while you squash-merge the lowest pull request in the stack, which as far as I know jujutsu does not help with.
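To make it concrete, this is the manual bookkeeping it automates for a two-deep stack feature-a -> feature-b after feature-a is squash-merged (branch names made up):

```python
# Replay only feature-b's own commits onto the new main; git can't tell that
# feature-a's commits are "already merged" because the squash rewrote them.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

run("git", "fetch", "origin")
run("git", "rebase", "--onto", "origin/main", "feature-a", "feature-b")
run("git", "push", "--force-with-lease", "origin", "feature-b")
```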
Took me a month to learn jujutsu. Was initially a skeptic but pulled through. Git was always easy for me; its model somehow just clicks in my brain. So when I first switched to jj, it made a lot of easy things hard due to the lack of staging (which is often part of my workflow). But now I see the value, and it really does make hard things easy. My commit history is much cleaner, for one.
I've been using git spice (https://abhinav.github.io/git-spice/) for the stacked-PRs part of Graphite, and it's been working pretty well; it's also open source and free.
This was "announced" in October, and last week they were saying they're shipping to trusted partners to kick the tires before a real release, with posted screenshots.
So, we'll see what it ends up like, but they have apparently already executed.
GitHub has proven it can execute very well when it _wants_ to. Their product people are top notch.
Given that the VP of GitHub recently posted a screenshot of their new stacked-diff concept on X, I'd be amazed if the Graphite folks (whose product is adding this function) didn't get wind of it and look for a quick sell.
Woahhhhh, I missed this. Got a reference or link? My googling is failing me. That's my biggest complaint about GitHub, coming from Gerrit for OpenStack.
Sure, but if the concern is googling "graphite" and finding results that aren't the Graphite you're looking for, it's the same problem. There will always be more results for graphite the mineral than for Graphite the enterprise-ready monitoring tool.
If that's not the concern, then what's the big deal?
What are we talking about? Autocomplete or GPT/Claude contender or...? What makes it so great?
I usually prefer Gemini, but sometimes other tools catch bugs Gemini doesn't.
As someone who has never heard of Graphite, can anyone share their experience comparing it to any of the tools above?
> "Will the plugin remain up? Yes!"
> https://supermaven.com/blog/sunsetting-supermaven
sweet summer child.
Then Cursor takes on GitHub for control of the repo.
https://www.merriam-webster.com/dictionary/graphite
Graphiters are in there doing Q&A as well.