GraphQL: The enterprise honeymoon is over

(johnjames.blog)

241 points | by johnjames4214 1 day ago

51 comments

  • hn_throwaway_99 1 day ago
    > The main problem GraphQL tries to solve is overfetching.

    My issue with this article is that, as someone who is a GraphQL fan, that is far from what I see as its primary benefit, and so the rest of the article feels like a strawman to me.

    TBH I see the biggest benefits of GraphQL are that it (a) forces a much tighter contract around endpoint and object definition with its type system, and (b) schema evolution is much easier than in other API tech.

    For the first point, the entire ecosystem guarantees that when a server receives an input object, that object will conform to the type, and similarly, a client is guaranteed that the object it receives conforms to the endpoint's response type. Coupled with custom scalar types (e.g. "phone number" types, "email address" types), this can eliminate a whole class of bugs and security issues. Yes, other API tech does something similar, but I find the guarantees are far less "guaranteed" and it's much easier to have errors slip through. For example, GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.
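    The pruning behavior described above can be sketched roughly like this (a toy illustration only, with invented field names; real GraphQL servers derive the selection from the parsed query):

```python
# Toy illustration of GraphQL-style response pruning: the resolver may
# return a "fat" object, but only the fields the client selected go over
# the wire. Field names are invented for the example.

def prune(obj, selection):
    """Keep only selected fields, recursing into nested selections."""
    pruned = {}
    for field, sub in selection.items():
        if field not in obj:
            continue
        value = obj[field]
        pruned[field] = prune(value, sub) if sub and isinstance(value, dict) else value
    return pruned

# The resolver returns more than was asked for...
fat_user = {
    "id": 1,
    "email": "a@example.com",
    "internal_error_detail": "stack trace...",  # must never leak
    "profile": {"name": "Ada", "last_login_ip": "10.0.0.1"},
}

# ...but the wire response contains only the requested selection.
query_selection = {"id": None, "profile": {"name": None}}
print(prune(fat_user, query_selection))  # {'id': 1, 'profile': {'name': 'Ada'}}
```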

    When it comes to schema evolution, I've found that adding new fields and deprecating old ones, and especially that new clients only ever have to be concerned with the new fields, is a huge benefit. Again, other API tech allows you to do something like this, but it's much less standardized and requires a lot more work and cognitive load on both the server and client devs.

    • jakubriedl 19 hours ago
      I 100% agree that overfetching isn't the main problem graphql solves for me.

      I'm actually spending a lot of time in the REST-ish world, and the contract isn't the problem I'd solve with GraphQL either. For that I'd go with OpenAPI and its enforcement and validation. That is very viable these days; it just isn't a "default" in the ecosystem.

      The main problem GraphQL solves for me, and which I haven't found a good alternative for, is API composition and evolution, especially in M:N client-services scenarios in large systems. Having the mindset of "client describes what they need" -> "graphql server figures out how to get it" -> "domain services resolve the part" makes long-term management of a network of APIs much easier. And when it's combined with good observability it can become one of the biggest enablers for data access.

      • Seattle3503 12 hours ago
        > The main problem GraphQL solves for me, and which I haven't found a good alternative for, is API composition and evolution, especially in M:N client-services scenarios in large systems. Having the mindset of "client describes what they need" -> "graphql server figures out how to get it" -> "domain services resolve the part" makes long-term management of a network of APIs much easier. And when it's combined with good observability it can become one of the biggest enablers for data access.

        I've seen this solved in REST land by using a load balancer or proxy that does path-based routing. api.foo.com/bar/baz gets routed to the "bar" service.
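        That path-based routing is simple enough to sketch (service names and URLs here are made up):

```python
# Sketch of path-based routing at a proxy/load balancer: the first path
# segment selects the backend service. Service names and URLs are made up.

SERVICES = {
    "bar": "http://bar.internal:8080",
    "users": "http://users.internal:8080",
}

def route(path):
    """Map /bar/baz -> (bar service, /baz)."""
    segments = path.lstrip("/").split("/", 1)
    service = SERVICES.get(segments[0])
    if service is None:
        return None
    rest = "/" + (segments[1] if len(segments) > 1 else "")
    return service, rest

print(route("/bar/baz"))  # ('http://bar.internal:8080', '/baz')
```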

        • btreecat 6 hours ago
          Doesn't even need to be a proxy, you can lay out your controllers and endpoints like this just fine in most modern frameworks

          • Seattle3503 2 hours ago
            How do you do routing across services?

      • hn_throwaway_99 19 hours ago
        Completely agree with this rationale too. GraphQL does encapsulation really, really well. The client just knows about a single API surface, while which backend services actually handle (parts of) each call is a completely hidden implementation detail.

        On a related note, this is also why I really dislike those "Hey, just expose your naked DB schemas as a GraphQL API!" tools. Like the best part about GraphQL is how it decouples your API contract from backend implementation details, and these tools come along and now you've tightly coupled all your clients to your DB schema. I think it's madness.

        • sandeepkd 15 hours ago
          I have used and implemented GraphQL in two large-scale companies across multiple (~xx) services. There are similarities in how it unfolds; however, I have not seen any real-world problem being solved with this so far

          1. The main argument for introducing it has always been appropriate data fetching for the clients, where clients can describe exactly what's required

          2. The ability to define a schema is touted as an advantage, but managing the schema becomes a nightmare. (Btw, the schema already exists at the persistence layer if that was required; schema changes and schema migrations are already challenging, and you just happen to replicate the challenge in one additional layer with GraphQL)

          3. You go big and you get into GraphQL servers calling into other GraphQL servers, and that's when things become really interesting. People do not realize/remember/care about the source of the data, you have name collisions, you get into namespaces

          4. You started on the premise of optimizing queries, and now that you have this layer your client works with, the natural flow is to implement mutations with GraphQL too.

          5. Things are downhill from this point. With distributed services you had already lost transactionality; GraphQL mutations just add to it. You get into circular references because underlying services are just calling other services via GraphQL to get the data you asked for with a GraphQL query

          6. The worst: you do not want too many small schema objects, so now you have this one big schema that gets you everything from multiple REST API endpoints, and clients are back where they started: pick what you need to display on the screen.

          7. Open up the network tab of any enterprise application which uses GraphQL and it is easy to see how much non-usable data is fetched via GraphQL for displaying simplistic pages

          There is nothing wrong with GraphQL; the same pretty much applies to all tools. It comes down to how you use it and how good you are at understanding the trade-offs. Treating anything like a silver bullet is going to lead in the same direction. Pretty much every engineer who has operated at application scale is aware of it; unfortunately they just stay quiet

        • dfee 15 hours ago
          I agree as well. This may be the only thing GraphQL excels at. Dataloader implementations give this superpowers.
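          The dataloader idea mentioned above, roughly sketched (the batch function is a stand-in for a real database or service call; this is the pattern, not any library's actual API):

```python
# Rough sketch of the dataloader pattern: individual .load(key) calls
# are queued, then resolved with a single batched fetch, turning N+1
# lookups into one round trip. The batch function is a stand-in for a
# real database or service call.

class DataLoader:
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.queue = []
        self.cache = {}

    def load(self, key):
        self.queue.append(key)
        return lambda: self.cache[key]  # resolved after dispatch

    def dispatch(self):
        # One backend call for all queued keys, deduplicated.
        keys = list(dict.fromkeys(self.queue))
        self.cache = self.batch_fn(keys)
        self.queue.clear()

calls = []
def batch_get_users(ids):
    calls.append(ids)  # record backend round trips
    return {i: {"id": i, "name": f"user{i}"} for i in ids}

loader = DataLoader(batch_get_users)
pending = [loader.load(i) for i in [1, 2, 2, 3]]  # e.g. one per list item
loader.dispatch()
print([p()["name"] for p in pending])  # ['user1', 'user2', 'user2', 'user3']
print(len(calls))                      # 1 backend call, not 4
```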

          OpenAPI, Thrift and protobuf/gRPC are all far better schema languages. For example: the separation of input types and object types.

    • lateforwork 23 hours ago
      If you generate TypeScript types from OpenAPI specs then you get contracts for both directions. There is no problem here for GraphQL to solve.
      • WickyNilliams 20 hours ago
        This is very much possible, and I have done it, and it works great once it's all wired up.

        But OpenAPI is verbose to the point of absurdity. You can't feasibly write it by hand, so you can't do schema-first development. You need an OpenAPI-compatible lib for authoring your API, some tooling to generate the schema from the code, and then another tool to generate types from the schema. Each step tends to implement the spec to varying degrees, creating gaps in types, or just outright failing.

        Fwiw I tried many, many tools to generate the typescript from the schema. Most resulted in horrendous, bloated code. The official generators especially. Many others just choked on a complex schema, or used basic string concatenation to output the typescript leading to invalid code. Additionally the cost of the generated code scales with the schema size, which can mean shipping huge chunks of code to the client as your API evolves

        The tool I will wholeheartedly recommend (and with which I am unaffiliated besides making a few PRs) is openapi-ts. It is fast and correct, and you pay a fixed cost - there's a fetch wrapper for runtime and everything else exists at the type level.

        I was kinda surprised how bad a lot of the tooling was considering how mature OpenAPI is. Perhaps it has advanced in the last year or so, since I stopped working on the project where I had to do this.

        https://openapi-ts.dev/

        • 0x696C6961 19 hours ago
          I write all of my openapi specs by hand. It's not hard.
          • WickyNilliams 19 hours ago
            I imagine you are very much in the minority. A simple hello world is like a screen full of YAML. The equivalent in GraphQL (or TypeSpec, which I always wanted to try as an authoring format for OpenAPI: https://typespec.io/) would be a few lines.
            • btreecat 6 hours ago
              The standard pattern in Go and some Scala libs is to define the spec and generate the code.

              I think you're overfitting your own experiences.

            • 0x696C6961 19 hours ago
              Being verbose doesn't make it difficult.
              • WickyNilliams 19 hours ago
                Not necessarily, no. But at a certain point, I believe it does. Difficult to read, is difficult to edit, is difficult to work with.

                A sibling comment to your reply expressed the same sentiment as me, and also mentioned typespec as a possible solution

            • makeitdouble 16 hours ago
              I see your point, yet writing openapi specs by hand is pretty common.

              On one side, dealing with another tool isn't worth it most of the time; on the other, we're already reading/writing screens of YAML or YAML-like docs all the time.

              Taking time to properly think about and define an entry point is reasonable enough.

          • aitchnyu 14 hours ago
            Do you validate responses client-side and server-side (FastAPI does this and prevents invalid responses from being sent) from the spec?
        • hokkos 19 hours ago
          I use https://typespec.io to generate OpenAPI; writing OpenAPI YAML quickly became horrible past a few APIs.
          • WickyNilliams 19 hours ago
            Ha yes, see one of my other comments to another reply.

            I never got to use it when I last worked with OpenAPI but it seemed like the antidote to the verbosity. Glad to hear someone had positive experience with it. I'll definitely try it next time I get the chance

      • c-hendricks 22 hours ago
        What about the whole "graph" part? Are there any openapi libraries that deal with that?
        • lateforwork 22 hours ago
          An OpenAPI definition includes the class hierarchy as well. You can use tools to generate TypeScript type definitions from it.
          • c-hendricks 22 hours ago
            And the fetching in a single request?
            • WickyNilliams 20 hours ago
              There is json-schema which is a sort of dialect/extension of OpenAPI which offers support for fetching relations (and relations of relations etc) and selecting a subset of fields in a single request https://json-schema.org/

              I used this to get a fully type safe client and API, with minimal requests. But it was a lot of work to get right and is not as mainstream as OpenAPI itself. Gql is of course much simpler to get going

            • lateforwork 22 hours ago
              The question I answered was regarding contracts. Fetching in a single request can be handled by your BFF.
              • iterateoften 20 hours ago
                So make things more complicated than gql?
                • 0x696C6961 19 hours ago
                  gql is clearly the more complicated of the two ...
                  • iterateoften 18 hours ago
                    A gql server in Python is about as simple as you can possibly get for exposing data via an API. You can use a raw HTTP client to query it.
                    • JAlexoid 16 hours ago
                      You still have gql requests to deal with. It's pretty much the same amount of code to build a BFF as to build the same thing in GQL... and probably less code on the frontend.

                      The value of GQL is pretty much equivalent to SOA orchestration - great in theory, just gets in the way in practice.

                      Oh, and not to mention that GQL will inadvertently hide bad API design (e.g. lack of pagination)... until you are left questioning why your app with 10k records in total is slow AF.

                      • c-hendricks 14 hours ago
                        Your response is incredibly anecdotal (as is mine absolutely), and misleading.

                        GQL paved the way for a lot of ergonomics with our microservices.

                        And there's nothing stopping you from just adding pagination arguments to a field and handling them. Kinda exactly how you would in any other situation, you define and implement the thing.
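                        Sketched minimally (all names invented; a real GraphQL server would wire such arguments through a resolver):

```python
# Sketch of pagination arguments on a field resolver, as described:
# the resolver simply honors the limit/offset it is handed. All names
# are invented for the example.

ITEMS = [{"id": i} for i in range(100)]

def resolve_items(limit=10, offset=0):
    """Field resolver that applies its pagination arguments."""
    return ITEMS[offset:offset + limit]

print([x["id"] for x in resolve_items(limit=3, offset=5)])  # [5, 6, 7]
```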

                        • 0x696C6961 6 hours ago
                          Yeah I love it when a request turns into an N+1 query because the FE guys needed 1 more field.
      • komali2 22 hours ago
        Discovering Kubb was a game changer for me last year.
        • HumanOstrich 22 hours ago
          Thanks for mentioning this. I always find it unsettling when I've researched solutions for something and only find a better option from a random HN comment.

          Site: https://kubb.dev/

          • WickyNilliams 20 hours ago
            Fwiw I tried every tool imaginable a few years ago, including kubb (which I think I contributed to while testing things out)

            The only mature, correct, fast option with a fixed cost (since it mostly exists at the type level, meaning it doesn't scale your bundle with your API) was openapi-ts. I am not affiliated other than being a previously happy user, though I did make some PRs while using it: https://openapi-ts.dev/

          • bakugo 21 hours ago
            This project seems to be mostly AI generated, so keep that in mind before replacing any existing solutions.
            • 0x696C6961 19 hours ago
              No it doesn't
              • bakugo 17 hours ago
                Did you see the repo?

                https://github.com/kubb-labs/kubb

                Most of the commits and pull requests are AI. Issues are also seemingly being handled by AI with minimal human intervention.

                • komali2 7 hours ago
                  I've had a PR on Kubb that was taken over by a human maintainer. They then closed my PR and reimplemented my fix in their own PR.

                  So, the project is human enough to annoy me, anyway.

                • JAlexoid 16 hours ago
                  AI assisted, not necessarily generated.

                  And yes, current models are amazing at reducing the time it takes to push out a feature or fix a bug. I wouldn't even consider working at a company that banned the use of AI to help me write code.

                  PS: It's also irrelevant to whether it's AI generated or not, what matters is if it works and is secure.

                  • bakugo 15 hours ago
                    > what matters is if it works and is secure.

                    How do you know it works and is secure if a lot of the code likely hasn't ever been read and understood by a human?

                    • JAlexoid 15 hours ago
                      There are literally users here that say that it works.

                      And you presume that the code hasn't been read or understood by a human. AI doesn't click merge on a PR, so it's highly likely that the code has been read by a human.

      • iterateoften 21 hours ago
        Graphql solves the problem. There is no problem here for openapi to solve.

        See how that works?

        • thayne 21 hours ago
          Openapi is older than graphql.

          But the point is that that benefit is not unique to graphql, so by itself, that is not a compelling reason to choose graphql over something else.

          • iterateoften 20 hours ago
            Yeah that was one point of many of the benefits of the parent.
          • tt_dev 20 hours ago
            plus now you have 2 sources of truth
            • iterateoften 20 hours ago
              ? I have a single source of truth in the gql schema. My frontend calls are generated from backend schema and type checked against it.
      • bastawhiz 19 hours ago
        tRPC sort of does this (there's no spec, but you don't need one because the interface is managed by tRPC on both sides). But it loses the main defining quality of gql: not needing subsequent requests.

        If I need more information about a resource that an endpoint exposes, I need another request. If I'm looking at a podcast episode, I might want to know the podcast network that the show belongs to. So first I have to look up the podcast from the id on the episode. Then I have to look up the network by the id on the podcast. Now, two requests later, I can get the network details. GQL gives that to me in one query, and the fundamental properties of what makes GQL GQL are what enables that.

        Yes, you can jam podcast data on the episode, and network data inside of that. But now I need a way to not request all that data so I'm not fetching it in all the places where I don't need it. So maybe you have an "expand" parameter: this is what Stripe does. And really, you've just invented a watered down, bespoke GraphQL.
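        The episode/podcast/network example can be sketched as follows (all data and shapes invented for illustration):

```python
# Sketch of the episode -> podcast -> network example: three REST round
# trips vs one nested query resolved server-side. All data is invented.

EPISODES = {10: {"id": 10, "title": "Ep 1", "podcast_id": 7}}
PODCASTS = {7: {"id": 7, "name": "Good Pod", "network_id": 3}}
NETWORKS = {3: {"id": 3, "name": "PodNet"}}

# REST-style: the client makes one request per hop.
def rest_client(episode_id):
    requests = 0
    episode = EPISODES[episode_id]; requests += 1
    podcast = PODCASTS[episode["podcast_id"]]; requests += 1
    network = NETWORKS[podcast["network_id"]]; requests += 1
    return network["name"], requests

# GraphQL-style: the server walks the relations in a single request.
def graphql_server(episode_id):
    episode = EPISODES[episode_id]
    podcast = PODCASTS[episode["podcast_id"]]
    network = NETWORKS[podcast["network_id"]]
    return {"episode": {"title": episode["title"],
                        "podcast": {"network": {"name": network["name"]}}}}

print(rest_client(10))     # ('PodNet', 3) -- three round trips
print(graphql_server(10))  # same data, one round trip
```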

        • lateforwork 19 hours ago
          Is dealing with GQL easier than implementing a BFF? There may be cases where that is true, but it is not always true.
          • bastawhiz 17 hours ago
            I think BFF works at a small scale, but that's true with any framework. Building a one off handful of endpoints will always be less work than putting a framework in place and building against it.

            GQL has a pretty substantial up front cost, undeniably. But you hopefully balance that with the benefit you'd get from it.

      • mixedCase 15 hours ago
        If you generate OpenAPI specs, and clients, and server type definitions from a declarative API definition made with Effect's own @effect/platform, it solves even more things in a nicer, more robust fashion.
    • hjnilsson 1 day ago
      Agree whole-heartedly. The strong contracts are the #1 reason to use GraphQL.

      The other one I would mention is the ability to very easily reuse resolvers in composition, and even federate them. Something that can be very clunky to get right in REST APIs.

      • verdverm 1 day ago
        re: #1, is there a meaningful difference between GraphQL and OpenAPI here?

        Composed resolvers are the headache for most and not seen as a net benefit, you can have proxied (federated) subsets of routes in REST, that ain't hard at all

        • JasonSage 22 hours ago
          > Composed resolvers are the headache for most and not seen as a net benefit, you can have proxied (federated) subsets of routes in REST, that ain't hard at all

          Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.

          • verdverm 22 hours ago
            There are pros & cons to GraphQL resolver composition, not just benefits.

            It is that very compositional graph resolving that makes many see it as overly complex - not as a benefit, but as a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler. It can be, and there it's much simpler and easier to reason about: I'm still going to go get the same data, but with less complexity and reasoning overhead than with the resolver composition concept from GraphQL.

            Is resolver composition really that different from function composition?

            • JasonSage 17 hours ago
              Local non-utility does not imply global non-value. Of course there are costs and benefits, but it's hard to have a conversation with good-faith comparison using "many see it as overly complex" -- this is an analysis that completely ignores problem-fit, which you then want to generalize onto all usage.
              • verdverm 19 minutes ago
                People can still draw generalizations about a piece of technology that hold true regardless of context or problem fit

                One of those conclusions is that GraphQL is more complex than REST without commensurate ROI

      • specialp 20 hours ago
        Contracts for data with OpenAPI or an RPC don't come with the overhead of writing a resolver for infinite permutations when your apps probably need only a few, or perhaps one. Which is why REST plus something for validation is enough for most, and doesn't cost as much.
    • 8n4vidtmkvmk 1 day ago
      Pruning the request and even the response is pretty trivial with zod. I wouldn't onboard GQL for that alone.

      Not sure about the schema evolution part. Protobufs seem to work great for that.

      • hamandcheese 12 hours ago
        In my (now somewhat dated) graphql experience, evolving an API is much harder. Input parameters in particular. If a server gets inputs it doesn't recognize, or if client and server disagree that a field is optional or not (even if a value was still supplied for it so the question is moot), the server will reject the request.
      • hn_throwaway_99 23 hours ago
        > Pruning the request and even the response is pretty trivial with zod.

        I agree with that, and when I'm in a "typescript only" ecosystem, I've switched to primarily using tRPC vs. GraphQL.

        Still, I think people tend to underestimate the value of the clear contracts and guarantees that GraphQL enforces (not to mention its whole ecosystem of tools), completely outside of any code you have to write. Yes, you can do your own zod validation, but in a large team, as an API evolves and people come and go, having hard, unbreakable lines in the sand (vs. something you have to roll yourself, or which is done by convention) is important IMO.

      • FootballMuse 1 day ago
        Pruning a response does nothing since everything still goes across the network
        • hdjrudni 23 hours ago
          Pruning the response would help validate that your response schema is correct and that it is delivering what was promised.

          But you're right, if you have version skew and the client is expecting something else then it's not much help.

          You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.

        • hn_throwaway_99 23 hours ago
          You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.

          It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).

          • JAlexoid 15 hours ago
            I would like to remind you that in most cases the GQL server is not colocated on the same hardware as the services it queries.

            Therefore requests between GQL and downstream services are travelling "over the wire" (though I don't see it as an issue)

            Having REST APIs that return only "fat" objects is really not the most secure way of designing APIs

          • fastball 15 hours ago
            "Just the requested fields" as requested by the client?

            Because if so that is no security benefit at all, because I can just... request the fat fields.

    • tomnipotent 19 hours ago
      Facebook had started bifurcating API endpoints to support iOS vs Android vs Web, and over time a large number of OS-specific endpoints evolved. A big part of their initial GraphQL marketing was solving this problem specifically.
    • dgan 1 day ago
      Sorry, but I'm not convinced. How is this different from two endpoints communicating through, let's say, protobuf? Both input and output will be (un)parsed only when conforming to the definition.
    • scotty79 19 hours ago
      > when a server receives an input object, that object will conform to the type

      Anything that comes from the front end can be tampered with. Server is guaranteed nothing.

      > GraphQL always prunes return objects to just the fields requested, which most other API tech doesn't do, and this can be a really nice security benefit.

      Request can be tampered with so there's additional security from GraphQL protocol. Security must be implemented by narrowing down to only allowed data on the server side. How much of it is requested doesn't matter for security.

      • JAlexoid 15 hours ago
        Expecting GraphQL to handle security is really one of the poorest ways of doing security, as GQL is not designed to do that.
        • scotty79 12 hours ago
          Sorry, I made a typo:

          Request can be tampered with so there's *NO additional security from GraphQL protocol.

  • rbalicki 22 hours ago
    The author is missing the #1 benefit of GraphQL: the ability to compose (the data for) your UI from smaller parts.

    This is not surprising: Apollo only recently added support for data masking and fragment colocation, but it has been a feature of Relay for eternity.

    See https://www.youtube.com/watch?v=lhVGdErZuN4 for the benefits of this approach:

    - you can make changes to subcomponents without worrying about affecting the behavior of any other subcomponent,

    - the query is auto-generated based on the fragment, so you don't have to worry that removing a field (if you stop using it in one subcomponent) will accidentally break another subcomponent

    In the author's case, either they don't care about overfetching (i.e. they avoid removing fields from the GraphQL query), or they're at a scale where only a small number of engineers touch the codebase. (But imagine a shared component, like a user avatar. Imagine it stopped using the email field. How many BFFs would have to be modified to stop fetching the email field? And how much research must go into determining whether any other reachable subcomponent used that email field?)

    If moving fast without overhead isn't a priority (or you're not at the scale where it is a problem), or you're not using a tool that leverages GraphQL to enable this speed, then indeed, GraphQL seems like a bad investment! Because it is!

    • WickyNilliams 19 hours ago
      Yes, Apollo not leading people down the correct path has given people a warped perception of what the benefits actually are. Colocation is such a massive improvement that's not really replicated anywhere else - just add your data requirements beside your component and the data "magically" (though not actually magic) gets requested and funnelled to the right place

      Apollo essentially only had a single page mentioning this, and it wasn't easy to find, for _years_

    • girvo 21 hours ago
      Quite. Apollo Client is the problem, IMO, not GraphQL.

      Though Relay still needs to work on their documentation: Entrypoints are so excellent and yet still are basically bare API docs that sort of rely on internal Meta shit

      • sibeliuss 19 hours ago
        The docs situation continues to be hilarious and bad, for the gem they have created.

        It's the unfortunate situation where those who know, know, and those who do not, blaspheme the whole thing based on misunderstanding.

        Super unfortunate, which could be solved by simply moving a little money over to Relay's docs, and working on some marketing materials.

      • rbalicki 16 hours ago
        100% agree on the unnecessary connection between entrypoints and meta internals. I think this is one of the biggest misses in Relay, and severely limits its usefulness in OSS.

        If you're interested in entrypoints without the Meta internals, you may be interested in checking out Isograph (which I work on). See e.g. https://isograph.dev/docs/loadable-fields/, where the data + JS for BlogBody is loaded afterward, i.e. entrypoints. It's as simple as annotating a field (in Isograph, components define fields) with @loadable(lazyLoadArtifact: true).

        • girvo 14 hours ago
          Neat! I basically just reimplemented some of the missing pieces myself. But honestly, for the kind of non-work GraphQL/Relay stuff I do, React Router with an entrypoint-like interface for routes (including children!) to feed route params into loadQuery, plus the ref to the route itself, got me close enough for my purposes.

          I’ll have a play though, sounds promising :)

          Oh this is interesting, sort of seems like the relay-3d thing in some ways?

          • rbalicki 14 hours ago
            Yeah, you can get a lot of features out of the same primitive. The primitive (called loadable fields, but you can think of it as a tool to specify a section of a query as loaded later) allows you to support:

            - live queries (call the loadable field in a setInterval)

            - pagination (pass different variables and concatenate the result)

            - defer

            - loading data in response to a click

            And if you also combine this with the fact that JS and fragments are statically associated in Relay, you can get:

            - entrypoints

            - 3D (if you just defer components within a type refinement, e.g. here we load ad items only when we encounter an item with typename AdItem: https://github.com/isographlabs/isograph/blob/627be45972fc47.... asAdItem is a field that compiles to ... on AdItem in the actual query text)

            And all of it is doable with the same set of primitives, and requiring no server support (other than a node field).

            Do let me know if you check it out! Or if you get stuck, happy to unblock you/clarify things (it's hard for me to know what is confusing to folks new to the project.)

    • presentation 19 hours ago
      Agreed on fragment masking. Graphql-codegen added support for it but in a way that unfortunately is not composable with all the other plugins in their ecosystem (client preset or bust), to the point that to get it to work nicely in our codebase we had to write our own plugins that rip code from the client preset so that we could use them as standalone plugins.

      The ecosystem in general appears to be a problem.

  • timcobb 1 day ago
    > The main problem GraphQL tries to solve is overfetching.

    this gets repeated over and over again, but if this is your take on GraphQL you definitely shouldn't be using GraphQL, because overfetching is never such a big problem that it would warrant using GraphQL.

    In my mind, the main problem GraphQL tries to solve is the same "impedance mismatch" that ORMs try to solve. ORMs do this at the data-fetching level in the BE, while GraphQL does it in the client.

    I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.

    In my opinion, GraphQL tooling never panned out enough to make GraphQL worthwhile. Hasura is very cool, but on the client side, there's not much going on... and now with AI programming you can just have your data layers generated bespoke for every application, so there's really no point to GraphQL anymore.

    • JAlexoid 15 hours ago
      > I also believe that using GraphQL without a compiler like Relay or some query/schema generation tooling is an anti-pattern. If you're not going to use a compiler/query generation tool, you probably won't get much out of GraphQL either.

      How is this easier or faster than writing a few lines of code at BFF?

    • rbalicki 21 hours ago
      If you're interested in an example of really good tooling and DevEx for GraphQL, then may I shamelessly promote this video in which I demonstrate the Isograph VSCode extension: https://www.youtube.com/watch?v=6tNWbVOjpQw

      TLDR, you get nice features like: if the field you're selecting doesn't exist, the extension will create the field for you (as a client field.) And your entire app is built of client fields that reference each other and eventually bottom out at server fields.

    • tcoff91 21 hours ago
      URQL and gql.tada are great client side tooling innovations.
    • smrtinsert 16 hours ago
      Curious what tooling you're using for GraphQL? IntelliJ has excellent support for it, as does Postman.
    • jiggawatts 22 hours ago
      > overfetching is never such a big problem

      Wait, what? Overfetching is easily one of the top 3 reasons for the enshittification of the modern web! It's one of the primary causes of the incredible slowdowns we've all experienced.

      Just go to any slow web app, press F12 and look at the megabytes transferred on the network tab. Copy-paste all text on the screen and save it to a file. Count the kilobytes of "human readable" text, and then divide by the megabytes over the wire to work out the efficiency. For notoriously slow web apps, this is often 0.5% or worse, even if filtering down to API requests only!
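
      That back-of-the-envelope calculation is just a ratio (the numbers below are made up for illustration):

```javascript
// Rough payload efficiency: useful on-screen text vs. total bytes over
// the wire, as described in the comment above.
function payloadEfficiency(visibleTextKB, wireMB) {
  return (visibleTextKB / (wireMB * 1024)) * 100; // percent
}

// 25 KB of visible text delivered via 5 MB of transfer:
console.log(payloadEfficiency(25, 5).toFixed(2) + "%"); // 0.49%
```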

      • andrewingram 17 hours ago
        It is still a major problem, yes. Interestingly, if you go back to the talks that introduced GraphQL, much of the motivation wasn’t about solving overfetching (they kinda assumed you were already doing that because it was at the peak of mobile app wave), but solving the organisational and technical issues with existing solutions.
      • rbalicki 21 hours ago
        #1 unnecessary network waterfalls

        #2 downloading the same fields multiple times

        #3 downloading unneeded data/code

        Checks out

        • switz 21 hours ago
          Hilariously – react server components largely solves all three of these problems, but developers don't seem to want to understand how or why, or seem to suggest that they don't solve any real problems.
          • andrewingram 17 hours ago
            It’s no secret that RSC was at least partially an attempt to get close to what Relay offers but without requiring you adopt GraphQL.
          • rbalicki 15 hours ago
            There's an informed critique of RSC, but no one is making it.
          • presentation 19 hours ago
            I agree though worth noting that data loader patterns in most pre-RSC react meta frameworks + other frameworks also solve for most of these problems without the complexity of RSC. But RSC has many benefits beyond simplifying and optimizing data fetching that it’s too bad HN commenters hate it (and anything frontend related whatsoever) so much.
      • gherkinnn 12 hours ago
        Overfetching does not lead to those megabytes. And it has nothing to do with the enshittification process of a middleman like Amazon fucking over both customers and sellers.
  • gavinray 1 day ago
    I'm probably about as qualified to talk about GraphQL as anyone on the internet: I started using it in late 2016, back when Apollo was just an alternate client-side state/store library.

    The internet at large seems to have a fundamental misunderstanding about what GraphQL is/is not.

    Put simply: GQL is an RPC spec that is essentially implemented as a Dict/Key-Value Map on the server, of the form: "Action(Args) -> ResultType"

    In a REST API you might have

      app.GET("/user", getUser)
      app.POST("/user", createUser)
    
    In GraphQL, you have a "resolvers" map, like:

      {
        "getUser": getUser,
        "createUser": createUser,
      }
    
    And instead of sending a GET /user request, you send everything to a single endpoint (typically POST /graphql) with "getUser" as your server action.

    The arguments and output shape of your API routes are typed, like in OpenAPI/OData/gRPC.

    That's all GraphQL is.
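
    That dispatch model can be sketched in a few lines of plain JavaScript (a toy illustration of the point above, not any real GraphQL library):

```javascript
// Resolver map: action name -> handler, as in the comment above.
const resolvers = {
  getUser: ({ id }) => ({ id, name: "Ada" }),
  createUser: ({ name }) => ({ id: 42, name }),
};

// Toy "GraphQL-ish" dispatcher: one endpoint, and the action is named
// in the request body instead of in the URL path.
function execute(request) {
  const handler = resolvers[request.action];
  if (!handler) throw new Error(`Unknown action: ${request.action}`);
  return handler(request.args);
}

console.log(execute({ action: "getUser", args: { id: 1 } })); // { id: 1, name: 'Ada' }
```

    A real server would additionally validate args and results against the schema's types.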

    • andrewingram 22 hours ago
      As someone who’s used GraphQL since mid-2015, if you haven’t used GraphQL with Relay you probably haven’t experienced GraphQL in a way that truly exploits its strengths.

      I say probably because in the last ~year Apollo shipped functionality (fragment masking) that brings it closer.

      I stand by my oft-repeated statement that I don’t use Relay because I need a React GraphQL client, I use GraphQL because I really want to use Relay.

      The irony is that I have a lot of grievances about Relay, it’s just that even with 10 years of alternatives, I still keep coming back to it.

      • maxcan 21 hours ago
        Can you elaborate? I've used URQL and Apollo with graphql code gen for type safety and am a big fan.

        What about relay is so compelling for you? I'm not disagreeing, just genuinely curious since I've never really used it.

        • andrewingram 20 hours ago
          For me it’s really about the component-level experience.

          * Relatively fine-grained re-rendering out of the box because you don’t pass the entire query response down the tree. useFragment is akin to a redux selector

          * Plays nicely with suspense and the defer fragment, deferring a component subtree is very intuitive

          * mutation updaters defined inline rather than in centralised config. This ended up being more important than expected, but having lived the reality of global cache config with our existing urql setup at my current job, I’m convinced the Relay approach is better.

          * Useful helpers for pagination, refetchable fragments, etc

          * No massive up-front representation of the entire schema needed to make the cache work properly. Each query/fragment has its own codegenned file that contains all the information needed to write to the cache efficiently. But because they’re distributed across the codebase, it plays well with bundle size for individual screens.

          * Guardrails against reuse of fragments thanks to the eslint plugin. Fragments are written to define the data contract for individual components or functions, so there’s no need to share them around. Our existing urql codebase has a lot of “god fragments” which are incredibly painful to work with.

          Recent versions of Apollo have some of these things, but only Relay has the full suite. It’s really about trying to get the exact data a component needs with as little performance overhead as possible. It’s not perfect — it has some quite esoteric advanced parts and the documentation still sucks, but I haven’t yet found anything better.

          Did my only ever podcast appearance about it a few years ago. Haven’t watched it myself because yikes, but people say it was pretty good https://youtu.be/aX60SmygzhY?si=J8rQF6Pe5RGdX1r8

        • tcoff91 21 hours ago
          Try gql tada it’s much better than graphQL codegen
          • maxcan 5 hours ago
            I did. I really wanted to like it. I think it broke due to something I was doing with fragments or splitting up code in my monorepo. I may give it a shot again, from first principles it is a better approach.
    • jayd16 1 day ago
      This seems a bit reductive as it skims over the whole query resolution part entirely.
      • verdverm 1 day ago
        Which is where the real complexity comes in
      • thom 23 hours ago
        This, for me, is a perfect description of the entirety of GraphQL tbh.
    • 8n4vidtmkvmk 1 day ago
      I think you're oversimplifying it. You've left on the part where the client can specify which fields they want.
      • verdverm 1 day ago
        That's something you should only really do in development, and then cement for production. Having open queries where an attacker can find interesting resolver interactions in production is asking for trouble
        • Aurornis 20 hours ago
          > That's something you should only really do in development, and then cement for production

          My experience with GraphQL in a nutshell: A lot of effort and complexity to support open ended queries which we then immediately disallow and replace with a fixed set of queries that could have been written as their own endpoints.

          • twodave 19 hours ago
            This is not the intended workflow. It is meant to be dynamic in nature.
        • fgkramer 23 hours ago
          But has this been thoroughly documented and are there solid libraries to achieve this?

          My understanding is that this is not part of the spec and that the only way to achieve this is to sign/hash documents on clients and server to check for correctness

          • verdverm 22 hours ago
            Well, it seems that the Apollo way of doing it now, via their paid GraphOS, is backwards of what I learned 8 years ago (there is always more than one way to do things in CS).

            At build time, the server generates random string identifiers that map onto queries, 1-to-1 and fixed, because we know exactly what we need when we are shipping to production.

            Clients can only call those random strings with some parameters; the graph is now locked down and the production server only responds to those identifiers.

            Flexibility in dev, restricted in prod
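
            A minimal sketch of that lockdown (the id format and names here are hypothetical; real setups use tooling such as Relay's persisted queries):

```javascript
// Built at compile time: opaque id -> the exact query text it stands for.
// The id format here is made up; real tooling typically uses a hash.
const persistedQueries = {
  q_8f3a: "query GetUser($id: ID!) { user(id: $id) { name } }",
};

// In production the server accepts only known ids, never raw query text,
// so attackers can't probe for interesting resolver interactions.
function resolveQuery(requestedId) {
  const query = persistedQueries[requestedId];
  if (!query) throw new Error("Unknown persisted query");
  return query;
}
```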

          • girvo 21 hours ago
            I mean yeah, in that Persisted Queries are absolutely documented and expected in production on the Relay side, and you’re a hop skip and jump away from disallowing arbitrary queries at that point if you want to

            Though you still don’t need to and shouldn’t. Better to use the well defined tools to gate max depth/complexity.

            • verdverm 20 minutes ago
              All these extra requirements are why GraphQL never really captured enough mindshare to be a commonly selected tool
        • hdjrudni 23 hours ago
          Sure, maybe you compile away the query for production but the server still needs to handle all the permutations.
          • verdverm 23 hours ago
            yup, and while they are fixed, it amounts to a more complicated code flow to reason about compared to your typical REST handler

            Seriously though, you can pretty much map GraphQL queries and resolvers onto JSONSchema and functions however you like. Resolvers are conceptually close to calling a function in a REST handler with more overhead

            I suspect the companies that see ROI from GraphQL would have found it with many other options, and it was more likely about rolling out a standard way of doing things

    • ericyd 1 day ago
      Is this relevant to the posted article? I don't see how the OP misrepresents anything about GQL.
    • scotty79 19 hours ago
      GraphQL is best if the entire React page gathers all requirements from subcomponents into one large GraphQL query and the backend converts the query to a single large SQL query that requests all the data directly from the database, where table- and row-level security make sure no private data is exposed. Then the backend converts the SQL result into a GraphQL response and React distributes the received data across subcomponents.

      Resolvers should be an exception for the data that can't come directly from the database, not the backbone of the system.
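
      A toy version of that "one GraphQL query in, one SQL statement out" idea, for a flat selection on a single table (real implementations such as Hasura or PostGraphile handle joins, arguments, and permissions, which this sketch ignores):

```javascript
// Toy compiler: a flat selection of columns on one table becomes a single
// SQL statement whose result already has the response shape as JSON.
function compileToSql(table, fields) {
  const pairs = fields.map((f) => `'${f}', ${f}`).join(", ");
  return `SELECT json_build_object(${pairs}) AS data FROM ${table}`;
}

console.log(compileToSql("users", ["id", "name"]));
// SELECT json_build_object('id', id, 'name', name) AS data FROM users
```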

    • mirekrusin 22 hours ago
      Except you can't have ie. union as argument, which means you can't construct ie. SQL/MongoDB-like where clauses.
      • rbalicki 15 hours ago
        This is a genuinely accurate critique of GraphQL. We're missing some extremely table-stakes things, like generics, discriminated unions in inputs (and in particular, discriminated unions you can discriminate and use later in the query as one of the variants), closed unions, etc.
  • verdverm 1 day ago
    I have strong agreement here and would add that reasoning about auth flow through nested resolvers is one of the biggest challenges, because it adds so much mental overhead. The reason is that a resolver may be called through completely different contexts, and you have to account for that

    The complexity and time lost to thinking is just not worth it, especially since once you ship your GraphQL app to production, you are locking down the request fields anyway (or you're keeping yourself open for more pain)

    I even wrote a zero-dependency auth helpers package and that was not enough for me to keep at it

    https://github.com/verdverm/graphql-autharoo

    Like OP says, pretty much everything GraphQL can do, you can do better without GraphQL

    • hirsin 20 hours ago
      Authz overhead for graphql is definitely a problem. At GitHub we're adding github app support to the enterprise account APIs, meaning introducing granular permissions for each graphql resource type.

      Because of the graph aspect, queries don't work til all of the underlying resources have been updated to support github apps. From a juice vs squeeze perspective it's terrible - lots of teams have to do work to update their resources (which given turnover and age they may not even be aware of) before basic queries start working, until you finally hit a critical mass at some high percentage of coverage.

      Add to all that the prevailing enterprise customer sentiment of "please anything but graphql" and it's a really hard sell - it's practically easier and better to ask teams to rebuild their APIs in REST than update the graphql.

      • verdverm 4 minutes ago
        GitHub search is among the worst out there, is this why?
      • andrewingram 17 hours ago
        I mean, the use of GraphQL for third party APIs has always been questionable wisdom. I’m about a big a GraphQL fan as it gets, but I’ve always come down on the side of being very skeptical that it’s suitable for anything beyond its primary use case — serving the needs of 1st-party UI clients.
        • hirsin 7 hours ago
          Strongly agreed.
    • cluckindan 23 hours ago
      Have you tried using a decorator for auth?

      Also, using a proper GraphQL server and not composing it yourself from primitives is usually beneficial.

      • verdverm 22 hours ago
        This was an auth extension or plugin for Apollo, forget what they called it.

        Apollo shows up in the README and package.json, so I'm not sure why you are assuming I was not using a proper implementation

  • gethly 22 hours ago
    GQL was always one of those things that sound good on the surface but in practice it never delivers and the longer you're stuck with it the worse it gets. Majority of tech is actually like this. People constantly want to reinvent the wheel but in the end, a wheel is a wheel and it will never be anything else.
    • trueno 10 hours ago
      i do a lot of data shenanigans and it's just annoying to work with when some saas goof doesn't consider that orgs are in the business of warehousing the piss out of entire platforms worth of data that they are paying saas guys a million dollars a year for just so they can marry it together with other reporting. all roads lead to damn reporting.

      so if you want to woo clients but only have graphql then you should probably build some connectors they can use to easily retrieve all their data elsewhere. i straight up don't meet business analysts who use graphql to fetch reporting data. it's always me and my engineers sidequesting to make that data available in a warehouse env.

      my prob with graphql is it forces me to get intimately familiar with platforms i want to just plug into the butt of some object storage container so it can auto ingest into the warehouse and walk away. this is easy to do when the platform who knows their data and their data structure well serves up a rest api that covers all your bases. with graphql the onus is on me to figure out what the f all data i might even need and a lot of platforms have garbage documentation. so much fun since every service/app designs their db differently. no matey, postman is not the time or place for me to familiarize myself with your data model. i shall do that in the sql gladiator arena once ive ironically over fetched and beat the shit out of your graphql resolvers and stuck the data back in a database anyways.

      if im developing apps or tools to interface with some platform graphql is fine but it ends there. in situations where i need to bring data pipelines online for my org its just annoying to work with.

      syntactically im annoyed, my engineers are annoyed, it just amuses me to no end that platforms dont know how big reporting is at orgs. they seem surprised not everyone is developing some front end app to their "modular commerce solution" and sometimes they dont even know how to answer when we ask if theres anything we should consider because we're about to hang out at the ceiling of our allowed rate limits when we bring these data pipelines online. they seem surprised that we're interested in reporting, like wtf we pay you a million a year so we can do your whatever as a service thing of fkn course we'll be reporting on the data there. how else are we gonna smoke that proverbial value add on quarterly calls?

      graphql brings a query language over http. it takes a resolver that's well designed, configured and resourced. i'd rather just rawdog a sql query over the net and have postgres or whatever transpose that to json, return that and let me figure the rest out myself. ive never needed this exactness and freedom out of an api that graphql enjoyers love. i can take whatever you throw at me and polish it into the turd needed for the job, but i generally prefer vendors who have a well thought out and comprehensive and reliable set of rest endpoints. in that scenario its just easier for me to real time it into a warehouse and immediately push off to a stream or queue that populates a postgres instance if i need to build a high traffic web app. reporting needs and application needs are met and i dont need to do bespoke jujutsu sitting in a rest client and staring at json requests to determine what data i need before i architect out some one off gql query. i look at a ton of data, and graphql is the most overengineered and unintuitive way to review a lot of it.

      its a data retrieval setup that specifically caters towards front end dev. i've done plenty of fe and i will design an app with whatever data is needed; when im building the front my headspace is completely impartial to whether or not im working with gql, rest, or a podunk db. so im here wondering why no one is just saying this: its nice and convenient when you're on the front, but its hardly a requirement to need a gql api. some like to think it solves for an organizational rift between front and backend devs, and that's just kicking the can down the road. im not sold on the empowerment of fe at the expense of teams working well together. yeah, isolate them more, we'll never need to talk to fe again. great strat

      since i happen to also work backend and on enterprise data i see a lot of angles that tightly scoped front end graphql enjoyers do not see and will likely never have to deal with ever. but we deal with it all the time, at least it's convenient for one of us. sucks that it isn't me

      • gethly 9 hours ago
        @grok: summarise this post in two sentences.
        • FrustratedMonky 5 hours ago
          GPT: "GraphQL is fine for frontend apps, but it’s a pain for enterprise data pipelines where the real job is bulk ingestion, warehousing, and reporting—work that REST APIs handle far more cleanly without forcing engineers to reverse-engineer undocumented schemas and babysit resolvers and rate limits. Organizations pay SaaS vendors to extract value through reporting, not to do bespoke GraphQL gymnastics, and the industry seems oddly surprised that data teams just want to ingest everything, dump it into a warehouse, and get on with their lives. "
          • gethly 5 hours ago
            :thumbs_up: :)
  • trashymctrash 1 day ago
    What I liked about GraphQL was the fact that I only have to add a field in one place (where it belongs in the schema) and then any client can just query it. No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“. It just cuts that discussion short.

    I also really liked that you can create a snapshot of the whole schema for integration test purposes, which makes it very easy to detect breaking changes in the API, e.g. if a nullable field becomes not-nullable.
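
    That snapshot check can be as simple as diffing field definitions between two schema dumps (a sketch with a made-up snapshot shape; dedicated tools do this against the real SDL or introspection result):

```javascript
// Each snapshot maps field name -> { nullable }. The shape is invented
// here; a real snapshot would carry the full type information.
function findBreakingChanges(oldSchema, newSchema) {
  const breaking = [];
  for (const [field, oldDef] of Object.entries(oldSchema)) {
    const newDef = newSchema[field];
    if (!newDef) breaking.push(`${field} was removed`);
    else if (oldDef.nullable && !newDef.nullable)
      breaking.push(`${field} became non-nullable`);
  }
  return breaking;
}

console.log(findBreakingChanges(
  { email: { nullable: true } },
  { email: { nullable: false } }
)); // [ 'email became non-nullable' ]
```

    Run against the committed snapshot in CI, this turns the breaking change into a failing integration test instead of a production surprise.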

    But I also agree with lots of the points of the article. I guess I am just not super in love with REST. In my experience, REST APIs were often quite messy and inconsistent in comparison to GraphQL. But of course that’s only anecdotal evidence.

    • matsemann 1 day ago
      But the first point is also its demise. I have object A, and want to know something from a related object E. Since I can ask for A-B-C-D-E myself, I just do it, even though the performance or spaghettiness takes a hit. Then I end up with a frontend that's tightly coupled to the representation at the time as well, when "in the context of A I also need to know E" could've been a specialized type hiding those details.
      • girvo 21 hours ago
        > Then ends up with frontend that's tightly coupled to the representation at the time as well, when "in the context of A I also need to know E" could've been a specialized type hiding those details.

        GraphQL clients are built to do exactly that, Relay originally and Apollo in the last year, if I’m understanding what you’re saying: any component that touches E doesn’t have to care about how you got to it, fragment masking makes short work

    • Culonavirus 23 hours ago
      > No more requests from Frontend developers like „Hey, can you also add that field to this endpoint? Then I don’t have to make multiple requests“.

      Do people actually work like this in 2025? I mean sure, I guess when you're having entire teams just for frontends and backends then yea, but your average corporate web app development? It's all full stack these days. It's often expected that you can handle both worlds (client and server) and increasingly it's even a TypeScript "shared universe" where you don't even leave the TS ecosystem (React w/ something like RR plus TS BFF w/ SQL). This last point, where frontend and backend meet, is clearly the way things are going in general. I mean these days React doesn't even beat around the bush and literally tells you to install it with a framework, no more create-react-app, server side rendering is a staple now and server side components are going to be a core concept of React within a few years tops.

      Javascript has conquered the client side of the internet, but not the server side. Typescript is going to unify the two.

      • Aurornis 20 hours ago
        > It's all full stack these days. It's often expected that you can handle both worlds (client and server)

        Full stack is common for simple web apps, where the backend is almost a thin layer over the database.

        But a lot of the products I’ve worked with have had backends that are far more complex than something you could expect the front end devs to just jump into and modify.

  • phendrenad2 3 hours ago
    GraphQL appeals to the enterprise mind in a way that few technologies have. Like SOAP/WSDL before it. It fits the model of spotlighting some small and medium problems, and offers a solution that adds complexity and makes everything take longer to build, and if you follow the implementation guidelines closely enough, they say you can solve the problems. Meanwhile, your competitor just has 300 API endpoints and runs circles around you, and you eventually acquire them to get all of your customers back.
  • JRagone 7 hours ago
    Funny that the top three threads are about how the author misses the real benefit of GraphQL, and then proceed to assert three different benefits. Perhaps its variety of applications is one worth considering :-)
    • designerarvid 4 hours ago
      No, that time it went wrong because it wasn't _true_ communism. True communism hasn't been tried yet.
  • aabhay 1 day ago
    How do GraphQL based systems solve the problem of underlying database thrashing, hot shards, ballooning inner joins, and other standard database issues? What prevents a client from writing some adversarial-level cursed query that causes massive internal state buildup?

    I’m not a database neckbeard but I’ve always been confused how GraphQL doesn’t require throwing all systems knowledge about databases out the window

    • spooneybarger 23 hours ago
      Most servers implement a heuristic for "query cost/complexity" with a configurable max. At the time the query is parsed, its cost is determined based on the heuristic and if it is over the max, the query is rejected.
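
      A minimal sketch of such a cost heuristic (the cost formula and budget here are invented for illustration; real servers expose these as configuration):

```javascript
// Toy heuristic: a field's cost grows with its nesting depth, so broad
// shallow queries stay cheap while deeply nested ones get expensive fast.
const MAX_COST = 100; // invented budget

function queryCost(selection, depth = 1) {
  return selection.reduce(
    (sum, field) =>
      sum + depth + (field.children ? queryCost(field.children, depth + 1) : 0),
    0
  );
}

// Reject at parse time, before any resolver runs.
function checkQuery(selection) {
  const cost = queryCost(selection);
  if (cost > MAX_COST) throw new Error(`Query too expensive: ${cost}`);
  return cost;
}

console.log(checkQuery([{ name: "user", children: [{ name: "id" }] }])); // 3
```
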
      • lll-o-lll 21 hours ago
        Which would be fine for internal facing, but it doesn’t sound like it would be enough in an adversarial context?
        • spooneybarger 21 hours ago
          There are a lot of public-facing GraphQL servers that use it without issue, other than frustrating non-adversarial users with complex requirements. The problem is that it is generally applied on a per-request basis.

          An adversary is going to utilize more than a single query. It mostly protects against well intentioned folks.

          Other forms of protection such as rate limiting are needed for threat models that involve an adversary.

          The same problems exist with REST but there it is easier as you can know query complexity ahead of time at end points. GraphQL has to have something to account for the unknown query complexity, thus the additional heuristics.

  • hashmap 23 hours ago
    > GraphQL isn’t bad. It’s just niche. And you probably don’t need it.

    > Especially if your architecture already solved the problem it was designed for.

    What I need is to not want to fall over dead. REST makes me want to fall over dead.

    > error handling is harder than it needs to be. GraphQL error responses are… weird.

    > Simple errors are easier to reason about than elegant ones.

    Is this a common sentiment? Looking at a garbled mash of linux or whatever tells me a lot more than "500 sorry"

    I'm only trying out GraphQL for the first time right now cause I'm new with frontend stuff, but from life on the backend having a whole class of problems, where you can have the server and client agree on what to ask for and what you'll get, be compiled away is so nice. I don't actually know if there's something better than GraphQL for that, but I wish when people wrote blogs like this they'd fill them with more "try these things instead for that problem" than simply "this thing isn't as good as you think it is you probably don't need it".

    • Dibes 23 hours ago
      If isomorphic TS is your cup of tea, tRPC is a nicer version of client server contracting than graphql in my opinion. Both serve that problem quite well though.
      • hashmap 1 hour ago
        I do like the look of this! It seems like it nicely provides that without like kicking you into React, which I have ended up having to draw a hard line against in development after my first couple experiences not only with it, but how the distributions in AI models make it a real trap to touch. I'll swap this in in one of my projects and give it a go. Thanks!
        • Dibes 27 minutes ago
          No problem! I hope you have a good time with it!
  • marcus_holmes 14 hours ago
    I ran a team a few years ago. The FE folks really wanted to use GraphQL, and the BE folks agreed, because someone had found an interesting library that made it easy. No-one had any experience of GraphQL before.

    After a month's development I found out that there was one GraphQL call at the root of each React page, and it fetched all the data for that userID in a big JSON blob, that was then parsed into a JS object and used for the rest of the life of that page. Any updates sent the entire, modified, blob back to the server and the BE updated all the tables with the changed data. This didn't cause problems because users didn't share data or depend on shared data.

    Everyone was happy because they got to put GraphQL on their resume. The application worked. We hit the required deadline. The company didn't get any traction with the application and we pivoted to something else very quickly, and was sold to private equity within two years. None of the code we wrote is running now, which is probably a good thing.

    I get the feeling, from conversations with other people using GraphQL, that this is the sort of thing that actually happens in practice. The author's arguments make sense, as do the folks defending GraphQL. But I'd suggest that 80-90% of the GraphQL actually written and running out there is the kind of crap my team turned out.

  • jensneuse 15 hours ago
    The problem with this article is that GraphQL has become much more of an enterprise solution over the last few years than a non-enterprise one. Even though the general public opinion on X and HN seems to be that GraphQL has negative ROI, it's actually growing strongly in the enterprise API management segment.

    GraphQL, in combination with Federation, has become the new standard for orchestrating microservice APIs, and the development of AI and LLMs gives it yet another push, as MCP is just another BFF and that's the sweet spot of GraphQL.

    Side note, I'm not even defending GraphQL here, it's just about facts if we're looking at who's using and adopting GraphQL. If you look around, from Meta to Airbnb, Uber, Reddit or Booking.com, Atlassian or Monday, GitHub or Gitlab, all these services use GraphQL successfully and these days, banks are adopting it to modernize API access to their Mainframe, SOAP and proprietary RPC APIs.

    How do I know, you might say? I'm working with WunderGraph (https://wundergraph.com/), one of the most innovative vendors in the market, and we're talking to enterprises every day. We've just come home from API days Paris and besides AI and LLMs, everyone in the enterprise is talking about API design, governance and collaboration, which is where GraphQL Federation is very strong and the ecosystem is very mature.

    Posts like this are super harmful for the API ecosystem because they come from inexperience and lack of knowledge.

    GraphQL can solve over fetching but that's not the reason why enterprises adopt it. GraphQL Federation solves a people problem, not a technical one. It helps orgs scale and govern APIs across a large number of teams and services.

    Just recently there was a post here on HN about the problems with dependencies between Microservices, a problem that GraphQL Federation solves very elegantly with the @requires directive.
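
    For readers who haven't seen it, @requires lets one subgraph declare that resolving a field depends on data owned by another subgraph; a sketch in federation SDL (the field names are illustrative):

```graphql
# Shipping subgraph: shippingEstimate needs `weight`, which the Products
# subgraph owns; the router fetches weight first and passes it along.
type Product @key(fields: "id") {
  id: ID!
  weight: Float! @external
  shippingEstimate: Float! @requires(fields: "weight")
}
```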

    One thing I've learned over the years is that people who complain about GraphQL are typically not working in the enterprise, and those who use the query language successfully don't usually post on social media about it. It's a tool in the API tool belt besides others like Open API and Kafka. Just go to an API conference and ask what people use.

  • roscue 22 hours ago
    I would agree that REST beats GraphQL in most cases regarding complexity, development time, security, and maintainability if the backend and frontend are developed within the same organization.

    However, I think GraphQL really shines when the backend and frontend are developed by different organizations.

    I can only speak from my experience with Shopify's GraphQL APIs. From a client-side development perspective, being able to navigate and use the extensive and (admittedly sometimes over-)complex Shopify APIs through GraphQL schemas and having everything correctly typed on the client side is a godsend.

    Just imagining offering the same amount of functionality for a multitude of clients through a REST API seems painful.

  • akio 23 hours ago
    If all your experience comes from Apollo Client and Apollo Server, as the author's does, then your opinion is more about Apollo than it is about GraphQL.

    You should be using Relay[0] or Isograph[1] on the frontend, and Pothos[2] on the backend (if using Node), to truly experience the benefits of GraphQL.

    [0]: https://relay.dev/

    [1]: https://isograph.dev/

    [2]: https://pothos-graphql.dev/

    • rbalicki 15 hours ago
      Incidentally, v0.5.0 of Isograph just came out! https://isograph.dev/blog/2025/12/14/isograph-0.5.0/ There are lots of DevEx wins in this release, such as the ability to have an autofix create fields for you. (In Isograph, these would be client fields.)
    • cluckindan 22 hours ago
      There are also GraphQL interfaces for various databases which can be useful, especially with federation to tie them together into a supergraph.
    • girvo 21 hours ago
      GraphQL Yoga is also excellent (and you get the whole Guild ecosystem of plugins etc), if you want to go schema-first
  • nmilo 21 hours ago
    This doesn't really make sense. Obviously if you combine GQL with BFF/REST you're gonna have annoying double work: you're solving the same problem twice. GQL lets you structure your backend into semantic objects, then have the frontend do whatever it wants without extra backend changes. Which lets frontend devs move way faster.
    • presentation 19 hours ago
      This is the true big benefit, the others talking about over fetching are not wrong but overfocusing on a technical merit over the operational ones.

      My frontend developers had their minds blown when they realized that because we’re using Hasura internally, the only backend work generally needed is to design the db schema and permissioning, and then once that’s done frontend developers aren’t ever blocked by anything (which is not a freedom that I would want to give to untrusted developers, hence emphasis on internal usage of GQL)

      (Unfortunately Hasura has shifted entirely into this VC-induced DDN thing that seems to be a hard break from the original product, so I can’t recommend that anymore… postgraphile is probably the way)
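
      To illustrate what that workflow looks like (table and field names here are hypothetical): once a `users` table and its role-based select permission exist, Hasura exposes a generated root field with filter/pagination arguments, so frontend devs can write queries like this without any further backend work:

      ```graphql
      # Generated by Hasura from a `users` table; the caller's
      # role-based permission rules are applied server-side.
      query ActiveUsers {
        users(where: { is_active: { _eq: true } }, limit: 20) {
          id
          name
          email
        }
      }
      ```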

  • cluckindan 23 hours ago
    There is a pattern where GraphQL really shines: using a GraphQL native DB like Dgraph (self-hosting) and integrating other services via GraphQL Federation in a GraphQL BFF.
    • eatsyourtacos 17 hours ago
      Sounds like a great way to completely lock yourself into an ecosystem you'll never be able to leave!
      • cluckindan 4 hours ago
        On the contrary, you could swap the database rather easily compared to traditional REST+SQL backends.

        Migrate data to another GraphQL DB and join its GraphQL schema to the supergraph. The only pain point could be DB-specific decorators, but even those could be implemented at the supergraph level (in the Federation server) if needed.

        Even migrating to a non-GraphQL DB is feasible: you could just write your own resolvers in a separate GraphQL server and join that to the supergraph. But that would be more of an ecosystem lock already :)

        Really, any manner of SQL database is more of an ecosystem lock than a GraphQL database behind Federation.

  • gideon60 1 day ago
    Yup, honeymoon is over. Now is the time for the adult, long-term, and productive relationship.
    • sibeliuss 1 day ago
      Exactly! Once it's working, it can be very healthy. And especially on the client. For a very, very, very long time. We started using GraphQL at the very beginning, back in 2015, and the way it has scaled over time -- across backend and frontend -- has worked amazingly well. Going on 10 years now and no slowing down.
      • c-hendricks 1 day ago
        We haven't been using it as long but it's definitely saved us from things that were "impossible" to associate in our microservice backend.
  • jwaldrip 16 hours ago
    On OpenAPI vs GraphQL: I disagree with the premise that OpenAPI achieves the same thing. GraphQL is necessarily tightly coupled to your backend — you can't design a schema that does something other than what's actually implemented. OpenAPI, on the other hand... I've seen countless implementors get it wrong. Specs drift from reality, documentation lies, and you're trusting convention. Sure, OpenAPI can do whatever you want, but for those of us who prefer convention over configuration, GraphQL's enforced contract is the whole point.

    On authentication concerns: Yes, auth in GraphQL has varied implementations with no open standard. But REST doesn't thrive here either... it's all bespoke. This is a tooling problem, not a GraphQL problem. Resolvers become your authorization boundary the same way endpoints with controller actions do in REST. Different shape, same responsibility.

    On type generation: In my experience, the codegen tooling with Apollo and Relay is incredible. I haven't seen anything on the OpenAPI side that comes close to that developer experience.
    • o1o1o1 16 hours ago
      > Specs drift from reality

      This is only an issue if the spec is maintained manually. In my opinion, best practice is to generate the specification from the actual implementation—assuming you didn’t start by hand-crafting the spec in the first place.

      If the spec is the source of truth, server and client stubs can be generated from it, which should likewise prevent this kind of drift.

      I realize that working with OpenAPI isn’t always straightforward, but most of the friction usually comes down to gaps in understanding or insufficient tooling for a given tech stack.

  • languagehacker 18 hours ago
    Production-Ready GraphQL is a pretty good read for anyone who needs to familiarize themselves with enterprise issues associated with GraphQL.

    My favorite saying on this subject is that any sufficiently expressive REST API takes on GraphQL-like properties. In other words, if you're planning a complex API, GraphQL and its related libraries often come with batteries-included conventions for things you're going to need anyway.

    I also like that GraphQL's schema-driven approach allows you to make useful declarations that can also be utilized in non-HTTP use cases (such as pub/sub) and keep much of the benefits of predictability.

    IMO the main GraphQL solutions out there should have richer integrations into OpenTelemetry so that many of the issues the author raises aren't as egregious.

    Many of the struggles people encounter with the GraphQL and React stack come from it simply being very heavyweight for many commodity solutions. Much as folks are encouraging just going the monorepo route these days, make sure that your solution can't be accommodated by server-side rendering, a simple REST API, and a little bit of vanilla JS. It might get you further than you think!

  • sheepscreek 23 hours ago
    What I've realized over time is that the idea is beautiful, and the problem it solves is partly one of API/schema discovery.

    Yet I am conflicted on whether it's a real value add for most use cases. Maybe if there are many microservices and you need a nice way to tie them all together. Or if the underlying DB (the source-of-truth data stores) can natively serve responses in GraphQL. Then you could wrap it in a thin API transformation BFF (backend for frontend) per client and call it a day.

    But in most cases, you’re just shifting the complexity + introducing more moving parts. With some discipline and standardization (if all services follow the same authentication mechanics), it is possible to get the same benefits with OpenAPI + an API catalog. Plus you avoid the layers of GraphQL transformations in clients and the server.

    100% based on my anecdotal experience supporting new projects and migrations to GraphQL in < $10B market cap companies (including a couple of startups).

  • etherfirma 17 hours ago
    I don't agree with the author on most of this. GraphQL is far better than REST in almost every way and I disagree that the server side resolvers are somehow difficult to write. In a true enterprise setting, the federation capabilities are fantastic.

    There are plenty of things to dislike about GraphQL that he doesn't touch on, like:

    - lack of input type polymorphism

    - lack of support for map types

    - lack of support for recursive data structures (e.g., BlogComments)

    - terrible fragment syntax

    • rbalicki 15 hours ago
      I would encourage you to write an educated person's critique of GraphQL, because OP's article + https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ etc. suck up all of the oxygen, and no one hears about the genuine issues like that.

      (And don't forget lack of generics, no support for interfaces with no fields, lack of closed unions/interfaces, the absolutely silly distinction between unions and interfaces, the fact that the SDL and operation language are two completely different things...)

    • devmor 15 hours ago
      > GraphQL is far better than REST in almost every way

      I hear this so often, but never do I hear more than one or one and a half ways that it is better. No one seems capable of explaining how it's "better in almost every way" without diverging to very specific examples with cutout problems.

      • rbalicki 14 hours ago
        You may be interested in checking out https://www.youtube.com/watch?v=lhVGdErZuN4, where I talk about the benefits of Relay. This isn't (currently) possible without GraphQL, so it's a pretty compelling case for GraphQL.

        But yeah, IMO, GraphQL doesn't justify itself unless you're using a client like Relay, with data masking and fragment colocation.

  • websiteapi 1 day ago
    I tried graphql with hasura and it was pretty neat, but it still just seemed easier to use RPC or REST.
  • p2detar 22 hours ago
    We have a BFF and were considering for a while to go with GQL but eventually scrapped the idea: it seemed like a lot of work on the BE side.

    But we are quite constrained on resources, so now even the BFF consumes more and more BE development time. Now we are considering letting the FE use some sort of bridge to the BE's db layer in order to directly CRUD what it needs and therefore skip the BFF API. That db layer already has all sorts of validations in place. Because the BE is Java and the FE is JS, it seems the only usable bridge here would be gRPC. Does anyone have any other ideas or has done anything in this direction?

    • NewJazz 21 hours ago
      Postgrest and hasura are like the quintessential "some sort of bridge to the BE's db layer".
    • foreigner 21 hours ago
      Consider how authorization is going to work. You can't trust the client!
  • fcpguru 1 day ago
    i wrote this a few weeks ago:

    https://gist.github.com/andrewarrow/c75c7a3fedda9abb8fd1af14...

    400 lines of GraphQL vs one REST DELETE endpoint

    • gideon60 1 day ago
      Feels like a schema design issue? If your REST backend exposes a single path to remove an item, is there any reason why your GraphQL schema doesn't expose a root mutation field taking the same arguments?
      • fcpguru 1 day ago
        yeah tell shopify, it's their api!
        • johnjames4214 1 day ago
          Exactly. If it's that verbose and painful for a public API like Shopify/GitHub (where the 'flexibility' argument is strongest), it makes even less sense for internal enterprise apps.

          We are paying that same complexity tax you described, but without the benefit of needing to support thousands of unknown 3rd-party developers.

          • n_e 1 day ago
            The issue is that the API itself is, I assume, badly designed.

            Equivalent delete operations in REST / GraphQL would be

              curl -X DELETE 'https://api.example.com/users/123'
            
            vs (a delete is a mutation in GraphQL, so it goes in a POST body)

              curl -X POST 'https://api.example.com/graphql' \
                -d '{"query": "mutation { deleteUser(id: 123) { id } }"}'
    • throwaway613745 1 day ago
      wut

      we have a mixed graphql/REST api at $DAY_JOB and our delete mutations look almost identical to our REST DELETE endpoints.

      TFA complains about needing to define types (lol), but if you're doing REST endpoints you should be writing some kind of API specification for it (swagger?). So ultimately there isn't much of a difference. However, having your types directly on your schema is nicer than just bolting on a fragile OpenAPI spec that will quickly become outdated when a dev forgets to update it when a parameter is added/removed/changed.

      • ashishb 22 hours ago
        Generate the OpenAPI spec from the backend for internal applications.

        No need to update manually. Further, you can prevent breaking changes to the spec using oasdiff

    • roscue 21 hours ago
      I feel you. But I think this might have more to do with the cursed design of the Shopify order editing API than with GraphQL itself.
  • erkok 20 hours ago
    I tend to agree with the author. GraphQL has its use cases, but it is oftentimes overused, and simplicity is sacrificed for perceived elegance or efficiency that is often not needed. "Premature optimisation is the root of all evil" comes to mind when GraphQL is picked for efficiency gains that may never become a problem in the first place.

    Facebook invented GraphQL to solve a very specific problem back in 2012 for mobile devices. Having to make multiple requests to assemble the data the frontend needs is bandwidth-constraining on mobile clients (back then over 3G networks) and harmful to battery life, so this technology solved that problem neatly. However, these days, when server-to-server communication is needed over an API, none of the problems Facebook invented the protocol for apply in the first place. If you really want maximum efficiency or speed, you probably ought to ditch HTTP entirely and communicate over some lower-level binary protocol.

    REST is not perfect either. One thing I liked about SOAP was that it had strong schema support and you got to name RPCs the way you liked, without wrangling everything around the concept of a "resource" and CRUD operations, which often becomes cumbersome to fit into the RESTful way of thinking if you need to support an RPC that "just does magic with multiple resources". These are the things I like about GraphQL. On the other hand, REST is just HTTP with some conventions, which you don't necessarily have to follow if things get in your way, and it is generally simpler by design.

    The only thing I wish for with REST is stronger vendor support for Swagger/OpenAPI specs. One of the things my team supports is a concept of Managed APIs for our product: https://docs.adaptavist.com/src/latest/managed-apis and we primarily support RESTful APIs but also a couple of GraphQL-based ones. The issue we face is that the REST API specs for many products are either missing, incomplete or simply outdated, so we have to fix them ourselves before we generate our Managed API clients, or write them by hand if the specs don't exist. It's becoming easier with AI these days, but one thing I personally regret about our transition from SOAP to REST as a community is that strong schema support became a secondary concern. We could no longer just throw an API client generator at SOAP's WSDL and generate a client; we needed to start handcrafting clients ourselves for REST, which is still an issue to this day unless perfect specs exist, which in my experience is a rather rare occurrence.

  • danielhep 20 hours ago
    I work on an open source server project that is deployed in many different contexts and with many different clients and front ends. GraphQL has allowed us not to feel bad about adding extra properties and objects to the response, because if a particular client doesn't want them, it doesn't request them and doesn't get them. It has allowed us to be much more flexible about adding features that only a few people will use.
  • adsharma 1 day ago
    It's interesting to see people use the term "GQL" to refer to GraphQL.

    https://www.gqlstandards.org/ is an ISO standard. The graph database people don't love the search engine results when they're looking for something GQL-related.

    I maintain a graph database where support for GQL often comes up.

    https://github.com/LadybugDB/ladybug/issues/6

  • ianberdin 23 hours ago
    I hated GraphQL and all the hype around it. Until I finally got how to use it and what it's for.

    I thought the same about Nest.js and Angular.

    All of them are hard to grasp at the beginning; later (a few years in), you feel it and get the value.

    Sounds stupid, but I tried to reimplement all the benefits using class transformers, zod, custom validators, and all the other packages. And I always ended up with: "alright, GraphQL does this out of the box".

    REST is nice, same as Express.js, if you're writing non-production code. The reality is you need to love the boilerplate. AI writes it anyway.

    • ianberdin 23 hours ago
      Is it user friendly for all apps? It's not. Is it easy to understand? No. For beginners? No. For legacy corps? No. For public APIs? No.
  • be_erik 1 day ago
    The appeal of GraphQL is that it eliminates the need for a BFF and easily solves service meshing. Overfetching is more of a component design problem than a performance issue.
    • lateforwork 22 hours ago
      > eliminates the need for a BFF

      Does it really? What if you need to store user preferences?

      Also, some would say BFF is easier to implement than GraphQL.

  • storus 23 hours ago
    I thought that the main selling point of GraphQL was a single query per SPA load, i.e. fetch your app state with a single query at the beginning instead of waiting for hundreds of REST calls. This also goes out of the window when you need to do some nested cursor stuff, though, i.e. open the app with the third page selected, and inside the page have the second table on the 747th row selected.
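
    For context, the "one query on app start" pattern looks something like this (the schema here is made up for illustration): the client asks for everything the first screen needs, nested, in one round trip instead of a cascade of REST calls:

    ```graphql
    # One request fetches the user, their projects, and the first
    # page of each project's tasks.
    query AppBootstrap {
      viewer {
        id
        name
        projects(first: 10) {
          id
          title
          tasks(first: 20) {
            id
            status
          }
        }
      }
    }
    ```

    As the parent notes, this works well for the initial load; restoring deep pagination state (third page, 747th row) means threading nested cursor variables through the query, which is where it gets painful.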
  • nisalperi 1 day ago
    My hot take is that if you’re using GraphQL without Relay, you’re probably not using it to its full potential. I’ve used both Relay and Apollo Client on production, and the difference is stark when the app grows!
    • rbalicki 22 hours ago
      1000%. There's almost no reason to use GraphQL unless you take advantage of data masking + fragment colocation.
      • tcoff91 20 hours ago
        I have that with URQL+gql.tada.

        What else does relay give me that URQL does not?

        • rbalicki 14 hours ago
          I may be wrong on the details, but with URQL:

          - you don't have a normalized cache. You may not want one! But if you find yourself annoyed that modifying one entity in one location doesn't automatically cause another view into that same entity to update, it's due to a lack of a normalized cache. And this is a more frequent problem than folks admit. You might go from a detail view to an edit view, modify a few things, then press the back button. You can't reuse cached data without a normalized cache, or without custom logic to keep these items in sync. At scale, it doesn't work.

          - Since you don't have a normalized cache, you presumably just refetch instead of updating items in the cache. So you will presumably re-render an entire page in response to changes. Relay will just re-render components whose data has actually changed. In https://quoraengineering.quora.com/Choosing-Quora-s-GraphQL-..., the engineer at Quora points out that as one paginates, one can get hundreds of components on the screen. And each pagination slows the performance of the page, if you're re-rendering the entire page from root.

          - Fragments are great. You really want data masking, and not just at the type level. If you stop selecting some data in some component, it may affect the behavior of other components, if they do something like JSON.stringify or Object.keys. But admittedly, type-level data masking + colocation is substantially better than nothing.

          - Relay will also generate queries for you. For example, pagination queries, or refetch queries (where you refetch part of a tree with different variables.)

          There are lots of great reasons to adopt Relay!

          And if you don't like the complexity of Relay, check out isograph (https://isograph.dev), which (hopefully) has better DevEx and a much lower barrier to entry.

          https://www.youtube.com/watch?v=lhVGdErZuN4 goes into more detail about the advantages of Relay

  • EionRobb 1 day ago
    The article pretty much sums up why I've been a bigger fan of OData than GraphQL, especially in the business cases. OData will still let you get all those same wins that GraphQL does but without a sql-ish query syntax, and sticking to the REST roots that the web works better with. Also helps that lots of Microsoft services work out of the box with OData.
    • mansa10 22 hours ago
      in my experience OData has several big issues:

      - Overly verbose endpoint & request syntax: $expand, parentheses and quotes in paths, actions etc.

      - Exposes too much filtering control by default, allowing the consumer to do "bad things" on unindexed fields without steering them towards the happy path.

      - Bad/lacking open source tooling for portals, mocks, examples, validation versus OpenAPI & graphQL.

      It all smells like unpolished MS enterprise crap with only internal MS & SAP adoption TBH.

    • rawgabbit 21 hours ago
      Is there an article that explains OData? The articles I have seen did such a poor job I came away with the impression that it was a dead tech.
  • jayd16 1 day ago
    One interesting conjecture that GQL makes, I think, is that idempotent request caching at the HTTP level is dead... or at least can't be a load-bearing assumption, because the downstream can change their query to fetch differently.

    Do we think this has turned out to hold? Is caching an API HTTP response of no value in 2025?

  • bg_tagas 22 hours ago
    GraphQL is one of those solutions in need of a problem for most people. People want to use it. But they have no need for it. The number of companies who need it could probably be counted on both hands. But people try to shoehorn it into everything.
  • ashishb 22 hours ago
    Same experience here.

    Post-honeymoon, I returned to REST+Open API

    https://ashishb.net/programming/openapi/

  • storafrid 1 day ago
    A blog post about GraphQL in an enterprise setting that fails to address the biggest GQL feature for enterprises. Not unlike most material on HN about microservices. The federated supergraph is the killer feature imo.
    • ericyd 1 day ago
      The author states that in their experience, most downstream services are REST, so adding a GQL aggregation layer on top isn't very helpful. It seems possible they would have a different opinion if they were working with multiple services that all implemented GQL schemas.
      • wrs 23 hours ago
        In that (common) case, the advantage is the frontend/app developers don’t need to know what a hot mess of inconsistent legacy REST endpoints the backend is made of, only the GQL layer does. Which also gives you some breathing room to start fixing said mess.
      • FootballMuse 1 day ago
        Being able to federate REST alongside GQL has been a value add in my experience. Apollo even has the ability to do this client side
  • frizlab 21 hours ago
    I wish I had read that before. It is very interesting and I would probably not have over-engineered my API so much (though I am not even using GraphQL).
  • mohas 1 day ago
    Using GraphQL, specifically Apollo, was one of my regrettable decisions when I was designing a system 3 years ago, one that haunts me still today with weird bugs, too much effort to upgrade the version while the previous version still has bugs, etc. And I lost the performance and simplicity of REST on top of that.
    • cluckindan 22 hours ago
      It’s one of the best pieces of software I’ve worked with. I guess simplicity is in the eye of the beholder :)
      • culi 15 hours ago
        It took me a while to learn the "right way" of doing Apollo. An alternative like Relay is much more opinionated so perhaps that would've helped me get there faster. But I eventually came around and now I agree that Apollo is an incredible piece of technology. I later worked on a REST API and found myself wanting to recreate much of Apollo. Especially the front-end caching layer.
  • pjmlp 1 day ago
    I wish. Plenty of SaaS products' main query API is GraphQL.
  • petterroea 16 hours ago
    Another problem the article doesn't mention is how much of a hassle it is to deal with permissions. It depends on the GraphQL library you are using, sure, but my general experience is that the effort needed to secure a GraphQL API grows a lot as the permissions you need get more granular.

    Then again, if you find yourself needing per-field permission checks, you probably want a separate admin API or something instead.

  • greekrich92 23 hours ago
    In over a decade of web dev experience and constant lurking on HN, I've never heard the initialism BFF. What is a Backend for Frontend, and where did that term gain traction?
  • ramon156 1 day ago
    I like that Shopify chose GraphQL and I believe their API would've been messier if they kept the REST endpoint.

    Maybe I'm missing something, but I think they did well

  • scotty79 19 hours ago
    > The main problem GraphQL tries to solve is overfetching.

    GraphQL solves another problem: communication between the frontend and backend teams. When the frontend team needs yet another field exposed, it has to communicate this to the backend team. GraphQL lets them do this with code instead of a Jira ticket, so the communication between the teams can be asynchronous and batched. No more waiting for a backend implementation each time. And if the backend exposes too much, then that's a backend problem; the frontend has nothing to do with it, so it again can be solved without granular communication between the backend and frontend teams.
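
    Much of this comes from the backend exposing a rich graph up front: when the field is already in the schema, pulling in "yet another field" is a one-line client-side change rather than a ticket (field names here are hypothetical):

    ```graphql
    # Before
    query OrderSummary($id: ID!) {
      order(id: $id) {
        id
        total
      }
    }

    # After: the frontend adds `estimatedDelivery` itself. No backend
    # change or coordination is needed, because the field is already
    # in the schema.
    query OrderSummary($id: ID!) {
      order(id: $id) {
        id
        total
        estimatedDelivery
      }
    }
    ```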

  • imperio59 21 hours ago
    GraphQL was created to solve many different problems, not just overfetching.

    These problems at the time generally were:

    1) Overfetching (yes) from the client, from monolithic REST APIs, where you get the full response payload or nothing, even when you only want one field

    2) The ability to define what to fetch from the CLIENT side, which is arguably much better since the client knows what it needs, the server does not until a client is actually implemented (so hard to fix with REST unless you hand-craft and manually update every single REST endpoint for every tiny feature in your app). As mobile devs were often enough not the same as backend devs at the time GraphQL was created, it made sense to empower frontend devs to define what to fetch themselves in the frontend code.

    3) At the time GraphQL was invented, there was a hard pivot to NoSQL backends. A NoSQL backend typically represents things as Objects with edges between objects, not as tabular data. If your frontend language (JSON) is an object-with-nested-objects or objects-with-edges-between-objects, but your backend is tables-with-rows, there is a mismatch and a potentially expensive (at Facebook's scale) translation on the server side between the two. Modeling directly as Objects w/ relationships on the server side enables you to optimize for fetching from a NoSQL backend better.

    4) GraphQL's edges/connections system (which I guess technically really belongs to Relay which optimizes really well for it) was built for infinitely-scrolling feed-style social media apps, because that's what it was optimized for (Facebook's original rewrite of their mobile apps from HTML5 to native iOS/Android coincided with the adoption of GraphQL for data fetching). Designing this type of API well is actually a hard problem and GraphQL nails it for infinitely scrolling feeds really well.

    If you need traditional pagination (where you know the total row count and you want to paginate one page at a time) it's actually really annoying to use (and you should roll your own field definitions that take in page size and page number directly), but that's because it wasn't built for that.

    5) The fragment system lets every UI component builder specify their own data needs, which can be merged together as one top-level query. This was important when you have hundreds of devs each making their own Facebook feed component types but you still want to ensure the app only fetches what it needs (in this regard Relay with its code generation is the best, Apollo is far behind)
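
    The fragment pattern being described, sketched (type and field names are illustrative): each component declares its own data needs, and the fragments compose into one top-level query, so nothing outside them gets fetched:

    ```graphql
    # Declared next to the avatar component.
    fragment AvatarFields on User {
      avatarUrl
    }

    # Declared next to the byline component; fragments can
    # include other fragments.
    fragment BylineFields on User {
      name
      ...AvatarFields
    }

    # The single top-level feed query merges every component's needs.
    query Feed {
      feed(first: 10) {
        author {
          ...BylineFields
        }
      }
    }
    ```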

    There's many other optimizations we did on top of GraphQL such as sending the server query IDs instead of the full query body, etc, that really only mattered for low-end mobile network situations etc.

    GraphQL is still an amazing example of good product infra API design. Its core API has hardly changed since day 1 and it is able to power pretty much any type of app.

    The problems aren't with GraphQL; they're with your server infra serving GraphQL, which outside of Facebook/Meta I have yet to see anyone nail really well.

    • girvo 20 hours ago
      I never worked at Meta (lots of my coworkers did though), I have to wonder if GraphQL really shines with Ent (the internal one)
  • loxs 21 hours ago
    It depends very much on the language/server you are using. In Rust, IMO, GraphQL is still the best, easiest and fastest way to have my Rust types propagated to the frontend(s) and to make sure that I have strict and maintainable contracts throughout the whole system. This is achieved via the "async_graphql" crate, which lets you define/generate the GraphQL schema in code by implementing the field handlers.

    If you are using something which requires you to write the GraphQL schema manually and then adapt both the server and the client... it's a completely different experience and not that pleasant at all.

  • exasperaited 1 day ago
    I dunno. I still really like Lighthouse (for Laravel).

    It's about the only thing about my job I still do like.

    The difference is that it is schema-first, so you are describing your API at a level that largely replaces backend-for-frontend stuff. If it's the only interface to your data you have a lot less code to write, and it interfaces beautifully with the query builder.

    I tend not to use it in unsecured contexts and I don't know if I would bother with GraphQL more generally, though WP-GraphQL has its advantages.

  • tonyhart7 1 day ago
    I don't like GraphQL; it feels strange to me (to my REST brain).

    Despite the many REST flaws I know of, and that it feels tedious sometimes, I still prefer it.

    And now with AI that can scaffold most REST, the pain points of REST are mostly "gone".

    Now that people are using tRPC a lot, I wonder: can we combine gRPC + REST into something essentially typesafe, where the client is guaranteed to understand what the response model looks like?

    • cpojer 20 hours ago
      Yes you can. Check out https://fate.technology
      • tonyhart7 15 hours ago
        Yeah, but it's a React library. I'm talking about a standard like an OpenAPI schema, but with a gRPC-style model and discovery that can auto-build a response model and inject it into most programming languages.
  • FrustratedMonky 21 hours ago
    I get the impression that GraphQL only got popular because it was backed by behemoth Facebook.

    But the other graph query language "Cypher" always seemed a lot more intuitive to me.

    Are they really trying to solve such different problems? Cypher seems much more flexible.

    • adsharma 20 hours ago
      Cypher tries to solve problems closer to storage.

      GraphQL was designed to add types and remote data fetching abstractions to a large existing PHP server side code base. Cypher is designed to work closer to storage, although there are many implementations that run cypher on top of anything ("table functions" in ladybug).

Neo4j's implementation of Cypher didn't emphasize types: you had a relatively schemaless design that made it easy to get started. The Kuzu/Ladybug implementation of Cypher, however, is closer to DuckDB SQL.

      They both have their places in computing as long as we have terminology that's clear and unambiguous.
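To make the contrast concrete, here is roughly the same read expressed in both languages (a sketch only; the labels, types, and field names are made up for illustration):

```
// Cypher: pattern-match over a stored graph (hypothetical labels/properties)
MATCH (u:User {id: 42})-[:PLACED]->(o:Order)
RETURN u.name, o.total

# GraphQL: typed field selection against a server-defined schema
query {
  user(id: 42) {
    name
    orders { total }
  }
}
```

Cypher navigates whatever graph is in storage; GraphQL can only select fields the server's schema has already declared and resolved.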

Look at the number of comments in this story that refer to GraphQL as GQL (GQL is actually an ISO standard for graph query languages).

      • FrustratedMonky 6 hours ago
Got it, I didn't realize. Checking out the docs, it looks like GQL is based on Cypher. So people in the thread were calling it GQL as the common name rather than Cypher as the original name, and I missed that.

        GQL-SQL - for queries.

        GraphQL, more for REST??

  • stevefan1999 15 hours ago
The ability to pick fields is nice, but the article fails to mention GraphQL's schema stitching and federation capability, which is its actual killer feature, one yet to be matched by any other "RPC" protocol, except perhaps gRPC, which is insanely good for backends but maybe too demanding for the web, even with grpc-web *1.

It allows you to split your GraphQL schema into multiple "sub-graphs" served by different microservices, facilitating separation of concerns at the backend level while presenting one unified graph to the frontend, giving you the best of both worlds in theory.
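A minimal federation sketch, assuming Apollo-Federation-style directives (the entity types and the split into "users" and "orders" services are hypothetical):

```graphql
# users subgraph: owns the User entity, keyed by id
type User @key(fields: "id") {
  id: ID!
  name: String!
}

# orders subgraph: owns Order and extends User with an orders field,
# resolved by this service even though User lives elsewhere
type Order {
  id: ID!
  total: Float!
}

extend type User @key(fields: "id") {
  id: ID! @external
  orders: [Order!]!
}
```

A gateway/router then composes both subgraphs into one supergraph, so the frontend queries a single schema without knowing which service resolves which field.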

Yet unfortunately, both stitching and federation are rarely used in practice, partly because people lack the fundamental ability to comprehend and manage that complexity, and partly because web development moves so fast: one product is shipped after another, year after year, old code is thrown away or left unmaintained, and systems eventually get "siloified"/solidified *2. So it's natural that a simple solution like REST with OpenAPI/Swagger beats the more complicated GraphQL, because the tech market right now just wants to ship the product quick and dirty, get the money, let it go, rinse and repeat. The last 30 years of VC have basically been that.

So let me tell you the real reason GraphQL lost: GraphQL is the good money that was driven out, because the market just needs money, regardless of whether it's good, bad, or ugly.

As an aside, I enjoy GraphQL so much in C#: https://chillicream.com/docs/hotchocolate/v15/defining-a-sch..., and they even have an EF Core integration, which is mind-boggling: https://chillicream.com/docs/hotchocolate/v15/integrations/e...

It feels so natural, and I've tried to make it run in the new single-file C#, plus dependency injection and NativeAOT... I think I posted the single-file code in their discussion tab, but I couldn't find it.

Another honorable mention is this: https://opensource.expediagroup.com/graphql-kotlin/docs/sche..., which I used with Koin and Exposed, but I eventually went back to Spring Boot and Hibernate because I needed the integrations, even though I loved having the innovation.

*1: For example, why force everyone onto HTTP/2 and thus TLS by convention? This makes gRPC development quite hard: you need self-signed keys and certificates just to start the server, and that's already a big barrier for most developers. And protobuf, being a compact, concise binary protocol, is basically unreadable without the schema/reflection/introspection, while GraphQL still returns JSON by default and lets you opt into MessagePack/CBOR based on what the HTTP request header asked for. Yes, grpc-web does return JSON and can be configured to run over h2c, but it feels more like an afterthought and isn't designed for frontend developers.

*2: Maybe the better word would be "enshittified", but enshittification is a dynamic race to the bottom, while what I mean is more like rotten to death, like a zombie. Is that too overboard?

  • hmans 21 hours ago
    [dead]