A2UI: A Protocol for Agent-Driven Interfaces

(a2ui.org)

65 points | by makeramen 3 hours ago

15 comments

  • codethief 2 hours ago
    > A2UI lets agents send declarative component descriptions that clients render using their own native widgets. It's like having agents speak a universal UI language.

    (emphasis mine)

    Sounds like agents are suddenly able to do what developers have failed at for decades: Writing platform-independent UIs. Maybe this works for simple use cases but beyond that I'm skeptical.
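
    For concreteness, a message in this style might look roughly like the sketch below (purely hypothetical; the component and field names are invented, not the actual A2UI wire format):

      // Hypothetical declarative payload: the agent names components,
      // the client maps each type to its own native widget.
      type Component =
        | { type: "Text"; text: string }
        | { type: "TextField"; id: string; label: string }
        | { type: "Button"; id: string; label: string }
        | { type: "Column"; children: Component[] };

      const message: Component = {
        type: "Column",
        children: [
          { type: "Text", text: "Shipping address" },
          { type: "TextField", id: "street", label: "Street" },
          { type: "Button", id: "submit", label: "Save" },
        ],
      };

      // Toy renderer: each client decides what "native" means for it.
      function render(c: Component, indent = ""): string {
        switch (c.type) {
          case "Text": return `${indent}[label] ${c.text}`;
          case "TextField": return `${indent}[input] ${c.label}`;
          case "Button": return `${indent}[button] ${c.label}`;
          case "Column": return c.children.map((ch) => render(ch, indent + "  ")).join("\n");
        }
      }

      console.log(render(message));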

  • iristenteije 15 minutes ago
    I think GenUI will ultimately be integrated into apps more seamlessly, but even if today it lives mostly in the context of chat interfaces with prompts, it's clear that a wall of text isn't always the best UX/output, so this is already a win.
  • mbossie 2 hours ago
    So there's MCP-UI, OpenAI's ChatKit widgets and now Google's A2UI, that I know of. And probably some more...

    How many more variants are we going to introduce to solve the same problem? Sounds like a lot of wasted man-hours to me.

    • MrOrelliOReilly 2 hours ago
      I agree that it's annoying to have competing standards, but when dealing with a lot of unknowns it's better to allow divergence and exploration. It's a worse use of time to quibble over the best way to do things when we have no meaningful data yet to justify any decision. Companies need freedom to experiment on the best approach for all these new AI use cases. We'll then learn what is great/terrible in each approach. Over time, we should expect and encourage consolidation around a single set of standards.
      • pscanf 1 hour ago
        > when dealing with a lot of unknowns it's better to allow divergence and exploration

        I completely agree, though I'm personally sitting out all of these protocols/frameworks/libraries. In 6 months' time half of them will have been abandoned, and the other half will have morphed into something very different and incompatible.

        For the time being, I just build things from scratch, which–as others have noted¹–is actually not that difficult, gives you understanding of what goes on under the hood, and doesn't tie you to someone else's innovation pace (whether it's higher or lower).

        ¹ https://fly.io/blog/everyone-write-an-agent/

    • mystifyingpoi 29 minutes ago
      > Sounds like a lot of wasted man-hours to me

      Sounds like a lot of people got paid because of it. That's a win for them. It wasn't their decision; it was the company's decision to take part in the race. Most likely there will be more than one winner anyway.

  • pedrozieg 1 hour ago
    We’ve had variations of “JSON describes the screen, clients render it” for years; the hard parts weren’t the wire format, they were versioning components, debugging state when something breaks on a specific client, and not painting yourself into a corner with a too-clever layout DSL.

    The genuinely interesting bit here is the security boundary: agents can only speak in terms of a vetted component catalog, and the client owns execution. If you get that right, you can swap the agent for a rules engine or a human operator and keep the same protocol. My guess is the spec that wins won’t be the one with the coolest demos, but the one boring enough that a product team can live with it for 5-10 years.
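
    For illustration, the catalog boundary could be as simple as the sketch below (a minimal sketch of the idea with invented names, not the actual spec):

      // The agent can only name entries from a vetted catalog, and the
      // client owns all execution.
      const CATALOG = new Set(["Text", "Button", "TextField", "Column"]);

      type Node = { type: string; action?: string; children?: Node[] };

      // Reject anything outside the catalog before it reaches a renderer.
      function validate(node: Node): boolean {
        if (!CATALOG.has(node.type)) return false;
        return (node.children ?? []).every(validate);
      }

      // Actions are client-defined handlers; the agent only references
      // them by name, which is why a rules engine or a human operator
      // could send the same messages.
      const handlers: Record<string, () => void> = {
        submitForm: () => console.log("client-side submit logic"),
      };

      function dispatch(actionName: string): void {
        const handler = handlers[actionName];
        if (!handler) throw new Error(`unknown action: ${actionName}`);
        handler();
      }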

  • wongarsu 1 hour ago
    I wouldn't want this anywhere near production, but for rapid prototyping this seems great. People famously can't articulate what they want until they get to play around with it. This lets you skip right to the part where you realize they want something completely different from what was first described, without having to build the first iteration by hand.
  • _pdp_ 1 hour ago
    I am a fan of using markdown to describe the UI.

    It is simple, effective, and feels more native to me than some rigid data structure designed for very specific use cases that may not fit your own problem well.

    Honestly, we should think of Emacs when working with LLMs and try to apply the same philosophy. I am not a fan of Emacs per se, but the parallels are there. Everything is a file and everything is text in a buffer. The text can be rendered in various ways depending on the consumer.

    This is also the philosophy that we use in our own product, and it works remarkably well for a diverse set of customers. I have not encountered anything that cannot be modelled in this way. It is simple, effective, and it allows for a great degree of flexibility when things are not going as well as planned. It works well with streaming too (streaming parsers are not so difficult to build on simple text structures, and we have been doing this for ages), and LLMs are trained very well to produce this type of output, vs anything custom that has not yet been seen or adopted by anyone.
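
    To sketch what I mean about streaming (hypothetical and heavily simplified; real markdown parsing is messier):

      // Line-oriented streaming parser: markdown-ish text in, UI events
      // out, emitted as chunks arrive. Each consumer renders the same
      // stream however it likes.
      type UiEvent =
        | { kind: "heading"; text: string }
        | { kind: "item"; text: string }
        | { kind: "paragraph"; text: string };

      function* parseLines(lines: Iterable<string>): Generator<UiEvent> {
        for (const line of lines) {
          if (line.startsWith("# ")) yield { kind: "heading", text: line.slice(2) };
          else if (line.startsWith("- ")) yield { kind: "item", text: line.slice(2) };
          else if (line.trim().length > 0) yield { kind: "paragraph", text: line };
        }
      }

      for (const ev of parseLines(["# Results", "- first", "- second"])) {
        console.log(ev.kind, ev.text);
      }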

    Besides, given that LLMs are getting good at coding and the browser can render iframes in seamless mode, a better and more flexible approach would be to use HTML, CSS and JavaScript instead of what Slack has been doing for ages with their Block Kit API, which we know is very rigid and frustrating to work with. I get why you might want a data structure for UI in order to cover CLI tools as well, but at the end of the day browsers and CLIs are completely different things, and I do not believe you can meaningfully make it work for both unless you are also prepared to dumb it down and target only the lowest common denominator.

  • tasoeur 2 hours ago
    In an ideal world, people would be implementing UI/UX accessibility in the first place, and a lot of these problems would already be solved. But one can also hope that the motivation to get agents running on these things could actually bring a lot of accessibility features to newer apps.
  • qsort 2 hours ago
    This is very interesting if used judiciously; I can see many use cases where I'd want interfaces to be drawn dynamically (e.g. charts for business intelligence).

    What scares me is that even without arbitrary code generation, there's the potential for hallucinations and prompt injection to hit hard if a solution like this isn't sandboxed properly. An automatically generated "confirm purchase" button like in the example shown is... probably not something I'd leave entirely unsupervised just yet.
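
    One mitigation sketch (hypothetical; the action names and policy are invented, not part of A2UI): keep the sensitive part of the flow entirely client-side.

      // The client, not the agent, decides which actions need a human
      // in the loop before executing.
      const REQUIRES_CONFIRMATION = new Set(["confirmPurchase", "deleteAccount"]);

      async function runAction(
        name: string,
        confirmWithUser: () => Promise<boolean>, // native dialog, never agent-rendered
      ): Promise<void> {
        if (REQUIRES_CONFIRMATION.has(name)) {
          const ok = await confirmWithUser();
          if (!ok) return;
        }
        console.log(`executing vetted client-side handler for ${name}`);
      }

      // The stand-in below always confirms; a real client would show an
      // OS/browser dialog here.
      runAction("confirmPurchase", async () => true);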

  • jy14898 2 hours ago
    I never want to unknowingly use an app that's driven this way.

    However, I'm happy it's happening because you don't need an LLM to use the protocol.

  • evalstate 2 hours ago
    I quite like the look of this one - seems to fit somewhere between the rigid structure of MCP Elicitations and the freeform nature of MCP-UI/Skybridge.
  • raybb 2 hours ago
    Is there a standard protocol for the way things like Cline sometimes give you multiple-choice buttons to click on? Or how does that compare to something like this?
  • mentalgear 39 minutes ago
    The way to do this would be to come together and design a common W3C-like standard.
  • nsonha 43 minutes ago
    What's agent/AI-specific about this? Seems like just backend-driven UI.
  • lowsong 1 hour ago
    > A2UI lets agents send declarative component descriptions that clients render using their own native widgets. It's like having agents speak a universal UI language.

    Why the hell would anyone want this? Why on earth would you trust an LLM to output a UI? You're just asking for security bugs, UI impersonation attacks, terrible usability, and more. This is a nightmare.

    • vidarh 1 hour ago
      If done in chat, it's just an alternative to talking to you freeform. Consider Claude Code's multiple-choice questions, which you can trigger by asking it to invoke the right tool, for example.
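
      Roughly this shape (a hypothetical sketch; not any real tool's schema):

        // The model proposes choices; the client renders real buttons and
        // returns only the selected option, so the model never controls
        // arbitrary UI.
        const question = {
          prompt: "Which environment should I deploy to?",
          options: ["staging", "production", "cancel"],
        };

        function answer(q: typeof question, picked: number): string {
          return q.options[picked];
        }

        console.log(answer(question, 0)); // "staging"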
      • DannyBee 17 minutes ago
        None of the issues go away just because it's in chat?

        Freeform output looks and acts like text, except for the set of things that someone vetted and made work.

        If the interactive diagram or UI you click on now owns you, it doesn't matter if it was inside the chat window or outside the chat window.

        Now, in this case, it's not arbitrary UI. But if you believe that the parsing/validation/rendering/two-way data binding/incremental composition (the spec requires that you be able to build up UI incrementally) of these components: https://a2ui.org/specification/v0.9-a2ui/#standard-component... as transported/rendered/etc. by NxM combinations of implementations (there are 4 renderers and a bunch of transports right now) is not going to have security issues, I've got a bridge to sell you.

        Here, I'll sell it to you in Gemini; just click a few times on the "totally safe text box" for me before you sign your name.

        My friend once called something a "babydoggle": something you know will be a boondoggle, but is still in its small formative stages.

        This feels like a babydoggle to me.