"* Full computer access: It's not sandboxed in a browser. Meka operates with OS-level controls, allowing it to handle system dialogues, file uploads, and other interactions that browser-only automation tools can't."
Can it be installed on a conventional (personal or work) desktop?

Hi there, I'm Edward, one of the co-founders. The OS the agent operates in is a fresh, confined environment, not a company or personal computer.
We explored using a containerized VM that exposed agentic controls in the open source version, but generally found that the cloud-based solutions were much faster to get started and easier to work with. Our repo contains adapters that work with several of the most popular cloud-hosted VM-as-a-service infra providers.
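To make the adapter idea above concrete, here is a minimal TypeScript sketch of what such a provider boundary could look like. It is an illustration under assumed names (ComputerProvider, ExampleCloudProvider), not the actual adapter API from the Meka repo.

```typescript
// Illustrative sketch only, not the actual adapter API from the Meka repo.
// The idea: the agent core talks to one narrow interface, and each
// cloud VM provider gets its own implementation behind it.
interface ComputerProvider {
  start(): Promise<{ sessionId: string }>;     // boot a fresh, isolated VM
  screenshot(): Promise<Buffer>;               // capture the whole desktop, not just a browser tab
  click(x: number, y: number): Promise<void>;  // OS-level mouse input
  type(text: string): Promise<void>;           // OS-level keyboard input
  stop(): Promise<void>;                       // tear the VM down when the task ends
}

// Hypothetical adapter for one hosted provider.
class ExampleCloudProvider implements ComputerProvider {
  async start() { return { sessionId: "vm-123" }; }
  async screenshot() { return Buffer.alloc(0); }        // would fetch a real screenshot
  async click(x: number, y: number) { /* call the provider's input API */ }
  async type(text: string) { /* call the provider's input API */ }
  async stop() { /* release the VM */ }
}
```

The point of the narrow interface is that swapping infra providers only means swapping the adapter class, not touching the agent loop.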
Definitely happy to be proven wrong if I've missed something here!
Also, for the task I gave it, this was the result:

I was unable to retrieve any live fare data because both airline sites became unworkable in the remote session (xxxx selectors would not stay open; xxxxsearch could not be completed before the session ended). Below is a blank comparison table you can fill in once you gather the prices manually:
Is that the current state of best-in-class computer use agents? Or is it more of a "we need to modify it until it's good for our use case" situation?
Just trying to provide helpful feedback and honest curiosity; this is awesome work.
James here from the team! Let us know if you have feedback on either our cloud product or our open-source repo. We want to push the frontier of computer use so that people can do less repetitive work.
Out of curiosity, what do you think contributed to this working better than even OpenAI agent or some of the other tools out there?
I'm not that familiar with how OpenAI and other agents like Browser Use currently work, but is this, in your opinion, the most important factor?
> An infrastructure provider that exposes OS-level controls, not just a browser layer with Playwright screenshots. This is important for performance as a number of common web elements are rendered at the system level, invisible to the browser page
IMO, the combination of having an "evaluator model" at the end to verify if the intent of the task was complete, and using multiple models that look over each other's work in every step was helpful - lots of human organization analogies there, like "trust but verify" and pair programming. Memory management was also very key.
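As a rough illustration of that propose, cross-check, then evaluate pattern, here is a minimal TypeScript sketch. It is not the actual Meka orchestration code; callModel, the model names, and the prompts are placeholder assumptions.

```typescript
// Illustrative sketch of a propose / cross-check / evaluate loop.
// NOT the actual Meka orchestration code: `callModel`, the model names,
// and the prompts are all placeholder assumptions.
async function callModel(model: string, prompt: string): Promise<string> {
  // Forward to whatever LLM provider you use; stubbed out here.
  return "";
}

async function runTask(task: string, maxSteps = 25): Promise<boolean> {
  const memory: string[] = []; // compact running notes, not full transcripts

  for (let step = 0; step < maxSteps; step++) {
    // 1. An actor model proposes the next UI action from the task + memory.
    const proposal = await callModel(
      "actor-model",
      `Task: ${task}\nNotes so far:\n${memory.join("\n")}\nPropose the next UI action, or say DONE.`
    );
    if (proposal.includes("DONE")) break;

    // 2. A second model reviews the proposal before it is executed
    //    ("trust but verify" / pair programming).
    const review = await callModel(
      "reviewer-model",
      `Task: ${task}\nProposed action: ${proposal}\nReply APPROVE, or explain what is wrong.`
    );
    if (!review.startsWith("APPROVE")) {
      memory.push(`Rejected: ${proposal} (${review})`); // the objection goes into memory
      continue;
    }

    // ...execute the approved action against the VM here...
    memory.push(`Did: ${proposal}`); // keep memory short and factual
  }

  // 3. A final evaluator checks whether the *intent* of the task was met.
  const verdict = await callModel(
    "evaluator-model",
    `Task: ${task}\nHistory:\n${memory.join("\n")}\nWas the task completed? Answer YES or NO.`
  );
  return verdict.trim().startsWith("YES");
}
```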
Nice job. It's exciting that the quality is approaching human level, but I still think we are spending way too many tokens, and the automation speed-up isn't really worth the total token price yet (unless you have very high-end GPUs and don't care about the completion speed of your tasks).
Thanks! I agree with your sentiment for a lot of basic mundane tasks, but there are a number of tasks today that are very high value yet still mundane and require manual work.
Examples include form filling, sales prospecting, lead enrichment, or even just keeping track of prices of important things.
Over time, we do expect the cost of tokens on these models to decrease drastically. Powerful vision models are still relatively new compared to generic text LLMs. Definitely a lot of room for optimizations that we expect will come quickly!
> 1. Proxy support for sites that block the user
> 2. Browser extensions support for uBlock, password managers, etc.
> 3. CAPTCHA solving

All good questions, and this is the second piece aside from the agent itself.
1. We have proxy support right now, and most traffic is already being proxied today. We might allow finer tuning of this over time (see the sketch after this list).
2. We have plans to allow this, but it's not currently available.
3. We are leveraging some anti-bot/CAPTCHA solving, but I do believe this will be a never-ending problem in some sense.
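For the browser-level side of point 1, here is a minimal sketch using Playwright's standard proxy launch option. The proxy endpoint and credentials are placeholders, and the hosted product may configure proxying at the VM or network level instead.

```typescript
// Illustrative sketch: routing the browser's traffic through a proxy with
// Playwright's standard launch option. The proxy endpoint and credentials
// are placeholders; a hosted product may configure proxying elsewhere
// (e.g. at the VM or network level).
import { chromium } from "playwright";

async function launchProxiedBrowser() {
  const browser = await chromium.launch({
    proxy: {
      server: "http://proxy.example.com:8080", // placeholder proxy endpoint
      username: "user",                        // only if the proxy requires auth
      password: "pass",
    },
  });
  return browser;
}
```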
Does it use OpenRouter for model selection? Which models did you achieve the WebArena result with? Are there any open-source models that are any good for this?
For the WebArena result, we actually used a mixture of models checking each other's work and evaluating in real time. We found the verifications to be really effective in producing accurate results. Feel free to take a look at our architectural blog post to learn more in detail: https://blog.withmeka.com/introducing-meka-an-open-source-fr...
Unfortunately, we didn't try it out with open source models, but you are welcome to pull the repo and try with any model that has good visual grounding! (I heard UI-TARS and the latest Qwen visual model are quite good)
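If you want to gauge a model's visual grounding before wiring it into the repo, a quick standalone check could look like the sketch below: send one screenshot to an OpenAI-compatible vision endpoint and ask for click coordinates. The endpoint URL and model id are placeholders, not anything from the Meka codebase.

```typescript
// Illustrative visual-grounding check, not code from the Meka repo.
// Sends one screenshot to an OpenAI-compatible vision endpoint and asks
// for pixel coordinates of a described UI element. The endpoint URL and
// model id are placeholders for whatever you host (e.g. UI-TARS, Qwen-VL).
async function groundElement(
  screenshotBase64: string,
  elementDescription: string
): Promise<{ x: number; y: number }> {
  const res = await fetch("http://localhost:8000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "your-vision-model", // placeholder model id
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: `Return JSON like {"x": 0, "y": 0} for: ${elementDescription}` },
            { type: "image_url", image_url: { url: `data:image/png;base64,${screenshotBase64}` } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content); // expects the model to return bare JSON
}
```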
Tested a few agentic browsers such as genspark, fellou, and comet. I found the vision approach less effective compared to the DOM-based approach, and it seems quite a bit slower too. Does it need a reasoning step to type a URL into the address bar?
1. Accuracy (does it do what we want)
2. Reliability (does it consistently do what we want)
3. Speed (does it do what we want fast)
We're mostly focused on solving 1 and maybe in some capacity 2.
The belief here is that models are going to get better. With that, smaller models will become more capable, which will result in speedups automatically.
So yes, I will concur that speed is probably not the main strength of our framework right now, but I believe we will get there with time.
This seems pretty scary. Just recently an AI wiped a company database: https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-d...