
  • WhitneyLand 1 hour ago
    StepFun is an interesting model.

    If you haven’t heard of it yet there’s some good discussion here: https://news.ycombinator.com/item?id=47069179

  • smallerize 1 hour ago
    It looks like Unsloth had trouble generating their dynamic quantized versions of this model, deleted the broken files, then never published an update.
  • hadlock 1 hour ago
    According to openrouter.ai it looks like StepFun 3.5 Flash is the most popular model at 3.5T tokens, vs GLM 5 Turbo at 2.5T tokens. Claude Sonnet is in 5th place with 1.05T tokens, which isn't super surprising, as StepFun is about 5% the price of Sonnet.

    https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F

    • NitpickLawyer 1 hour ago
      > the most popular model

      It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast1.

      • MaxikCZ 47 minutes ago
        Exactly. When I read the headline I thought: "Ofc it is, it's free."
        • skysniper 38 minutes ago
          I should have clarified I didn't use the free version...
    • skysniper 1 hour ago
      the really surprising part to me is that, despite being the cheapest model on the board, stepfun is often able to score high on pure performance. Other models in the same price range (e.g. kimi) fail to do that.
  • skysniper 1 hour ago
    another thing from the bench I didn't expect: gemini 3.1 pro is very unreliable at using skills. Sometimes it just reads the skill and decides to do nothing, while opus/sonnet 4.6 and gpt 5.4 never have this issue.
  • dmazin 57 minutes ago
    why do half the comments here read like ai trying to boost some sort of scam?
  • skysniper 2 hours ago
    I ran 300+ benchmarks across 15 models in OpenClaw and published two separate leaderboards: performance and cost-effectiveness.

    The two boards look nothing alike. Top 3 performance: Claude Opus 4.6, GPT-5.4, Claude Sonnet 4.6. Top 3 cost-effectiveness: StepFun 3.5 Flash, Grok 4.1 Fast, MiniMax M2.7.

    The most dramatic split: Claude Opus 4.6 is #1 on performance but #14 on cost-effectiveness. StepFun 3.5 Flash is #1 cost-effectiveness, #5 performance.

    Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance.

    Rankings use relative ordering only (not raw scores) fed into a grouped Plackett-Luce model with bootstrap CIs. Same principle as Chatbot Arena — absolute scores are noisy, but "A beat B" is reliable. Full methodology: https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn
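
    If you're curious what that fitting step looks like, here's a simplified sketch (not the production code, and it skips the grouping; the battles below are made up): treat each battle as an ordering of its participants, fit Plackett-Luce strengths with MM updates, then take a percentile bootstrap over battles for the CIs.

        import random
        from collections import defaultdict

        def fit_plackett_luce(rankings, n_iters=200, tol=1e-9):
            # rankings: list of orderings, best model first (2-5 models per battle)
            items = sorted({m for r in rankings for m in r})
            w = {m: 1.0 / len(items) for m in items}
            # wins[m] = number of stages at which m was the top pick
            wins = defaultdict(int)
            for r in rankings:
                for m in r[:-1]:
                    wins[m] += 1
            for _ in range(n_iters):
                denom = defaultdict(float)
                for r in rankings:
                    for t in range(len(r) - 1):  # stage t chooses among r[t:]
                        s = sum(w[m] for m in r[t:])
                        for m in r[t:]:
                            denom[m] += 1.0 / s
                new_w = {m: wins[m] / denom[m] if denom[m] else w[m] for m in items}
                total = sum(new_w.values())
                new_w = {m: v / total for m, v in new_w.items()}
                if max(abs(new_w[m] - w[m]) for m in items) < tol:
                    return new_w
                w = new_w
            return w

        def bootstrap_cis(rankings, n_boot=500, alpha=0.05):
            # percentile bootstrap over battles
            samples = defaultdict(list)
            for _ in range(n_boot):
                resample = random.choices(rankings, k=len(rankings))
                for m, v in fit_plackett_luce(resample).items():
                    samples[m].append(v)
            cis = {}
            for m, vals in samples.items():
                vals.sort()
                cis[m] = (vals[int(alpha / 2 * len(vals))],
                          vals[int((1 - alpha / 2) * len(vals)) - 1])
            return cis

        # made-up battles: each tuple is one battle's ordering, best first
        battles = [
            ("opus-4.6", "stepfun-3.5-flash", "gemini-3.1-pro"),
            ("stepfun-3.5-flash", "gemini-3.1-pro", "kimi"),
            ("opus-4.6", "stepfun-3.5-flash", "kimi"),
            ("stepfun-3.5-flash", "kimi", "gemini-3.1-pro"),
        ]
        print(fit_plackett_luce(battles))
        print(bootstrap_cis(battles))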

    I built this as part of OpenClaw Arena — submit any task, pick 2-5 models, a judge agent evaluates in a fresh VM. Public benchmarks are free.

    • johndough 39 minutes ago
      Could you add a column for time or number of tokens? Some models take forever because of their excessive reasoning chains.
      • skysniper 26 minutes ago
        both are already shown on the battle detail page. Time is shown in the Scores table. Number of tokens is shown in the Cost details at the bottom of the Scores. (I thought most people just want to see cost in USD, so I put token details at the bottom)
    • refulgentis 2 hours ago
      Please don’t use AI to write comments; it cuts against HN guidelines.
      • skysniper 1 hour ago
        sorry, didn't know that. Here is my handwritten tldr:

        gemini is very unreliable at using skills, often just reads the skill and decides to do nothing.

        stepfun leads the cost-effectiveness leaderboard.

        rankings really depend on the task, so better to try your own.

        • refulgentis 1 hour ago
          It’s too late once it’s happened. I was curious, then when I saw the site looked vibecoded and you’re commenting with AI, I decided to stop trying to reason through the discrepancies between what was claimed and what’s on the site (ex. 300 battles vs. only a handful in site data).
          • rat9988 1 hour ago
            Too late for what? For you? maybe. There are many others that are okay with it and it doesn't diminish the quality of the work. Props to the author.
            • refulgentis 1 hour ago
              > Too late for what? For you? maybe.

              Maybe? :)

              > There are many others that are okay with it

              Correct.

              > and it doesn't diminish the quality of the work.

              It does affect people who are just now hearing about the work.

              I applaud your instinct to defend someone who put in effort. It's one of the most important things we can do.

              Another important thing we can do for them is be honest about our own reactions. It's not sunshine and rainbows on its face, but it is generous. Mostly because A) it takes time and B) other people might see red and harangue you for it.

          • skysniper 1 hour ago
            data for all 300+ battles is available at https://app.uniclaw.ai/arena/battles; every single battle is shown with raw conversation history, produced files, the judge's verdict and final scores
            • refulgentis 1 hour ago
              Thanks! Is the judge an LLM? There are lots of references to "just like LMArena", but LMArena is human-evaluated?
              • skysniper 46 minutes ago
                > Is the judge an LLM?

                Yes, the judge is one of opus 4.6, gpt 5.4, or gemini 3.1 pro (the submitter can choose). Self-judged battles (where the judge model is also one of the participants) are excluded when computing the ranking.
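
                Roughly, the exclusion looks like this (field names here are made up, not the actual code):

                    # toy battle records; field names are hypothetical
                    battles = [
                        {"judge": "opus-4.6", "participants": ["stepfun-3.5-flash", "kimi"]},
                        {"judge": "gpt-5.4", "participants": ["gpt-5.4", "gemini-3.1-pro"]},  # self-judged
                    ]
                    # drop self-judged battles before fitting the ranking
                    valid = [b for b in battles if b["judge"] not in b["participants"]]
                    print(len(valid))  # 1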

                > There's lot of references to "just like LMArena", but LMArena is human evaluated?

                Yeah, LMArena is human-evaluated, but here I found it impractical to gather enough human evaluation data, because the effort it takes to compare the results is much higher:

                - for code, the judge needs to read through it to check code quality, and actually run it to see the output

                - when producing a webpage or a document, the judge needs to check the content and layout visually

                - when anything goes wrong, the judge needs to read the execution log to see whether partial credit should be granted

                if you look at the cost details of each battle (available at the bottom of the battle detail page), the judge typically costs more than any participant model.

                if we evaluated with humans, I would say each evaluation could easily take ~5-10 min

                • refulgentis 42 minutes ago
                  Fair enough, yeah, agent evals are hard especially across N models :/

                  Thanks for replying btw, didn't mean any disrespect, good on you for not getting aggro about feedback

                  • skysniper 23 minutes ago
                    I appreciate honest feedback, best way to learn :)
    • citizenpaul 26 minutes ago
      >Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance

      This has also been my subjective experience, but it's also been objective in terms of cost.