17 comments

  • dennemark 1 hour ago
    I have been using lemonade for nearly a year already. On Strix Halo I am using nothing else - although kyuz0's toolboxes are also nice (https://kyuz0.github.io/amd-strix-halo-toolboxes/)

    Nowadays you get TTS, STT, and text & image generation, and image editing should also be possible. Besides that, it can run via ROCm or Vulkan, on CPU, GPU, and NPU. Quite a lot of options. They have a good, pragmatic pace of development. Really recommend this for AMD hardware!

    Edit: The OpenAI-compatible (and, I think, nowadays Ollama-compatible) endpoints allow me to use it in VS Code Copilot as well as in e.g. Open WebUI. More options are shown in their docs.
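
    For anyone curious, talking to an OpenAI-compatible endpoint is just a base-URL swap. A minimal stdlib-only sketch (assuming the server listens at `http://localhost:8000/api/v1`, and the model name is illustrative; check the docs for your actual host/port):

    ```python
    import json
    import urllib.request

    def build_chat_request(prompt, model, base_url="http://localhost:8000/api/v1"):
        """Build the URL and JSON body for an OpenAI-style chat completion."""
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        })
        return f"{base_url}/chat/completions", body

    def chat(prompt, model):
        """POST the request and return the assistant's reply text."""
        url, body = build_chat_request(prompt, model)
        req = urllib.request.Request(
            url,
            data=body.encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    ```

    Any client that speaks the OpenAI API (Copilot, Open WebUI, the official `openai` package with `base_url` overridden) connects the same way.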

    • syntaxing 11 minutes ago
      Have you used it with any agents or claw? If so, which model do you run?
  • moconnor 1 hour ago
    Is... is this named because they have a lemon they're trying to make the most of?
    • parsimo2010 1 minute ago
      I think saying "L-L-M" sounds kind of like "lemon," so this is an LLM-aid (sounds like lemonade).
    • TeMPOraL 1 hour ago
      If life keeps giving them lemons, they should instead invent a combustible lemon.
      • eddieroger 1 hour ago
        Do they know who you are? They're the guys who are going to blow your house up ... with the lemons.
        • LorenDB 32 minutes ago
          On an unrelated note, do you think this software supports running models from a CD?...
  • sensitiveCal 1 hour ago
    Feels like this is sitting somewhere between Ollama and something like LM Studio, but with a stronger focus on being a unified “runtime” rather than just model serving.

    The interesting part to me isn’t just local inference, but how much orchestration it’s trying to handle (text, image, audio, etc). That’s usually where things get messy when running models locally.

    Curious how much of this is actually abstraction vs just bundling multiple tools together. Also wondering if the AMD/NPU optimizations end up making it less portable compared to something like Ollama in practice.

    • RealFloridaMan 7 minutes ago
      It bundles tools, model selection, and overall management.

      It’s portable in the sense that it will install on any supported OS using the CPU or Vulkan backends, but it only ships out-of-the-box ROCm builds and supports AMD NPUs. There is a way to override which llama.cpp version it uses if you want to run it on CUDA, but that adds more overhead to manage.

      If you have an AMD machine and want to run local models with minimal headache…it’s really the easiest method.

      This runs on my NAS and handles my Home Assistant setup.

      I have a Strix Halo and another server running various CUDA cards that I manage manually by updating to bleeding-edge versions of llama.cpp or vLLM.

  • zozbot234 1 hour ago
    Note that the NPU models/kernels this uses are proprietary and not available as open source. It would be nice to develop more open support for this hardware.
    • swiftcoder 1 hour ago
      Are they? The docs say "You can also register any Hugging Face model into your Lemonade Server with the advanced pull command options"
  • rpdillon 1 hour ago
    Been running Lemonade for some time on my Strix Halo box. It dispatches out to other backends that they include, like diffusion and llama.cpp. I actually don't like their combined server, and what I use instead is their llama.cpp build for ROCm.

    https://github.com/lemonade-sdk/llamacpp-rocm

    But I'm not doing anything with images or audio. I get about 50 tokens a second with GPT OSS 120B. As others have pointed out, the NPU is used for low-powered, small models that are "always on", so it's not a huge win for the standard chatbot use case.

    • zozbot234 1 hour ago
      Even small NPUs can offload some compute from prefill which can be quite expensive with longer contexts. It's less clear whether they can help directly during decode; that depends on whether they can access memory with good throughput and do dequant+compute internally, like GPUs can. Apple Neural Engine only does INT8 or FP16 MADD ops, so that mostly doesn't help.
  • JSR_FDED 1 hour ago
    I’ve read the website and the news announcement, and I still don’t understand what it is. An alternative to LM Studio? Does it support MLX or metal on Macs? I’m assuming it will optimize things for AMD, but are you at a disadvantage using other GPUs?
    • RealFloridaMan 1 minute ago
      It’s an easy way to get started with and maintain a local AI stack that concentrates on AMD optimization. It is a one-stop install for endpoints for STT, TTS, image generation, and normal LLMs. It has its own web UI for management and for interacting with the endpoints.

      It also has endpoints that are compatible with OpenAI, Ollama, and Anthropic, so you can point any tool that speaks those APIs at it and it will just run.

    • molticrystal 1 hour ago
      >Does it support MLX or metal on Macs?

      This is answered by their Project Roadmap over on GitHub[0]:

      Recently Completed: macOS (beta)

      Under Development: MLX support

      [0] https://github.com/lemonade-sdk/lemonade?tab=readme-ov-file#...

    • zelphirkalt 1 hour ago
      I think LM Studio itself uses other software under the hood to actually run LLMs. If that other software does not support your NPU, then you are not going to get much performance out of it. This Lemonade thing, I am guessing, is one such piece of software that LM Studio could be using.
  • jmillikin 1 hour ago
    Surprising that the Linux setup instructions for the server component don't include Docker/Podman as an option; it's Snap/PPA for Ubuntu and RPM for Fedora.

    Maybe the assumption is that container-oriented users can build their own if given native packages?

    • freedomben 1 hour ago
      They do have some container options, though I definitely think they should be added to the release page: https://lemonade-server.ai/install_options.html#docker
      • zenoprax 1 hour ago
        Why should this be on the "Releases"? Shouldn't that just be for build artifacts? Pre-built containers belong on a registry, no?

        I suppose a Dockerfile could be included but that also seems unconventional.

        • freedomben 1 hour ago
          I just meant on the instructions part of the releases page (since they already have some installation instructions), not the artifacts themselves.
  • nijave 2 hours ago
    Anyone compare to ollama? I had good success with latest ollama with ROCm 7.4 on 9070 XT a few days ago
    • iugtmkbdfil834 2 hours ago
      Seconded. Currently on ollama for local inference, but I am curious how it compares.
      • LumielGR 30 minutes ago
        Lemonade uses llama.cpp for text and vision with a nightly ROCm build. It can also load and serve multiple LLMs at the same time, create images, use whisper.cpp, use TTS models, use the NPU (e.g. Strix Halo's amdxdna2), and more!
  • kouunji 1 hour ago
    I’m looking forward to trying this. Currently, Strix Halo’s NPU isn’t accessible if you’re running Linux, and previously I don’t think Lemonade’s was either. If this opens up the NPU, that would be great! Resolute Raccoon is adding NPU support as well.
    • dennemark 1 hour ago
      Maybe you have seen NPU support via FLM already: https://lemonade-server.ai/flm_npu_linux.html

      "FastFlowLM (FLM) support in Lemonade is in Early Access. FLM is free for non-commercial use, however note that commercial licensing terms apply. "

    • boomskats 1 hour ago
      I thought the NPU has been available since something like kernel 6.12?
  • freedomben 1 hour ago
    Neat, they have rpm, deb, and a companion AppImage desktop app[1]! Surprised I wasn't aware of this project before. Definitely going to give it a try.

    [1]: https://github.com/lemonade-sdk/lemonade/releases/tag/v10.0....

  • cpburns2009 1 hour ago
    Just in case anyone isn't aware: NPUs are low-power, slow, and meant for small models.
  • syntaxing 2 hours ago
    Wow, this is super interesting. This creates a local “Gemini” front end and all. It’s more or less a generative AI aggregator that installs multiple services for different generation modes. I’m excited to try this out on my Strix Halo. The biggest issue I had was image and audio gen, so this seems like a great option.
  • ilaksh 1 hour ago
    Cool but is there a reason they can't just make PRs for vLLM and llama.cpp? Or have their own forks if they take too long to merge?
  • 9dc 2 hours ago
    so... what does it do? I don't get it, lol
    • iugtmkbdfil834 2 hours ago
      Initial read suggests it is a mini Swiss Army knife, because it seems to be able to do a lot (based on website claims, anyway). The app integration seems to suggest they want to be more of a control dashboard.