GPU memory snapshots: sub-second startup (2025)

(modal.com)

25 points | by jxmorris12 3 days ago

5 comments

  • Imustaskforhelp 22 hours ago
    Is modal running every single service inside gvisor?

    I have heard that gVisor isn't recommended for running every single production workload, but rather only front-facing services and a few other use cases, because it has some serious performance degradation, which is why most end up using Firecracker.

    This is really cool though. Does this mean that we could have AI models that are snapshotted?

    Are the checkpoint/recovery states encrypted by default, or how would that even work? What are the privacy aspects of it? I don't think even using something like Modal would give you the private LLM that many people on subreddits like LocalLLaMA want when they don't have a GPU. Of course nothing beats the privacy of having your own GPUs, but I'd be curious to know what people's thoughts are.

    • markasoftware 21 hours ago
      The thing is, Modal is running untrusted containers, so there's not really a concept of "some front facing" containers. Any container running an untrusted workload is at high risk / is "front facing".

      If Modal's customers' workloads are mainly GPU-bound, then the performance hit of gVisor isn't as big as it might be for other workloads. GPU activity does have to go through the fairly heavyweight nvproxy to be executed on the host, but most GPU activity consists of longer-lived async calls, like launching kernels, so a bit of overhead in starting those calls and retrieving their results can be tolerated.
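      A back-of-the-envelope sketch of that amortization argument, using made-up timings (the 50 µs overhead and the kernel durations below are illustrative assumptions, not measured nvproxy numbers):

```python
# Toy model: fraction of wall time spent in per-call proxy overhead.
# All durations are hypothetical, chosen only to show the amortization effect.

def overhead_fraction(per_call_overhead_us: float, work_duration_us: float) -> float:
    """Share of total time consumed by the proxy hop around one call."""
    return per_call_overhead_us / (per_call_overhead_us + work_duration_us)

# Short syscall-like call: 50 us of proxy overhead around 100 us of work.
short_call = overhead_fraction(50, 100)       # roughly a third of the time is overhead

# Long-lived async kernel launch: the same 50 us around 10 ms of GPU work.
long_call = overhead_fraction(50, 10_000)     # well under 1% overhead
```

With these assumed numbers, the proxy hop dominates a short call but nearly vanishes for a long-running kernel, which is the comment's point about GPU-bound workloads.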

      • Imustaskforhelp 20 hours ago
        Well, if someone is going to use Modal strictly for GPU purposes then I guess it's okay, but anything compute-related feels like it would have some performance issues.

        So I can agree that Modal perhaps makes sense for LLMs, but they position themselves as a sandbox, including for things like running Python code, and some of these workflows may be more compute-intensive than others, so I just wanted to point that out.

        Fly.io uses Firecracker, so I kind of like Firecracker-based applications (I tried to run Firecracker myself; it's way too hard to build your own Firecracker-based provider or anything like it), and they recently released https://sprites.dev/

        E2B is another well-known solution out there. I have talked to their developers once, and they mentioned that they run it on top of GCP.

        I am really interested in Kata Containers as well, because I think Kata can run on top of Firecracker and can hook into Docker rather quickly.

        • amitprasad 14 hours ago
          If you're not looking for GPU snapshotting, the ecosystem is relatively mature. Specifically, CPU-only VM-based snapshotting techniques are pretty well understood. However, if you need GPUs, this is a notoriously hard problem. IIRC Fly also was planning on using gVisor (EDIT: cloud-hypervisor) for their GPU cloud, but abandoned the effort [1].

          Kata runs atop many things, but is a little awkward because it creates a "pod" (a VM) inside which it creates one or more containers (runc/gVisor). Firecracker is also awkward because GPU support is pretty hard, if not impossible.

          [1] https://fly.io/blog/wrong-about-gpu/

          • Imustaskforhelp 4 hours ago
            Ohh, this makes sense now. Firecracker is good for compute-related workflows, but gVisor is better suited for GPU-related workflows. Gotcha.

            For my use cases it's usually Firecracker, but I can now see why a company like Modal would use gVisor, because they focus a lot (and I mean a lot) on providing GPU access. I think that's one of their biggest selling points; for them, general compute is secondary, and gVisor's compute performance hit is a trade-off well worth making.

            Thanks for trying to explain the situation!

  • vivzkestrel 15 hours ago
    - As someone not familiar or in the loop with all these sandbox products, I have a few quick questions for anyone reading this

    - what is the difference between docker and modal?

    - what does modal do that docker doesn't?

    - what is the cold start time comparison between both?

    - how do both of these differ from something called "Firecracker VM"?

    • BobbyTables2 13 hours ago
      I can describe firecracker.

      With Intel VMX virtualization, instruction execution is handled by the CPU, but a lot of software still has to deal with HW peripheral emulation.

      QEMU uses KVM (Intel VMX, etc.) but implements HW peripherals (display, network, disk, etc.) faithfully matching real HW, and provides a full BIOS (SeaBIOS) or UEFI firmware (EDK2) to deal with the boot process.

      Over time, Linux (and Windows) were extended to support novel “peripherals” designed for high emulation performance rather than matching any real HW product.

      Firecracker basically skips all the “real” peripheral emulation and skips the full BIOS/UEFI firmware. Instead, it implements just enough to boot modern Linux directly. It's also written in Rust instead of C. It will never support DOS, Windows 95, or probably anything else.

      The “microVM” boot path allows it to start booting Linux very quickly (sub-second), whereas a traditional QEMU VM might take 2-5 seconds. Some people are emboldened to effectively move back from containers to running applications in a VM…

      Instead of the VM being long lived, it is really just for running a single app.

      I think Kata containers had this idea for much longer, but Firecracker provides a more efficient implementation of it.
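      To make "just enough to boot modern Linux directly" concrete, here's a hedged sketch of the two payloads you'd PUT to Firecracker's REST API (served over a Unix socket) before starting a microVM. The file paths are placeholder assumptions:

```python
import json

# Minimal Firecracker guest definition: no BIOS/UEFI, no legacy peripherals.
# Paths are hypothetical placeholders; the endpoint names follow Firecracker's
# REST API (PUT /boot-source, PUT /drives/{drive_id}).

boot_source = {
    "kernel_image_path": "/images/vmlinux",          # uncompressed kernel, loaded directly
    "boot_args": "console=ttyS0 reboot=k panic=1",   # serial console, no HW probing
}

rootfs_drive = {
    "drive_id": "rootfs",
    "path_on_host": "/images/rootfs.ext4",
    "is_root_device": True,
    "is_read_only": False,
}

# These would be sent to the firecracker process's API socket, e.g.:
#   curl --unix-socket /tmp/fc.sock -X PUT http://localhost/boot-source -d '<json>'
payloads = {"/boot-source": boot_source, "/drives/rootfs": rootfs_drive}
print(json.dumps(payloads, indent=2))
```

Note there is no firmware, display, or USB anywhere in the definition: the guest sees little more than a serial console and a virtio block device, which is what makes the sub-second boot possible.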

      • vivzkestrel 11 hours ago
        Thank you very much for the detail there. I assume you would also know very well how a Docker container compares to Firecracker in terms of boot time. I understand that a container and a VM are not the same thing, but I'm just curious.
  • zackangelo 18 hours ago
    This uses Nvidia’s CUDA snapshot API under the hood, but you have to pair it with a host-side snapshot as well. Modal uses gVisor for this, which is notoriously high-overhead.

    Does anyone know of a more efficient alternative if you’re running a trusted container?

    • luiscape 1 hour ago
      Post author here: there are other projects that create a proxy for CUDA calls and use the log of CUDA operations to checkpoint/restore or live-migrate tasks. We haven’t used them. I don’t believe they are very popular or used outside specific orgs.

      This is the only API available for snapshotting NVIDIA GPU memory, afaik.

      As for needing to combine it with a host memory snapshot step: this is required because CUDA sessions are mapped to a host process, so you need to snapshot both things in order for the program to be restored correctly.

      CRIU is another project that uses the same technique (CUDA snapshot + host memory snapshot). Unlike CRIU, our snapshots work at the function level, so we’re able to take snapshots after functions have been initialized (including GPU memory), making Modal cold boots fast. With CRIU, one would have to implement this entire process oneself.
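      For reference, that CRIU-style flow can be written down as a command sequence. A minimal sketch, assuming NVIDIA's cuda-checkpoint utility and CRIU's CLI (the PID and image directory are placeholders, and this is not Modal's actual implementation; nothing is executed here):

```python
# Sketch of the CRIU-based technique the comment describes: toggle GPU state
# into host memory with NVIDIA's cuda-checkpoint utility, then let CRIU
# dump/restore the host process. The functions only build the command lists.

def checkpoint_commands(pid: int, images_dir: str) -> list[list[str]]:
    return [
        # 1. Move GPU state (device memory, CUDA contexts) into host memory.
        ["cuda-checkpoint", "--toggle", "--pid", str(pid)],
        # 2. Snapshot the whole host process, which now also holds the GPU state.
        ["criu", "dump", "-t", str(pid), "--images-dir", images_dir, "--shell-job"],
    ]

def restore_commands(pid: int, images_dir: str) -> list[list[str]]:
    return [
        # 3. Restore the host process from the image directory.
        ["criu", "restore", "--images-dir", images_dir, "--shell-job"],
        # 4. Toggle again to copy the CUDA state back onto the device.
        ["cuda-checkpoint", "--toggle", "--pid", str(pid)],
    ]
```

The function-level snapshots described in the post differ in *when* the snapshot is taken (after user code has initialized), not in this basic two-part host+GPU mechanism.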

  • erwaen98 23 hours ago
    Looks great
  • erichocean 22 hours ago
    Tried it out; the first curl after deploy gave me a 303, but the second attempt worked.