Always nice to see folks talking about VM snapshots - they're an extremely powerful tool for building systems of all kinds. At AWS, we use snapshots in Lambda SnapStart (along with cloning; snapshots are distributed across multiple workers), in Aurora DSQL (where we clone and restore a snapshot of Postgres on every database connection), in AgentCore Runtime, and in a number of other places.
> But Firecracker comes with a few limitations, specifically around PCI passthrough and GPU virtualization, which prevented Firecracker from working with GPU Instances
Worth mentioning that Firecracker supports PCI passthrough as of 1.13.0. But that doesn't diminish the value of Cloud Hypervisor - it's really good to have multiple options in this space with different design goals (including QEMU, which has the most features).
> We use the sk_buff.mark field — a kernel-level metadata flag on packets — to tag health check traffic.
Clever!
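For anyone curious, here's one side of what that trick can look like: the health checker sets SO_MARK on its socket, so every packet it sends carries that value in sk_buff.mark, and the eBPF counter can skip marked traffic. A minimal sketch, not necessarily how they do it; the mark value is made up, and the call needs CAP_NET_ADMIN:

```c
/* Sketch: tag all traffic from this socket with a packet mark that an
 * eBPF program can read from skb->mark and ignore when counting
 * activity. The value 0x7E57 is hypothetical and just has to match
 * whatever the eBPF side checks. Requires CAP_NET_ADMIN. */
#include <stdio.h>
#include <sys/socket.h>

int tag_health_check_socket(int fd) {
    unsigned int mark = 0x7E57; /* hypothetical mark value */
    if (setsockopt(fd, SOL_SOCKET, SO_MARK, &mark, sizeof(mark)) != 0) {
        perror("setsockopt(SO_MARK)");
        return -1;
    }
    return 0;
}
```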
> Light Sleep, which reduces cold starts to around 200ms for CPU workloads.
If you're restoring on the same box, I suspect 200ms is significantly above the best you can do (unless your images are huge). Do you know what you're spending those 200ms doing? Is it just creating the VMM process and setting up KVM? Device and networking setup? I assume you're mmapping the snapshot of memory and loading it on demand, but wouldn't expect anywhere near 200ms of page faults to handle a simple request.
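To make the lazy-loading point concrete, this is the kind of thing I mean (a rough sketch under my assumptions, not their implementation; the snapshot path is made up):

```c
/* Rough sketch of lazy snapshot restore: map the guest-memory snapshot
 * file MAP_PRIVATE so pages are pulled in by page faults on first
 * access instead of being read up front. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/snapshots/guest-mem.snap", O_RDONLY); /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* No bytes are read here; the kernel just sets up the mapping.
     * MAP_PRIVATE makes guest writes copy-on-write, leaving the
     * snapshot file untouched. */
    unsigned char *mem = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* First touch of each page triggers a fault that pulls in exactly
     * that page; a request touching a few hundred pages should cost
     * far less than 200ms, especially if the file is in page cache. */
    volatile unsigned char first = mem[0];
    (void)first;

    munmap(mem, st.st_size);
    close(fd);
    return 0;
}
```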
How does this compare to Rund? https://www.usenix.org/conference/atc22/presentation/li-ziju...
Slightly OT, but it would be cool if there was a way to run computations in some on-demand VM that cold started in 200ms, did its thing, died, and you only paid for the time you used. In essence, a Lambda that exposed a full-blown VM rather than a limited environment.
There are a few ways to approach this. If you don't mind owning the orchestration layer, this is precisely what Firecracker does.
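For example, resuming a VM from a snapshot is a single API call against Firecracker's Unix socket. A rough sketch in C (socket path is made up; the JSON field names match recent Firecracker versions, but double-check against the API spec for the version you run):

```c
/* Sketch: ask a running Firecracker process to restore and resume a
 * microVM from a snapshot via its HTTP-over-Unix-socket API.
 * Paths are hypothetical. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    const char *body =
        "{\"snapshot_path\":\"/snapshots/vm.state\","
        "\"mem_backend\":{\"backend_type\":\"File\","
        "\"backend_path\":\"/snapshots/vm.mem\"},"
        "\"resume_vm\":true}";
    char req[1024];
    int n = snprintf(req, sizeof(req),
        "PUT /snapshot/load HTTP/1.1\r\nHost: localhost\r\n"
        "Content-Type: application/json\r\n"
        "Content-Length: %zu\r\n\r\n%s", strlen(body), body);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/firecracker.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    write(fd, req, n);

    char resp[512];
    ssize_t r = read(fd, resp, sizeof(resp) - 1); /* expect 204 No Content */
    if (r > 0) { resp[r] = '\0'; printf("%s\n", resp); }
    close(fd);
    return 0;
}
```

The catch, as noted, is that everything else (scheduling, networking, idle detection, billing) is on you.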
If you don't even want to pay for that, though, scheduling unikernels on something like EC2 gets you your full VM, is cheaper, has more resources than Lambda, and doesn't have the various limitations (no GPU, timeouts, and so on).
I would kill for this as an AWS service, but I admit all my use cases are around being too frugal to pay for the time it takes to initialize an EC2 instance from zero (like CI workers, where I don't want to pay when idle, but the task could possibly run longer than the Lambda timeout).
Working on that now ;)
> Saves the full VM state to disk
Does this include the RAM for the VM? For auto-idle systems like this, where to park the RAM tends to be a significant concern. If you don't "retire" the RAM too, the idling savings are limited to CPU cycles; but if you do, the overheads of moving RAM around can easily wreck any latency budget you may have.
Curious how you are dealing with it.
How does this stack up against unikernel-based VM snapshots?
> Alongside the eBPF program, we run a lightweight daemon — scaletozero-agent — that monitors those counters. If no new packets show up for a set period, it initiates the sleep process.
> No polling. No heuristics. Just fast, kernel-level idle detection.
Isn't the `scaletozero-agent` daemon effectively polling eBPF map counters...?
Nope! There are evented eBPF map types that userspace processes can watch with epoll(2), e.g. https://docs.ebpf.io/linux/map-type/BPF_MAP_TYPE_RINGBUF/#ep...
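Concretely, the userspace loop can look something like this with libbpf (a sketch; the object file name and the map name `events` are placeholders for whatever the BPF side actually defines):

```c
/* Sketch of the evented pattern: the BPF program submits records into a
 * BPF_MAP_TYPE_RINGBUF map (via bpf_ringbuf_output()/reserve()), and
 * userspace blocks in epoll inside ring_buffer__poll() until a record
 * arrives, rather than re-reading counters on a timer. */
#include <bpf/libbpf.h>
#include <stdio.h>

static int handle_event(void *ctx, void *data, size_t len) {
    /* One callback per submitted record; the payload layout is whatever
     * the BPF program wrote into the ring buffer. */
    printf("packet activity: %zu-byte record\n", len);
    return 0;
}

int main(void) {
    struct bpf_object *obj = bpf_object__open_file("idle_detect.bpf.o", NULL);
    if (!obj || bpf_object__load(obj)) return 1;

    int map_fd = bpf_object__find_map_fd_by_name(obj, "events");
    struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
    if (!rb) return 1;

    /* Blocks in epoll_wait(2) until the BPF side submits a record;
     * timeout of -1 means wait indefinitely. */
    while (ring_buffer__poll(rb, -1) >= 0)
        ;

    ring_buffer__free(rb);
    bpf_object__close(obj);
    return 0;
}
```

That said, if the agent is reading plain per-CPU counters out of an array map on a timer (as the quoted description suggests), that is polling, just cheap polling.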