Replace LocalStack with Kumo: A Practical Migration Guide for Go Teams
A step-by-step Go migration guide from LocalStack to kumo, with AWS SDK v2 code, CI tips, and test-suite fixes.
If your Go team has outgrown heavyweight AWS emulation in local dev and CI, kumo is a compelling LocalStack alternative worth a serious look. It’s a lightweight AWS emulator written in Go, ships as a single binary, supports Docker, requires no authentication, and is designed for fast startup with optional persistence via KUMO_DATA_DIR. For teams already leaning on kumo’s AWS SDK v2 compatibility, the migration path is less about rewriting tests and more about tightening the seams between your app, your test harness, and your CI pipeline. In this guide, we’ll walk through a practical switch from LocalStack to kumo for Go services, including real test-suite changes, common gotchas, and how to structure your environment for reliable integration testing.
We’ll also cover when the migration is worth it, when it isn’t, and how to keep your feedback loops fast without sacrificing realism. If you’re already using Docker Compose for local infrastructure, or you’re exploring stronger CI integration patterns for developer tools, kumo can simplify a lot of moving parts. And because emulator migrations often surface hidden coupling, we’ll treat this like an engineering change management exercise—not just a tool swap.
Why teams replace LocalStack in the first place
Heavyweight emulation can slow down the whole team
LocalStack has been a popular choice for years because it offers broad AWS coverage, but that breadth can become overhead. Larger emulation stacks often mean longer startup times, more RAM usage, and a harder time reproducing the exact state you want in tests. When a tool becomes “the thing you work around,” rather than “the thing that helps you ship,” teams begin looking for a practical evaluation framework to compare alternatives and reduce friction. Kumo’s positioning is simple: lightweight, single-binary, Go-native, and focused on the workflows that matter most in local development and CI.
Common migration drivers from LocalStack to kumo
The most common reasons teams switch are speed, operational simplicity, and test stability. If your CI jobs spend several minutes waiting for emulators to come online, or your dev machine fans spin up just to run a handful of AWS-dependent tests, the problem isn’t just inconvenience—it’s developer throughput. For small teams, a single-binary emulator can be much easier to manage than a containerized ecosystem with multiple moving pieces, and that can matter as much as raw service coverage. The best migration candidates are teams using S3, DynamoDB, SQS, SNS, EventBridge, Lambda, and similar core AWS services for integration tests.
When you should not migrate yet
If your test suite depends on niche AWS behaviors that aren’t supported by kumo, or you rely on advanced LocalStack features such as deep edge-case coverage for a specific service, don’t force the move. A good migration plan is staged: identify the critical services you use, map unsupported capabilities, and only switch the workflows that benefit. This is the same discipline you’d apply in choosing the right SDK for your team or deciding between multiple production frameworks. Migration success is less about ideology and more about coverage, confidence, and the shape of your test pyramid.
What kumo is, and how it differs from LocalStack
Single-binary architecture and why it matters
Kumo is a lightweight AWS service emulator written in Go and distributed as a single binary. That means no multi-container orchestration just to start your local AWS surface area, and no complicated bootstrap process for dev laptops or CI agents. In practice, this means your team can version the emulator the same way you version application tools: pin it, checksum it, and update it deliberately. That simplicity is especially valuable in constrained environments, including remote CI runners and even intermittently connected workflows, much like the resilience patterns discussed in secure DevOps over intermittent links.
Persistence is optional, not mandatory
One of kumo’s best features is optional data persistence. By setting KUMO_DATA_DIR, you can keep emulator state across restarts, which can speed up iterative local development and some classes of integration tests. The important design lesson is to treat persistence as a tool, not a default assumption. For deterministic test suites, you usually want explicit setup/teardown and fresh state; for local debugging, persistence can save a lot of time. If you’ve ever evaluated the tradeoffs of stateful systems and recovery plans, the same principle applies here: persistence is useful only when you control it.
Service coverage and where it fits
Kumo's documentation indicates support for a broad set of AWS services, including S3, DynamoDB, SQS, SNS, EventBridge, Lambda, IAM, STS, CloudWatch, API Gateway, Step Functions, and more. The broad takeaway is that kumo is not a toy emulator; it’s aimed at real developer workflows and CI pipelines. That said, broad service lists do not equal exact parity with AWS, so you still need to verify the API shapes and behaviors your application depends on. Treat the emulator as a test acceleration layer, not as proof that production semantics are identical.
| Dimension | LocalStack | Kumo | Migration Impact |
|---|---|---|---|
| Startup footprint | Heavier, often slower | Single binary, lightweight | Faster CI and local boot times |
| Operational complexity | Container-based multi-service setup | Simple binary or container | Less orchestration overhead |
| Persistence model | Varies by setup | Optional via KUMO_DATA_DIR | More explicit state management |
| Go SDK v2 support | Supported depending on configuration | Designed for compatibility | Less application code churn |
| CI friendliness | Good but sometimes resource-heavy | No auth required, fast startup | Cleaner pipeline integration |
Audit your current LocalStack setup before changing anything
Inventory the services your tests actually use
Before you touch code, make a list of every AWS service your unit, integration, and end-to-end tests use. Most teams discover that they only rely on a subset of the services they thought were critical. That matters because migration work should be proportional to actual usage, not theoretical coverage. If you’re building developer tooling with measurable outputs, this is similar to the discipline behind profiling latency, recall, and cost: measure first, then optimize.
Map assumptions in your tests
Your tests may depend on subtle emulator behaviors: immediate consistency, permissive bucket policies, simplified IAM, or message delivery timing. Kumo’s no-auth model is a feature for CI, but it also means tests that accidentally depend on auth failures will no longer prove what you think they prove. Look for hard-coded endpoint URLs, sleep-based polling, and brittle resource names. These are exactly the places where migration failures hide, so document them before switching the emulator underneath your suite.
Define acceptance criteria for the migration
Set explicit success criteria: startup time reduction, CI pass rate stability, local developer boot time, and acceptable parity for key workflows. If possible, compare pipeline timing before and after the migration, and keep the benchmark honest by using the same hardware and test subset. This is a useful moment to revisit your broader engineering habits, like how you maintain feedback loops, not just your infrastructure. A good migration should improve your developer experience the same way a well-tuned workflow improves your shipping velocity—in small, measurable increments.
Set up kumo locally with Docker or a single binary
Option 1: run kumo directly on your machine
The simplest local setup is often the most effective. If your team prefers a binary-first workflow, download the kumo executable, pin the version, and add it to your dev tooling scripts. This avoids dealing with Docker daemon dependencies on every workstation. For teams that already manage dev environments through shell scripts, Makefiles, or task runners, this fits naturally into existing workflows and keeps local development snappy.
Option 2: run kumo in Docker Compose
If your team is standardized on containers, kumo can be run as a service in docker-compose.yml. That’s a good choice when you want every contributor and CI runner to share the same startup behavior. A practical pattern is to pair kumo with your app service and any supporting databases, then expose only the emulator ports your tests need. If you already treat Compose as your local system-of-record, kumo slots in cleanly alongside other service emulators and developer services, much like orchestrating a multi-service platform with route-and-escalate patterns in a single channel.
Option 3: use persistence intentionally
If you are debugging a workflow that creates objects, writes queues, or updates table state repeatedly, persistence can reduce churn. But don’t persist blindly across test runs, because test contamination is one of the fastest ways to create “green but wrong” confidence. A solid practice is to use separate data directories for local debugging and CI, and to reset or namespace state between integration test packages. That keeps the benefits of persistence without creating hidden dependencies.
```yaml
services:
  kumo:
    image: ghcr.io/sivchari/kumo:latest
    environment:
      - KUMO_DATA_DIR=/data
    ports:
      - "4566:4566"
    volumes:
      - kumo-data:/data

volumes:
  kumo-data:
```
Update your Go AWS SDK v2 client configuration
Point the SDK at the emulator endpoint
Most migrations start in one place: client configuration. With AWS SDK v2 in Go, you typically override the endpoint resolver or use a custom config to point service clients at kumo instead of AWS. The key is to centralize that setup so every test and local tool reuses the same endpoint logic. Avoid hard-coding the emulator endpoint across packages, because that makes future refactors expensive and increases the chance of one service drifting out of sync.
```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
)

// Build one shared config that points every service client at the
// emulator. Newer SDK releases prefer the per-client BaseEndpoint
// option, but the resolver below applies across all clients at once.
cfg, err := config.LoadDefaultConfig(ctx,
	config.WithRegion("us-east-1"),
	config.WithEndpointResolverWithOptions(
		aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
			return aws.Endpoint{
				URL:               "http://localhost:4566",
				SigningRegion:     "us-east-1",
				HostnameImmutable: true,
			}, nil
		}),
	),
)
```
Use explicit service clients in tests
Your test helper should construct the exact AWS clients your application uses: S3, DynamoDB, SQS, SNS, and so on. That keeps your tests close to production code and makes failures easier to diagnose. For example, if your app writes a record to DynamoDB and then emits an SQS event, create both clients in the same harness and verify each side effect independently. This is the kind of concrete, project-driven pattern that helps teams learn and ship faster, especially when you combine it with well-structured reference material like build-from-SDK-to-production guides.
Prefer small adapter functions over global state
One of the biggest migration gotchas is global AWS client initialization. If your app creates clients at package init time, replacing endpoints for tests becomes painful. Introduce a small factory or dependency injection layer so production and test environments can share constructors while differing only by configuration. That change pays off immediately with simpler tests, better reuse, and cleaner CI integration.
```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

// AWSClients bundles the service clients the application uses, so that
// production and tests share the same constructors and differ only in
// the aws.Config passed in.
type AWSClients struct {
	S3  *s3.Client
	DDB *dynamodb.Client
	SQS *sqs.Client
}

func NewAWSClients(ctx context.Context, cfg aws.Config) AWSClients {
	return AWSClients{
		S3:  s3.NewFromConfig(cfg),
		DDB: dynamodb.NewFromConfig(cfg),
		SQS: sqs.NewFromConfig(cfg),
	}
}
```
Rewrite your integration tests for determinism
Build and tear down resources explicitly
Emulator tests fail most often when they depend on leftover state. The fix is straightforward: create your bucket, table, or queue inside the test, use it, and delete it when done. That makes test failures easier to reproduce and ensures a fresh run on every CI job. In other words, treat each test as a self-contained transaction rather than assuming the emulator will magically look like production.
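Go's `t.Cleanup` already runs registered teardown functions in reverse order; the sketch below reimplements that LIFO discipline as a standalone type, purely to make the create-then-unwind pattern explicit (the bucket/queue labels are illustrative, not real AWS calls):

```go
package main

import "fmt"

// cleanupStack collects teardown functions during setup and runs them
// in reverse (LIFO) order, mirroring testing.T's Cleanup semantics:
// the last resource created is the first one destroyed.
type cleanupStack struct {
	fns []func()
}

func (c *cleanupStack) add(fn func()) { c.fns = append(c.fns, fn) }

func (c *cleanupStack) run() {
	for i := len(c.fns) - 1; i >= 0; i-- {
		c.fns[i]()
	}
}

func main() {
	var c cleanupStack
	// Simulated setup: create a bucket, then a queue that references it.
	c.add(func() { fmt.Println("delete bucket") })
	c.add(func() { fmt.Println("delete queue") })
	// Teardown deletes the queue before the bucket.
	c.run()
}
```

In real tests, prefer `t.Cleanup(func() { ... delete the resource ... })` immediately after each successful create call, which gives you the same ordering for free.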
Replace sleeps with polling and assertions
If your existing LocalStack tests use time.Sleep to “wait for eventual consistency,” it’s time to upgrade the suite. Use polling loops with deadlines and specific assertions so you know exactly what condition you’re waiting on. This reduces flakiness and usually shortens runtime because you stop waiting longer than necessary. As a bonus, your tests become more readable and much easier to debug when something breaks.
Validate behavior, not implementation details
Teams often overfit tests to emulator quirks. A better approach is to assert business outcomes: the object exists, the job was queued, the record status changed, the workflow advanced. If a test needs to know how many retries the emulator used internally, it may be testing the wrong thing. The same principle shows up in other engineering evaluations, like build-vs-buy decisions for platform features: value comes from outcomes, not incidental mechanics.
```go
import (
	"context"
	"strings"
	"testing"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/stretchr/testify/require"
)

func TestUploadInvoice(t *testing.T) {
	ctx := context.Background()
	cfg := mustTestConfig(ctx) // shared helper that points the SDK at the emulator

	s3Client := s3.NewFromConfig(cfg)
	bucket := "test-invoices"

	// Create the bucket inside the test so every run starts fresh.
	_, err := s3Client.CreateBucket(ctx, &s3.CreateBucketInput{Bucket: aws.String(bucket)})
	require.NoError(t, err)

	_, err = s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String("invoice-123.pdf"),
		Body:   strings.NewReader("pdf-bytes"),
	})
	require.NoError(t, err)

	// Assert the business outcome: the object is readable back.
	out, err := s3Client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String("invoice-123.pdf"),
	})
	require.NoError(t, err)
	defer out.Body.Close()
}
```
Handle persistence, cleanup, and test isolation carefully
Use persistence for local debugging, not default CI
Kumo’s optional persistence can be a huge productivity boost when a developer is tracing a workflow across multiple runs. But CI should usually start from a clean slate, because persistent state in pipelines can hide regressions and make failures non-reproducible. A good rule is to reserve persistence for local sandboxes and ephemeral branch debugging, while keeping automated jobs stateless unless you’re testing recovery behavior. That separation gives you the best of both worlds: speed when you need it, determinism when it matters.
Namespace resources for parallel test runs
When multiple tests run in parallel, shared resource names become a hidden source of collisions. Prefix buckets, queues, tables, and topics with a run ID or package-specific suffix so each test can operate independently. This is particularly important in CI, where one flaky suite can poison another if they share a namespace. If you’ve ever studied operational workflows like mass account migration and data removal, the lesson is the same: lifecycle hygiene is not optional.
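A small standard-library helper is enough to generate collision-free names; the prefixing scheme below (base, package, random run ID) is one reasonable convention, not something kumo mandates:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// runID returns a short random hex suffix so parallel test runs never
// share bucket, queue, table, or topic names.
func runID() string {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

// resourceName namespaces a base resource name with the owning package
// and the run ID, e.g. "invoices-orders-3f2a91bc".
func resourceName(pkg, base, id string) string {
	return fmt.Sprintf("%s-%s-%s", base, pkg, id)
}

func main() {
	id := runID()
	fmt.Println(resourceName("orders", "invoices", id))
}
```

Generate the run ID once per test binary (for example in `TestMain`) so all tests in a package share one namespace and teardown can delete everything under that prefix.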
Clean up emulator state deliberately
Even with a disposable emulator, cleanup matters. Explicit teardown keeps test runs fast, reduces surprises when debugging, and avoids training engineers to rely on “restart the world” as a solution. In practice, that means deleting objects and resources in test cleanup hooks and resetting any data directories used by persistent local environments. Make cleanup a habit, and your migration will feel safer almost immediately.
Make CI integration boring in the best way
Spin up kumo early in the pipeline
CI should spend as little time as possible on emulator boot. Because kumo is lightweight and requires no authentication, it can typically come online early and predictably, which makes it a good fit for reusable workflow steps. The pattern is simple: start kumo, wait for the health check or socket port, then run integration tests against it. That keeps your pipeline readable and makes failure modes much easier to localize.
Cache binaries or pin images
If you use the single binary directly, cache it in your CI environment or preinstall it in your runner image. If you use Docker, pin the image tag to avoid surprise behavior changes. This kind of boring infrastructure discipline is what keeps teams from losing hours to a “worked yesterday” problem. For teams optimizing for cost and footprint, smaller runtime images can also support a broader efficiency strategy, similar to the tradeoffs discussed in smaller compute footprints.
Separate unit, integration, and emulator tests
Do not run every test through the emulator. Keep unit tests fast and emulator-free, use integration tests for service behavior, and reserve full end-to-end tests for higher confidence gates. That split lets kumo do what it does best: accelerate the high-value middle of your test pyramid without becoming a bottleneck. It also makes it easier to detect whether a failure came from your code, the emulator, or a pipeline issue.
Pro Tip: If your CI time drops after switching to kumo, keep the old LocalStack job around for one or two weeks in shadow mode. That parallel run is often the fastest way to catch unsupported service behavior without blocking delivery.
Real-world gotchas teams hit during migration
IAM assumptions often need the most cleanup
Kumo’s no-auth model is great for speed, but it means tests that depended on IAM failures may need to be redesigned. Instead of proving that an invalid role is rejected by the emulator, move those checks to contract tests or production-policy validation. In many teams, this is a net win because the test suite becomes about application logic rather than an imperfect security simulation. If you need to reason about identity and permissions, treat the emulator as a functional harness, not a full IAM replica.
Event timing is usually different
EventBridge, SQS, SNS, Lambda triggers, and similar workflows often exhibit different timing profiles across emulators. Your tests should assume asynchronous behavior and poll for completion rather than expecting instant propagation. This is especially important for chained workflows where one service writes to another, because the second hop may lag just enough to produce intermittent flakes. Being explicit about timing is one of the easiest ways to improve test reliability after the migration.
Endpoint resolution bugs surface fast
Many migration failures come from clients that still point at real AWS in one code path while others use the emulator. Build one shared test config and inject it everywhere, including background workers, helper processes, and fixtures. If you have multiple binaries in the repo, make sure they all honor the same environment variables and endpoint settings. These are the kinds of seams that become obvious only after a migration, which is why staged rollout is so valuable.
Measure the performance gains and keep them honest
Track boot time, test duration, and flake rate
Before you declare victory, measure the things that matter: emulator startup time, suite runtime, and flaky test percentage. The best migration stories are not “we switched tools,” but “we cut feedback time by X and reduced failures by Y.” Even if your specific numbers vary, the direction should be obvious: faster startup, less orchestration, and fewer environment-specific failures. That’s the sort of operational improvement that compounds every day a developer runs the suite.
Compare against a stable baseline
Benchmarks only help if they are controlled. Run the same subset of tests before and after the migration, keep the same hardware, and avoid measuring cold-cache LocalStack against warm-cache kumo or vice versa. Good comparison discipline is what turns anecdotes into evidence. If you want a template for deciding what counts as “good enough,” the methodology in comparison-driven consumer decisions is a useful reminder that the baseline matters as much as the headline.
Use the speed gain to improve test design
Do not just enjoy faster tests—invest the time you save. Shorter emulator boot times can let you split huge integration suites into smaller files, add more scenario coverage, or run tests more often in pre-commit hooks. The point of a better emulator is not simply to save minutes, but to improve engineering habits. If a migration gives your team back 10 to 15 minutes a day, that can translate to much more careful development and fewer release-day surprises.
A practical migration plan for Go teams
Phase 1: dual-run and compare
Start by running the same integration tests against both LocalStack and kumo in separate CI jobs. Use this phase to identify differences in behavior, unsupported APIs, and timing issues. Do not optimize yet; just observe and log the deltas. This is where you’ll discover whether your application uses an AWS surface area that is simple enough for kumo to handle cleanly.
Phase 2: switch the easy paths first
Move low-risk services such as S3, DynamoDB, and SQS first, because they usually offer the biggest speed wins with the least semantic risk. Once those paths are stable, expand to workflows with EventBridge or Step Functions if your tests need them. Keep the rollout visible to the team so they know which environments are authoritative and which are experimental. Migration is much smoother when it feels like a deliberate project rather than a surprise.
Phase 3: remove dead emulator assumptions
After the switch, delete the hacks you added for LocalStack over the years: sleeps, retries, port-finding scripts, and brittle bootstrap logic. Simplifying the test harness is often the biggest hidden win of all. It’s the moment where the emulator stops being the topic and your actual software becomes the topic again. That’s what good developer tooling should do.
FAQ and next steps
Once teams finish the migration, the biggest benefit is usually not the tool itself but the cleaner workflow it enables. Smaller startup footprints, fewer moving parts, and clearer test boundaries make it easier to keep developer experience healthy as the codebase grows. If you want to keep improving your platform workflow, these adjacent reads can help you think more systematically about tooling, validation, and content-worthy technical operations: what LLMs look for when citing web sources, synthetic personas and faster ideation, and channel-based approval workflows.
Frequently Asked Questions
Is kumo a drop-in replacement for LocalStack?
Not exactly. For many common AWS workflows, it can be close enough to swap into your local and CI testing flow with minimal code changes, but you should still verify service-specific behavior. The safest path is to start with the services your tests depend on most, then expand coverage after you confirm parity.
Does kumo work with AWS SDK v2 in Go?
Yes. Kumo is designed for AWS SDK v2 compatibility, which makes it a strong fit for modern Go codebases. In practice, you’ll still want a clean endpoint override and a shared test client factory so every service client points at the same emulator.
Should CI use persistence?
Usually no. CI is best kept stateless so each run starts from a known clean state. Persistence is more useful for local debugging, where you want to inspect or continue a workflow across multiple runs without recreating every resource.
What are the biggest migration gotchas?
The most common issues are endpoint resolution bugs, hidden assumptions about IAM behavior, timing differences in asynchronous workflows, and leftover state from persistent local runs. The fix is to make tests explicit about setup, teardown, and assertions rather than depending on emulator magic.
How do I know if the migration was successful?
Look for faster startup, reduced CI runtime, fewer flaky integration tests, and simpler developer setup. If the team can run the same high-value tests with less friction and fewer environment-specific issues, the migration is paying for itself.
What if my service uses AWS features kumo doesn’t support?
Keep LocalStack for those paths, or split your test strategy so only supported workflows move to kumo. The right answer is often hybrid, especially in larger systems where only a subset of AWS behavior needs emulation.
Related Reading
- Satellite Connectivity for Developer Tools: Building Secure DevOps Over Intermittent Links - Useful background on resilient tooling in constrained environments.
- Choosing the Right Quantum SDK for Your Team: A Practical Evaluation Framework - A structured way to compare platform tools without guesswork.
- Profiling Fuzzy Search in Real-Time AI Assistants: Latency, Recall, and Cost - A model for measuring tradeoffs before you optimize.
- Operational Playbook: Handling Mass Account Migration and Data Removal When Email Policies Change - Great reference for lifecycle hygiene and cleanup discipline.
- Build vs Buy for EHR Features: A Decision Framework for Engineering Leaders - A reminder to evaluate tools by outcomes, not assumptions.
Daniel Mercer
Senior SEO Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.