This section contains answers to frequently asked questions about mirrord. Here you'll find information on common issues, general usage, limitations, and comparisons with other tools. If you encounter a problem or want to better understand how mirrord works, check here for quick solutions and clarifications.
What are the limitations to using mirrord?
mirrord works by hooking libc, so it should work with any language or framework that uses libc (the vast majority). This includes Rust, Node.js, Python, Java, Kotlin, Ruby, and others. mirrord also supports Go, which doesn't use libc.
Yes, mirrord works exactly the same way with and without a service mesh installed.
Yes, mirrord works with OpenShift. However, OpenShift usually ships with a default security policy that doesn't let mirrord create pods. To fix this, you would need to tweak your SCC settings - more information here. If you'd rather keep the default security policies, we recommend trying out mirrord for Teams.
No, mirrord needs to be able to leverage dynamic linking in order to work. This means static binaries are not supported.
To check a binary, you can use the file <FILE_NAME> command - dynamically linked binaries will look like this:
marvin@heart-of-gold:~$ file /usr/bin/ls
/usr/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=36b86f957a1be53733633d184c3a3354f3fc7b12, for GNU/Linux 3.2.0, stripped
And static binaries will look like this:
marvin@heart-of-gold:~/MetalBear$ file some_static_binary
some_static_binary: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=2e1eda62d5f755377435c009e856cd7b9836734e, for GNU/Linux 3.2.0, not stripped
Go binaries are sometimes statically linked by default, so it's important to check, and to compile dynamically if necessary. See this section in Common Issues for more info.
How does mirrord compare to other solutions?
When you use a remote debugger, you still have to deploy new code to the cluster. When you plug local code into the cloud with mirrord, you don't have to wait for cloud deployment. Using mirrord is also less disruptive to the cluster, since the stable version of the code is still running and handling requests.
Our assumption is that some environments are too complex to run wholly on your local machine (or their components are just not virtualizable). If that's the case with your environment, you can only run the microservice you're currently working on locally, but connect it to your cloud environment with mirrord. Note that mirrord can also be used to connect your non-containerized process to your local Kubernetes cluster.
mirrord can be a great alternative to Telepresence. The main differences are:
mirrord works on the process level, meaning it doesn't require you to run a "daemon" locally and it doesn't change your local machine settings. For example, if you run another process, it won't be affected by mirrord.
This means that you can run multiple services at the same time, each in a different context and without needing to containerize them.
mirrord doesn't require you to install anything on the cluster.
mirrord duplicates traffic and doesn't intercept/steal it by default.
More details can be found in this GitHub discussion.
General questions about mirrord.
First and most important, mirrord doesn't just mirror traffic. It does that, but also a lot more.
mirrord lets you connect a process on your development machine to your Kubernetes cluster. It does this by injecting itself into the local process (no code changes needed!), intercepting all of the input and output points of the process - network traffic, file access, and environment variables - and proxying them to the cluster. This mechanism is discussed in more detail here.
When you run mirrord, you select a Target - this is the Kubernetes Pod or Deployment whose context you want your local code to run in. For example, if you have a staging cluster running the latest stable version of all of your microservices, and you're now coding the next version of one of these microservices, you'd select as your Target the Pod or Deployment running the stable version of that microservice in staging. The following things will then happen:
The Target's environment variables will be made available to the local process.
When the local process tries to read a file, it will be read from the Target's filesystem instead.
Traffic reaching the remote Target will reach your locally running process (this incoming traffic can either be mirrored, intercepted entirely, or intercepted based on a filter you define).
Traffic sent out from your local process will be sent out from the Target instead, letting it reach any endpoint that's accessible to the Target, and the response will be sent back to your local process.
By proxying all of your local process' input and output points in this way, mirrord makes it "think" it's running in the cloud, which lets you test it in cloud conditions:
Without having to run your entire architecture locally
Without going through lengthy CI and deployment processes
Without deploying untested code to the cloud environment - the stable version of the code is still running in the cluster and handling requests - letting multiple users test on the same cluster without queueing to use it or breaking the cluster for everyone else.
mirrord is free and open source (MIT License). Our paid offering, mirrord for Teams, includes a Kubernetes operator that acts as a control plane for mirrord. You can read more about it here.
Yes, you can use the --steal flag to intercept traffic instead of duplicating it.
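The same behavior can be set in a mirrord configuration file via the feature.network.incoming option, for example:

```json
{
  "feature": {
    "network": {
      "incoming": "steal"
    }
  }
}
```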
No, mirrord doesn't install anything on the cluster, nor does it have any persistent state. It does spawn a short-lived pod/container to run the proxy, which is automatically removed when mirrord exits. mirrord works using the Kubernetes API, so the only prerequisite to start using mirrord is to have kubectl configured for your cluster.
If you have any restrictions on pulling external images inside your cluster, you have to allow pulling the ghcr.io/metalbear-co/mirrord image.
By letting you mirror traffic rather than intercept it, the stable version of the code can still run in the cluster and handle requests.
By letting you control which functionality runs locally and which runs in the cloud, you can configure mirrord in the way that's safest for your architecture. For example, you can configure mirrord to read files and receive incoming traffic from the cloud, but write files and send outgoing traffic locally. Our main goal in future versions of mirrord is to reduce the risk of disruption of the shared environment when using mirrord. This will be achieved by providing more granular configuration options (for example, filtering traffic by hostname or protocol), and advanced functionality like copy-on-write for databases.
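As a sketch, a configuration along these lines reads files and receives incoming traffic from the cloud while keeping file writes and outgoing traffic local (check the config reference for your mirrord version for exact option names):

```json
{
  "feature": {
    "fs": {
      "mode": "read"
    },
    "network": {
      "incoming": "mirror",
      "outgoing": false
    }
  }
}
```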
Yes! You can use the mirrord container command to run a local container in the context of the remote Kubernetes cluster. You can read more about it here.
mirrord works by creating an agent on a privileged pod in the remote cluster that accesses another pod's namespaces (read more about it here). If you can't give your end users permissions to create pods with the capabilities mirrord needs, we suggest trying out mirrord for Teams. It adds a Kubernetes operator that acts as a control plane for mirrord clients, and lets them work with mirrord without creating pods themselves. If mirrord for Teams doesn't work for you either, let us know and we'll try to figure a solution that matches your security policies.
mirrord OSS supports the following Kubernetes objects as targets:
Pods
Deployments
Argo Rollouts
In mirrord OSS, mirrord will always target a random pod when a workload with multiple pods is used as the remote target.
mirrord for Teams adds support for the following workloads:
Jobs
CronJobs
StatefulSets
In mirrord for Teams, mirrord will always target all pods when a workload with multiple pods is used as the remote target.
Both in mirrord OSS and mirrord for Teams, if you don't name any specific container to be targeted, mirrord will pick the first container from the pod spec. Some containers, like service mesh proxies, will be automatically ignored.
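For illustration, a target with an explicitly named container can be specified in the mirrord config like this (the deployment, container, and namespace names are placeholders):

```json
{
  "target": {
    "path": "deployment/my-app/container/my-container",
    "namespace": "staging"
  }
}
```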
Some common issues and workarounds.
There are currently two known cases where mirrord cannot load into the application's process:
Statically linked binaries. Since mirrord uses the dynamic linker to load into the application's process, it cannot load if the binary is statically linked. Support for statically linked binaries is planned for the long term, but for now you would have to make sure your binaries are dynamically linked in order to run them with mirrord. With Go programs, for example, it is as simple as adding import "C" to your program code. If you don't want to add an import to your Go program, you can alternatively build a dynamically linked binary using go build -ldflags='-linkmode external'. In VSCode, this can be done by adding "buildFlags": "-ldflags='-linkmode external'" to your launch.json.
On Linux, append -ldflags="-s=false" to instruct go run not to omit the symbol table and debug information required by mirrord.
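As an illustration, here is a minimal Go program that forces dynamic linking via cgo (the printed message is just a placeholder):

```go
package main

import "C" // forces cgo, so the resulting binary is dynamically linked

import "fmt"

func main() {
	fmt.Println("built with cgo: dynamically linked")
}
```

Building this with a plain go build and inspecting the result with file should report a dynamically linked binary.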
If you are running mirrord on macOS and the executable you are running is protected by SIP (the application you are developing wouldn't be, but the binary that is used to execute it, e.g. bash for a bash script, might be protected), mirrord might have trouble loading into it (mirrord can generally bypass SIP, but there are still some unhandled edge cases). If that is the case, you could try copying the binary you're trying to run to an unprotected directory (e.g. anywhere in your home directory), changing the IDE run configuration or the CLI to use the copy instead of the original binary, and trying again. If it still doesn't work, also remove the signature from the copy with:
sudo codesign --remove-signature ./<your-binary>
Please let us know if you're having trouble with SIP by opening an issue on GitHub or talking to us on Discord.
Another reason that mirrord might seem not to work is if your remote pod has more than one container. mirrord works at the level of the container, not the whole pod. If your pod runs multiple containers, you need to make sure mirrord targets the correct one by specifying it explicitly in the target configuration. Note that we filter out the proxy containers added by popular service meshes automatically.
This can happen when Go resolves DNS without going through libc. Run your Go binary with the following environment variable set: GODEBUG=netdns=cgo
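A sketch of how this might look when launching the process (the target and binary names are placeholders):

```shell
# Force Go's cgo-based DNS resolver at runtime so lookups go through
# libc, where mirrord can intercept them.
export GODEBUG=netdns=cgo
# Example invocation (placeholder target and binary):
# mirrord exec --target deployment/my-app -- ./my-app
echo "$GODEBUG"   # prints netdns=cgo
```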
When executing a task, Turbo strips most of the existing process environment, including internal mirrord variables required during libc call interception setup. There are two alternative ways to solve this problem:
Explicitly tell Turbo to pass the mirrord environment to the task. To do this, merge the snippet below into your turbo.json. You should be able to run the task like mirrord exec turbo dev.
{
"globalPassThroughEnv": ["MIRRORD_*", "LD_PRELOAD", "DYLD_INSERT_LIBRARIES"]
}
Invoke mirrord inside the Turbo task command line itself.
This could happen because the local process is listening on a different port than the remote target. You can either change the local process to listen on the same port as the remote target (don't worry about the port being used locally by other processes), or use the port_mapping configuration to map the local port to a remote port.
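For example, a port_mapping entry mapping local port 9999 to remote port 80 (the port numbers are placeholders) looks like:

```json
{
  "feature": {
    "network": {
      "incoming": {
        "port_mapping": [[9999, 80]]
      }
    }
  }
}
```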
This can happen in some clusters using a service mesh when stealing incoming traffic. You can use this configuration to fix it:
{"agent": {"flush_connections": false}}
mirrord has a list of path patterns that are read locally by default regardless of the configured fs mode. You can override this behavior in the configuration.
Here you can find all the pre-defined exceptions:
Paths that match the patterns defined here are read locally by default.
Paths that match the patterns defined here are read remotely by default when the mode is localwithoverrides.
Paths that match the patterns defined here under the running user's home directory will fail to be found by default when the mode is not local.
To override these settings for a path or a pattern, add it to the appropriate set:
feature.fs.read_only - if you want read operations to that path to happen remotely, but write operations to happen locally.
feature.fs.read_write - if you want read and write operations to that path to happen remotely.
feature.fs.local - if you want read and write operations to that path to happen locally.
feature.fs.not_found - if you want the application to "think" that the file does not exist.
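Putting it together, a sketch of a configuration using these sets (the path patterns are placeholders):

```json
{
  "feature": {
    "fs": {
      "mode": "read",
      "read_only": ["^/app/config/"],
      "read_write": ["^/app/data/"],
      "local": ["^/var/cache/"],
      "not_found": ["^/etc/secret"]
    }
  }
}
```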
If you've set feature.fs.mode to local, try changing it to localwithoverrides.
When the local mode is set, all files will be opened locally. This might prevent your process from resolving cluster-internal domain names correctly, because it can no longer read Kubelet-generated configuration files like /etc/resolv.conf. With localwithoverrides, such files are read from the remote pod instead.
If an agent pod's status is Running, it means mirrord is probably still running locally as well. Once you terminate the local process, the agent pod's status should change to Completed.
On clusters with Kubernetes version v1.23 or higher, agent pods are automatically cleaned up immediately (or after a configurable TTL). If your cluster is v1.23 or higher and mirrord agent pods are not being cleaned up automatically, please open an issue on GitHub. As a temporary solution for cleaning up completed agent pods manually, you can run:
kubectl delete jobs --selector=app=mirrord --field-selector=status.successful=1
If your cluster is running on Bottlerocket or has SELinux enabled, please try enabling the privileged flag in the agent configuration:
{
"agent": {
"privileged": true
}
}
mirrord operator status fails with 503 Service Unavailable on GKE
If private networking is enabled, this is likely due to firewall rules blocking the mirrord operator's API service from the API server. To fix this, add a firewall rule that allows your cluster's master nodes to access TCP port 443 in your cluster's pods. Please refer to the GCP docs for more information.
When running processes locally versus in a container within Kubernetes, some languages handle certificate validation differently. For instance, a Go application on macOS will use the macOS Keychain for certificate validation, whereas the same application in a container will use different API calls. This discrepancy can lead to unexpected certificate validation errors when using tools like mirrord.
A specific issue with Go can be found here, where Go encounters certificate validation errors due to certain AWS services serving certificates that are deemed invalid by the macOS Keychain, but not by Go’s certificate validation in other environments.
To work around this issue (on macOS), you can use the following mirrord configuration:
{
"experimental": {"trust_any_certificate": true}
}
This configuration would make any certificate trusted for the process.
Other alternatives are to either disable certificate validation in your application or import the problematic certificate (or its root CA) into your macOS Keychain. For guidance on how to do this, refer to this Apple support article.
When running the agent as an ephemeral container, the agent shares the network stack with the target pod. This means that incoming connections to the agent are handled by the service mesh, which might drop them for various reasons (lack of TLS, not HTTP, etc.). To work around that, set the agent port to a static value using agent.port in values.yaml when installing the operator, then add a port exclusion for the agent port in your service mesh's configuration. For example, if you use Istio and have set the agent port to 50000, you can add the following annotation for exclusion:
traffic.sidecar.istio.io/excludeInboundPorts: '50000'
If you encounter the error “auth error: unable to run auth exec: No such file or directory” in your IDE, but the mirrord CLI or kubectl work correctly in your terminal, this usually means your IDE is using a different PATH environment variable than your terminal.
The Kubernetes client relies on external executables for authentication with certain providers. If these executables are referenced by relative paths, they may not be found if the PATH is not set up properly in your IDE environment.
To resolve this issue, you can try one of the following solutions:
Launch your IDE from the same terminal where kubectl works, so it inherits the correct PATH.
Update your ~/.kube/config file to use absolute paths for any referenced executables (for example, change command: aws to command: /usr/bin/aws if that is the full path).
This should help your IDE locate the necessary authentication executables.
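As an illustration, a kubeconfig exec credential entry with an absolute command path (the user name, cluster name, and path are placeholders; use `which aws` to find yours) might look like:

```yaml
users:
  - name: my-cluster-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: /usr/local/bin/aws   # absolute path instead of just "aws"
        args: ["eks", "get-token", "--cluster-name", "my-cluster"]
```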