How to steal traffic using mirrord
By default, mirrord mirrors all incoming traffic into the remote target, and sends a copy to your local process. This is useful when you want the remote target to answer requests, keeping the remote environment completely agnostic to your local code. However, sometimes you do want to test out how your local code responds to requests; or maybe your process writes to a database when receiving a request, and you want to avoid duplicate records (one from your local code, one from the remote target). In these cases, you probably want to steal traffic instead of mirroring it. When you steal traffic, your local process is the one answering the requests, and not the remote target. This guide will show you how to do that.
If you want all traffic arriving at the remote target to be redirected to your local process, change the feature.network.incoming configuration to steal:
{
  "feature": {
    "network": {
      "incoming": "steal"
    }
  }
}
Run your process with mirrord using the steal configuration, then send a request to the remote target. The response you receive will have been sent by the local process. If you're using one of our IDE extensions, set a breakpoint in the function handling the request - your request should hang when the breakpoint is hit and until you continue the process.
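For example, assuming the configuration above is saved as mirrord.json, and using a placeholder target and command (swap in your own deployment and process), a run could look like this:
mirrord exec -f mirrord.json --target deployment/my-deployment -- node app.js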
For incoming HTTP traffic (including HTTP2 and gRPC), mirrord also supports stealing a subset of the remote target's traffic. You can do this by specifying a filter on either an HTTP header or path. To specify a filter on a header, use the feature.network.incoming.http_filter.header_filter configuration:
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "X-My-Header: my-header-value",
          "ports": [80, 8080]
        }
      }
    }
  }
}
The feature.network.incoming.http_filter.ports configuration lets mirrord know which ports are listening to HTTP traffic and should be filtered. It defaults to [80, 8080].
To specify a filter on a path, use the feature.network.incoming.http_filter.path_filter configuration:
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "path_filter": "my/path",
          "ports": [80, 8080]
        }
      }
    }
  }
}
Note that both header_filter and path_filter take a regex value, so for example "header_filter": "X-Header-.+: header-value-.+" would work.
The HTTP filters both take "fancy" regexes that support negative look-aheads. This can be useful for avoiding the stealing of Kubernetes liveness, readiness and startup probes.
For filtering out any probes sent to the application by Kubernetes, you can use this header filter to require a User-Agent that does not start with "kube-probe":
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "^User-Agent: (?!kube-probe)"
        }
      }
    }
  }
}
To avoid stealing requests sent to URIs starting with "/health/", you can set this filter:
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "path_filter": "^(?!/health/)"
        }
      }
    }
  }
}
feature.network.incoming.http_filter allows you to steal a subset of HTTP requests. To apply the filter, the mirrord-agent needs to be able to parse the requests stolen from the target. Most commonly, the in-cluster traffic is encrypted with TLS, but it is decrypted by a service mesh before it gets to the target service. In this case, mirrord is able to parse the requests out of the box.
However, in some cases the traffic is only decrypted by the target service itself. Using an HTTP filter in this case requires some additional setup. Check out the HTTPS stealing guide for more information. Note that this HTTPS stealing requires mirrord Operator, which is part of mirrord for Teams.
If your local process reads from a queue, you might want to test out the copy target feature, which temporarily creates a copy of the mirrord session target. With its scale_down flag it allows you to temporarily delete all replicas in your targeted rollout or deployment, so that none competes with your local process for queue messages.
If you don't want to impersonate a remote target - for example, if you want to run a tool in the context of your cluster - check out our guide on the targetless mode.
If you just want to learn more about mirrord, why not check out our architecture or configuration sections?
How to run mirrord on a local container instead of a local process
The common way to use mirrord is on a locally running process. This way you can easily debug it in your IDE, as well as make quick changes and test them out without going through the additional layer of containerization.
However, sometimes you're just not able to run your microservice locally - usually due to complicated dependencies. For these cases, you can run mirrord on a local container instead. To do this, simply run the following command:
mirrord container --target <target-path> -- <command used to run the local container>
For example:
mirrord container -- docker run nginx
In addition to Docker, Podman and nerdctl are also supported.
Local container execution is currently only supported in the mirrord CLI tool. IDE extension support will be added in the future.
If you'd like to intercept traffic rather than mirror it so that your local process is the one answering the remote requests, check out this guide. Note that you can even filter which traffic you intercept!
If you don't want to impersonate a remote target - for example, if you want to run a tool in the context of your cluster - check out our guide on the targetless mode.
If you just want to learn more about mirrord, why not check out our architecture or configuration sections?
How to use mirrord for port forwarding
The port-forward command allows you to forward traffic from a local port to any destination that the mirrord targeted pod has access to, in a similar way to kubectl port-forward. The traffic is forwarded as if it were coming from the target pod, meaning it has access to destinations that might be outside the cluster, like third-party APIs, depending on what's accessible by the target pod.
You can use the command like so:
mirrord port-forward --target <target-path> -L <local port>:<remote address>:<remote port>
For example, to forward traffic from localhost:8080 to an in-cluster service py-serv listening on port 80:
mirrord port-forward -L 8080:py-serv:80
It also allows for reverse port forwarding, where traffic is redirected from a port on the target pod or workload to a local port, like so:
mirrord port-forward --target <target-path> -R <remote port>:<local port>
For example, to forward traffic from an in-cluster deployment py-serv listening on port 80 to localhost:8080:
mirrord port-forward --target deployment/py-serv -R 80:8080
In addition, multiple ports can be forwarded in one direction or both directions simultaneously in the same command by providing each source and destination as a separate -L or -R argument (see the combined example at the end of these notes).
Regular port forwarding with an -L can be done in targetless mode and does not require specifying any target. Reverse port forwarding always requires a target.
The local port component of the -L argument is optional; without it, the same port will be used locally as on the remote.
The same is true of the -R argument: if only one port number is provided, it will be used for both the local and remote ports.
Port-forwarding only supports TCP, not UDP.
The remote address can be an IPv4 address or a hostname - hostnames are resolved in the cluster.
In regular port forwarding (-L), connections are made lazily and hostname resolution is attempted only once data is sent to the local port.
Reverse forwarding (-R) can read the feature.network.incoming section of a mirrord config file when the file is passed to the command with -f.
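For example, a single command can combine several forwards (the service names here are placeholders): the first -L forwards localhost:9090 to other-svc:9090, the second omits the local port so localhost:80 is used for py-serv:80, and -R with a single port uses 8080 on both the target and the local machine:
mirrord port-forward --target deployment/py-serv -L 9090:other-svc:9090 -L py-serv:80 -R 8080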
Session management for the mirrord Operator
Whenever a user starts mirrord on a cluster where mirrord for Teams is installed, the Operator assigns a session to this user, until they stop running mirrord, at which point the session is closed in the Operator automatically.
This feature is only relevant for users on the Team and Enterprise pricing plans.
Users can use the command mirrord operator status
to see active sessions in the cluster. For example, in the following output, we can see the session ID, the target used, the namespace of the target, the session duration, and the user running that session. We can also see that Ports
is empty, meaning the user isn't stealing or mirroring any traffic at the moment.
+------------------+-----------------------------+-----------+---------------------------------------------------------------+-------+------------------+
| Session ID | Target | Namespace | User | Ports | Session Duration |
+------------------+-----------------------------+-----------+---------------------------------------------------------------+-------+------------------+
| 487F4F2B6D2376AD | deployment/ip-visit-counter | default | Aviram Hassan/[email protected]@avirams-macbook-pro-2.local | | 4s |
+------------------+-----------------------------+-----------+---------------------------------------------------------------+-------+------------------+
The User
field is generated in the following format - whoami/k8s-user@hostname
. whoami
and hostname
are from the local machine, while k8s-user
is the user we see from the operator side.
In this example, we can see that the session has an active steal on port 80, filtering HTTP traffic with the following filter: X-PG-Tenant: Avi.+
+------------------+-----------------------------+-----------+---------------------------------------------------------------+----------------------------------------------------------+------------------+
| Session ID | Target | Namespace | User | Ports | Session Duration |
+------------------+-----------------------------+-----------+---------------------------------------------------------------+----------------------------------------------------------+------------------+
| C527FE7D9C30979E | deployment/ip-visit-counter | default | Aviram Hassan/[email protected]@avirams-macbook-pro-2.local | Port: 80, Type: steal, Filter: header=X-PG-Tenant: Avi.+ | 13s |
+------------------+-----------------------------+-----------+---------------------------------------------------------------+----------------------------------------------------------+------------------+
Users may also forcefully stop a session with the mirrord operator session
CLI commands. These allow users to manually close Operator sessions while they're still alive (user is still running mirrord).
The session management commands are:
mirrord operator session kill-all, which will forcefully stop ALL sessions!
mirrord operator session kill --id {id}, which will forcefully stop the session with the given id; you may obtain the session id through mirrord operator status.
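For example, to forcefully stop the session shown in the first status output above:
mirrord operator session kill --id 487F4F2B6D2376AD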
sessions RBAC
Every mirrord-operator-user has access to all session operations by default, as they come with deletecollection and delete privileges for the sessions resource. You may limit this by changing the RBAC configuration. Here is a sample role.yaml with the other Operator rules omitted:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mirrord-operator-user
rules:
- apiGroups:
  - operator.metalbear.co
  resources:
  - sessions
  verbs:
  - deletecollection
  - delete
mirrord operator session kill-all requires the deletecollection verb;
mirrord operator session kill --id {id} requires the delete verb.
How to configure mirrord to access some endpoints locally and some remotely
There are several features underlying mirrord's ability to let your local app send outgoing network requests to cluster resources:
By importing the remote target's environment variables, your app will send the request to the remote hostnames configured in them.
By intercepting DNS resolution, mirrord will resolve the remote hostnames to the remote pod's IP address.
Finally, by intercepting outgoing network requests, mirrord will send the request from the remote pod, allowing it to access resources that are only available from within the cluster.
However, sometimes you might have a resource in the cluster that you don't want to access from your local process - perhaps a shared database. This is what the outgoing filter is for. It allows you to specify a list of hostnames that should be resolved and accessed remotely, or a list of hostnames that should be resolved and accessed locally. That way, you can run a local instance of your database and have your local process read and write to it, while still running all other operations against the cluster.
For example, if you want your app to access the hostname example-hostname.svc locally, and everything else remotely, you can do it with the following configuration:
{
  "feature": {
    "network": {
      "outgoing": {
        "filter": {
          "local": ["example-hostname.svc"]
        }
      }
    }
  }
}
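Conversely, you can list only the hostnames that should be accessed remotely, with everything else resolved and accessed locally. A minimal sketch (the hostname and port below are placeholders):
{
  "feature": {
    "network": {
      "outgoing": {
        "filter": {
          "remote": ["shared-db.svc:5432"]
        }
      }
    }
  }
}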
You can see all the configuration options for the outgoing filter feature here.
If you'd like to intercept traffic rather than mirror it so that your local process is the one answering the remote requests, check out this guide. Note that you can even filter which traffic you intercept!
If you don't want to impersonate a remote target - for example, if you want to run a tool in the context of your cluster - check out our guide on the targetless mode.
If you just want to learn more about mirrord, why not check out our architecture or configuration sections?
This section provides detailed guides and explanations on how to use mirrord in various scenarios. You'll find instructions and best practices for features such as copying targets, running local containers, filtering outgoing traffic, port forwarding, managing sessions, stealing HTTPS and general traffic, using targetless mode, web browsing, and integrating with popular development tools like IntelliJ and VSCode. Whether you're new to mirrord or looking to leverage its advanced capabilities, these topics will help you get the most out of your development workflow.
How to run mirrord without a remote target
The common use case for mirrord is testing out modifications to an existing application. In this case, the stable version of the service is running in the cloud, and the new code runs locally, using the stable cloud version as its remote target. However, sometimes you want to test a brand new application that has never been deployed to the cloud. Or you might not want to run an application at all - maybe you just want to run a tool, like Postman or pgAdmin, in the context of your cluster.
This is where targetless mode comes in. When running in targetless mode, mirrord doesn't impersonate a remote target. There's no incoming traffic functionality in this mode, since there's no remote target receiving traffic, but everything else works exactly the same.
To run mirrord in targetless mode, just don't specify a target! For example:
mirrord exec /bin/my-tool
If you want to run in targetless mode using the IntelliJ or VSCode plugin, you can select the No Target ("targetless")
option from the target selection dialog, or you can add
{
"target": "targetless"
}
to your mirrord configuration file.
If you'd like to intercept traffic rather than mirror it so that your local process is the one answering the remote requests, check out this guide. Note that you can even filter which traffic you intercept!
Want to use Targetless mode to run a web browser in the context of your cluster? Check out this guide.
If you just want to learn more about mirrord, why not check out our architecture or configuration sections?
Making mirrord copy a target and use the copy instead of the original
When you set the copy_target
configuration field, instead of using the target of the run directly, mirrord will create a new pod using the pod spec of the original target, and use that new pod as a target. This feature is only relevant for users on the Team and Enterprise pricing plans.
This can be useful when you want to run your application with access to the resources and I/O of a target that isn't reliable, for example because the target pod keeps crashing, or because it is managed by a Job and might terminate before you are done debugging your application with mirrord.
The new, copied pod will not have any liveness, readiness or startup probes even if the original pod spec does define them. This means you can steal traffic without having to also answer those probes. This might come in handy when debugging with breakpoints with stolen traffic. Without copy_target
, if you linger too long on a breakpoint, the application might miss some probes, which could cause a target pod to restart.
scale_down
When the scale_down
option is set, mirrord will scale the target workload down to zero, effectively replacing all existing pods of that workload by the one new copied pod, that is then used as the target for the mirrord run. This feature is supported with Deployment, Argo Rollout, StatefulSet, and ReplicaSet (owned by either a Deployment or an Argo Rollout) targets.
The scale down feature can be useful e.g. when a workload reads from a queue. By scaling it down to zero, the application you run with mirrord does not have to compete with the workload's pods for queue items.
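A minimal configuration enabling this behavior looks roughly like this:
{
  "feature": {
    "copy_target": {
      "scale_down": true
    }
  }
}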
Only one mirrord session can scale down a workload at the same time. If you try to scale down a workload that is already being scaled down in another mirrord session (by you or by a teammate), mirrord will display an error and exit.
You can see active copied targets by running mirrord operator status
. When there are no active copy targets, the relevant part of the output will say "No active copy targets".
When there are active copy targets, the relevant section of the output will look like this:
Active Copy Targets:
+-------------------------------+-----------+------------------------------+-------------+
| Original Target | Namespace | Copy Pod Name | Scale Down? |
+-------------------------------+-----------+------------------------------+-------------+
| deployment/py-serv-deployment | default | mirrord-copy-job-wd8kj-2gvd4 | * |
+-------------------------------+-----------+------------------------------+-------------+
An asterisk marks copy targets that are also scaling down their original target.
Please note however that you don't necessarily have to check if a target is already being scaled down, as trying to scale it down again will not interrupt the ongoing session, it will just result in your new run exiting with an error.
How mirrord makes it possible for developers to use the same cluster concurrently.
The core value of mirrord is that it cuts iteration time by letting developers run their code against the cluster directly, instead of having to build, push and deploy images. This significantly cuts down iteration time by letting developers test their code in the cloud from the very first step of the development cycle. However, in order to properly test new code in the cloud, it needs to be able to not only read or receive traffic from the environment, but also to write or send traffic to it, potentially mutating it. This discussion is only relevant for users on the Team and Enterprise pricing plans.
This raises the question: what if I want multiple users in my organization to use the same cluster (e.g. the organization's staging cluster) concurrently? Wouldn't they step on each other's toes and affect each other's work?
If one developer steals traffic from a remote service, wouldn't that prevent other users from stealing or mirroring traffic from that same service?
If a service reads from a queue, wouldn't a developer targeting it with mirrord steal all the messages from the queue, preventing other developers from reading them?
If a developer writes to a database, wouldn't that affect the data that other developers see when they read from the same database?
These conflicts and more are resolved by the mirrord Operator, available in the mirrord Team and Enterprise plans. By having a persistent, centralized component in the cluster that can synchronize and orchestrate different instances of mirrord running in the cluster, we can allow developers to use mirrord against the same cluster without affecting each other.
mirrord's HTTP filters let users only steal a subset of the incoming traffic to the remote service. By adding personalized headers to incoming traffic and then configuring mirrord to only steal traffic with those headers, users can debug the same service concurrently without affecting each other. Learn more about HTTP filters.
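For example, using the tenant header shown in the session management output above, each developer could steal only the requests carrying their own header value with a configuration along these lines (the header name and value are illustrative):
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "http_filter": {
          "header_filter": "X-PG-Tenant: Avi.+"
        }
      }
    }
  }
}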
NOTE: While HTTP filters are supported in the OSS version of mirrord, concurrently debugging the same service using HTTP filters is only supported in the Team and Enterprise versions.
mirrord's queue splitting feature lets users only steal a subset of the messages from a queue. By configuring mirrord to only steal messages with specific properties, users can debug the same queue-based service concurrently without affecting each other. Learn more about queue splitting.
mirrord Policies let you define rules that prevent users from doing certain actions. For example, you can prevent users from writing to a database, or from stealing traffic without using an HTTP filter. Learn more about mirrord Policies.
Sometimes a database is just too sensitive to write to remotely. Or maybe you want to test a migration, and don't want it to affect your coworkers who are using the same cluster. In these cases, you can use the outgoing traffic filter to send traffic to a locally running component instead of the one that's running in the cluster. Your local process will still communicate with all of its other dependencies remotely in the cluster. Learn more about the outgoing traffic filter.
Sometimes, all you need to avoid clashes is just to see what other users are doing in the cluster. The mirrord operator status command displays a list of all the currently running sessions in the cluster, along with the user who started them. If you see a session that's causing problems, you can kill it using the mirrord operator session kill command (given you have the necessary permissions). Learn more about managing mirrord sessions.
Even though using mirrord with a shared cluster is already safer than actually deploying your code to it, we're constantly working to make it even safer and more seamless for multiple users to use mirrord concurrently on the same environment. If you have any questions or suggestions, please don't hesitate to reach out to us here or on our Discord. Happy mirroring!
Using mirrord & browser to set your IP address
One way to use mirrord is to set up your browser to use the IP address of the remote target. This way, you can browse the web as if you were in the same location as the remote target. Below is a guide on how to do this with Google Chrome.
Prerequisites
microsocks (via brew or apt)
Steps
In a terminal session, trigger microsocks using mirrord.
If you want to use a specific target's network: mirrord exec -t deployment/my_deployment microsocks
If you just want the networking of a specific namespace: mirrord exec -a namespace microsocks
And if you want to use your current namespace, you can just do: mirrord exec microsocks
In a Chrome window:
Open the Socks5 Configurator extension
Make sure the "Socks5 Proxy" is enabled
Type in its respective textbox 127.0.0.1:1080
Hit the save button
That's it! You can verify your IP address has changed via a quick "what is my ip address" search in Google.
If you'd like to intercept traffic rather than mirror it so that your local process is the one answering the remote requests, check out this guide. Note that you can even filter which traffic you intercept!
If your local process reads from a queue, you might want to test out the copy target feature, which temporarily creates a copy of the mirrord session target. With its scale_down flag it allows you to temporarily delete all replicas in your targeted rollout or deployment, so that none competes with your local process for queue messages.
If you just want to learn more about mirrord, why not check out our architecture or configuration sections?
Using the mirrord plugin in JetBrains' IDEs
If you develop your application in one of the JetBrains' IDEs (e.g PyCharm, IntelliJ or GoLand), you can debug it with mirrord using our JetBrains Marketplace plugin. Simply:
Download the plugin
Enable mirrord using the toolbar button (next to "mirrord" popup menu)
Run or debug your application as you usually do
When you start a debugging session with mirrord enabled, you'll be prompted with a target selection dialog. This dialog will allow you to select the target in your Kubernetes cluster that you want to impersonate.
Note: For some projects, the plugin might not be able to present the target selection dialog.
When this happens, you'll see a warning notification and the execution will be cancelled. You can still use mirrord, but you'll have to specify the target in mirrord config.
This is known to happen with Java projects using the IntelliJ build system.
The toolbar button enables/disables mirrord for all run and debug sessions.
mirrord's initial state on startup can be configured in the plugin settings (Settings -> Tools -> mirrord -> Enable mirrord on startup
)
mirrord can be persistently enabled or disabled for a specific run configuration, regardless of the toolbar button state. This is controlled via the MIRRORD_ACTIVE
environment variable in your run configuration.
To have mirrord always enabled for the given run configuration, set MIRRORD_ACTIVE=1
in the run configuration's environment variables. To have mirrord always disabled, set MIRRORD_ACTIVE=0
.
mirrord's target can be specified in two ways:
with the target selection dialog
The dialog will only appear if the mirrord config does not specify the target.
The dialog will initially show targets in the namespace specified in the mirrord config (.target.namespace
). If the namespace is not specified, your Kubernetes user's default namespace will be used.
If you want to see targets in a different namespace, there is a dropdown to choose between namespaces.
in the mirrord config's target section
The plugin allows for using the mirrord config. For any run/debug session, the mirrord config to be used can be specified in multiple ways:
The toolbar dropdown menu allows for specifying a temporary mirrord config override. This config will be used for all run/debug sessions.
To specify the override, use Select Active Config
action.
You will be prompted with a dialog where you can select a mirrord config from your project files. For the file to be present in the dialog, its path must contain mirrord
and end with either .json
, .yaml
or .toml
.
You can remove the override using the same action.
If no active config is specified, the plugin will try to read the config file path from the MIRRORD_CONFIG_FILE
environment variable specified in the run configuration.
This path should be absolute.
If the config file path is not specified in the run configuration environment, the plugin will try to find a default config.
The default config is the lexicographically first file in the <PROJECT ROOT>/.mirrord directory that ends with either .json, .yaml or .toml.
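For example, a file at <PROJECT ROOT>/.mirrord/mirrord.json (the file name and target below are just placeholders) would be picked up automatically:
{
  "target": "deployment/my-deployment"
}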
The plugin relies on the standard mirrord CLI binary.
By default, the plugin checks the latest release version and downloads the most up-to-date binary in the background. You can disable this behavior in the plugin settings (Settings -> Tools -> mirrord -> Auto update mirrord binary
).
You can also pin the binary version in the plugin settings (Settings -> Tools -> mirrord -> mirrord binary version
).
The guide on how to use the plugin with remote development on WSL can be found here.
Using the mirrord extension in Visual Studio Code
If you develop your application in Visual Studio Code, you can debug it with mirrord using our Visual Studio Marketplace extension. Simply:
Download the extension
Enable mirrord using the "mirrord" button on the bottom toolbar
Run or debug your application as you usually do
When you start a debugging session with mirrord enabled, you'll be prompted with a target selection quick pick. This quick pick will allow you to select the target in your Kubernetes cluster that you want to impersonate.
The toolbar button enables/disables mirrord for all run and debug sessions.
mirrord's initial state on startup can be configured in the VSCode settings:
{
  "mirrord.enabledByDefault": true
}
mirrord can be persistently enabled or disabled for a specific launch configuration, regardless of the toolbar button state. This is controlled via the MIRRORD_ACTIVE environment variable in your launch configuration. The value "1" keeps mirrord always enabled, while the value "0" disables it.
{
  "env": {
    // mirrord always enabled
    "MIRRORD_ACTIVE": "1"
    // mirrord always disabled
    // "MIRRORD_ACTIVE": "0"
  }
}
mirrord's target can be specified in two ways:
with the target selection quick pick
The quick pick will only appear if the mirrord config does not specify the target.
The quick pick will initially show targets in the namespace specified in the mirrord config (.target.namespace). If the namespace is not specified, your Kubernetes user's default namespace will be used.
If you want to see targets in a different namespace, there is an option to "Select Another Namespace".
in the mirrord config's target section
The extension allows for using the mirrord config. For any run/debug session, the mirrord config to be used can be specified in multiple ways:
The toolbar dropdown menu allows for specifying a temporary mirrord config override. This config will be used for all run/debug sessions.
To specify the override, use the Select active config action.
You will be prompted with a quick pick where you can select a mirrord config from your project files. For the file to be present in the quick pick, it must either be located in a directory whose name ends with .mirrord, or have a name that ends with mirrord. Accepted config file extensions are: json, toml, yml and yaml.
You can remove the override using the same action.
If no active config is specified, the extension will try to read the config file path from the MIRRORD_CONFIG_FILE
environment variable specified in the launch configuration.
This path should be absolute.
If the config file path is not specified in the launch configuration environment, the extension will try to find a default config.
The default config is the lexicographically first file in the <PROJECT ROOT>/.mirrord directory that ends with mirrord. Accepted config file extensions are: json, toml, yml and yaml.
The extension relies on the standard mirrord CLI binary.
By default, the extension checks the latest release version and downloads the most up-to-date binary in the background. You can disable this behavior in the VSCode settings:
{
  "mirrord.autoUpdate": false
}
You can also pin the binary version with:
{
  "mirrord.autoUpdate": "3.128.0"
}
To use a specific mirrord binary from your filesystem:
{
  "mirrord.binaryPath": "/path/to/local/mirrord/binary"
}
The guide on how to use the extension with remote development on WSL can be found here.
Installing and using mirrord on Windows with WSL.
Using mirrord on Windows requires setting up the Windows Subsystem for Linux (WSL). You'll also need a Kubernetes cluster. If you don't have one, you can set one up locally using Docker Desktop. mirrord works with any Kubernetes cluster, be it remote or local.
You can read about the prerequisites and installation options on the official Microsoft documentation for How to install Linux on Windows with WSL.
The mirrord guide uses the default installation options, which use Ubuntu as the Linux distro. mirrord itself is not limited to any particular distro.
To install WSL from the Microsoft Store just open the Microsoft Store app, then search for the name of the Linux distro you want. We recommend installing Ubuntu, but mirrord works with any Linux distro.
After installation is complete, click on the Open button and a terminal window will appear.
Open a terminal with administrator privileges.
It doesn’t have to be the Windows Terminal. PowerShell and Command Prompt will also work.
On the terminal, run the wsl --install command to install the default (Ubuntu) Linux distro:
wsl --install
After installing WSL, in a terminal window, you should see the following output from executing the wsl --list
command:
C:\> wsl --list
Windows Subsystem for Linux Distributions:
Ubuntu (Default)
If you're not seeing any Linux distribution listed, please refer back to the Microsoft guide, or join our Discord server and we'll be happy to help you.
To start a session in WSL, now enter the wsl
command:
wsl
After starting a new WSL session (either from the command line, or from the Microsoft Store) you'll be prompted to set up a Linux user. The username and password do not need to match your Windows user.
After setting up your Linux user, it’s time to prepare the Linux environment for development. Install the tools needed to access your Kubernetes cluster (gcloud cli, azure cli, or whatever cli tool you use for authentication and cluster connection). You’ll also need to install the compilers and project management tools (such as nvm, JDK, dotnet cli) necessary to run and debug your project.
Many of those tools may be installed using the Linux distro package manager, but some might require manual installation and setup.
Some IDEs may support running in WSL from Windows directly (the IDE is installed on Windows), such as VS Code and the IntelliJ family of IDEs, while others may require being installed in Linux itself.
Setting up a Kubernetes cluster is out of scope for this guide - we’re assuming that you have a remote cluster to target with mirrord. If you don’t have a Kubernetes cluster to use and still want to try out mirrord, we recommend checking out the Docker Desktop guide on Install Docker Desktop on Windows.
With the tooling out of the way, and after cluster authorization has been set up, you may check cluster access with kubectl get all
.
username@hostname:/mnt/c$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
If you got a command not found error instead, this means that kubectl is not installed. Some Kubernetes tools install it as part of their setup, but you can also install it directly; follow the official guide for installing it on Linux. You can also install it on Windows, but this may require changing the KUBECONFIG environment variable.
If you're not seeing any of your Kubernetes resources, you might need to change your Kubernetes configuration. Refer to the kube config manual.
Before starting your IDE, it’s recommended that you copy your project files from the Windows file system to Linux, to avoid performance issues. The best practice is to have everything inside Linux.
You can do this from the command line (from within Linux, the Windows file system should be something like /mnt/{windows-drive-letter}
, so by default it’ll be /mnt/c
), or from File Explorer.
If you already have your own project, you may skip this section.
We’ll provide you with a small playground project here, if you don’t already have your own. Let's create a sample NodeJS project to use with mirrord, but bear in mind that mirrord is not limited to any programming languages or frameworks. In the Linux terminal, navigate to the home
directory.
cd ~
Create a new playground
directory.
mkdir playground && cd playground
Install NodeJS (if you haven’t already in the Setting up the Linux Distro section). First update the package manager.
sudo apt update
Now install the nodejs package.
sudo apt install nodejs
Create a very simple NodeJS program.
echo "console.log('Hello, mirrord');" > app.mjs
Running node app.mjs
should look something like this.
username@hostname:~/playground$ node app.mjs
Hello, mirrord
We can finally move on to installing and using mirrord.
Microsoft provides a very good guide on how to use WSL with VS Code.
Open VS Code from Windows, as you normally would, and click on the Remote Explorer.
Select the Linux distro you have set up, and click on the Connect in Current Window button that appears.
VS Code will notify you it’s starting WSL, and the Remote Explorer will change to indicate you’re connected.
Now go to the Extensions panel, search for mirrord and install it.
Some of your extensions may appear as disabled, with a button to Install in WSL
. If you want to use these extensions from the WSL VS Code, then you must click the button and install them.
If you get an error saying that mirrord does not support the Windows platform, this means that you’re trying to install it on the Windows VS Code. Uninstall the mirrord extension, and follow the previous steps to start the WSL VS Code.
With mirrord installed, open up your project.
Keep in mind that you’ll be navigating the directories with Linux style paths. If you have not copied your project files to WSL, you can navigate the Windows files from the /mnt
directory.
Jetbrains provides a very good guide on how to use WSL with IntelliJ.
Open the Jetbrains IDE you have installed on Windows (the mirrord plugin is available for every Jetbrains IDE. In this tutorial we’ll show screen caps from IntelliJ Idea Ultimate, but that’s not a requirement).
Select the WSL option under Remote Development.
Click on the + button (if you already have a project, otherwise select New Project).
Pay attention to the IDE version you’re choosing. The recommendation here is to select the same one that you have installed on Windows, pay close attention to the version numbers as well (sometimes the Beta version comes selected by default).
Either type the path to your project, or click on the ...
button to open the path picker.
Now click Download IDE and Connect
at the bottom.
The IDE will be downloaded and installed on Linux. After it’s ready, it should automatically open.
Click on the gear button, select Plugins
and search the Marketplace
for “mirrord”.
After clicking to install it, the install button will change to Restart IDE
. Instead of restarting it like that, close the WSL IDE, and in the Windows IDE select to open your project again.
If you get an error saying that mirrord does not support the Windows platform, this means that you’re trying to install it on the Windows IDE. Uninstall the mirrord extension, and follow the previous steps to start the WSL IDE.
In your WSL terminal, you can download and install mirrord by running the following command:
curl -fsSL https://raw.githubusercontent.com/metalbear-co/mirrord/main/scripts/install.sh | bash
You might get prompted to enter your root
user password, so we can install it in /usr/local/bin
.
If curl
is not installed in the Linux distro, you can use the distro package manager to install it, or download and install it manually from the curl website.
Now to execute your project with mirrord, just run the mirrord exec
command:
mirrord exec --target "<pod-target>" <process command>
If you’re using this guide’s playground project your mirrord exec
command should be:
mirrord exec --target "targetless" node app.mjs
You can list the available mirrord targets with the mirrord ls command. If no targets are being shown, you might not have any Kubernetes resources that can be targeted by mirrord, or you might not be using the right Kubernetes context. You can check the latter with kubectl config view: look at the current-context and see if it's the intended one. You may change the context with the kubectl config use-context [CONTEXT NAME] command.
You can use mirrord exec --help to list other exec options.
If you're seeing a mirrord notification pop-up that says something along the lines of:
failed to update the mirrord binary: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Something is wrong with the local certificate that IntelliJ is trying to use. You can read more about this on the IntelliJ certificates manual installation page.
You can fix this issue by navigating to the IntelliJ IDE dir (change it to match where your IntelliJ IDE is installed) in the WSL terminal:
cd ~/.local/share/JetBrains/Toolbox/apps/{NAME-OF-IDE}/jbr/lib/security
And issuing the following command:
keytool -importcert -trustcacerts -alias <alias-name> -file <path/to/file.crt> -keystore cacerts
How to steal HTTPS traffic with a filter using mirrord
With mirrord for Teams, you can steal a subset of HTTP requests coming to your target, even if the deployed application receives the traffic encrypted with TLS.
This feature is only relevant for users on the Team and Enterprise pricing plans.
Important: stealing HTTPS with a filter requires mirrord-operator version at least 3.106.0
and mirrord-agent version at least 1.134.0
.
To enable mirrord users to steal HTTPS requests with a filter, you must provide the mirrord Operator with some insight into your TLS configuration. This can be done with dedicated custom resources: MirrordTlsStealConfig
and MirrordClusterTlsStealConfig
. These two resources look and work almost the same. The only exception is that MirrordTlsStealConfig
is scoped to the namespace in which you create it, while MirrordClusterTlsStealConfig
scopes the whole Kubernetes cluster.
An example MirrordTlsStealConfig
resource that configures HTTPS stealing from an example-deploy
deployment living in namespace example-deploy-namespace
:
apiVersion: mirrord.metalbear.co/v1alpha
kind: MirrordTlsStealConfig
metadata:
  # The name indicates that this configuration is for the `example-deploy` deployment,
  # but it does not really matter. The mirrord Operator does not inspect config resources' names.
  name: tls-steal-config-for-example-deploy
  # This is the namespace-scoped variant of the configuration resource,
  # so it must live in the same namespace as the `example-deploy` deployment.
  namespace: example-deploy-namespace
spec:
  # A wildcard pattern that will be matched against session target's path.
  #
  # This pattern can contain `*` and `?` characters, where:
  # 1. `*` will match any amount of any characters;
  # 2. `?` will match any character once.
  #
  # E.g `deploy/*/container/container-?` will match both `deploy/name/container/container-1` and `deploy/another-name/container/container-2`.
  #
  # mirrord session target path is produced from:
  # 1. Target resource type (e.g deploy, pod, rollout, statefulset, etc.);
  # 2. Target resource name;
  # 3. `container` literal (if the user selected an exact container as the target);
  # 4. Target container name (if the user selected an exact container as the target).
  #
  # Note that the user can target pods of the `example-deploy` deployment either indirectly, by targeting the deployment, or directly.
  # They can also specify an exact target container or not.
  #
  # Optional. Defaults to a pattern that matches everything.
  targetPath: "*/example-deploy*"
  # A label selector that will be matched against session target's labels.
  #
  # Optional. Defaults to a selector that matches everything.
  selector:
    matchLabels:
      app: example-deploy
  # Each port on the target can be configured separately.
  ports:
    # This entry configures HTTPS stealing from port 443.
    - port: 443
      # Configures how the mirrord-agent authenticates itself and verifies the clients (original request senders) when acting as a TLS server.
      agentAsServer:
        # Configures how the server authenticates itself.
        authentication:
          # Path to a PEM file containing a certificate chain to use.
          #
          # This file must contain at least one certificate.
          # It can contain entries of other types, e.g private keys, which are ignored.
          # Certificates are expected to be listed from the end-entity to the root.
          certPem: /path/to/server/cert.pem
          # Path to a PEM file containing a private key matching the certificate chain from `certPem`.
          #
          # This file must contain exactly one private key.
          # It can contain entries of other types, e.g certificates, which are ignored.
          keyPem: /path/to/server/key.pem
        # ALPN protocols supported by the server, in order of preference.
        #
        # If empty, ALPN is disabled.
        #
        # Optional. Defaults to an empty list.
        alpnProtocols:
          - h2
          - http/1.1
          - http/1.0
        # Configures how the server verifies the clients.
        #
        # Optional. If not present, the server will not offer TLS client authentication at all.
        verification:
          # Whether anonymous clients should be accepted.
          #
          # Optional. Defaults to false.
          allowAnonymous: false
          # Whether the server should accept any certificate, regardless of its validity and who signed it.
          #
          # Note that this setting does not affect whether anonymous clients are accepted or not.
          # If `allowAnonymous` is not set, some certificate will still be required.
          #
          # Optional. Defaults to false.
          acceptAnyCert: false
          # Paths to PEM files and directories of PEM files containing allowed root certificates.
          #
          # Directories are not traversed recursively.
          #
          # Each certificate found in the files is treated as an allowed root.
          # The files can contain entries of other types, e.g private keys, which are ignored.
          #
          # Optional. Defaults to an empty list.
          trustRoots:
            - /path/to/trusted/client/root/cert.pem
      # Configures how the mirrord-agent authenticates itself and verifies the server (original request destination) when acting as a TLS client.
      agentAsClient:
        # Configures how the client authenticates itself.
        #
        # Optional. If not present, the client will make the connections anonymously.
        authentication:
          # Path to a PEM file containing a certificate chain to use.
          #
          # This file must contain at least one certificate.
          # It can contain entries of other types, e.g private keys, which are ignored.
          # Certificates are expected to be listed from the end-entity to the root.
          certPem: /path/to/client/cert.pem
          # Path to a PEM file containing a private key matching the certificate chain from `certPem`.
          #
          # This file must contain exactly one private key.
          # It can contain entries of other types, e.g certificates, which are ignored.
          keyPem: /path/to/client/key.pem
        # Configures how the client verifies the server.
        verification:
          # Whether to accept any certificate, regardless of its validity and who signed it.
          #
          # Optional. Defaults to false.
          acceptAnyCert: false
          # Paths to PEM files and directories of PEM files containing allowed root certificates.
          #
          # Directories are not traversed recursively.
          #
          # Each certificate found in the files is treated as an allowed root.
          # The files can contain entries of other types, e.g private keys, which are ignored.
          #
          # Optional. Defaults to an empty list.
          trustRoots:
            - /path/to/trusted/server/root/cert.pem
Each MirrordTlsStealConfig
/MirrordClusterTlsStealConfig
resource configures HTTPS stealing for some set of available mirrord targets. With the use of spec.targetPath
and spec.selector
, you can link one configuration resource to multiple pods, deployments, rollouts, etc.
When the mirrord Operator finds multiple configuration resources matching the session target path and labels, it merges their ports
lists. The same port cannot be configured multiple times (extra entries are discarded).
Important: mirrord-agent will search for all files and directories referenced by the config resources in the target container filesystem.
By default, when delivering stolen HTTPS requests to your local application, mirrord uses the original protocol - TLS. The connection is made from your local machine by an anonymous TLS client that does not verify the server certificate.
This behavior can be configured in your mirrord config with feature.network.incoming.https_delivery
.
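As a rough illustration only - the exact https_delivery fields should be checked against the configuration reference, and the protocol value below is an assumption - delivering the stolen requests to your local application over plain TCP might look like this:
{
  "feature": {
    "network": {
      "incoming": {
        "mode": "steal",
        "https_delivery": {
          "protocol": "tcp"
        }
      }
    }
  }
}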
Sharing queues by splitting messages between multiple clients and the cluster
If your application consumes messages from a queue service, you should choose a configuration that matches your intention:
Running your application with mirrord without any special configuration will result in your local application competing with the remote target (and potentially other mirrord runs by teammates) for queue messages.
Running your application with copy_target
+ scale_down
will result in the deployed application not consuming any messages, and your local application being the exclusive consumer of queue messages.
If you want to control which messages will be consumed by the deployed application, and which ones will reach your local application, set up queue splitting for the relevant target, and define a messages filter in the mirrord configuration. Messages that match the filter will reach your local application, and messages that do not, will reach either the deployed application, or another teammate's local application, if they match their filter.
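For illustration, the message filter lives under feature.split_queues in the mirrord configuration, keyed by the queue ID registered for the target (the queue ID and attribute name below are illustrative; the ID matches the registry example later on this page):
{
  "feature": {
    "split_queues": {
      "meme-queue": {
        "queue_type": "SQS",
        "message_filter": {
          "author": "^my-name$"
        }
      }
    }
  }
}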
This feature is only relevant for users on the Team and Enterprise pricing plans.
NOTE: So far queue splitting is available for Amazon SQS and Kafka. Pretty soon we'll support RabbitMQ as well.
When an SQS splitting session starts, the operator changes the target workload to consume messages from a different, temporary queue created by the operator. The operator also creates a temporary queue that the local application reads from.
So if we have a consumer app reading messages from a queue:
After a mirrord SQS splitting session starts, the setup will change to this:
The operator will consume messages from the original queue, and try to match their attributes with the filter defined by the user in the mirrord configuration file (read more in the last section). A message that matches the filter will be sent to the queue consumed by the local application. Other messages will be sent to the queue consumed by the remote application.
And as soon as a second mirrord SQS splitting session starts, the operator will create another temporary queue for the new local app:
The users' filters will be matched in the order of the start of their sessions. If filters defined by two users both match a message, the message will go to whichever user started their session first.
After a mirrord session ends, the operator will delete the temporary queue that was created for that session. When all sessions that split a certain queue end, the mirrord Operator will wait for the deployed application to consume the remaining messages in its temporary queue, and then delete that temporary queue as well, and change the deployed application to consume messages back from the original queue.
When a Kafka splitting session starts, the operator changes the target workload to consume messages from a different, temporary topic created by the operator in the same Kafka cluster. The operator also creates a temporary topic that the local application reads from.
So if we have a consumer app reading messages from a topic:
After a mirrord Kafka splitting session starts, the setup will change to this:
The operator will consume messages from the original topic (using the same consumer group id as the target workload), and try to match their headers with the filter defined by the user in the mirrord configuration file (read more in the last section). A message that matches the filter will be sent to the topic consumed by the local application. Other messages will be sent to the topic consumed by the remote application.
And as soon as a second mirrord Kafka splitting session starts, the operator will create another temporary topic for the new local app:
The users' filters will be matched in the order of the start of their sessions. If filters defined by two users both match a message, the message will go to whichever user started their session first.
After a mirrord session ends, the operator will delete the temporary topic that was created for that session. When all sessions that split a certain topic end, the mirrord Operator will change the deployed application to consume messages back from the original topic and delete the temporary topic as well.
In order to use the SQS splitting feature, some extra values need to be provided during the installation of the mirrord Operator.
First of all, the SQS splitting feature needs to be enabled:
When installing with the mirrord-operator Helm chart it is enabled by setting the operator.sqsSplitting
value to true
.
When installing via the mirrord operator setup
command, set the --sqs-splitting
flag.
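For example, when installing with Helm (assuming MetalBear's published chart, added here under the repo alias metalbear; the release name is illustrative), the value can be set on the command line:
helm install mirrord-operator metalbear/mirrord-operator --set operator.sqsSplitting=true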
When SQS splitting is enabled during installation, some additional resources are created, and the SQS component of the mirrord Operator is started.
Additionally, the operator needs to be able to do some operations on SQS queues in your account. For that, an IAM role with an appropriate policy has to be assigned to the operator's service account. Please follow AWS's documentation on how to do that.
Some of the permissions are needed for your actual queues that you would like to split, and some permissions are only needed for the temporary queues the mirrord Operator creates and later deletes. Here is an overview:
+--------------------+----------------+------------------+
| Permission         | Original queue | Temporary queues |
+--------------------+----------------+------------------+
| GetQueueUrl        | ✓              |                  |
| ListQueueTags      | ✓              |                  |
| ReceiveMessage     | ✓              |                  |
| DeleteMessage      | ✓              |                  |
| GetQueueAttributes | ✓              | ✓ (both!)        |
| CreateQueue        |                | ✓                |
| TagQueue           |                | ✓                |
| SendMessage        |                | ✓                |
| DeleteQueue        |                | ✓                |
+--------------------+----------------+------------------+
Here we provide a short explanation for each required permission.
sqs:GetQueueUrl
: the operator finds queue names to split in the provided source, and then it fetches the URL from SQS in order to make all other API calls.
sqs:GetQueueAttributes
: the operator gives all temporary queues the same attributes as their corresponding original queue, so it needs permission to get the original queue's attributes. It also reads the attributes of temporary queues it created, in order to check how many messages they have approximately.
sqs:ListQueueTags
: the operator queries your queue's tags, in order to give all temporary queues that are created for that queue the same tags.
sqs:ReceiveMessage
: the mirrord Operator will read messages from queues you want to split.
sqs:DeleteMessage
: after reading a message and forwarding it to a temporary queue, the operator deletes it.
sqs:CreateQueue
: the mirrord Operator will create temporary queues in your SQS account.
sqs:TagQueue
: all the queues mirrord creates will be tagged with all the tags of their respective original queues, plus any tags that are configured for them in the MirrordWorkloadQueueRegistry
in which they are declared.
sqs:SendMessage
: mirrord will send the messages it reads from an original queue to the temporary queue of the client whose filter matches it, or to the temporary queue the deployed application reads from.
sqs:DeleteQueue
: when a user session is done, mirrord will delete the temporary queue it created for that session. After all sessions that split a certain queue end, also the temporary queue that is for the deployed application is deleted.
This is an example for a policy that gives the operator's roles the minimal permissions it needs to split a queue called ClientUploads
:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes",
        "sqs:ListQueueTags",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": [
        "arn:aws:sqs:eu-north-1:314159265359:ClientUploads"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:CreateQueue",
        "sqs:TagQueue",
        "sqs:SendMessage",
        "sqs:GetQueueAttributes",
        "sqs:DeleteQueue"
      ],
      "Resource": "arn:aws:sqs:eu-north-1:314159265359:mirrord-*"
    }
  ]
}
The first statement gives the role the permissions it needs for your original queues.
Instead of specifying the queues you would like to be able to split in the first statement, you could alternatively make that statement apply for all resources in the account, and limit the queues it applies to using conditions instead of resource names. For example, you could add a condition that makes the statement only apply to queues with the tag splittable=true
or env=dev
etc. and set those tags for all queues you would like to allow the operator to split.
The second statement in the example gives the role the permissions it needs for the temporary queues. Since all the temporary queues created by mirrord are created with the name prefix mirrord-
, that statement in the example is limited to resources with that prefix in their name.
If you would like to limit the second statement with conditions instead of (only) with the resource name, you can set a condition that requires a tag, and in the MirrordWorkloadQueueRegistry
resource you can specify for each queue tags that mirrord will set for temporary queues that it creates for that original queue.
If the queue messages are encrypted, the operator's IAM role should also have the following permissions:
kms:Encrypt
kms:Decrypt
kms:GenerateDataKey
The ARN of the IAM role has to be passed when installing the operator.
When installing with Helm, the ARN is passed via the sa.roleArn
value.
When installing via the mirrord operator setup
command, use the --aws-role-arn
flag.
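For example (a hedged sketch; the chart, release, and role names here are placeholders, and any other flags your installation requires are omitted):
# Helm:
helm install mirrord-operator metalbear/mirrord-operator \
  --set sa.roleArn="arn:aws:iam::314159265359:role/mirrord-operator-sqs"
# mirrord CLI:
mirrord operator setup --aws-role-arn "arn:aws:iam::314159265359:role/mirrord-operator-sqs" | kubectl apply -f -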
In order to be targeted with SQS queue splitting, a workload has to be able to read from queues that are created by mirrord.
Any temporary queues created by mirrord are created with the same policy as the original queues they are splitting (with the single change of the queue name in the policy), so if a queue has a policy that allows the target workload to call ReceiveMessage
on it, that is enough.
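For example, a queue policy along these lines on the original queue is sufficient, since mirrord copies it (with the queue name adjusted) to every temporary queue; the role ARN below is a placeholder for your workload's identity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::314159265359:role/meme-app"
            },
            "Action": "sqs:ReceiveMessage",
            "Resource": "arn:aws:sqs:eu-north-1:314159265359:ClientUploads"
        }
    ]
}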
However, if the workload gets its access to the queue via an IAM policy (and not an SQS queue policy, see the SQS docs) that grants access to that specific queue by its exact name, you will have to add a policy that allows the workload to also read from the new temporary queues mirrord creates during the run.
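In that case, a hedged sketch of an additional IAM policy for the workload's role could look like this (the exact set of actions depends on what your consumer does; sqs:ReceiveMessage is the essential one):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:GetQueueUrl",
                "sqs:GetQueueAttributes",
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage"
            ],
            "Resource": "arn:aws:sqs:eu-north-1:314159265359:mirrord-*"
        }
    ]
}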
During operator installation, a new CustomResource type is created on your cluster: MirrordWorkloadQueueRegistry. Users with permissions to get CRDs can verify its existence with kubectl get crd mirrordworkloadqueueregistries.queues.mirrord.metalbear.co
. After an SQS-enabled operator is installed, and before you can start splitting queues, a resource of that type must be created for the target you want to run against, in the target's namespace.
Below we have an example of such a resource, for a meme app that consumes messages from two queues:
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordWorkloadQueueRegistry
metadata:
name: meme-app-q-registry
spec:
queues:
meme-queue:
queueType: SQS
nameSource:
envVar: INCOMING_MEME_QUEUE_NAME
tags:
tool: mirrord
ad-queue:
queueType: SQS
nameSource:
envVar: AD_QUEUE_NAME
tags:
tool: mirrord
consumer:
name: meme-app
container: main
workloadType: Deployment
spec.queues
holds queues that should be split when running mirrord with this target. It is a mapping from a queue ID to the details of the queue.
The queue ID is chosen by you, and will be used by every teammate who wishes to filter messages from this queue. You can choose any string for it; it does not have to be the same as the name of the queue. In the example above, the first queue has the queue ID meme-queue
and the second one ad-queue
.
nameSource
tells mirrord where the app finds the name of this queue.
Currently envVar
is the only supported source for the queue name, but in the future we will also support other sources, such as config maps. The value of envVar
is the name of the environment variable the app reads the queue name from. It is crucial that both the local and the deployed app use the queue name they find in that environment variable. mirrord changes the value of that environment variable in order to make the application read from a temporary queue it creates.
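For instance, a hypothetical fragment of the meme-app Deployment's pod spec might define the variables like this (the image and queue names are placeholders); during a session, mirrord swaps each value for the name of a temporary queue:
containers:
  - name: main
    image: meme-app:latest
    env:
      - name: INCOMING_MEME_QUEUE_NAME
        value: ClientUploads
      - name: AD_QUEUE_NAME
        value: AdQueue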
tags
is an optional field where you can specify queue tags that should be added to all temporary queues mirrord creates for splitting this queue.
spec.consumer
is the workload that consumes these queues. The queues specified above will be split whenever that workload is targeted.
container
is optional; when set, this queue registry only applies to runs that target that container.
In order to use the Kafka splitting feature, some extra values need to be provided during the installation of the mirrord Operator.
First of all, the Kafka splitting feature needs to be enabled:
When installing with the mirrord-operator Helm chart, it is enabled by setting the operator.kafkaSplitting
value to true
.
When installing via the mirrord operator setup
command, set the --kafka-splitting
flag.
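For example (a hedged sketch; the chart and release names are placeholders, and any other flags your installation requires are omitted):
# Helm:
helm install mirrord-operator metalbear/mirrord-operator --set operator.kafkaSplitting=true
# mirrord CLI:
mirrord operator setup --kafka-splitting | kubectl apply -f -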
When Kafka splitting is enabled during installation, some additional resources are created, and the Kafka component of the mirrord Operator is started.
During operator installation, new CustomResource types are created on your cluster: MirrordKafkaTopicsConsumer and MirrordKafkaClientConfig. Users with permissions to get CRDs can verify their existence with kubectl get crd mirrordkafkatopicsconsumers.queues.mirrord.metalbear.co
and kubectl get crd mirrordkafkaclientconfigs.queues.mirrord.metalbear.co
.
After a Kafka-enabled operator is installed, and before you can start splitting queues, resources of these types must be created.
MirrordKafkaTopicsConsumer
is a resource that must be created in the same namespace as the target workload. It describes Kafka topics that this workload consumes and contains instructions for the mirrord Operator on how to execute splitting. Each MirrordKafkaTopicsConsumer
is linked to a single workload that can be targeted with a Kafka splitting session.
MirrordKafkaClientConfig
is a resource that must be created in the namespace where the mirrord Operator is installed. It contains properties that the operator will use when creating the Kafka client used for all Kafka operations during the split. This resource is referenced by MirrordKafkaTopicsConsumer
.
MirrordKafkaTopicsConsumer
Below we have an example of a MirrordKafkaTopicsConsumer
resource, for a meme app that consumes messages from a Kafka topic:
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordKafkaTopicsConsumer
metadata:
name: meme-app-topics-consumer
spec:
consumerApiVersion: apps/v1
consumerKind: Deployment
consumerName: meme-app
topics:
- id: views-topic
clientConfig: base-config
groupIdSources:
- directEnvVar:
container: consumer
variable: KAFKA_GROUP_ID
nameSources:
- directEnvVar:
container: consumer
variable: KAFKA_TOPIC_NAME
spec.topics
is a list of topics that can be split when running mirrord with this target.
The topic ID is chosen by you, and will be used by every teammate who wishes to filter messages from this topic. You can choose any string for it; it does not have to be the same as the name of the topic. In the example above, the topic has the ID views-topic
.
clientConfig
is the name of the MirrordKafkaClientConfig
resource living in the mirrord Operator's namespace that will be used when interacting with the Kafka cluster.
groupIdSources
holds a list of all occurrences of the Kafka consumer group ID in the workload's pod spec. The mirrord Operator will use this group ID when consuming messages from the topic.
Currently the only supported source type is an environment variable with a value defined directly in the pod spec.
nameSources
holds a list of all occurrences of the topic name in the workload's pod spec. The mirrord Operator will use this name when consuming messages. It is crucial that both the local and the deployed app take the topic name from these sources, as the mirrord Operator will use them to inject the names of temporary topics.
Currently the only supported source type is an environment variable with a value defined directly in the pod spec.
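For instance, a hypothetical fragment of the meme-app Deployment's pod spec matching the registry above could look like this (the image and values are placeholders); note that both variables are set with a literal value, not via valueFrom:
containers:
  - name: consumer
    image: meme-app:latest
    env:
      - name: KAFKA_GROUP_ID
        value: meme-app-consumers
      - name: KAFKA_TOPIC_NAME
        value: views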
MirrordKafkaClientConfig
Below we have an example of a MirrordKafkaClientConfig
resource:
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordKafkaClientConfig
metadata:
name: base-config
namespace: mirrord
spec:
properties:
- name: bootstrap.servers
value: kafka.default.svc.cluster.local:9092
When used by the mirrord Operator for Kafka splitting, the example above will be resolved to the following .properties
file:
bootstrap.servers=kafka.default.svc.cluster.local:9092
This file will be used when creating a Kafka client for managing temporary topics, consuming messages from the original topic and producing messages to the temporary topics. Full list of available properties can be found here.
NOTE:
group.id
property will always be overwritten by the mirrord Operator when resolving the .properties file.
MirrordKafkaClientConfig
resource supports property inheritance via spec.parent
field. When resolving a resource X that has parent Y:
Y is resolved into a .properties file.
For each property defined in X: if value is provided, it overrides any previous value of that property; if value is not provided (null), that property is removed.
Below we have an example of two MirrordKafkaClientConfig
s with an inheritance relation:
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordKafkaClientConfig
metadata:
name: base-config
namespace: mirrord
spec:
properties:
- name: bootstrap.servers
value: kafka.default.svc.cluster.local:9092
- name: message.send.max.retries
value: 4
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordKafkaClientConfig
metadata:
name: with-client-id
namespace: mirrord
spec:
parent: base-config
properties:
- name: client.id
value: mirrord-operator
- name: message.send.max.retries
value: null
When used by the mirrord Operator for Kafka splitting, the with-client-id
config above will be resolved to the following .properties
file:
bootstrap.servers=kafka.default.svc.cluster.local:9092
client.id=mirrord-operator
MirrordKafkaClientConfig
also supports setting properties from a Kubernetes Secret
with the spec.loadFromSecret
field. The value for loadFromSecret
is given in the form: <secret-namespace>/<secret-name>
.
Each key-value entry defined in the secret's data will be included in the resulting .properties file. Property inheritance from the parent still occurs, and within each MirrordKafkaClientConfig, properties loaded from the secret are overwritten by those in properties.
This means the priority of setting properties (from highest to lowest) is as follows:
childProperty
childSecret
parentProperty
parentSecret
Below is an example of a MirrordKafkaClientConfig
resource that references a secret:
apiVersion: queues.mirrord.metalbear.co/v1alpha
kind: MirrordKafkaClientConfig
metadata:
name: base-config
namespace: mirrord
spec:
loadFromSecret: mirrord/my-secret
properties: []
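For illustration, the referenced mirrord/my-secret could be a plain Kubernetes Secret whose keys are Kafka property names (all keys and values below are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: mirrord
stringData:
  bootstrap.servers: kafka.default.svc.cluster.local:9092
  security.protocol: SASL_SSL
  sasl.mechanism: SCRAM-SHA-512
  sasl.username: mirrord-operator
  sasl.password: my-password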
NOTE: By default, the operator will only have access to secrets in its own namespace (
mirrord
by default).
NOTE: Available since chart version
1.27
and operator version 3.114.0
To serve Kafka splitting sessions, the mirrord Operator creates temporary topics in the Kafka cluster. The default format for their names is as follows:
mirrord-tmp-1234567890-fallback-topic-original-topic
- for the fallback topic (unfiltered messages, consumed by the deployed workload).
mirrord-tmp-9183298231-original-topic
- for the user topics (filtered messages, consumed by local applications running with mirrord).
Note that the random digits will be unique for each temporary topic created by the mirrord Operator.
You can adjust the format of the created topic names to suit your needs (RBAC, Security, Policies, etc.), using the OPERATOR_KAFKA_SPLITTING_TOPIC_FORMAT
environment variable of the mirrord Operator, or the operator.kafkaSplittingTopicFormat
Helm chart value. The default value is:
mirrord-tmp-{{RANDOM}}{{FALLBACK}}{{ORIGINAL_TOPIC}}
The provided format must contain the three variables: {{RANDOM}}
, {{FALLBACK}}
and {{ORIGINAL_TOPIC}}
.
{{RANDOM}}
will resolve to random digits.
{{FALLBACK}}
will resolve to either the -fallback-
or -
literal.
{{ORIGINAL_TOPIC}}
will resolve to the name of the original topic that is being split.
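For example, a hypothetical format value of
mirrord-tmp-{{ORIGINAL_TOPIC}}{{FALLBACK}}{{RANDOM}}
would, for an original topic named views, produce temporary topic names such as mirrord-tmp-views-fallback-1234567890 (the fallback topic) and mirrord-tmp-views-9183298231 (a user topic); the digits here are illustrative, as the operator picks them at random.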
Once everything else is set, you can start using message filters in your mirrord configuration file. Below is an example of what such a configuration might look like:
{
"operator": true,
"target": "deployment/meme-app/main",
"feature": {
"split_queues": {
"meme-queue": {
"queue_type": "SQS",
"message_filter": {
"author": "^me$",
"level": "^(beginner|intermediate)$"
}
},
"ad-queue": {
"queue_type": "SQS",
"message_filter": {}
},
"views-topic": {
"queue_type": "Kafka",
"message_filter": {
"author": "^me$",
"source": "^my-session-"
}
}
}
}
}
feature.split_queues
is the configuration field you need to specify in order to filter queue messages. Directly under it, we have a mapping from a queue or topic ID to a queue filter definition.
The queue or topic ID is the ID that was set in the SQS queue registry resource or the Kafka topics consumer resource.
message_filter
is a mapping from message attribute (SQS) or header (Kafka) names to regexes for their values. Your local application will only see messages whose attributes or headers match all of the specified filters.
An empty message_filter
is treated as a match-none directive.
In the example above, the local application:
Will receive a subset of messages from SQS queue with ID meme-queue
. All received messages will have an attribute author
with the value me
, AND an attribute level
with value either beginner
or intermediate
.
Will receive a subset of messages from Kafka topic with ID views-topic
. All received messages will have an attribute author
with the value me
, AND an attribute source
with value starting with my-session-
(e.g. my-session-844cb78789-2fmsw
).
Will receive no messages from SQS queue with ID ad-queue
.
Once all users stop filtering a queue (i.e. end their mirrord sessions), the temporary queues (SQS) and topics (Kafka) that the mirrord Operator created will be deleted.