Dot Metrics Migration Guide
SigNoz is migrating from the legacy underscore exporter to the new dot-metrics exporter. This guide will help you migrate your existing alerts and dashboards to the new system.
Overview
As previously communicated, SigNoz is moving from running both the old and new exporters simultaneously to running only the new exporter.
Key details about this migration:
- This migration is a one-time process. Once completed successfully, your system will be fully transitioned to the new metrics exporter.
- The scripts are idempotent, so it is safe to run them more than once.
- After the migration is complete, remove the configuration added in this guide, following the cleanup steps for your deployment type.
Migration
Migration Script
We provide a comprehensive migration script that will help you:
- Migrate all existing alerts and dashboards
- Backfill historical data for high retention users
- Ensure seamless transition to the new metrics system
GitHub: Migration Script
Pre-Requisite
- Take a backup of your SQLite DB.
- Record the total counts of metrics, samples, and fingerprints so you can verify them after the migration.
- The migration process uses various environment variables to configure database connections, performance settings, and migration behavior. Understanding these variables is crucial for successful migration.
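For example, the backup and pre-migration counts can be captured as below. This is a sketch: the table and column names assume the default signoz_metrics schema of recent SigNoz versions, and the SQLite path assumes a default installation; adjust both to your setup.

```shell
# Back up the SQLite DB (default path shown; adjust if yours differs)
cp /var/lib/signoz/signoz.db /var/lib/signoz/signoz.db.bak

# Record pre-migration counts to compare after the migration
clickhouse-client --host localhost --port 9000 --query \
  "SELECT count() AS samples FROM signoz_metrics.distributed_samples_v4"
clickhouse-client --host localhost --port 9000 --query \
  "SELECT uniqExact(metric_name) AS metrics, uniqExact(fingerprint) AS fingerprints
   FROM signoz_metrics.distributed_time_series_v4"
```

Keep the numbers somewhere safe; you will compare them against the same queries after the migration finishes.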
Environment Variables
ClickHouse Connection Variables
Variable | Description | Default | Required |
---|---|---|---|
CH_ADDR | ClickHouse server address and port | localhost:9001 | Yes |
CH_DATABASE | ClickHouse database name for metrics | default | Yes |
CH_USER | ClickHouse username for authentication | default | Yes |
CH_PASS | ClickHouse password for authentication | "" (empty) | Yes |
ClickHouse Performance & Connection Pool Variables
Variable | Description | What we used / Default |
---|---|---|
CH_MAX_OPEN_CONNS | Maximum number of open database connections | 10 (metadata), 32 (data), 5 (default) |
CH_MAX_IDLE_CONNS | Maximum number of idle connections in pool | 8 , 2 (default) |
CH_CONN_MAX_LIFETIME | Maximum lifetime of a connection | 30m , 10m (default) |
CH_DIAL_TIMEOUT | Connection timeout duration | 60s , 5s (default) |
CH_MAX_MEMORY_USAGE | Maximum memory usage per query (bytes) | 8388608000 (8GB), 1048576000 (default 1GB) |
CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY | Memory threshold for external GROUP BY operations | 524288000 (500MB), 104857600 (default 100MB) |
CH_MAX_BYTES_BEFORE_EXTERNAL_SORT | Memory threshold for external sorting | 524288000 (500MB), 104857600 (default 100MB) |
CH_MAX_EXECUTION_TIME | Maximum query execution time (seconds) | 300 , 90 (default) |
CH_MAX_THREADS | Maximum threads for query processing | 50 , 10 (default) |
Migration-Specific Variables
Variable | Description | Default | Usage |
---|---|---|---|
MIGRATE_WORKERS | Number of parallel migration workers | 12 | Data migration performance |
MIGRATE_MAX_OPEN_CONNS | Maximum connections for migration process | 32 | Migration-specific connection limit |
Metrics & Attributes Mapping Variables
These variables provide mappings for metrics and attributes whose names changed during the migration and whose dot-notation counterparts are not present in the DB.
Variable | Description | Example Value |
---|---|---|
NOT_FOUND_METRICS_MAP | Maps old metric names to new ones; passed as a string in the form 'key1=value1,key2=value2' | rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket |
NOT_FOUND_ATTR_MAP | Maps old attribute names to new ones; passed as a string in the form 'key1=value1,key2=value2' | http_scheme=http.scheme,net_peer_name=net.peer.name |
SKIP_METRICS_MAP | Skips invalid metrics; passed as a string in the form 'metricName1=true,metricName2=true' | dd_internal_stats_payload=true |
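As a quick sanity check before launching the migration, you can split one of these map strings into its individual pairs. This is a hypothetical illustration of the 'key1=value1,key2=value2' format, not part of the migration tooling:

```shell
# The map variables are plain comma-separated key=value strings.
NOT_FOUND_ATTR_MAP="http_scheme=http.scheme,net_peer_name=net.peer.name"

# Split on commas and print each old->new pair before handing the
# variable to the migration container.
IFS=',' read -r -a pairs <<< "$NOT_FOUND_ATTR_MAP"
for pair in "${pairs[@]}"; do
  echo "old=${pair%%=*} new=${pair#*=}"
done
```

A malformed entry (a missing `=`, or a stray trailing comma) will show up immediately in the printed pairs.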
Kubernetes
If you're using Helm charts for deployment, follow these steps:
- Update your values.yaml to include the migration configuration
- Run the migration using the provided Docker image, making sure your ClickHouse connection parameters match your Helm deployment
- Verify the migration by checking your dashboards and alerts after completion
Step 1: Migrate Historical Data (For High Retention Users)
This job runs alongside your SigNoz pods and connects to ClickHouse to perform the insert operations that migrate older data (only needed for users with high retention periods).
apiVersion: batch/v1
kind: Job
metadata:
  name: signoz-data-migration-job
  namespace: your-namespace # Replace with your namespace
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - name: migration
          image: signoz/migrate:v0.70.5
          imagePullPolicy: IfNotPresent
          args: ["migrate-data", "--workers=$(MIGRATE_WORKERS)", "--max-open-conns=$(MIGRATE_MAX_OPEN_CONNS)"]
          env:
            - name: CH_ADDR
              value: "your-clickhouse-service:9000" # Replace with your ClickHouse service
            - name: CH_DATABASE
              value: "signoz_metrics"
            - name: CH_USER
              value: "admin" # Replace with your ClickHouse user
            - name: CH_PASS
              value: "your-password" # Replace with your ClickHouse password
            - name: CH_MAX_OPEN_CONNS
              value: "32"
            - name: CH_MAX_MEMORY_USAGE
              value: "8388608000"
            - name: CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY
              value: "524288000"
            - name: CH_MAX_BYTES_BEFORE_EXTERNAL_SORT
              value: "524288000"
            - name: CH_DIAL_TIMEOUT
              value: "60s"
            - name: CH_CONN_MAX_LIFETIME
              value: "30m"
            - name: CH_MAX_IDLE_CONNS
              value: "8"
            - name: CH_MAX_EXECUTION_TIME
              value: "300"
            - name: CH_MAX_THREADS
              value: "50"
            - name: MIGRATE_WORKERS
              value: "12"
            - name: MIGRATE_MAX_OPEN_CONNS
              value: "32"
            - name: NOT_FOUND_METRICS_MAP
              value: "rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket"
            - name: NOT_FOUND_ATTR_MAP
              value: "http_scheme=http.scheme,net_peer_name=net.peer.name,net_peer_port=net.peer.port,net_protocol_name=net.protocol.name,net_protocol_version=net.protocol.version,rpc_grpc_status_code=rpc.grpc.status_code,rpc_method=rpc.method,rpc_service=rpc.service,rpc_system=rpc.system"
          resources:
            requests:
              memory: 116000Mi # Adjust to your cluster capacity
              cpu: 29500m # Adjust to your cluster capacity
      restartPolicy: Never
      tolerations: # Adjust or remove these tolerations to match your cluster
        - effect: NoSchedule
          key: signoz.cloud/workload
          operator: Equal
          value: store
        - effect: NoSchedule
          key: signoz.cloud/deployment.tier
          operator: Equal
          value: premium
Apply the data migration job:
kubectl apply -f data-migration-job.yaml
# 1) List Jobs (verify signoz-data-migration-job exists and its status)
kubectl get jobs -n <your-namespace>
# 2) List Pods created by that Job
kubectl get pods -l job-name=signoz-data-migration-job -n <your-namespace> -o wide
To verify that the migration succeeded, check the logs of the migration job:
kubectl logs job/signoz-data-migration-job -c migration -n <your-namespace>
2025-06-25 18:30:12 {"level":"info","ts":1750856412.9032447,"caller":"migrate/main.go:659","msg":"query args","start":1749718800000,"end":1749722250683}
2025-06-25 18:30:12 {"level":"info","ts":1750856412.943157,"caller":"migrate/main.go:728","msg":"migration success for windows start: 1749718800000, and end: 1749722250683"}
2025-06-25 18:30:12 2025/06/25 13:00:12 Data migration completed [1749718800000…1749722250683]
Step 2: Migrate Alerts and Dashboards Using Init Container
For alerts and dashboards migration, you need to add an init container to your SigNoz deployment. This init container will run before the main SigNoz container starts and execute the necessary queries on the SQLite database to migrate alerts and dashboards.
Add the following init container to your SigNoz deployment manifest:
initContainers:
  - name: migration
    image: signoz/migrate:v0.70.5
    imagePullPolicy: IfNotPresent
    env:
      - name: SQL_DB_PATH
        value: /var/lib/signoz/signoz.db
      - name: CH_ADDR
        value: "your-clickhouse-service:9000" # Replace with your ClickHouse service
      - name: CH_DATABASE
        value: signoz_metrics
      - name: CH_USER
        value: admin
      - name: CH_PASS
        value: "your-password" # Replace with your ClickHouse password
      - name: CH_MAX_OPEN_CONNS
        value: "10"
      - name: SKIP_METRICS_MAP
        value: "dd_internal_stats_payload=true"
      - name: CH_MAX_MEMORY_USAGE
        value: "8388608000" # 8 GB
      - name: CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY
        value: "4194304000" # 4 GB
      - name: CH_MAX_BYTES_BEFORE_EXTERNAL_SORT
        value: "4194304000" # 4 GB
      - name: NOT_FOUND_METRICS_MAP
        value: |-
          rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket
      - name: NOT_FOUND_ATTR_MAP
        value: |-
          http_scheme=http.scheme,
    args:
      - migrate-meta
    resources: {} # add limits/requests as needed
    volumeMounts:
      - name: signoz-db
        mountPath: /var/lib/signoz
# 1) List all Deployments in the Signoz namespace
kubectl get deploy -n <your-namespace>
# 2) List Pods (init containers show up as Init:<x>/<y> until they finish)
kubectl get pod -n <your-namespace> -o wide
To verify that the migration succeeded, check the logs of the migration init container:
kubectl logs <pod-name> -c migration -n <your-namespace>
✅ updated x dashboards in signoz.db
✅ updated y rules in /var/lib/signoz/signoz.db
Step 3: Enable New Metrics
You also need to set the environment variable DOT_METRICS_ENABLED to true on the SigNoz container so the platform uses dot metrics.
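A minimal sketch of where the variable goes in the SigNoz Deployment manifest (the container name and surrounding structure are assumptions; merge this into your existing spec rather than copying it verbatim):

```yaml
# Fragment of the SigNoz Deployment's pod spec
containers:
  - name: signoz # assumed container name; match yours
    env:
      - name: DOT_METRICS_ENABLED
        value: "true"
```

If you deploy via Helm, the equivalent is to add the variable under your chart's environment/extra-env values and upgrade the release.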
Step 4: Clean Up Migration Jobs
After the migration is complete, you should remove the init container from your SigNoz deployment manifest to prevent it from running again.
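For example, using the names from earlier in this guide:

```shell
# Remove the completed data-migration Job (this also cleans up its pods)
kubectl delete job signoz-data-migration-job -n <your-namespace>

# Then re-apply your SigNoz deployment manifest with the
# initContainers block removed, e.g.:
kubectl apply -f signoz-deployment.yaml
```

The manifest filename above is a placeholder; use whatever file or Helm values you normally deploy SigNoz from.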
Docker Standalone Users
If you're running SigNoz using Docker, follow these steps:
- Update your Docker Compose file to include the migration configuration
- Run the migration commands using the provided Docker image
- Verify the migration by checking your dashboards and alerts after completion
Step 1: Migrate Historical Data (For High Retention Users)
This job runs alongside your SigNoz container and connects to ClickHouse to perform the insert operations that migrate older data (only needed for users with high retention periods).
Add the following to your docker-compose.yml file. Make sure your ClickHouse container is running and accessible.
migration-job:
  !!merge <<: *db-depend
  image: signoz/migrate:v0.70.5
  command: >
    migrate-data
    --workers=${MIGRATE_WORKERS:-12}
    --max-open-conns=${MIGRATE_MAX_OPEN_CONNS:-32}
  environment:
    # Point to the ClickHouse service defined in SigNoz's compose file
    CH_ADDR: "your-clickhouse-service-address:9000"
    CH_DATABASE: signoz_metrics
    CH_USER: default # Replace with your ClickHouse user
    # If you have a password set for ClickHouse, replace it here
    # If you are using ClickHouse without a password, you can leave it empty
    CH_PASS: ""
    CH_MAX_OPEN_CONNS: "32"
    CH_MAX_MEMORY_USAGE: "8388608000"
    CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY: "524288000"
    CH_MAX_BYTES_BEFORE_EXTERNAL_SORT: "524288000"
    CH_DIAL_TIMEOUT: "60s"
    CH_CONN_MAX_LIFETIME: "30m"
    CH_MAX_IDLE_CONNS: "8"
    CH_MAX_EXECUTION_TIME: "300"
    CH_MAX_THREADS: "50"
    MIGRATE_WORKERS: "12"
    MIGRATE_MAX_OPEN_CONNS: "32"
    NOT_FOUND_METRICS_MAP: >
      rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket
    NOT_FOUND_ATTR_MAP: >
      http_scheme=http.scheme,net_peer_name=net.peer.name,net_peer_port=net.peer.port,
      net_protocol_name=net.protocol.name,net_protocol_version=net.protocol.version,
      rpc_grpc_status_code=rpc.grpc.status_code,rpc_method=rpc.method,
      rpc_service=rpc.service,rpc_system=rpc.system
  restart: "no"
Apply the migration job:
docker-compose up -d migration-job
Check the logs to ensure the migration is running smoothly:
docker-compose logs -f migration-job
2025-06-25 18:30:12 {"level":"info","ts":1750856412.9032447,"caller":"migrate/main.go:659","msg":"query args","start":1749718800000,"end":1749722250683}
2025-06-25 18:30:12 {"level":"info","ts":1750856412.943157,"caller":"migrate/main.go:728","msg":"migration success for windows start: 1749718800000, and end: 1749722250683"}
2025-06-25 18:30:12 2025/06/25 13:00:12 Data migration completed [1749718800000…1749722250683]
Step 2: Migrate Alerts and Dashboards
For alerts and dashboards migration, you need to run the migration script against your SQLite database. This can be done using the same Docker image.
Add the following service to your Docker Compose setup, or run the container separately (if running separately, make sure to bring down the SigNoz container first, as the migration modifies the SQLite database):
migrate-signoz:
  !!merge <<: *db-depend
  image: signoz/migrate:v0.70.5
  command: migrate-meta
  restart: "no"
  environment:
    SQL_DB_PATH: /var/lib/signoz/signoz.db
    CH_ADDR: signoz-clickhouse:9000
    CH_DATABASE: signoz_metrics
    CH_USER: default
    CH_PASS: ""
    CH_MAX_OPEN_CONNS: "10"
    CH_MAX_MEMORY_USAGE: "8388608000"
    CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY: "4194304000"
    CH_MAX_BYTES_BEFORE_EXTERNAL_SORT: "4194304000"
    SKIP_METRICS_MAP: "dd_internal_stats_payload=true"
    NOT_FOUND_METRICS_MAP: "rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket"
    NOT_FOUND_ATTR_MAP: "http_scheme=http.scheme,"
  volumes:
    - sqlite:/var/lib/signoz
To make sure the migration runs before the main SigNoz container, use the depends_on directive in your docker-compose.yml:
signoz:
  depends_on:
    - migrate-signoz
To check whether the migration succeeded, inspect the logs of the migrate-signoz container:
docker logs -f migrate-signoz
✅ updated x dashboards in signoz.db
✅ updated y rules in /var/lib/signoz/signoz.db
Step 3: Enable New Metrics
You also need to set the environment variable DOT_METRICS_ENABLED to true to allow the platform to use dot metrics.
signoz:
  environment:
    DOT_METRICS_ENABLED: "true"
Apply the updated stack configuration:
docker-compose up -d signoz
Step 4: Clean Up Migration Jobs
After the migration is complete, you should remove the migration jobs from your Docker setup to prevent them from running again. You can do this by stopping and removing the migration containers:
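For example, using the service names defined earlier in this guide:

```shell
# Stop and remove both migration containers in one step
docker-compose rm -sf migration-job migrate-signoz
```

Also remove the migration-job and migrate-signoz service definitions (and the depends_on entry pointing at migrate-signoz) from your docker-compose.yml so they are not recreated on the next `docker-compose up`.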
Docker Swarm
If you're running SigNoz using Docker Swarm, follow these steps:
Step 1: Migrate Historical Data (For High Retention Users)
You can run the migration job in your Docker Swarm setup by creating a service with the migration image. Make sure your ClickHouse service is running and accessible.
The service below runs as a one-shot job and will not restart after completion.
docker service create \
--name signoz_migration_job \
--mode replicated-job --replicas 1 \
--restart-condition none \
--network signoz-net \
--mount type=volume,src=sqlite,dst=/var/lib/signoz \
\
--env CH_ADDR=signoz_clickhouse:9000 \
--env CH_DATABASE=signoz_metrics \
--env CH_USER=default \
--env CH_PASS= \
\
--env CH_MAX_OPEN_CONNS=32 \
--env CH_MAX_MEMORY_USAGE=8388608000 \
--env CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY=524288000 \
--env CH_MAX_BYTES_BEFORE_EXTERNAL_SORT=524288000 \
--env CH_DIAL_TIMEOUT=60s \
--env CH_CONN_MAX_LIFETIME=30m \
--env CH_MAX_IDLE_CONNS=8 \
--env CH_MAX_EXECUTION_TIME=300 \
--env CH_MAX_THREADS=50 \
\
--env MIGRATE_WORKERS=12 \
--env MIGRATE_MAX_OPEN_CONNS=32 \
\
--env NOT_FOUND_METRICS_MAP=rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket \
--env NOT_FOUND_ATTR_MAP=http_scheme=http.scheme,net_peer_name=net.peer.name,net_peer_port=net.peer.port,net_protocol_name=net.protocol.name,net_protocol_version=net.protocol.version,rpc_grpc_status_code=rpc.grpc.status_code,rpc_method=rpc.method,rpc_service=rpc.service,rpc_system=rpc.system \
\
signoz/migrate:v0.70.5 \
migrate-data --workers 12 --max-open-conns 32
After the service is created, you can check the status of the migration job:
docker service ls
To check the logs of the migration job, you can use:
docker service logs signoz_migration_job -f
signoz_migration_job | {"level":"info","ts":1750878083.9140384,"caller":"migrate/main.go:659","msg":"query args","start":1750867200000,"end":1750869262833}
signoz_migration_job | {"level":"info","ts":1750878084.1492937,"caller":"migrate/main.go:728","msg":"migration success for windows start: 1750867200000, and end: 1750869262833"}
signoz_migration_job | 2025/06/25 19:01:24 Data migration completed [1750867200000…1750869262833]
Step 2: Migrate Alerts and Dashboards
For alerts and dashboards migration, you can create another service in your Docker Swarm setup using the migration image.
Make sure to bring down the existing SigNoz service before running this migration job, as it modifies the SQLite database. You can bring it back up after the migration is complete.
# Set SIGNOZ_SERVICE to the name of your SigNoz service (see `docker service ls`)
# Keep the original replica count for the SigNoz service
orig_replicas=$(docker service inspect "$SIGNOZ_SERVICE" \
  --format '{{.Spec.Mode.Replicated.Replicas}}')
docker service scale "${SIGNOZ_SERVICE}=0"
Run the migration job to update alerts and dashboards:
docker service create \
--name "signoz_meta-job" \
--mode replicated-job --replicas 1 \
--restart-condition none \
--network "signoz-net" \
--mount type=volume,src=signoz-sqlite,dst=/var/lib/signoz/ \
\
-e SQL_DB_PATH=/var/lib/signoz/signoz.db \
-e CH_ADDR=signoz_clickhouse:9000 \
-e CH_DATABASE=signoz_metrics \
-e CH_USER=default \
-e CH_PASS="" \
-e CH_MAX_MEMORY_USAGE="8388608000" \
-e CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY="4194304000" \
-e CH_MAX_BYTES_BEFORE_EXTERNAL_SORT="4194304000" \
-e CH_MAX_OPEN_CONNS="10" \
-e SKIP_METRICS_MAP="dd_internal_stats_payload=true" \
-e NOT_FOUND_METRICS_MAP="rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket" \
-e NOT_FOUND_ATTR_MAP="http_scheme=http.scheme," \
\
signoz/migrate:v0.70.5 \
migrate-meta
After the service is created, you can check the status of the migration job:
docker service ls
To check the logs of the migration job, you can use:
docker service logs signoz_meta-job -f
signoz_meta-job | ✅ updated 3 dashboards in signoz.db
signoz_meta-job | ✅ updated 0 rules in /var/lib/signoz/signoz.db
After the migration is complete, you can bring back your SigNoz service with the original number of replicas:
docker service scale "${SIGNOZ_SERVICE}=${orig_replicas}"
Step 3: Enable New Metrics
You also need to set the environment variable DOT_METRICS_ENABLED to true to allow the platform to use dot metrics.
signoz:
  environment:
    DOT_METRICS_ENABLED: "true"
Apply the updated stack configuration:
docker stack deploy -c docker-compose.yml signoz
Step 4: Clean Up Migration Jobs
After the migration is complete, you should remove the migration jobs from your Docker Swarm setup to prevent them from running again. Remove the following jobs:
docker service rm signoz_migration_job
docker service rm signoz_meta-job
Linux Users
If you're running SigNoz on a Linux server without Docker or Kubernetes, you can follow these steps:
- Download the migration binary from the migration repository.
- Run the migration binary from the command line, passing the necessary environment variables and command-line arguments according to your setup.
Step 0: Enable the Dual Exporter
Before proceeding with the migration, ensure that the dual exporter is enabled in your SigNoz setup.
Check the otel-collector configuration file to verify that both of the following exporters are configured for your signozspanmetrics/delta processor:
- clickhousemetricswrite
- signozclickhousemetrics
If not, you can enable them with the following configuration:
processors:
  signozspanmetrics/delta:
    metrics_exporters: clickhousemetricswrite, signozclickhousemetrics
exporters:
  clickhousemetricswrite:
    endpoint: tcp://localhost:9000/signoz_metrics?password=password
    timeout: 15s
    resource_to_telemetry_conversion:
      enabled: true
    disable_v2: true
  signozclickhousemetrics:
    endpoint: tcp://localhost:9000/signoz_metrics?password=password
    timeout: 15s
service:
  pipelines:
    metrics:
      exporters: [clickhousemetricswrite, signozclickhousemetrics]
Kindly check the logs of the otel-collector to ensure that both exporters are running correctly and no errors are being reported. You can do this by running:
sudo journalctl -u signoz-otel-collector.service -f
Step 1: Migrate Historical Data (For High Retention Users)
Please ensure that ClickHouse is installed on your system and that the ClickHouse server is running and accessible.
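A quick way to confirm connectivity before starting (this assumes clickhouse-client is on your PATH and the server listens on the default native port; adjust host, port, and credentials to your setup):

```shell
# Should print "1" if the server is reachable and credentials are valid
clickhouse-client --host localhost --port 9000 \
  --user default --password "password" --query "SELECT 1"
```

If this fails, fix connectivity first; the migration script will otherwise fail mid-run.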
Run the migration command to backfill historical data:
#!/usr/bin/env bash
set -euo pipefail
trap 'echo "❌ Migration failed at line $LINENO."; exit 1' ERR
# 1) Export ClickHouse settings
export CH_ADDR="localhost:9000"
export CH_DATABASE="signoz_metrics"
export CH_USER="default"
export CH_PASS="password"
# 2) Export ClickHouse tuning knobs (optional—use what you need)
export CH_MAX_OPEN_CONNS="32"
export CH_MAX_MEMORY_USAGE="8388608000"
export CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY="524288000"
export CH_MAX_BYTES_BEFORE_EXTERNAL_SORT="524288000"
export CH_DIAL_TIMEOUT="60s"
export CH_CONN_MAX_LIFETIME="30m"
export CH_MAX_IDLE_CONNS="8"
export CH_MAX_EXECUTION_TIME="300"
export CH_MAX_THREADS="50"
export MIGRATE_WORKERS="12"
export MIGRATE_MAX_OPEN_CONNS="32"
export NOT_FOUND_METRICS_MAP="rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket"
export NOT_FOUND_ATTR_MAP="http_scheme=http.scheme,net_peer_name=net.peer.name,net_peer_port=net.peer.port,net_protocol_name=net.protocol.name,net_protocol_version=net.protocol.version,rpc_grpc_status_code=rpc.grpc.status_code,rpc_method=rpc.method,rpc_service=rpc.service,rpc_system=rpc.system"
export SKIP_METRICS_MAP="dd_internal_stats_payload=true"
# 3) Point to your local SQLite file (for metadata)
export SQL_DB_PATH="/var/lib/signoz/signoz.db"
# 4) Run metadata migration first (dashboards, users, etc.)
# echo ">> Migrating metadata (sqlite)…"
# migrate-linux migrate-meta
# 5) Then run the data migration (ClickHouse)
echo ">> Migrating metrics data (ClickHouse)…"
migrate_output=$(./migrate migrate-data \
--workers "${MIGRATE_WORKERS}" \
--max-open-conns "${MIGRATE_MAX_OPEN_CONNS}" \
2>&1)
# Print what the tool said
echo "$migrate_output"
# If any line contains `"level":"error"`, abort
if grep -q '"level":"error"' <<<"$migrate_output"; then
echo "❌ Migration failed: errors detected in migrate-data output."
exit 1
fi
echo "✅ Migration complete!"
Run the migration script to backfill historical data:
chmod +x migrate-script.sh
# Make sure you have the migrate binary in the same directory as this script
# If not, download it from the migration repository
# and place it in the same directory or update the script to point to the correct path.
./migrate-script.sh
On successful completion, you should see a message indicating that the migration was successful:
>> Migrating metrics data (ClickHouse)…
✅ Migration complete!
Step 2: Migrate Alerts and Dashboards
Bring down the SigNoz service before running the migration script, as it will modify the SQLite database.
# Stop the SigNoz service if it's running
sudo systemctl stop signoz.service
Please ensure that you have the right user access to the SQLite database. If you followed the default installation, the database file is located at /var/lib/signoz/signoz.db and the signoz user has access to it.
Run the scripts as the signoz user:
# Switch to the signoz user
sudo -u signoz -s -- bash
Run the migration script to update alerts and dashboards:
#!/usr/bin/env bash
set -euo pipefail
trap 'echo "❌ Migration failed at line $LINENO."; exit 1' ERR
# 1) Export ClickHouse settings
export CH_ADDR="localhost:9000"
export CH_DATABASE="signoz_metrics"
export CH_USER="default"
export CH_PASS="password"
# 2) Export ClickHouse tuning knobs (optional—use what you need)
export CH_MAX_OPEN_CONNS="32"
export CH_MAX_MEMORY_USAGE="8388608000"
export CH_MAX_BYTES_BEFORE_EXTERNAL_GROUP_BY="4194304000"
export CH_MAX_BYTES_BEFORE_EXTERNAL_SORT="524288000"
export NOT_FOUND_METRICS_MAP="rpc_server_responses_per_rpc_bucket=rpc.server.responses_per_rpc.bucket"
export NOT_FOUND_ATTR_MAP="http_scheme=http.scheme,"
export SKIP_METRICS_MAP="dd_internal_stats_payload=true"
# 3) Point to your local SQLite file (for metadata)
export SQL_DB_PATH="/var/lib/signoz/signoz.db"
# 4) Run metadata migration first (dashboards, users, etc.)
echo ">> Migrating metadata (sqlite)…"
migrate_output=$(./migrate migrate-meta \
2>&1)
echo "$migrate_output"
# If any line contains `"level":"error"`, abort
if grep -q '"level":"error"' <<<"$migrate_output"; then
echo "❌ Migration failed: errors detected in migrate-meta output."
exit 1
fi
echo "✅ Migration complete!"
Run the migration script to update alerts and dashboards:
chmod +x migrate-script.sh
# Make sure you have the migrate binary in the same directory as this script
# If not, download it from the migration repository
# and place it in the same directory or update the script to point to the correct path.
./migrate-script.sh
On successful completion, you should see a message indicating that the migration was successful:
>> Migrating metadata (sqlite)…
✅ updated 0 dashboards in signoz.db
✅ updated 0 rules in /var/lib/signoz/signoz.db
✅ Migration complete!
Bring back the SigNoz service after the migration is complete:
# Start the SigNoz service again
sudo systemctl start signoz.service
Step 3: Enable New Metrics
To enable the new dot metrics, set the environment variable DOT_METRICS_ENABLED to true in your SigNoz configuration.
Follow these steps to update the environment variable for the signoz service:
vim /opt/signoz/conf/systemd.env
# Add the following line to enable dot metrics
DOT_METRICS_ENABLED=true
Reload the systemd units and restart the SigNoz service to apply the changes:
sudo systemctl daemon-reload
sudo systemctl restart signoz.service
Migration Failure
If you encounter issues during migration:
- Visit our GitHub Issues
- Join our Community Slack for support
Next Steps
Once migration is complete:
- Monitor your system for a few days to ensure stability
- Update any external integrations that might reference the old metrics format
- Update any SigNoz API key if the dot metrics exporter requires a new key
- Consider optimizing your retention policies based on the new exporter's capabilities
FAQ
- Connection errors: Verify your ClickHouse connection parameters
- Permission issues: Ensure the Docker container has access to your signoz.db file
- Data inconsistencies: Run the migration script again if you notice missing data