feat(mining-2.0): support standalone deployment mode; 2.0 services can run independently of 1.0

## Background
The 2.0 mining ecosystem needs to migrate from the 1.0 server (192.168.1.111) to a dedicated server (192.168.1.10)
for physically isolated deployment. 2.0 runs its own PostgreSQL/Redis and receives 1.0 CDC data via remote Kafka.

## docker-compose.2.0.yml changes

### Network parameterization
- The network is now declared with `external: true` and a parameterized name:
  `name: ${RWA_NETWORK_NAME:-services_rwa-network}`
- shared mode reuses the 1.0 network; standalone mode uses its own rwa-2-network
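As a rough sketch, the parameterized declaration could look like the following (the top-level network key name is an assumption; `external` and `name` are the fields this commit describes):

```yaml
networks:
  rwa-network:
    external: true
    # shared mode: falls back to the 1.0 network created by the services stack.
    # standalone mode: the deploy script sets RWA_NETWORK_NAME=rwa-2-network.
    name: ${RWA_NETWORK_NAME:-services_rwa-network}
```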

### Infrastructure address parameterization (all 7 backend services)
- DATABASE_URL: @${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}
- REDIS_HOST: ${REDIS_HOST:-redis}
- KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
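The `${VAR:-default}` pattern is what lets one compose file serve both modes. A plain-bash sketch (illustrative values, not taken from the script) of how the same expansion resolves differently depending on whether the deploy script exported overrides:

```shell
#!/usr/bin/env bash
# Shared mode: nothing exported, so compose falls back to the in-file defaults.
unset POSTGRES_HOST KAFKA_BROKERS
shared_pg="${POSTGRES_HOST:-postgres}"
shared_kafka="${KAFKA_BROKERS:-kafka:29092}"

# Standalone mode: the deploy script exports overrides before invoking compose.
export POSTGRES_HOST="postgres-2" KAFKA_BROKERS="192.168.1.111:9093"
standalone_pg="${POSTGRES_HOST:-postgres}"
standalone_kafka="${KAFKA_BROKERS:-kafka:29092}"

echo "shared:     $shared_pg / $shared_kafka"
echo "standalone: $standalone_pg / $standalone_kafka"
```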

### 3 new standalone-profile infrastructure services
- postgres-2: PostgreSQL 16, logical WAL, databases created automatically by init-multiple-dbs.sh
- redis-2: Redis 7, appendonly persistence
- debezium-2: Debezium Connect 2.4, connects to the remote Kafka (192.168.1.111:9093);
  used for 2.0's internal outbox pattern (service events → Kafka → mining-admin-service)

### Optional dependencies
- Application services declare depends_on postgres-2/redis-2 with required: false
  (Docker Compose v2.20+; skipped automatically in shared mode)
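A hedged sketch of the optional dependency (the service name is one of this commit's services; the `condition` value shown is an assumption). With `required: false`, Compose v2.20+ silently ignores the dependency when the target service is not enabled, i.e. when the standalone profile is inactive in shared mode:

```yaml
services:
  contribution-service:
    depends_on:
      postgres-2:
        condition: service_started
        required: false   # skipped in shared mode, where postgres-2 is absent
      redis-2:
        condition: service_started
        required: false
```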

## deploy-mining.sh changes

### Deployment mode switch
- DEPLOY_MODE environment variable: shared (default) / standalone
- load_env() sets infrastructure addresses and COMPOSE_ARGS per mode
- shared mode does not export the variables, so the container-side Docker Compose defaults are not overridden
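Why shared mode avoids `export`: only exported variables reach child processes such as `docker compose`, where they would shadow the compose-file `${VAR:-default}` fallbacks. A minimal demonstration with hypothetical variable names:

```shell
#!/usr/bin/env bash
POSTGRES_HOST_SCRIPT="localhost"           # plain assignment: script-local only
export POSTGRES_HOST_EXPORTED="localhost"  # export: visible to child processes

# A child process (stand-in for `docker compose`) applying a default:
child_plain="$(bash -c 'echo "${POSTGRES_HOST_SCRIPT:-postgres}"')"
child_exported="$(bash -c 'echo "${POSTGRES_HOST_EXPORTED:-postgres}"')"

echo "plain assignment -> child sees: $child_plain"     # default wins
echo "exported         -> child sees: $child_exported"  # override wins
```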

### Remote Kafka operation support
- run_kafka_cmd() / run_kafka_cmd_stdin(): in standalone mode, run the Kafka CLI in a temporary container
- All Kafka operation functions (sync_reset, sync_status, cdc_resnapshot, full_reset) adapted

### Infrastructure lifecycle management
- infra_up / infra_down: manage the standalone infrastructure on their own
- services_up(): in standalone mode, automatically starts the infrastructure first
- services_down(): in standalone mode, also stops the infrastructure
- services_clean(): fallback list includes the standalone container names

### full_reset() standalone adaptation
- standalone mode skips the 1.0 source CDC operations (Debezium is managed on the remote side)
- standalone uses the debezium_2_offsets offset topic
- automatically starts debezium-2 before registering the outbox connectors

### Miscellaneous
- health_check(): in standalone mode, checks the local PG/Redis containers plus the remote Kafka
- show_help(): documents DEPLOY_MODE and the new infra-up/infra-down commands
- print_header(): shows the current DEPLOY_MODE
- All ~15 docker compose invocations now consistently use $COMPOSE_ARGS
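The remote-Kafka reachability probe used by health_check() splits `host:port` with bash parameter expansion before calling `nc -z`; a self-contained sketch of just the split:

```shell
#!/usr/bin/env bash
KAFKA_BROKERS="192.168.1.111:9093"
kafka_host="${KAFKA_BROKERS%%:*}"   # drop everything from the first ':' on
kafka_port="${KAFKA_BROKERS##*:}"   # drop everything up to the last ':'
echo "probing $kafka_host on port $kafka_port"
# Actual probe (requires network access to the 1.0 server):
# nc -z "$kafka_host" "$kafka_port" && echo "Kafka reachable"
```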

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in:
hailin 2026-02-01 19:32:03 -08:00
parent f39fd52001
commit ee07b52af9
2 changed files with 632 additions and 262 deletions


@@ -7,6 +7,10 @@
# The 2.0 system is completely isolated and can be reset at any time without
# affecting the 1.0 system.
#
# Deploy Modes (set via DEPLOY_MODE env var):
# shared (default) Co-located with 1.0, shares PostgreSQL/Redis/Kafka
# standalone Separate server with local PostgreSQL/Redis, remote Kafka (192.168.1.111:9093)
#
# Usage:
# ./deploy-mining.sh up [service] # Start all or specific service
# ./deploy-mining.sh down [service] # Stop all or specific service
@@ -56,6 +60,9 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENV_FILE="$SCRIPT_DIR/.env"
COMPOSE_FILE="$SCRIPT_DIR/docker-compose.2.0.yml"
# Deployment mode: "shared" (default, same server as 1.0) or "standalone" (separate server)
DEPLOY_MODE="${DEPLOY_MODE:-shared}"
# 2.0 Services
MINING_SERVICES=(
"contribution-service"
@@ -143,9 +150,6 @@ CDC_POSTGRES_CONNECTORS=(
"planting-postgres-connector"
)
# Debezium Connect URL (default port 8084 as mapped in docker-compose)
DEBEZIUM_CONNECT_URL="${DEBEZIUM_CONNECT_URL:-http://localhost:8084}"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
@@ -169,6 +173,7 @@ print_header() {
echo ""
echo -e "${CYAN}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${CYAN}${NC} ${BOLD}RWA Mining Ecosystem 2.0 - Management Script${NC} ${CYAN}${NC}"
echo -e "${CYAN}${NC} Mode: ${BOLD}${DEPLOY_MODE}${NC} ${CYAN}${NC}"
echo -e "${CYAN}╚════════════════════════════════════════════════════════════╝${NC}"
echo ""
}
@@ -254,17 +259,43 @@ load_env() {
fi
# Set defaults (match docker-compose.yml settings)
export POSTGRES_HOST="${POSTGRES_HOST:-localhost}"
export POSTGRES_PORT="${POSTGRES_PORT:-5432}"
export POSTGRES_USER="${POSTGRES_USER:-rwa_user}"
export POSTGRES_PASSWORD="${POSTGRES_PASSWORD:-rwa_secure_password}"
export KAFKA_BROKERS="${KAFKA_BROKERS:-localhost:9092}"
export REDIS_HOST="${REDIS_HOST:-localhost}"
export REDIS_PORT="${REDIS_PORT:-6379}"
# Docker container names (match docker-compose.yml container_name)
POSTGRES_CONTAINER="${POSTGRES_CONTAINER:-rwa-postgres}"
KAFKA_CONTAINER="${KAFKA_CONTAINER:-rwa-kafka}"
# Mode-specific configuration
if [ "$DEPLOY_MODE" = "standalone" ]; then
# Standalone: local PostgreSQL/Redis, remote Kafka
export POSTGRES_HOST="${POSTGRES_HOST:-postgres-2}"
export POSTGRES_PORT="${POSTGRES_PORT:-5432}"
export KAFKA_BROKERS="${KAFKA_BROKERS:-192.168.1.111:9093}"
export REDIS_HOST="${REDIS_HOST:-redis-2}"
export RWA_NETWORK_NAME="rwa-2-network"
POSTGRES_CONTAINER="${POSTGRES_CONTAINER:-rwa-postgres-2}"
KAFKA_CONTAINER="" # Kafka is remote, no local container
DEBEZIUM_CONTAINER="${DEBEZIUM_CONTAINER:-rwa-debezium-2}"
COMPOSE_ARGS="-f $COMPOSE_FILE --profile standalone --env-file $ENV_FILE"
# Create standalone network if it doesn't exist
docker network create rwa-2-network 2>/dev/null || true
else
# Shared: use 1.0's infrastructure
# NOTE: Do NOT export these — they are for script-side operations (run_psql, run_kafka_cmd).
# Docker Compose must use its own defaults (postgres, redis, kafka:29092) for container-side.
POSTGRES_HOST="${POSTGRES_HOST:-localhost}"
POSTGRES_PORT="${POSTGRES_PORT:-5432}"
KAFKA_BROKERS="${KAFKA_BROKERS:-localhost:9092}"
REDIS_HOST="${REDIS_HOST:-localhost}"
POSTGRES_CONTAINER="${POSTGRES_CONTAINER:-rwa-postgres}"
KAFKA_CONTAINER="${KAFKA_CONTAINER:-rwa-kafka}"
DEBEZIUM_CONTAINER="${DEBEZIUM_CONTAINER:-rwa-debezium-connect}"
COMPOSE_ARGS="-f $COMPOSE_FILE --env-file $ENV_FILE"
fi
# Debezium Connect URL (default port 8084 as mapped in docker-compose)
DEBEZIUM_CONNECT_URL="${DEBEZIUM_CONNECT_URL:-http://localhost:8084}"
}
# Substitute environment variables in a file, supporting ${VAR:-default} syntax.
@@ -294,6 +325,55 @@ run_psql() {
return $?
}
# ===========================================================================
# Helper: Execute Kafka admin command (auto-detect Docker or remote)
# ===========================================================================
run_kafka_cmd() {
local cmd="$1"
shift
if [ "$DEPLOY_MODE" = "standalone" ]; then
# Standalone: Kafka is remote, use temporary container with host networking
docker run --rm --network host confluentinc/cp-kafka:7.5.0 \
$cmd --bootstrap-server "$KAFKA_BROKERS" "$@"
return $?
fi
# Shared: try local kafka container first
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
docker exec "$KAFKA_CONTAINER" $cmd --bootstrap-server localhost:9092 "$@"
return $?
fi
# Fall back to local kafka CLI
if command -v $cmd &>/dev/null; then
$cmd --bootstrap-server "$KAFKA_BROKERS" "$@"
return $?
fi
log_warn "No Kafka CLI available (container '$KAFKA_CONTAINER' not found, no local CLI)"
return 1
}
# Helper: Pipe stdin to Kafka admin command (for kafka-console-producer)
run_kafka_cmd_stdin() {
local cmd="$1"
shift
if [ "$DEPLOY_MODE" = "standalone" ]; then
docker run --rm -i --network host confluentinc/cp-kafka:7.5.0 \
$cmd --bootstrap-server "$KAFKA_BROKERS" "$@"
return $?
fi
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
docker exec -i "$KAFKA_CONTAINER" $cmd --bootstrap-server localhost:9092 "$@"
return $?
fi
return 1
}
# ===========================================================================
# Database Functions
# ===========================================================================
@@ -370,8 +450,8 @@ db_migrate() {
docker exec "$container_name" npx prisma db push --accept-data-loss 2>/dev/null || {
log_warn "Container migration failed, trying to build and run temporary container..."
# Build and run a temporary container for migration
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" run --rm "$service" npx prisma migrate deploy 2>/dev/null || \
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" run --rm "$service" npx prisma db push --accept-data-loss 2>/dev/null || {
docker compose $COMPOSE_ARGS run --rm "$service" npx prisma migrate deploy 2>/dev/null || \
docker compose $COMPOSE_ARGS run --rm "$service" npx prisma db push --accept-data-loss 2>/dev/null || {
log_warn "Migration failed for $service"
}
}
@@ -383,8 +463,8 @@ db_migrate() {
else
# No npx and no container - try docker compose run
log_info "No local npx, using docker compose run for migration"
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" run --rm "$service" npx prisma migrate deploy 2>/dev/null || \
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" run --rm "$service" npx prisma db push --accept-data-loss 2>/dev/null || {
docker compose $COMPOSE_ARGS run --rm "$service" npx prisma migrate deploy 2>/dev/null || \
docker compose $COMPOSE_ARGS run --rm "$service" npx prisma db push --accept-data-loss 2>/dev/null || {
log_warn "Migration failed for $service"
}
fi
@@ -457,7 +537,7 @@ service_start() {
local port="${SERVICE_PORTS[$service]}"
log_step "Starting: $service (port $port)"
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" up -d "$service"
docker compose $COMPOSE_ARGS up -d "$service"
log_success "$service started"
}
@@ -465,14 +545,14 @@ service_stop() {
local service="$1"
log_step "Stopping: $service"
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" stop "$service"
docker compose $COMPOSE_ARGS stop "$service"
log_success "$service stopped"
}
service_restart() {
local service="$1"
log_step "Restarting: $service"
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" restart "$service"
docker compose $COMPOSE_ARGS restart "$service"
log_success "$service restarted"
}
@@ -485,10 +565,10 @@ service_build() {
# Use docker compose to build
if [ "$no_cache" = "--no-cache" ] || [ "$no_cache" = "true" ]; then
log_info "Building Docker image (no cache)..."
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" build --no-cache "$service"
docker compose $COMPOSE_ARGS build --no-cache "$service"
else
log_info "Building Docker image..."
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" build "$service"
docker compose $COMPOSE_ARGS build "$service"
fi
log_success "$service built successfully"
@@ -512,6 +592,46 @@ service_rebuild() {
log_success "$service rebuilt and restarted successfully"
}
# ===========================================================================
# Standalone Infrastructure Functions
# ===========================================================================
infra_up() {
if [ "$DEPLOY_MODE" != "standalone" ]; then
log_warn "infra-up is only available in standalone mode (current: $DEPLOY_MODE)"
log_info "In shared mode, infrastructure is managed by the 1.0 deployment"
return 1
fi
print_section "Starting Standalone Infrastructure"
log_step "Starting PostgreSQL and Redis..."
docker compose $COMPOSE_ARGS up -d postgres-2 redis-2
log_info "Waiting for PostgreSQL and Redis to become healthy..."
docker compose $COMPOSE_ARGS up -d --wait postgres-2 redis-2 2>/dev/null || sleep 15
log_step "Starting Debezium Connect..."
docker compose $COMPOSE_ARGS up -d debezium-2
log_success "Standalone infrastructure started"
log_info " PostgreSQL: rwa-postgres-2 (port 5432)"
log_info " Redis: rwa-redis-2 (port 6379)"
log_info " Debezium: rwa-debezium-2 (port 8084)"
log_info " Kafka: $KAFKA_BROKERS (remote)"
}
infra_down() {
if [ "$DEPLOY_MODE" != "standalone" ]; then
log_warn "infra-down is only available in standalone mode (current: $DEPLOY_MODE)"
return 1
fi
print_section "Stopping Standalone Infrastructure"
docker compose $COMPOSE_ARGS stop debezium-2 redis-2 postgres-2 2>/dev/null || true
log_success "Standalone infrastructure stopped"
}
# ===========================================================================
# Batch Service Functions
# ===========================================================================
@@ -519,6 +639,17 @@ services_up() {
local target="$1"
print_section "Starting 2.0 Services"
# In standalone mode, start infrastructure first
if [ "$DEPLOY_MODE" = "standalone" ] && [ -z "$target" ]; then
log_step "Starting standalone infrastructure (postgres-2, redis-2, debezium-2)..."
docker compose $COMPOSE_ARGS up -d postgres-2 redis-2
log_info "Waiting for PostgreSQL and Redis to become healthy..."
docker compose $COMPOSE_ARGS up -d --wait postgres-2 redis-2 2>/dev/null || sleep 15
log_step "Starting Debezium Connect..."
docker compose $COMPOSE_ARGS up -d debezium-2
log_success "Standalone infrastructure started"
fi
local services
services=$(get_services_to_process "$target")
[ -z "$services" ] && exit 1
@@ -542,6 +673,13 @@ services_down() {
service_stop "$service"
done
# In standalone mode, stop infrastructure when stopping all services
if [ "$DEPLOY_MODE" = "standalone" ] && [ -z "$target" ]; then
log_step "Stopping standalone infrastructure..."
docker compose $COMPOSE_ARGS stop debezium-2 redis-2 postgres-2 2>/dev/null || true
log_success "Standalone infrastructure stopped"
fi
log_success "Services stopped"
}
@@ -654,7 +792,7 @@ services_logs() {
exit 1
}
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" logs -f --tail="$lines" "$resolved"
docker compose $COMPOSE_ARGS logs -f --tail="$lines" "$resolved"
}
# ===========================================================================
@@ -696,28 +834,13 @@ sync_reset() {
local max_retries=3
while [ "$reset_success" = false ] && [ $retry_count -lt $max_retries ]; do
# Try local kafka-consumer-groups.sh first
if command -v kafka-consumer-groups.sh &>/dev/null; then
if kafka-consumer-groups.sh --bootstrap-server "$KAFKA_BROKERS" \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
reset_success=true
fi
fi
# Try docker exec if local failed
if [ "$reset_success" = false ] && docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
if docker exec "$KAFKA_CONTAINER" kafka-consumer-groups --bootstrap-server localhost:9092 \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
reset_success=true
fi
if run_kafka_cmd kafka-consumer-groups \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
reset_success=true
fi
if [ "$reset_success" = false ]; then
@@ -782,10 +905,8 @@ cdc_resnapshot() {
log_step "Deleting consumer groups..."
for group in "${CDC_CONSUMER_GROUPS[@]}"; do
log_info "Deleting consumer group: $group"
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
docker exec "$KAFKA_CONTAINER" kafka-consumer-groups --bootstrap-server localhost:9092 \
--delete --group "$group" 2>/dev/null && log_success "Deleted $group" || log_warn "Could not delete $group"
fi
run_kafka_cmd kafka-consumer-groups \
--delete --group "$group" 2>/dev/null && log_success "Deleted $group" || log_warn "Could not delete $group"
done
# Clear processed_cdc_events tables in all CDC consumer databases
@@ -893,20 +1014,9 @@ sync_status() {
echo -e "${BOLD}Consumer Group:${NC} $group"
echo ""
# Try local first, then docker
if command -v kafka-consumer-groups.sh &>/dev/null; then
kafka-consumer-groups.sh --bootstrap-server "$KAFKA_BROKERS" \
--group "$group" \
--describe 2>/dev/null && echo "" && continue
fi
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
docker exec "$KAFKA_CONTAINER" kafka-consumer-groups --bootstrap-server localhost:9092 \
--group "$group" \
--describe 2>&1 || log_warn "Could not get status for $group"
else
log_warn "Kafka container '$KAFKA_CONTAINER' not found"
fi
run_kafka_cmd kafka-consumer-groups \
--group "$group" \
--describe 2>&1 || log_warn "Could not get status for $group"
echo ""
done
}
@@ -1053,7 +1163,7 @@ services_clean() {
# Step 1: Stop and remove containers, networks, images, and volumes via docker compose
log_step "Stopping and removing containers, images, and volumes..."
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" down --rmi all --volumes 2>/dev/null || true
docker compose $COMPOSE_ARGS down --rmi all --volumes 2>/dev/null || true
# Step 2: Remove any remaining containers by name (in case compose down missed them)
log_step "Removing any remaining containers..."
@@ -1067,6 +1177,10 @@ services_clean() {
"rwa-mining-blockchain-service"
"rwa-mining-admin-web"
)
# Standalone infrastructure containers
if [ "$DEPLOY_MODE" = "standalone" ]; then
container_names+=("rwa-postgres-2" "rwa-redis-2" "rwa-debezium-2")
fi
for container in "${container_names[@]}"; do
if docker ps -a --format '{{.Names}}' 2>/dev/null | grep -q "^${container}$"; then
log_info "Removing container: $container"
@@ -1103,13 +1217,20 @@ full_reset() {
echo -e "${RED}${BOLD}║ - Recreate databases ║${NC}"
echo -e "${RED}${BOLD}║ - Run migrations ║${NC}"
echo -e "${RED}${BOLD}║ - Reset CDC consumer offsets ║${NC}"
echo -e "${RED}${BOLD}║ - Re-snapshot 1.0 source CDC connectors ║${NC}"
echo -e "${RED}${BOLD}║ - Restart services (will sync from 1.0) ║${NC}"
echo -e "${RED}${BOLD}║ ║${NC}"
echo -e "${RED}${BOLD}║ 1.0 services are NOT affected, but source CDC connectors ║${NC}"
echo -e "${RED}${BOLD}║ will be briefly re-created for a fresh snapshot. ║${NC}"
echo -e "${RED}${BOLD}╚════════════════════════════════════════════════════════════╝${NC}"
echo ""
if [ "$DEPLOY_MODE" = "standalone" ]; then
echo -e "${CYAN}Deploy mode: standalone${NC}"
echo -e "${CYAN} - Local PostgreSQL (postgres-2) and Redis (redis-2)${NC}"
echo -e "${CYAN} - Remote Kafka ($KAFKA_BROKERS)${NC}"
echo -e "${CYAN} - 1.0 source CDC connectors NOT touched (managed on 1.0 server)${NC}"
else
echo -e "${CYAN}Deploy mode: shared${NC}"
echo -e "${CYAN} - Using 1.0's PostgreSQL, Redis, and Kafka${NC}"
echo -e "${CYAN} - 1.0 source CDC connectors will be re-created for fresh snapshot${NC}"
fi
echo ""
read -p "Type 'RESET' to confirm: " confirm
if [ "$confirm" != "RESET" ]; then
@@ -1118,6 +1239,14 @@ full_reset() {
fi
echo ""
# In standalone mode, ensure infrastructure is running (needed for DB operations)
if [ "$DEPLOY_MODE" = "standalone" ]; then
log_step "Ensuring standalone infrastructure is running..."
docker compose $COMPOSE_ARGS up -d postgres-2 redis-2 2>/dev/null || true
docker compose $COMPOSE_ARGS up -d --wait postgres-2 redis-2 2>/dev/null || sleep 15
log_success "Standalone infrastructure is up"
fi
log_step "Step 1/19: Stopping 2.0 services..."
for service in "${MINING_SERVICES[@]}"; do
service_stop "$service"
@@ -1136,26 +1265,20 @@ full_reset() {
local max_retries=3
while [ "$reset_success" = false ] && [ $retry_count -lt $max_retries ]; do
# Try docker exec with the correct container name
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
if docker exec "$KAFKA_CONTAINER" kafka-consumer-groups --bootstrap-server localhost:9092 \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
log_success "CDC offsets reset for $group"
reset_success=true
else
retry_count=$((retry_count + 1))
if [ $retry_count -lt $max_retries ]; then
log_warn "Consumer group still active, waiting 10s (retry $retry_count/$max_retries)..."
sleep 10
fi
fi
if run_kafka_cmd kafka-consumer-groups \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
log_success "CDC offsets reset for $group"
reset_success=true
else
log_warn "Kafka container '$KAFKA_CONTAINER' not found"
break
retry_count=$((retry_count + 1))
if [ $retry_count -lt $max_retries ]; then
log_warn "Consumer group still active, waiting 10s (retry $retry_count/$max_retries)..."
sleep 10
fi
fi
done
@@ -1169,7 +1292,7 @@ full_reset() {
local connectors=("auth-outbox-connector" "contribution-outbox-connector" "mining-outbox-connector" "trading-outbox-connector" "mining-wallet-outbox-connector")
for connector in "${connectors[@]}"; do
log_info "Deleting connector: $connector"
curl -s -X DELETE "http://localhost:8084/connectors/$connector" 2>/dev/null || true
curl -s -X DELETE "$DEBEZIUM_CONNECT_URL/connectors/$connector" 2>/dev/null || true
done
log_info "Waiting 5 seconds for connectors to be fully removed..."
sleep 5
@@ -1189,7 +1312,7 @@ full_reset() {
for connector in "${connectors[@]}"; do
log_info "Attempting to delete offset via REST API: $connector"
local delete_result
delete_result=$(curl -s -w "\n%{http_code}" -X DELETE "http://localhost:8084/connectors/$connector/offsets" 2>/dev/null)
delete_result=$(curl -s -w "\n%{http_code}" -X DELETE "$DEBEZIUM_CONNECT_URL/connectors/$connector/offsets" 2>/dev/null)
local http_code=$(echo "$delete_result" | tail -1)
if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
log_success "Deleted offset via REST API: $connector"
@@ -1201,8 +1324,13 @@ full_reset() {
# Strategy 2: Always try tombstone method as primary approach
# The offset topic name is configured via OFFSET_STORAGE_TOPIC env var in Debezium Connect
# Default is "debezium_offsets" (not "connect-offsets"!)
local offset_topic="debezium_offsets"
# In standalone mode, offset topic is "debezium_2_offsets"; in shared mode, "debezium_offsets"
local offset_topic
if [ "$DEPLOY_MODE" = "standalone" ]; then
offset_topic="debezium_2_offsets"
else
offset_topic="debezium_offsets"
fi
log_info "Sending tombstones to offset topic: $offset_topic"
# The offset key format is: ["connector-name",{"server":"topic.prefix"}]
@@ -1220,27 +1348,13 @@ full_reset() {
local offset_key="[\"$connector\",{\"server\":\"$topic_prefix\"}]"
log_info "Sending tombstone for: $connector (key: $offset_key)"
# Send tombstone (NULL value) using kafka-console-producer
# Use null.marker to mark __NULL__ as NULL value
# Format: key\t__NULL__ with parse.key=true and null.marker=__NULL__
local tombstone_sent=false
# Use kafka-console-producer which is available in Kafka container
# --property parse.key=true: Enable key parsing
# --property key.separator=<TAB>: Use literal tab as key-value separator
# --property null.marker=__NULL__: Treat __NULL__ as null value (tombstone)
# Note: Must use printf to properly pass tab character through SSH/docker
if printf '%s\t%s\n' "$offset_key" "__NULL__" | docker exec -i "$KAFKA_CONTAINER" kafka-console-producer \
--bootstrap-server localhost:9092 \
if printf '%s\t%s\n' "$offset_key" "__NULL__" | run_kafka_cmd_stdin kafka-console-producer \
--topic "$offset_topic" \
--property parse.key=true \
--property "key.separator= " \
--property "null.marker=__NULL__" 2>/dev/null; then
log_success "Sent tombstone via kafka-console-producer for: $connector"
tombstone_sent=true
fi
if [ "$tombstone_sent" = false ]; then
else
log_warn "Could not send tombstone for $connector"
fi
done
@@ -1254,62 +1368,65 @@ full_reset() {
log_info "Deleting Kafka outbox topics to clear old messages..."
local outbox_topics=("cdc.auth.outbox" "cdc.contribution.outbox" "cdc.mining.outbox" "cdc.trading.outbox" "cdc.mining-wallet.outbox")
for topic in "${outbox_topics[@]}"; do
if docker exec "$KAFKA_CONTAINER" kafka-topics --bootstrap-server localhost:9092 --delete --topic "$topic" 2>/dev/null; then
if run_kafka_cmd kafka-topics --delete --topic "$topic" 2>/dev/null; then
log_success "Deleted Kafka topic: $topic"
else
log_warn "Could not delete Kafka topic: $topic (may not exist)"
fi
done
# Also delete 1.0 source CDC connectors so they can be re-created with a fresh snapshot.
# Without this, connectors retain old Debezium offsets and skip re-snapshot, which means
# any tables added to the config after initial registration (e.g., wallet_addresses) will
# never have their existing data captured into Kafka topics.
log_info "Deleting 1.0 source CDC connectors for re-snapshot..."
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
log_info "Deleting source connector: $connector"
curl -s -X DELETE "$DEBEZIUM_CONNECT_URL/connectors/$connector" 2>/dev/null || true
done
sleep 3
# 1.0 source CDC connector operations (shared mode only)
# In standalone mode, source CDC connectors are on 1.0's Debezium - managed separately
if [ "$DEPLOY_MODE" != "standalone" ]; then
# Delete 1.0 source CDC connectors so they can be re-created with a fresh snapshot
log_info "Deleting 1.0 source CDC connectors for re-snapshot..."
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
log_info "Deleting source connector: $connector"
curl -s -X DELETE "$DEBEZIUM_CONNECT_URL/connectors/$connector" 2>/dev/null || true
done
sleep 3
# Clear source connector offsets (same tombstone approach as outbox connectors)
log_info "Clearing 1.0 source connector offsets..."
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
local topic_prefix=""
case "$connector" in
"identity-postgres-connector") topic_prefix="cdc.identity" ;;
"referral-postgres-connector") topic_prefix="cdc.referral" ;;
"planting-postgres-connector") topic_prefix="cdc.planting" ;;
esac
# Clear source connector offsets (same tombstone approach as outbox connectors)
log_info "Clearing 1.0 source connector offsets..."
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
local topic_prefix=""
case "$connector" in
"identity-postgres-connector") topic_prefix="cdc.identity" ;;
"referral-postgres-connector") topic_prefix="cdc.referral" ;;
"planting-postgres-connector") topic_prefix="cdc.planting" ;;
esac
if [ -n "$topic_prefix" ]; then
local offset_key="[\"$connector\",{\"server\":\"$topic_prefix\"}]"
if printf '%s\t%s\n' "$offset_key" "__NULL__" | docker exec -i "$KAFKA_CONTAINER" kafka-console-producer \
--bootstrap-server localhost:9092 \
--topic "$offset_topic" \
--property parse.key=true \
--property "key.separator= " \
--property "null.marker=__NULL__" 2>/dev/null; then
log_success "Cleared offset for: $connector"
else
log_warn "Could not clear offset for $connector"
if [ -n "$topic_prefix" ]; then
local offset_key="[\"$connector\",{\"server\":\"$topic_prefix\"}]"
if printf '%s\t%s\n' "$offset_key" "__NULL__" | run_kafka_cmd_stdin kafka-console-producer \
--topic "$offset_topic" \
--property parse.key=true \
--property "key.separator= " \
--property "null.marker=__NULL__" 2>/dev/null; then
log_success "Cleared offset for: $connector"
else
log_warn "Could not clear offset for $connector"
fi
fi
fi
done
done
# Drop replication slots so new connectors will do a clean initial snapshot
log_info "Dropping 1.0 source connector replication slots..."
local source_slots=("debezium_identity_slot" "debezium_referral_slot" "debezium_planting_slot")
for slot in "${source_slots[@]}"; do
if run_psql "rwa_identity" "SELECT pg_drop_replication_slot('$slot');" 2>/dev/null; then
log_success "Dropped replication slot: $slot"
else
log_warn "Could not drop replication slot: $slot (may not exist or still active)"
fi
done
# Drop replication slots so new connectors will do a clean initial snapshot
log_info "Dropping 1.0 source connector replication slots..."
local source_slots=("debezium_identity_slot" "debezium_referral_slot" "debezium_planting_slot")
for slot in "${source_slots[@]}"; do
if run_psql "rwa_identity" "SELECT pg_drop_replication_slot('$slot');" 2>/dev/null; then
log_success "Dropped replication slot: $slot"
else
log_warn "Could not drop replication slot: $slot (may not exist or still active)"
fi
done
log_info "Waiting 5 seconds for offset and slot changes to take effect..."
sleep 5
log_info "Waiting 5 seconds for offset and slot changes to take effect..."
sleep 5
else
log_info "Standalone mode: skipping 1.0 source CDC connector management"
log_info "1.0 source CDC connectors are managed on the 1.0 server (192.168.1.111)"
fi
log_step "Step 5/19: Dropping 2.0 databases..."
db_drop
@@ -1324,7 +1441,7 @@ full_reset() {
log_step "Step 8/19: Stopping containers and resetting CDC offsets again..."
log_info "Migration may have started CDC consumers, stopping them now..."
for service in "${MINING_SERVICES[@]}"; do
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" stop "$service" 2>/dev/null || true
docker compose $COMPOSE_ARGS stop "$service" 2>/dev/null || true
done
log_info "Waiting 20 seconds for consumer groups to become inactive..."
@@ -1338,25 +1455,20 @@ full_reset() {
local max_retries=3
while [ "$reset_success" = false ] && [ $retry_count -lt $max_retries ]; do
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${KAFKA_CONTAINER}$"; then
if docker exec "$KAFKA_CONTAINER" kafka-consumer-groups --bootstrap-server localhost:9092 \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
log_success "CDC offsets reset for $group"
reset_success=true
else
retry_count=$((retry_count + 1))
if [ $retry_count -lt $max_retries ]; then
log_warn "Consumer group still active, waiting 10s (retry $retry_count/$max_retries)..."
sleep 10
fi
fi
if run_kafka_cmd kafka-consumer-groups \
--group "$group" \
--reset-offsets \
--to-earliest \
--all-topics \
--execute 2>&1 | grep -q "NEW-OFFSET"; then
log_success "CDC offsets reset for $group"
reset_success=true
else
log_warn "Kafka container '$KAFKA_CONTAINER' not found"
break
retry_count=$((retry_count + 1))
if [ $retry_count -lt $max_retries ]; then
log_warn "Consumer group still active, waiting 10s (retry $retry_count/$max_retries)..."
sleep 10
fi
fi
done
@@ -1385,42 +1497,53 @@ full_reset() {
fi
log_step "Step 9/19: Re-registering 1.0 source CDC connectors (fresh snapshot)..."
# Re-create source connectors using JSON config files.
# Since we cleared their offsets and dropped replication slots in Step 4,
# snapshot.mode=initial will trigger a full re-snapshot of all configured tables.
local scripts_dir="$SCRIPT_DIR/scripts/debezium"
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
local config_file=""
case "$connector" in
"identity-postgres-connector") config_file="$scripts_dir/identity-connector.json" ;;
"referral-postgres-connector") config_file="$scripts_dir/referral-connector.json" ;;
"planting-postgres-connector") config_file="$scripts_dir/planting-connector.json" ;;
esac
if [ "$DEPLOY_MODE" != "standalone" ]; then
# Re-create source connectors using JSON config files.
# Since we cleared their offsets and dropped replication slots in Step 4,
# snapshot.mode=initial will trigger a full re-snapshot of all configured tables.
local scripts_dir="$SCRIPT_DIR/scripts/debezium"
for connector in "${CDC_POSTGRES_CONNECTORS[@]}"; do
local config_file=""
case "$connector" in
"identity-postgres-connector") config_file="$scripts_dir/identity-connector.json" ;;
"referral-postgres-connector") config_file="$scripts_dir/referral-connector.json" ;;
"planting-postgres-connector") config_file="$scripts_dir/planting-connector.json" ;;
esac
if [ -n "$config_file" ] && [ -f "$config_file" ]; then
log_info "Registering source connector: $connector"
local result
result=$(substitute_env_vars "$config_file" | curl -s -X POST "$DEBEZIUM_CONNECT_URL/connectors" \
-H "Content-Type: application/json" \
-d @- 2>/dev/null)
if [ -n "$config_file" ] && [ -f "$config_file" ]; then
log_info "Registering source connector: $connector"
local result
result=$(substitute_env_vars "$config_file" | curl -s -X POST "$DEBEZIUM_CONNECT_URL/connectors" \
-H "Content-Type: application/json" \
-d @- 2>/dev/null)
if echo "$result" | grep -q '"name"'; then
log_success "Registered source connector: $connector"
if echo "$result" | grep -q '"name"'; then
log_success "Registered source connector: $connector"
else
log_warn "Failed to register source connector $connector: $result"
fi
else
log_warn "Failed to register source connector $connector: $result"
log_warn "Config file not found for $connector, skipping"
fi
else
log_warn "Config file not found for $connector, skipping"
fi
sleep 2
done
sleep 2
done
# Wait for Debezium snapshots to produce data to Kafka topics
log_info "Waiting 15 seconds for source connector snapshots to complete..."
sleep 15
# Wait for Debezium snapshots to produce data to Kafka topics
log_info "Waiting 15 seconds for source connector snapshots to complete..."
sleep 15
else
log_info "Standalone mode: skipping 1.0 source CDC connector re-registration"
log_info "Source CDC connectors are managed on the 1.0 server (192.168.1.111)"
log_info "Ensure 1.0 source connectors are running and producing to Kafka"
fi
log_step "Step 10/19: Starting 2.0 services..."
# In standalone mode, also start Debezium (needed for outbox connector registration in Step 12)
if [ "$DEPLOY_MODE" = "standalone" ]; then
log_info "Starting Debezium Connect (standalone)..."
docker compose $COMPOSE_ARGS up -d debezium-2
fi
for service in "${MINING_SERVICES[@]}"; do
service_start "$service"
done
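The commit message describes `run_kafka_cmd()` running the Kafka CLI in a temporary container in standalone mode; the implementation is not part of this hunk, but its branching might look like the following hypothetical sketch (the CLI image and the 1.0 Kafka container name are assumptions). It echoes the docker invocation instead of executing it, so the mode logic is visible without Docker:

```shell
# Hypothetical sketch of run_kafka_cmd; image/container names are assumed.
run_kafka_cmd() {
    if [ "$DEPLOY_MODE" = "standalone" ]; then
        # No local Kafka container: run the CLI in a throwaway container on the 2.0 network
        echo docker run --rm --network "${RWA_NETWORK_NAME:-rwa-2-network}" \
            bitnami/kafka:3.6 "$@" --bootstrap-server "$KAFKA_BROKERS"
    else
        # Shared mode: exec into the existing 1.0 Kafka container
        echo docker exec rwa-kafka "$@" --bootstrap-server "$KAFKA_BROKERS"
    fi
}

DEPLOY_MODE=standalone
KAFKA_BROKERS=192.168.1.111:9093
run_kafka_cmd kafka-topics.sh --list   # prints the docker run command without executing it
```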
@@ -1578,7 +1701,91 @@ health_check() {
local all_healthy=true
echo -e "${BOLD}Deploy Mode:${NC} $DEPLOY_MODE"
echo ""
# Check infrastructure
echo -e "${BOLD}Infrastructure:${NC}"
if [ "$DEPLOY_MODE" = "standalone" ]; then
# Standalone: check local Docker containers for PostgreSQL and Redis
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^rwa-postgres-2$"; then
local pg_health
pg_health=$(docker inspect --format='{{.State.Health.Status}}' rwa-postgres-2 2>/dev/null || echo "unknown")
if [ "$pg_health" = "healthy" ]; then
echo -e " ${GREEN}✓${NC} PostgreSQL (rwa-postgres-2, healthy)"
else
echo -e " ${YELLOW}~${NC} PostgreSQL (rwa-postgres-2, $pg_health)"
fi
else
echo -e " ${RED}✗${NC} PostgreSQL (rwa-postgres-2 container not running)"
all_healthy=false
fi
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^rwa-redis-2$"; then
local redis_health
redis_health=$(docker inspect --format='{{.State.Health.Status}}' rwa-redis-2 2>/dev/null || echo "unknown")
if [ "$redis_health" = "healthy" ]; then
echo -e " ${GREEN}✓${NC} Redis (rwa-redis-2, healthy)"
else
echo -e " ${YELLOW}~${NC} Redis (rwa-redis-2, $redis_health)"
fi
else
echo -e " ${RED}✗${NC} Redis (rwa-redis-2 container not running)"
all_healthy=false
fi
# Remote Kafka
if nc -z ${KAFKA_BROKERS%%:*} ${KAFKA_BROKERS##*:} 2>/dev/null; then
echo -e " ${GREEN}✓${NC} Kafka ($KAFKA_BROKERS, remote)"
else
echo -e " ${RED}✗${NC} Kafka ($KAFKA_BROKERS, remote - unreachable)"
all_healthy=false
fi
# Local Debezium
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^rwa-debezium-2$"; then
if curl -s "$DEBEZIUM_CONNECT_URL" &>/dev/null; then
echo -e " ${GREEN}✓${NC} Debezium Connect (rwa-debezium-2, $DEBEZIUM_CONNECT_URL)"
else
echo -e " ${YELLOW}~${NC} Debezium Connect (rwa-debezium-2 running, API not ready)"
fi
else
echo -e " ${RED}✗${NC} Debezium Connect (rwa-debezium-2 container not running)"
all_healthy=false
fi
else
# Shared: check via network connectivity
if run_psql "postgres" "SELECT 1" &>/dev/null; then
echo -e " ${GREEN}✓${NC} PostgreSQL ($POSTGRES_HOST:$POSTGRES_PORT)"
else
echo -e " ${RED}✗${NC} PostgreSQL ($POSTGRES_HOST:$POSTGRES_PORT)"
all_healthy=false
fi
if nc -z ${KAFKA_BROKERS%%:*} ${KAFKA_BROKERS##*:} 2>/dev/null; then
echo -e " ${GREEN}✓${NC} Kafka ($KAFKA_BROKERS)"
else
echo -e " ${RED}✗${NC} Kafka ($KAFKA_BROKERS)"
all_healthy=false
fi
if nc -z "$REDIS_HOST" "$REDIS_PORT" 2>/dev/null; then
echo -e " ${GREEN}✓${NC} Redis ($REDIS_HOST:$REDIS_PORT)"
else
echo -e " ${RED}✗${NC} Redis ($REDIS_HOST:$REDIS_PORT)"
all_healthy=false
fi
if curl -s "$DEBEZIUM_CONNECT_URL" &>/dev/null; then
echo -e " ${GREEN}✓${NC} Debezium Connect ($DEBEZIUM_CONNECT_URL)"
else
echo -e " ${YELLOW}~${NC} Debezium Connect ($DEBEZIUM_CONNECT_URL - not reachable)"
fi
fi
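The `${KAFKA_BROKERS%%:*}` / `${KAFKA_BROKERS##*:}` pair used in the checks above is plain POSIX parameter expansion splitting a single host:port address (note it would not handle a comma-separated multi-broker list without splitting first):

```shell
KAFKA_BROKERS="192.168.1.111:9093"
host=${KAFKA_BROKERS%%:*}   # strip the longest suffix matching ':*'  -> "192.168.1.111"
port=${KAFKA_BROKERS##*:}   # strip the longest prefix matching '*:'  -> "9093"
echo "$host $port"          # -> 192.168.1.111 9093
```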
# Check databases
echo ""
echo -e "${BOLD}Databases:${NC}"
for db in "${MINING_DATABASES[@]}"; do
if run_psql "$db" "SELECT 1" &>/dev/null; then
@@ -1602,24 +1809,6 @@ health_check() {
fi
done
echo ""
echo -e "${BOLD}Infrastructure:${NC}"
# Kafka
if nc -z ${KAFKA_BROKERS%%:*} ${KAFKA_BROKERS##*:} 2>/dev/null; then
echo -e " ${GREEN}✓${NC} Kafka ($KAFKA_BROKERS)"
else
echo -e " ${RED}✗${NC} Kafka ($KAFKA_BROKERS)"
all_healthy=false
fi
# Redis
if nc -z "$REDIS_HOST" "$REDIS_PORT" 2>/dev/null; then
echo -e " ${GREEN}✓${NC} Redis ($REDIS_HOST:$REDIS_PORT)"
else
echo -e " ${RED}✗${NC} Redis ($REDIS_HOST:$REDIS_PORT)"
all_healthy=false
fi
echo ""
if [ "$all_healthy" = true ]; then
echo -e "${GREEN}${BOLD}All systems healthy!${NC}"
@@ -1724,6 +1913,26 @@ show_help() {
echo " $0 logs admin 200 # Show last 200 lines of admin logs"
echo " $0 db-reset mining # Reset only mining-service database"
echo ""
echo -e "${BOLD}Deploy Modes:${NC}"
echo " DEPLOY_MODE=shared (default) Share 1.0's PostgreSQL, Redis, Kafka"
echo " DEPLOY_MODE=standalone Independent PostgreSQL/Redis, remote Kafka"
echo ""
echo -e "${BOLD}Standalone Infrastructure (standalone mode only):${NC}"
echo " infra-up Start PostgreSQL, Redis, Debezium (standalone only)"
echo " infra-down Stop standalone infrastructure"
echo ""
echo -e "${BOLD}Standalone Mode Examples:${NC}"
echo " DEPLOY_MODE=standalone $0 infra-up # Start standalone infrastructure first"
echo " DEPLOY_MODE=standalone $0 up # Start all services (auto-starts infra too)"
echo " DEPLOY_MODE=standalone $0 health # Check standalone infra health"
echo " DEPLOY_MODE=standalone $0 full-reset # Reset (skips 1.0 CDC connectors)"
echo ""
echo -e "${BOLD}Environment Variables (standalone mode):${NC}"
echo " KAFKA_BROKERS Remote Kafka address (default: 192.168.1.111:9093)"
echo " POSTGRES_HOST PostgreSQL host (default: postgres-2)"
echo " REDIS_HOST Redis host (default: redis-2)"
echo " DEBEZIUM_CONNECT_URL Debezium REST API (default: http://localhost:8084)"
echo ""
echo -e "${YELLOW}Note: The 2.0 system is completely isolated from 1.0.${NC}"
echo -e "${YELLOW}Any reset operation will NOT affect the 1.0 system.${NC}"
}
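Putting the help text together, a minimal `.env` for a standalone deployment might look like this (hypothetical; every value is simply the documented default):

```shell
# Hypothetical standalone .env - all values are the defaults listed in the help text
DEPLOY_MODE=standalone
KAFKA_BROKERS=192.168.1.111:9093
POSTGRES_HOST=postgres-2
REDIS_HOST=redis-2
DEBEZIUM_CONNECT_URL=http://localhost:8084
```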
@@ -1812,6 +2021,16 @@ main() {
delete_outbox_connectors
;;
# Standalone infrastructure
infra-up)
print_header
infra_up
;;
infra-down)
print_header
infra_down
;;
# Clean
clean)
print_header


@@ -2,21 +2,113 @@
# RWA Mining Ecosystem 2.0 - Docker Compose
# =============================================================================
#
# The 2.0 system shares 1.0's infrastructure (PostgreSQL, Redis, Kafka) but stays fully isolated:
# - Databases: dedicated database names (rwa_contribution, rwa_mining, etc.)
# - Redis: dedicated DB partitions (DB 8,10,12-16), avoiding clashes with 1.0 services (DB 0-11)
# - Kafka: only consumes 1.0 data-change events via CDC, one-way sync
# - HTTP: 2.0 services call each other and do not call 1.0 services
# Supports two deployment modes (switched via the DEPLOY_MODE environment variable):
#
# 1. shared mode (default): shares 1.0's PostgreSQL/Redis/Kafka
# - Network: services_rwa-network (created by 1.0)
# - Infrastructure: reuses the 1.0 postgres, redis, kafka containers
#
# 2. standalone mode: independent deployment on a separate server
# - Network: rwa-2-network (local)
# - Infrastructure: local postgres-2, redis-2 + remote Kafka (192.168.1.111:9093)
# - Start commands must add --profile standalone
#
# Usage:
# docker-compose -f docker-compose.2.0.yml up -d # start all 2.0 services
# docker-compose -f docker-compose.2.0.yml down # stop all 2.0 services
# docker-compose -f docker-compose.2.0.yml up -d mining-admin # start a single service
# docker-compose -f docker-compose.2.0.yml logs -f # tail logs
# # Shared mode (default):
# docker compose -f docker-compose.2.0.yml --env-file .env up -d
#
# # Standalone mode:
# docker compose -f docker-compose.2.0.yml --profile standalone --env-file .env up -d
#
# =============================================================================
services:
# ===========================================================================
# Infrastructure (standalone mode only, enabled via --profile standalone)
# ===========================================================================
postgres-2:
image: postgres:16-alpine
container_name: rwa-postgres-2
profiles: ["standalone"]
environment:
TZ: Asia/Shanghai
POSTGRES_USER: ${POSTGRES_USER:-rwa_user}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-rwa_secure_password}
POSTGRES_MULTIPLE_DATABASES: rwa_contribution,rwa_mining,rwa_trading,rwa_mining_admin,rwa_auth,rwa_mining_wallet,rwa_mining_blockchain
ports:
- "5432:5432"
volumes:
- postgres_2_data:/var/lib/postgresql/data
- ./init-multiple-dbs.sh:/docker-entrypoint-initdb.d/init-multiple-dbs.sh:ro
command: >
postgres
-c wal_level=logical
-c max_replication_slots=20
-c max_wal_senders=20
-c max_connections=200
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-rwa_user}"]
interval: 5s
timeout: 5s
retries: 10
restart: unless-stopped
networks:
- rwa-2-network
redis-2:
image: redis:7-alpine
container_name: rwa-redis-2
profiles: ["standalone"]
environment:
TZ: Asia/Shanghai
command: redis-server --appendonly yes --databases 20
ports:
- "6379:6379"
volumes:
- redis_2_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 10
restart: unless-stopped
networks:
- rwa-2-network
debezium-2:
image: debezium/connect:2.4
container_name: rwa-debezium-2
profiles: ["standalone"]
depends_on:
postgres-2:
condition: service_healthy
ports:
- "8084:8083"
environment:
TZ: Asia/Shanghai
GROUP_ID: debezium-connect-2
BOOTSTRAP_SERVERS: ${KAFKA_BROKERS:-kafka:29092}
CONFIG_STORAGE_TOPIC: debezium_2_configs
OFFSET_STORAGE_TOPIC: debezium_2_offsets
STATUS_STORAGE_TOPIC: debezium_2_statuses
CONFIG_STORAGE_REPLICATION_FACTOR: 1
OFFSET_STORAGE_REPLICATION_FACTOR: 1
STATUS_STORAGE_REPLICATION_FACTOR: 1
KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
KEY_CONVERTER_SCHEMAS_ENABLE: "false"
VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8083/"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
restart: unless-stopped
networks:
- rwa-2-network
# ===========================================================================
# Backend Services (2.0)
# ===========================================================================
@@ -26,19 +118,26 @@ services:
context: ./contribution-service
dockerfile: Dockerfile
container_name: rwa-contribution-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3020
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_contribution?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_contribution?schema=public
# Redis - isolated on DB 10
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 10
# Kafka - consumes CDC events (data synced from 1.0 services)
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
# Adoption orders (planting-service)
CDC_TOPIC_ADOPTIONS: ${CDC_TOPIC_ADOPTIONS:-cdc.planting.public.planting_orders}
# Referral relationships (referral-service)
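A note on the `${VAR:-default}` form used throughout these environment blocks: docker compose resolves it from the caller's environment at config time, falling back to the default when the variable is unset or empty, which is also why the deploy script avoids exporting these variables in shared mode. The expansion behaves the same way in plain shell:

```shell
unset REDIS_HOST
echo "${REDIS_HOST:-redis}"    # unset -> "redis"   (shared mode: the in-file default wins)
export REDIS_HOST=redis-2
echo "${REDIS_HOST:-redis}"    # set   -> "redis-2" (standalone mode: the exported value wins)
```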
@@ -56,26 +155,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
mining-service:
build:
context: ./mining-service
dockerfile: Dockerfile
container_name: rwa-mining-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3021
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_mining?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_mining?schema=public
# Redis - isolated on DB 16 (avoids clashing with 1.0 blockchain-service on DB 11)
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 16
# Kafka
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
# JWT config (secret shared with auth-service to verify tokens)
JWT_SECRET: ${JWT_SECRET:-your-jwt-secret-change-in-production}
# 2.0 internal service calls
@@ -90,26 +196,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
trading-service:
build:
context: ./trading-service
dockerfile: Dockerfile
container_name: rwa-trading-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3022
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_trading?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_trading?schema=public
# Redis - isolated on DB 12
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 12
# Kafka
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
# 2.0 internal service calls
MINING_SERVICE_URL: http://mining-service:3021
AUTH_SERVICE_URL: http://auth-service:3024
@@ -134,26 +247,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
mining-admin-service:
build:
context: ./mining-admin-service
dockerfile: Dockerfile
container_name: rwa-mining-admin-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3023
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_mining_admin?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_mining_admin?schema=public
# Redis - isolated on DB 13
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 13
# Kafka - consumes 2.0 inter-service events
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
CDC_CONSUMER_GROUP: mining-admin-service-cdc-group
# CDC topics - data synced from each 2.0 service
CDC_TOPIC_USERS: ${CDC_TOPIC_ADMIN_USERS:-mining-admin.auth.users}
@@ -181,26 +301,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
auth-service:
build:
context: ./auth-service
dockerfile: Dockerfile
container_name: rwa-auth-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3024
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_auth?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_auth?schema=public
# Redis - isolated on DB 14
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 14
# Kafka - consumes CDC events (users synced from 1.0 identity-service)
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
CDC_ENABLED: "true"
CDC_TOPIC_USERS: ${CDC_TOPIC_USERS:-cdc.identity.public.user_accounts}
CDC_TOPIC_WALLET_ADDRESSES: ${CDC_TOPIC_WALLET_ADDRESSES:-cdc.identity.public.wallet_addresses}
@@ -224,26 +351,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
mining-wallet-service:
build:
context: ./mining-wallet-service
dockerfile: Dockerfile
container_name: rwa-mining-wallet-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3025
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_mining_wallet?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_mining_wallet?schema=public
# Redis - isolated on DB 15
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 15
# Kafka
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
# JWT config
JWT_SECRET: ${JWT_SECRET:-your-jwt-secret-change-in-production}
# KAVA blockchain config
@@ -259,26 +393,33 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
mining-blockchain-service:
build:
context: ./mining-blockchain-service
dockerfile: Dockerfile
container_name: rwa-mining-blockchain-service
depends_on:
postgres-2:
condition: service_healthy
required: false
redis-2:
condition: service_healthy
required: false
environment:
NODE_ENV: production
TZ: Asia/Shanghai
PORT: 3026
# PostgreSQL - dedicated database
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@postgres:5432/rwa_mining_blockchain?schema=public
DATABASE_URL: postgresql://${POSTGRES_USER:-postgres}:${POSTGRES_PASSWORD:-postgres}@${POSTGRES_HOST:-postgres}:${POSTGRES_PORT:-5432}/rwa_mining_blockchain?schema=public
# Redis - isolated on DB 8 (valid range 0-15)
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_HOST: ${REDIS_HOST:-redis}
REDIS_PORT: ${REDIS_PORT:-6379}
REDIS_PASSWORD: ${REDIS_PASSWORD:-}
REDIS_DB: 8
# Kafka - used for MPC signing communication and event publishing
KAFKA_BROKERS: kafka:29092
KAFKA_BROKERS: ${KAFKA_BROKERS:-kafka:29092}
# JWT config
JWT_SECRET: ${JWT_SECRET:-your-jwt-secret-change-in-production}
# Blockchain config
@@ -319,7 +460,7 @@ services:
start_period: 60s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
# ===========================================================================
# Frontend Services (2.0)
@@ -346,18 +487,28 @@ services:
start_period: 15s
restart: unless-stopped
networks:
- rwa-network
- rwa-2-network
# =============================================================================
# Network config - joins the 1.0 network to share infrastructure
# Volumes
# =============================================================================
volumes:
mining-admin-uploads:
driver: local
trading-uploads:
driver: local
# standalone-mode-only volumes
postgres_2_data:
driver: local
redis_2_data:
driver: local
# =============================================================================
# Network
# =============================================================================
# shared mode (default): RWA_NETWORK_NAME unset, uses services_rwa-network (already created by 1.0)
# standalone mode: deploy-mining.sh sets RWA_NETWORK_NAME=rwa-2-network and pre-creates it
networks:
rwa-network:
rwa-2-network:
external: true
name: services_rwa-network
name: ${RWA_NETWORK_NAME:-services_rwa-network}