Major architectural refactoring to align with international MPC standards and enable horizontal scalability.

## Core Changes

### 1. DeviceInfo Made Optional
- Modified DeviceInfo.Validate() to allow empty device information
- Aligns with international MPC protocol standards
- MPC protocol layer should not mandate device-specific metadata
- Location: services/session-coordinator/domain/entities/device_info.go

### 2. Kubernetes Party Discovery Service
- Created infrastructure/k8s/party_discovery.go (220 lines)
- Implements dynamic service discovery via Kubernetes API
- Supports in-cluster config and kubeconfig fallback
- Auto-refreshes party list every 30s (configurable)
- Health-aware selection (only ready pods)
- Uses pod names as unique party IDs

### 3. Party Pool Architecture
- Defined PartyPoolPort interface for abstraction
- CreateSessionUseCase now supports automatic party selection
- When no participants specified, selects from K8s pool
- Graceful fallback to dynamic join mode if discovery fails
- Location: services/session-coordinator/application/ports/output/party_pool_port.go

### 4. Integration Updates
- Modified CreateSessionUseCase to inject partyPool
- Updated session-coordinator main.go to initialize K8s discovery
- gRPC handler already supports optional participants
- Added k8s client-go dependencies (v0.29.0) to go.mod

## Kubernetes Deployment

### New K8s Manifests
- k8s/namespace.yaml: mpc-system namespace
- k8s/configmap.yaml: shared configuration
- k8s/secrets-example.yaml: secrets template
- k8s/server-party-deployment.yaml: scalable party pool (3+ replicas)
- k8s/session-coordinator-deployment.yaml: coordinator with RBAC
- k8s/README.md: comprehensive deployment guide

### RBAC Configuration
- ServiceAccount for session-coordinator
- Role with pods/services get/list/watch permissions
- RoleBinding to grant discovery capabilities

## Key Features
✅ Dynamic service discovery via Kubernetes API
✅ Horizontal scaling (kubectl scale deployment)
✅ No hardcoded party IDs
✅ Health-aware party selection
✅ Graceful degradation when K8s unavailable
✅ MPC protocol compliance (optional DeviceInfo)

## Deployment Modes

### Docker Compose (Existing)
- Fixed 3 parties (server-party-1/2/3)
- Quick setup for development
- Backward compatible

### Kubernetes (New)
- Dynamic party pool
- Auto-discovery and scaling
- Production-ready

## Documentation
- Updated main README.md with deployment options
- Added architecture diagram showing scalable party pool
- Created comprehensive k8s/README.md with:
  - Quick start guide
  - Scaling instructions
  - Troubleshooting section
  - RBAC configuration details

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
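To make the party pool abstraction concrete, here is a minimal Go sketch of what the `PartyPoolPort` interface described above might look like. The `Party` struct and the exact `SelectParties` signature are assumptions for illustration, not the actual definitions in party_pool_port.go.

```go
// Hypothetical sketch of the PartyPoolPort abstraction; field and method
// shapes are assumptions, the real definition lives in
// services/session-coordinator/application/ports/output/party_pool_port.go.
package output

import "context"

// Party describes a discovered server party. In the Kubernetes deployment the
// PartyID is the pod name and Endpoint points at the pod's gRPC port.
type Party struct {
	PartyID  string
	Endpoint string
}

// PartyPoolPort lets CreateSessionUseCase pick parties without knowing
// whether they come from Kubernetes discovery or a static configuration.
type PartyPoolPort interface {
	// SelectParties returns up to count healthy parties from the pool.
	SelectParties(ctx context.Context, count int) ([]Party, error)
}
```

When no participants are specified and `SelectParties` fails (for example, when no cluster is reachable), the use case falls back to dynamic join mode, matching the graceful-degradation behaviour listed above.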
# Kubernetes Deployment for MPC System
This directory contains Kubernetes manifests for deploying the MPC system with dynamic party pool service discovery.
## Architecture Overview
The Kubernetes deployment implements a Party Pool architecture where:
- Server parties are dynamically discovered via Kubernetes service discovery
- Session coordinator automatically selects available parties from the pool
- Parties can be scaled up/down without code changes (just scale the deployment)
- No hardcoded party IDs - each pod gets a unique name as its party ID (see the discovery sketch below)
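In practice, discovery boils down to listing Ready pods by label and using their names as party IDs. A minimal client-go sketch of that idea follows; the function name and structure are assumptions, and the real implementation lives in infrastructure/k8s/party_discovery.go.

```go
// Minimal sketch of label-based, health-aware discovery with client-go; the
// actual implementation in infrastructure/k8s/party_discovery.go may differ
// in structure. The label selector matches the deployment manifests.
package k8s

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listReadyPartyIDs returns the names of all Ready party pods; each pod name
// doubles as that party's ID, so no IDs are hardcoded anywhere.
func listReadyPartyIDs(ctx context.Context, client kubernetes.Interface, namespace string) ([]string, error) {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: "app=mpc-server-party",
	})
	if err != nil {
		return nil, err
	}

	var ids []string
	for _, pod := range pods.Items {
		for _, cond := range pod.Status.Conditions {
			// Only pods whose readiness probe passes are eligible for selection.
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ids = append(ids, pod.Name)
				break
			}
		}
	}
	return ids, nil
}
```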
## Prerequisites
- Kubernetes cluster (v1.24+)
- kubectl configured to access your cluster
- Docker images built for all services
- PostgreSQL, Redis, and RabbitMQ deployed (see infrastructure/)
## Quick Start

### 1. Create namespace

```bash
kubectl apply -f namespace.yaml
```
### 2. Create secrets

Copy the example secrets file and fill in your actual values:

```bash
cp secrets-example.yaml secrets.yaml
# Edit secrets.yaml with your base64-encoded secrets
# Generate base64: echo -n "your-secret" | base64
kubectl apply -f secrets.yaml
```
### 3. Create ConfigMap

```bash
kubectl apply -f configmap.yaml
```
### 4. Deploy Session Coordinator

```bash
kubectl apply -f session-coordinator-deployment.yaml
```

The session coordinator requires RBAC permissions to discover party pods.
### 5. Deploy Server Party Pool

```bash
kubectl apply -f server-party-deployment.yaml
```

This creates a deployment with 3 replicas by default. Each pod gets a unique generated name (e.g., `mpc-server-party-<pod-template-hash>-<suffix>`), which serves as its party ID.
### 6. Deploy other services

```bash
kubectl apply -f message-router-deployment.yaml
kubectl apply -f account-service-deployment.yaml
kubectl apply -f server-party-api-deployment.yaml
```
## Scaling Server Parties

To scale the party pool, simply adjust the replica count:

```bash
# Scale up to 5 parties
kubectl scale deployment mpc-server-party -n mpc-system --replicas=5

# Scale down to 2 parties
kubectl scale deployment mpc-server-party -n mpc-system --replicas=2
```

The session coordinator will automatically discover new parties within 30 seconds (configurable via `MPC_PARTY_DISCOVERY_INTERVAL`).
## Service Discovery Configuration

The session coordinator uses environment variables to configure party discovery:

- `K8S_NAMESPACE`: Namespace to search for parties (auto-detected from pod metadata)
- `MPC_PARTY_SERVICE_NAME`: Service name to discover (`mpc-server-party`)
- `MPC_PARTY_LABEL_SELECTOR`: Label selector (`app=mpc-server-party`)
- `MPC_PARTY_GRPC_PORT`: gRPC port for parties (`50051`)
- `MPC_PARTY_DISCOVERY_INTERVAL`: Refresh interval (`30s`)
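For illustration, these settings could be loaded roughly as follows. The struct and helper names are assumptions, and the real service auto-detects the namespace from pod metadata rather than relying on a default.

```go
// Illustrative loader for the discovery settings above; type and function
// names are assumptions, not the actual configuration code.
package k8s

import (
	"os"
	"time"
)

type DiscoveryConfig struct {
	Namespace     string        // K8S_NAMESPACE (auto-detected from pod metadata in the real service)
	ServiceName   string        // MPC_PARTY_SERVICE_NAME
	LabelSelector string        // MPC_PARTY_LABEL_SELECTOR
	GRPCPort      string        // MPC_PARTY_GRPC_PORT
	Interval      time.Duration // MPC_PARTY_DISCOVERY_INTERVAL
}

func LoadDiscoveryConfig() DiscoveryConfig {
	cfg := DiscoveryConfig{
		Namespace:     getenv("K8S_NAMESPACE", "mpc-system"),
		ServiceName:   getenv("MPC_PARTY_SERVICE_NAME", "mpc-server-party"),
		LabelSelector: getenv("MPC_PARTY_LABEL_SELECTOR", "app=mpc-server-party"),
		GRPCPort:      getenv("MPC_PARTY_GRPC_PORT", "50051"),
		Interval:      30 * time.Second,
	}
	if v := os.Getenv("MPC_PARTY_DISCOVERY_INTERVAL"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			cfg.Interval = d
		}
	}
	return cfg
}

// getenv returns the environment value or a fallback default.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}
```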
## RBAC Permissions

The session coordinator requires the following Kubernetes permissions:

- `pods`: get, list, watch (to discover party pods)
- `services`: get, list, watch (to discover services)

These permissions are granted via the `mpc-session-coordinator-role` Role and RoleBinding.
## Health Checks

All services expose a `/health` endpoint on their HTTP port (8080) for:

- Liveness probes: detect whether the service is alive
- Readiness probes: detect whether the service is ready to accept traffic
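The probes only need the service to answer HTTP 200 on that path. A minimal sketch of such a handler follows; the actual services may report richer status information.

```go
// Minimal sketch of a /health handler that satisfies both probes; the real
// services may return more detailed status.
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Liveness and readiness probes both call GET /health on port 8080 and
	// only require a 200 response.
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		w.Write([]byte(`{"status":"ok"}`))
	})
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```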
## Monitoring Party Pool

Check available parties:

```bash
# View all party pods
kubectl get pods -n mpc-system -l app=mpc-server-party

# Check party pod logs
kubectl logs -n mpc-system -l app=mpc-server-party --tail=50

# Check session coordinator logs for party discovery
kubectl logs -n mpc-system -l app=mpc-session-coordinator | grep "party"
```
## Troubleshooting

### Session coordinator can't discover parties

- Check RBAC permissions:

  ```bash
  kubectl get role,rolebinding -n mpc-system
  ```

- Check if the service account is correctly assigned:

  ```bash
  kubectl get pod -n mpc-system -l app=mpc-session-coordinator -o yaml | grep serviceAccount
  ```

- Check coordinator logs:

  ```bash
  kubectl logs -n mpc-system -l app=mpc-session-coordinator
  ```

### Parties not showing as ready

- Check party pod status:

  ```bash
  kubectl get pods -n mpc-system -l app=mpc-server-party
  ```

- Check the readiness probe:

  ```bash
  kubectl describe pod -n mpc-system <party-pod-name>
  ```

- Check party logs:

  ```bash
  kubectl logs -n mpc-system <party-pod-name>
  ```
## Migration from Docker Compose

Key differences from the docker-compose deployment:

- No hardcoded party IDs: in docker-compose, parties had static IDs (`server-party-1`, `server-party-2`, `server-party-3`). In K8s, pod names are used as party IDs.
- Dynamic scaling: parties can be scaled up/down without restarting other services.
- Service discovery: automatic discovery via the Kubernetes API instead of DNS.
- DeviceInfo optional: `DeviceInfo` is now optional in the protocol layer, aligning with international MPC standards (see the sketch below).
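The DeviceInfo change in the last point can be pictured with the following sketch. The struct fields and the partial-info rule are illustrative assumptions; the actual entity lives in services/session-coordinator/domain/entities/device_info.go.

```go
// Illustrative sketch of the relaxed validation; field names and the
// partial-info rule are assumptions, not the actual domain entity.
package entities

import "errors"

type DeviceInfo struct {
	DeviceID   string
	DeviceName string
	Platform   string
}

// Validate now treats completely empty device info as valid, since the MPC
// protocol layer no longer mandates device-specific metadata.
func (d DeviceInfo) Validate() error {
	if d.DeviceID == "" && d.DeviceName == "" && d.Platform == "" {
		return nil // empty DeviceInfo is allowed
	}
	// Hypothetical rule: if any metadata is supplied, require a device ID.
	if d.DeviceID == "" {
		return errors.New("device_id is required when device info is provided")
	}
	return nil
}
```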
## Advanced Configuration

### Custom party selection strategy

The default selection strategy is "first N available parties". To implement custom strategies (e.g., load-based, geo-aware), modify the `SelectParties()` method in `services/session-coordinator/infrastructure/k8s/party_discovery.go`.
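As an example of the kind of change this entails, here is a sketch of a random-selection strategy. It assumes the strategy receives the currently discovered ready party IDs; the real `SelectParties()` signature may differ.

```go
// Illustrative custom strategy: pick parties uniformly at random instead of
// "first N available". The actual SelectParties() signature in
// party_discovery.go may differ from this sketch.
package k8s

import (
	"fmt"
	"math/rand"
)

// selectRandomParties returns count party IDs chosen at random from the
// currently discovered ready set.
func selectRandomParties(ready []string, count int) ([]string, error) {
	if len(ready) < count {
		return nil, fmt.Errorf("need %d parties, only %d ready", count, len(ready))
	}
	shuffled := make([]string, len(ready))
	copy(shuffled, ready)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled[:count], nil
}
```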
### Party affinity

To ensure parties run on different nodes for fault tolerance:

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: mpc-server-party
                topologyKey: kubernetes.io/hostname
```

Add this to `server-party-deployment.yaml` under `spec.template`.