fix(wallet): resolve account creation and wallet status query issues
This commit fixes three critical bugs that prevented the wallet creation
flow from completing successfully:
1. mpc-service: extraPayload not included in Kafka messages
- KeygenCompletedEvent's extraPayload (containing userId, accountSequence,
username, derivedAddresses) was being set dynamically but not serialized
- identity-service received events without userId and skipped processing
- Fix: Merge extraPayload into the published payload in event-publisher
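The merge described in item 1 can be sketched as a self-contained example (types and field names here are illustrative, not the project's exact classes):

```typescript
// Minimal sketch of the event-publisher fix: a dynamically attached
// extraPayload is merged into the payload that actually gets serialized,
// so downstream consumers receive userId, accountSequence, etc.
interface DomainEvent {
  eventId: string;
  payload: Record<string, unknown>;
}

function buildMessageValue(event: DomainEvent) {
  // extraPayload is set dynamically at runtime, so it is read via a cast
  const extraPayload = (event as any).extraPayload || {};
  return {
    eventId: event.eventId,
    payload: { ...event.payload, extraPayload }, // merged into the published payload
  };
}

const event: DomainEvent = { eventId: "evt-1", payload: { sessionId: "sess-1" } };
(event as any).extraPayload = { userId: "user-1", accountSequence: 0 };
console.log(JSON.stringify(buildMessageValue(event).payload));
// → {"sessionId":"sess-1","extraPayload":{"userId":"user-1","accountSequence":0}}
```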
2. mpc-service: KAFKA_BROKERS hostname mismatch
- mpc-service used KAFKA_BROKERS=rwa-kafka:29092
- Kafka advertises itself as kafka:29092 in cluster metadata
- During consumer group rebalance, mpc-service couldn't connect to
the coordinator address returned by Kafka
- Fix: Use kafka:29092 to match Kafka's advertised listener
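Background on item 2: Kafka clients bootstrap against the configured broker address, but the broker then hands back its advertised listener address for coordinator and partition-leader discovery, so the two hostnames must agree. An illustrative docker-compose fragment (assumed shape, not this repo's actual config):

```yaml
# Illustrative only: the broker advertises kafka:29092 inside the Docker
# network, so clients on that network must also connect as kafka:29092,
# even if the container itself is named rwa-kafka.
kafka:
  container_name: rwa-kafka
  environment:
    - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:29092
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:29092
```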
3. blockchain-service: recovery_mnemonics table missing
- The RecoveryMnemonic model exists in schema.prisma, but no migration file was ever generated for it
- As a result, prisma migrate deploy found no pending migrations and never created the table
- Address derivation failed with "table does not exist" error
- Fix: Use prisma db push instead of migrate deploy to sync schema
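Why the swap in item 3 works (illustrative commands, not taken from the repo's scripts): migrate deploy only applies committed migration files, while db push diffs schema.prisma directly against the live database.

```shell
# A model added to schema.prisma without a generated migration file is
# invisible to migrate deploy — it reports no pending migrations:
npx prisma migrate deploy

# db push compares schema.prisma against the database and creates the
# missing recovery_mnemonics table without needing a migration file:
npx prisma db push --skip-generate
```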
Tested: E2E flow now completes successfully
- POST /user/auto-create creates account
- MPC keygen completes and publishes event with extraPayload
- blockchain-service derives addresses and saves recovery mnemonic
- GET /user/wallet returns status=ready with 3 addresses and mnemonic
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Parent: 43016e9ee8
Commit: d983525aa5
@@ -52,11 +52,11 @@ RUN DATABASE_URL="postgresql://user:pass@localhost:5432/db" npx prisma generate
 # Copy built files
 COPY --from=builder /app/dist ./dist
 
-# Create startup script that runs migrations before starting the app
+# Create startup script that syncs schema before starting the app
 RUN echo '#!/bin/sh\n\
 set -e\n\
-echo "Running database migrations..."\n\
-npx prisma migrate deploy || npx prisma db push --accept-data-loss\n\
+echo "Syncing database schema..."\n\
+npx prisma db push --skip-generate\n\
 echo "Starting application..."\n\
 exec node dist/main.js\n' > /app/start.sh && chmod +x /app/start.sh
 
@@ -342,7 +342,7 @@ services:
       - REDIS_PORT=6379
       - REDIS_PASSWORD=${REDIS_PASSWORD:-}
       - REDIS_DB=5
-      - KAFKA_BROKERS=rwa-kafka:29092
+      - KAFKA_BROKERS=kafka:29092
       - KAFKA_CLIENT_ID=mpc-service
       - KAFKA_GROUP_ID=mpc-service-group
       # MPC System - use the Docker network internal address
@@ -73,13 +73,17 @@ export class EventPublisherService implements OnModuleInit, OnModuleDestroy {
     this.logger.log(`[PUBLISH] EventId: ${event.eventId}`);
     this.logger.log(`[PUBLISH] AggregateId: ${event.aggregateId}`);
 
+    // Merge extraPayload into payload if present (used by KeygenCompletedEvent etc.)
+    const extraPayload = (event as any).extraPayload || {};
+    const fullPayload = { ...event.payload, extraPayload };
+
     const messageValue = {
       eventId: event.eventId,
       eventType: event.eventType,
       occurredAt: event.occurredAt.toISOString(),
       aggregateId: event.aggregateId,
       aggregateType: event.aggregateType,
-      payload: event.payload,
+      payload: fullPayload,
     };
 
     this.logger.log(`[PUBLISH] Payload keys: ${Object.keys(event.payload).join(', ')}`);