The API returns {success: true, data: {...}}, but the code was accessing
progressResult.currentContributionPerTree directly instead of
progressResult.data.currentContributionPerTree.
This caused:
- totalTreeCount to be 0 (undefined → 0)
- networkTotalContribution to be 0
- No mining distributions happening
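A minimal sketch of the corrected access, assuming the {success, data} envelope described above (the local variable names are illustrative):

  // progressResult is the parsed API response: { success: true, data: { ... } }
  const progress = progressResult?.data;
  // Fall back to 0 so downstream Decimal math never receives undefined
  const currentContributionPerTree = progress?.currentContributionPerTree ?? 0;
  const totalTreeCount = progress?.totalTreeCount ?? 0;
  const networkTotalContribution = progress?.networkTotalContribution ?? 0;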
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add buyEnabled field to TradingConfig in trading-service with migration
- Add API endpoints for get/set buy enabled status in admin controller
- Add buy function switch card in mining-admin-web trading page
- Implement buyEnabledProvider in mining-app with 2-minute cache
- Show "待开启" when buy function is disabled in trading page
- Add real-time asset value refresh in asset page (1-second updates)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Changed from reading config.remainingDistribution to calculating:
remainingDistribution = distributionPool - totalDistributed
- Ensures data consistency: remaining + distributed = total pool
- Added Math.max(0, ...) to prevent negative values
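A one-line sketch of the derived value (both figures are treated as plain numbers here, consistent with the Math.max guard above):

  // Derived instead of stored, so remaining + distributed always equals the pool
  const remainingDistribution = Math.max(0, distributionPool - totalDistributed);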
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Handle missing currentContributionPerTree with default value
- Add null checks for all network progress fields
- Prevent DecimalError when contribution service returns incomplete data
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously used (pool - remaining) / pool, which was incorrect.
Now uses the actual distributed amount / total pool for an accurate percentage.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add /api/v2/mining/progress endpoint in MiningController
- Update frontend API to call /progress instead of /admin/mining/status
- Kong routes /api/v2/mining/* with strip_path=false, so the endpoint must
  be under the /mining controller path
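A minimal NestJS sketch of why the route has to live on the mining controller; the handler body is illustrative:

  import { Controller, Get } from '@nestjs/common';

  @Controller('mining')
  export class MiningController {
    // With the 'api/v2' global prefix this resolves to GET /api/v2/mining/progress,
    // which is exactly the path Kong forwards unchanged (strip_path=false)
    @Get('progress')
    getProgress() {
      return { success: true, data: { /* progress fields */ } };
    }
  }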
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update label in Flutter mining records page
- Update table header in admin web mining records list
- Update memo strings in mining-wallet-service
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Wrap all database operations in executeSecondDistribution with
UnitOfWork.executeInTransaction
- Pass transaction client to repository save methods
- Use longer transaction timeout (60s) for batch operations
- Move Redis operations outside transaction (non-ACID)
- Add distributeToSystemAndPendingInTx method that accepts tx client
This resolves the "Unable to start a transaction in the given time"
error caused by multiple concurrent transactions competing for
database connections.
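Illustrative shape of the change, assuming UnitOfWork wraps a Prisma interactive transaction; the timeout option and repository/Redis helper names are assumptions:

  async executeSecondDistribution(): Promise<void> {
    const result = await this.unitOfWork.executeInTransaction(
      async (tx) => {
        // All DB writes share the same tx client
        const distribution = await this.distributeToSystemAndPendingInTx(tx);
        await this.miningRecordRepository.saveMany(distribution.records, tx);
        return distribution;
      },
      { timeout: 60_000 }, // batch updates need more than the default timeout
    );
    // Redis is not part of the ACID boundary, so accumulate after commit
    await this.redisAccumulator.accumulate(result.totals);
  }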
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Prevent TypeError when config properties are undefined by using
optional chaining and default values in publishMiningConfigUpdated.
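A sketch of the guarded payload construction; the field list and default values are illustrative:

  type MiningConfigSnapshot = {
    distributionPool?: string;
    totalDistributed?: string;
    remainingDistribution?: string;
    isActive?: boolean;
  };

  // Guard every field so a partially loaded config cannot throw a TypeError
  function buildMiningConfigUpdatedPayload(config?: MiningConfigSnapshot) {
    return {
      distributionPool: config?.distributionPool ?? '0',
      totalDistributed: config?.totalDistributed ?? '0',
      remainingDistribution: config?.remainingDistribution ?? '0',
      isActive: config?.isActive ?? false,
    };
  }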
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add real-time mining progress statistics similar to burn progress:
- Backend: new /admin/mining/status endpoint in mining-service
- Frontend: MiningProgress component with progress bar and stats
- Shows: total distributed, remaining pool, minutes left, per-minute rate
- Auto-refresh every 60 seconds via React Query
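A small hook sketch for the auto-refresh, assuming @tanstack/react-query; the hook name is illustrative and the URL may go through the admin proxy:

  import { useQuery } from '@tanstack/react-query';

  export function useMiningProgress() {
    return useQuery({
      queryKey: ['mining-progress'],
      queryFn: () => fetch('/admin/mining/status').then((res) => res.json()),
      refetchInterval: 60_000, // auto-refresh every 60 seconds
    });
  }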
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Problem:
- Multiple concurrent transactions updating system_mining_accounts caused row lock contention
- 16+ transactions waiting for tuple/transactionid locks led to timeout errors
- This prevented writeMinuteRecords() from executing, leaving mining_records empty
Solution:
- Modified SystemMiningAccountRepository.mine() to accept optional external transaction client
- Created new distributeToSystemAndPending() method that processes all system accounts
and pending contributions in a single transaction
- Pre-calculate all rewards before transaction, then execute updates sequentially
- Aggregate all pending contribution rewards into single HEADQUARTERS update
- Move Redis accumulation outside transaction to avoid blocking
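Illustrative shape of the pre-calculation and aggregation; mine() accepting a tx client is from this commit, while the reward arithmetic, HEADQUARTERS_ID and Redis helper names are assumptions:

  // Compute all rewards first, then write sequentially in one transaction;
  // pending-contribution rewards collapse into a single HEADQUARTERS update.
  const systemRewards = systemAccounts.map((acc) => ({
    id: acc.id,
    reward: acc.contribution * rewardPerContribution,
  }));
  const pendingReward = pendingContributions.reduce(
    (sum, p) => sum + p.contribution * rewardPerContribution,
    0,
  );
  await this.unitOfWork.executeInTransaction(async (tx) => {
    for (const r of systemRewards) {
      await this.systemMiningAccountRepository.mine(r.id, r.reward, tx);
    }
    // One row update instead of one per pending contribution
    await this.systemMiningAccountRepository.mine(HEADQUARTERS_ID, pendingReward, tx);
  });
  // Redis accumulation happens after commit, outside the lock scope
  await this.redisAccumulator.add(pendingReward);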
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The GetTeamTreeQuery was importing SYNCED_DATA_REPOSITORY as a Symbol from
the domain interface, but InfrastructureModule defined its own string token.
This caused NestJS dependency resolution to fail.
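A compact sketch of the fix: export one token and use that same object both in the module's provider registration and at the injection site (the repository type here is reduced to a placeholder):

  import { Inject, Injectable } from '@nestjs/common';

  // Exported once from the domain interface and reused by both the module's
  // provider registration and the @Inject() below.
  export const SYNCED_DATA_REPOSITORY = Symbol('SYNCED_DATA_REPOSITORY');

  @Injectable()
  export class GetTeamTreeQuery {
    constructor(
      // A Symbol here and { provide: 'SYNCED_DATA_REPOSITORY', ... } (a string)
      // in InfrastructureModule are different tokens, so resolution fails.
      @Inject(SYNCED_DATA_REPOSITORY) private readonly syncedDataRepository: object,
    ) {}
  }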
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The dashboard was incorrectly calculating distributed shares using
distributionPool - remainingDistribution. The correct value is the sum
of all users' totalMined balances. Updated mining-service to return
totalDistributed directly, and mining-admin-service to use it.
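A sketch of the corrected aggregation, assuming a Prisma model with a totalMined column (model and field names follow the description above, exact schema assumed):

  // Sum every user's totalMined instead of deriving it from the pool delta
  const { _sum } = await prisma.miningAccount.aggregate({
    _sum: { totalMined: true },
  });
  const totalDistributed = _sum.totalMined ?? 0;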
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The mining-admin-service was only listening for SystemContributionUpdated
events, but contribution-service publishes SystemAccountSynced events.
Added the missing handler to properly sync system account contribution data.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add team info and direct referrals endpoints to contribution-service,
using SyncedReferral data synced via CDC. Update mining-app to use the
new v2 contribution API instead of legacy referral-service.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The dashboard was incorrectly using 5 billion as the default distribution pool
when calculating already distributed shares. The actual mining distribution
pool is 2 million shares, not billions.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add event publishing to enable mining-admin-service to sync data via
Debezium CDC instead of direct API calls:
- MiningConfigUpdated: Published every minute with distribution status
- DailyMiningStatCreated: Published when daily stats are generated
- MiningAccountUpdated: Method added for future per-account sync
These events will be captured by Debezium monitoring the outbox_events
table and forwarded to mining-admin-service via Kafka.
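A sketch of the outbox write; the outbox_events table is from this commit, while the Prisma model/column names follow a typical outbox schema and are assumptions:

  import { PrismaClient } from '@prisma/client';

  // Write the event row in a transaction so Debezium (watching outbox_events)
  // forwards it to Kafka for mining-admin-service.
  async function publishMiningConfigUpdated(
    prisma: PrismaClient,
    configId: string,
    totalDistributed: string,
  ): Promise<void> {
    await prisma.$transaction(async (tx) => {
      await tx.outboxEvent.create({
        data: {
          eventType: 'MiningConfigUpdated',
          aggregateId: configId,
          payload: JSON.stringify({ totalDistributed }),
        },
      });
    });
  }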
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Dashboard now fetches totalDistributed and totalBurned directly from
mining-service and trading-service APIs instead of relying solely on
CDC sync which may not have data.
- Add fetchRemoteServiceData() to get real-time data
- Use mining-service /admin/status for totalDistributed
- Use trading-service /asset/market for totalBurned and circulationPool
- Add 30-second cache to reduce API calls
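Illustrative in-memory cache around the two remote calls; the httpGet helper and RemoteData shape are stand-ins:

  interface RemoteData {
    totalDistributed: string;
    totalBurned: string;
    circulationPool: string;
  }

  // Stand-in for the service's HTTP client
  declare function httpGet(path: string): Promise<any>;

  let cached: { data: RemoteData; fetchedAt: number } | null = null;

  // Reuse remote results for 30 seconds to reduce API calls
  async function fetchRemoteServiceData(): Promise<RemoteData> {
    const now = Date.now();
    if (cached && now - cached.fetchedAt < 30_000) {
      return cached.data;
    }
    const [mining, market] = await Promise.all([
      httpGet('/admin/status'),   // mining-service: totalDistributed
      httpGet('/asset/market'),   // trading-service: totalBurned, circulationPool
    ]);
    const data: RemoteData = {
      totalDistributed: mining.totalDistributed,
      totalBurned: market.totalBurned,
      circulationPool: market.circulationPool,
    };
    cached = { data, fetchedAt: now };
    return data;
  }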
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mining-service incorrectly implemented its own burn mechanism (10-year
cycle) which was not in the requirements. Per requirements, only
trading-service should handle per-minute burn (4756.47/minute).
Removed:
- BlackHoleRepository and all burn-related methods
- executeBurn() from mining distribution service
- Burn stats from admin API and queries
- Burn progress UI from mining admin web
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add UnallocatedContributionSyncedEvent in contribution-service
- Add event handler in mining-service's contribution-event.handler.ts
- Add handleUnallocatedContributionSynced in network-sync.service.ts
- Add admin endpoint to publish all unallocated contributions
- Sync pending/unallocated contributions to PendingContributionMining table
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add a Kafka consumer to listen for burn events (minute burn and sell burn)
from trading-service and deduct from SHARE_POOL_A (the 10-billion burn pool),
updating the BLACK_HOLE_POOL balance accordingly.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- mining-service: publish MINING_MINUTE_DISTRIBUTED event to Kafka after
each minute's mining distribution is completed
- mining-wallet-service: add MiningDistributionConsumer to consume the
event and deduct from SHARE_POOL_B
- Add deductFromSharePoolB method in PoolAccountService
- This ensures the share pool balance displayed in mining-app reflects
the actual remaining balance after mining distributions
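A sketch of the consumer side; the event name and PoolAccountService/deductFromSharePoolB come from this commit, while the payload shape, method signature and import path are assumptions:

  import { Controller } from '@nestjs/common';
  import { EventPattern, Payload } from '@nestjs/microservices';
  import { PoolAccountService } from './pool-account.service'; // path illustrative

  @Controller()
  export class MiningDistributionConsumer {
    constructor(private readonly poolAccountService: PoolAccountService) {}

    @EventPattern('MINING_MINUTE_DISTRIBUTED')
    async onMinuteDistributed(@Payload() event: { totalAmount: string }) {
      // Deduct the minute's distributed amount from SHARE_POOL_B so the balance
      // shown in mining-app reflects what actually remains
      await this.poolAccountService.deductFromSharePoolB(event.totalAmount);
    }
  }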
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add trading system activate/deactivate endpoints to trading-service
- Add trading management page to mining-admin-web with:
- Trading system status display and control
- Market overview (price, green points, circulation pool)
- Burn progress visualization
- Burn records list with filtering
- Add trading-service proxy configuration to next.config.js
- Add trading menu item to sidebar navigation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Modify AssetService to fetch mining account balance from mining-service
- Sum mining balance + trading balance for total share display
- Add miningShareBalance and tradingShareBalance fields to AssetDisplay
- Update frontend entity and model to support new fields
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add SharePool and CirculationPool initialization in BurnService.initialize()
- Initialize SharePool with 5760 green points (fixes price showing as 0)
- Remove the misleading "= 5,760 积分值" (points value) display from the trading page
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
auth-service puts accountSequence in payload.sub, not payload.accountSequence.
This mismatch caused 401 errors when accessing trading endpoints.
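A sketch of the corrected JWT strategy mapping; the claim mapping is from this commit, the rest of the strategy wiring is standard passport-jwt boilerplate:

  import { Injectable } from '@nestjs/common';
  import { PassportStrategy } from '@nestjs/passport';
  import { ExtractJwt, Strategy } from 'passport-jwt';

  @Injectable()
  export class JwtStrategy extends PassportStrategy(Strategy) {
    constructor() {
      super({
        jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
        secretOrKey: process.env.JWT_SECRET ?? '',
      });
    }

    // auth-service puts the account sequence in the standard `sub` claim
    validate(payload: { sub: string }) {
      return { accountSequence: payload.sub };
    }
  }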
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Both services were missing JWT_SECRET environment variable, causing
401 Unauthorized errors when validating JWT tokens from auth-service.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All mining services need to read shared environment variables
(JWT_SECRET, DATABASE_URL, etc.) from backend/services/.env
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Both services need to read JWT_SECRET from the shared .env file
in the parent directory (backend/services/.env).
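A minimal sketch of loading the shared file with @nestjs/config; the relative path depends on the service's working directory and is an assumption:

  import { ConfigModule } from '@nestjs/config';

  // Goes in the root module's imports: shared backend/services/.env first,
  // then the service-local .env as a fallback.
  ConfigModule.forRoot({
    isGlobal: true,
    envFilePath: ['../.env', '.env'],
  });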
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove existence check, directly upsert pool accounts to ensure
consistent state on every seed run.
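One pool account as a sketch of the upsert-only seed; the model name, unique field and balance value are illustrative (balance follows the SHARE_POOL_B figure below):

  // Upsert so every seed run converges on the same state,
  // assuming accountType is a unique column on the pool accounts table.
  await prisma.poolAccount.upsert({
    where: { accountType: 'SHARE_POOL_B' },
    update: { balance: '2000000' },
    create: { accountType: 'SHARE_POOL_B', balance: '2000000' },
  });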
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Split the share pool into two accounts:
- SHARE_POOL_A: 10 billion (100亿) for burning
- SHARE_POOL_B: 2 million (200万) for mining distribution
Total: 10.002 billion (100.02亿)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The service uses 'api/v2' as global prefix, not 'api/v1'.
Request flow: /api/v2/mining-admin/auth/login -> strip path -> /auth/login -> upstream /api/v2/auth/login
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
NestJS routes are matched in order, so the parameterized :type route
was capturing 'share-pool-balance' before it could reach the public
endpoint, causing 401 errors.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Backend changes:
- mining-wallet-service: Split SHARE_POOL into SHARE_POOL_A (10 billion, for burning)
  and SHARE_POOL_B (2 million, for mining distribution)
- Add /pool-accounts/share-pool-balance API endpoint to get total balance
- Update pool initialization logic and seed data
- Fix Kong routing for mining-wallet-service (strip_path: true)
- Fix Kong routing for trading-service (strip_path: true)
Constant updates (total shares = 10,002,000,000):
- mining-service: TOTAL_SHARES
- trading-service: TOTAL_SHARES, trading config defaults
- trading-service seed: initial green points = 5760
Frontend changes:
- Add sharePoolBalanceProvider to fetch pool balance from mining-wallet-service
- Update contribution page to display real-time share pool balance (A + B)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Change prefix from 'api/v2' to 'api/v2/trading' to match Kong gateway
configuration with strip_path: false.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add CDC consumer to listen for UserWalletCreated events from mining-wallet-service
- Create trading accounts when user contribution wallets are created (lazy creation)
- Add WalletSystemAccountCreated handler for province/city system accounts
- Add seed script for core system accounts (HQ, operation, cost, pool)
- Keep auth.user.registered listener for V2 new user registration
This ensures trading accounts are created in sync with wallet accounts,
supporting both V2 new users and V1 migrated users.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Effective contribution should equal theoretical total (totalTrees * 22617)
since it includes all parts: personal 70% + operation 12% + province 1% +
city 2% + level 7.5% + bonus 7.5% = 100%.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Backend: Add status filter to getPlantingLedger and getPlantingSummary
- Frontend: Change Row to Wrap for info items to prevent width overflow
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add totalTrees field from syncedAdoption aggregate
- Rename fields: networkLevelPending, networkBonusPending
- Stats card: show level pending and bonus pending separately
- Add new stats card for total trees count
- Price overview: 2-row layout showing all contribution metrics
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add networkPendingContribution and networkBonusPendingContribution to API
- Display combined pending contribution (teamLevel + teamBonus) in stats card
- Replace 'total contribution' with 'pending contribution' in price overview
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, totalContribution was incorrectly set to effectiveContribution.
Now correctly calculated as: personal + teamLevel + teamBonus
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add networkTotalContribution field to dashboard API response
- Display total hashrate alongside effective hashrate in stats cards
- Update price overview to show both effective and total contribution
- Change grid from 3 to 4 columns in price overview
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, adoptions were synced with hardcoded contributionPerTree=1,
resulting in contribution values like 0.7 instead of the expected 15831.9.
Now the handler fetches the actual contribution rate from ContributionRateService
based on the adoption date, storing values like:
- Personal (70%): 22617 × 70% = 15831.9
- Team Level (0.5%): 22617 × 0.5% = 113.085
- Team Bonus (2.5%): 22617 × 2.5% = 565.425
Note: Historical data may need migration to apply the correct multiplier.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Kong routes /api/v2/auth/* to auth-service without stripping the path,
so controllers need 'auth/' prefix to match frontend requests:
- SmsController: 'sms' -> 'auth/sms'
- PasswordController: 'password' -> 'auth/password'
- UserController: 'user' -> 'auth/user'
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Changed `record.finalContribution` to `record.amount` for getting final contribution value
- Added optional chaining to prevent undefined errors
- Added default values for safety
The ContributionRecordAggregate uses `amount` property, not `finalContribution`.
This was causing "Cannot read properties of undefined (reading 'value')" errors.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add trading burn system with black hole, share pool, and price calculation
- Implement per-minute auto burn and sell burn with multiplier
- Add Kafka event publishing via outbox pattern (order, trade, burn events)
- Add user.registered consumer to auto-create trading accounts
- Implement Redis + DB dual idempotency for event processing
- Add price, burn, and asset API controllers
- Add migrations for burn tables and processed events
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Publish UserWalletCreated when a new wallet is created
- Publish UserWalletUpdated when wallet balance changes
- Events sent to cdc.mining-wallet.outbox topic for mining-admin-service
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add handler for CONTRIBUTION_CREDITED events from mining-wallet-service
to sync user wallet data to synced_user_wallets table.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Return usdtAvailable, usdtFrozen, pendingUsdt, settleableUsdt,
settledTotalUsdt, expiredTotalUsdt instead of old field names
- Query SyncedUserWallet table for GREEN_POINTS wallet data
- Use miningAccount.availableBalance for pendingUsdt
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add @Public() decorator to MiningController to allow mining-admin-service
to fetch mining records without authentication.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Changed event handler to:
- Listen to 'cdc.contribution.outbox' topic (CDC/Debezium format)
- Handle 'ContributionAccountUpdated' events instead of 'ContributionCalculated'
- Use effectiveContribution for mining power calculation
This fixes the issue where mining accounts had zero totalContribution
because they weren't receiving contribution sync events.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update getUserMiningRecords to call mining-service API instead of
returning empty records. This enables the admin dashboard to display
actual user mining records.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
NestJS matches routes in definition order. The :category/:key route was
matching mining/status before the specific mining routes. Moved mining
routes before the parameter routes to fix routing.
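A sketch of the ordering fix; the controller path and handler names are illustrative, the point is declaration order:

  import { Controller, Get, Param } from '@nestjs/common';

  @Controller('configs')
  export class ConfigsController {
    // Declared first, so it wins over the catch-all parameter route below
    @Get('mining/status')
    getMiningStatus() {
      return { status: 'ok' };
    }

    @Get(':category/:key')
    getConfig(@Param('category') category: string, @Param('key') key: string) {
      return { category, key };
    }
  }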
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add debug logging to trace mining service calls
- Return error object instead of null when data is missing
- Include error message in response for debugging
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add GET and POST /configs/transfer-enabled endpoints to control
the transfer switch. Routes are placed before :category/:key to
avoid being matched as path parameters.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add ts-node/typescript to the production image so the seed can run
- Run prisma db seed from the startup script
- Copy tsconfig.json into the production image
Follows the Dockerfile configuration of mining-wallet-service.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add admin endpoints:
- GET /admin/status - Get mining system status
- POST /admin/initialize - Initialize mining config (one-time)
- POST /admin/activate - Activate mining distribution
Add prisma seed script for database initialization:
- MiningConfig: 10,002,000,000 (100.02亿) total shares, 2 million distribution pool
- BlackHole: 10 billion burn target
- MiningEra: First era with 1 million distribution
- PoolAccounts: SHARE_POOL, BLACK_HOLE_POOL, CIRCULATION_POOL
Based on requirements:
- First two-year period distributes 1,000,000 share points
- Second two-year period distributes 500,000 share points (halved)
- 10 billion is burned into the black hole over 10 years
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Frontend expects flat DashboardStats and RealtimeData interfaces.
Transform backend nested response to:
- totalUsers, adoptedUsers, networkEffectiveContribution, etc.
- currentMinuteDistribution, activeOrders, pendingTrades, etc.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
System initialization is now handled by seed scripts and CDC sync,
so the manual initialization UI is no longer needed.
Removed:
- Frontend: initialization page and sidebar menu item
- Backend: InitializationController and InitializationService
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Change controller path from /audit-logs to /audit to match frontend
- Transform response to frontend expected format (items, totalPages, etc.)
- Map admin.username to adminUsername field
- Add keyword query parameter support
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add ReportsController with /reports/daily endpoint that maps the
dashboard service data to the format expected by the frontend.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Change SystemAccountsService to read from the syncedWalletSystemAccount and
syncedWalletPoolAccount tables instead of local tables. This fixes the issue
where the frontend shows "暂无数据" (no data) even though the data has been synced.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add WalletSystemAccountCreated and WalletPoolAccountCreated events:
- seed.ts: publish events when creating HQ/OP/FEE and pool accounts
- contribution-wallet.service.ts: publish events when auto-creating
province/city system accounts
This enables mining-admin-service to sync system accounts via CDC.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use unitOfWork.isInTransaction() to detect if already in a transaction
context (called from ContributionCalculationService). If so, reuse the
existing transaction instead of opening a new one, preventing Prisma
interactive transaction timeout errors.
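Illustrative shape of the save path; isInTransaction and executeInTransaction are from this commit, the persist/currentClient helper names are assumptions:

  async save(records: ContributionRecord[]): Promise<void> {
    if (this.unitOfWork.isInTransaction()) {
      // Called from ContributionCalculationService: reuse the open transaction
      await this.persist(records, this.unitOfWork.currentClient());
      return;
    }
    // Otherwise open a transaction of our own
    await this.unitOfWork.executeInTransaction((tx) => this.persist(records, tx));
  }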
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use provinceCode directly instead of inferring from cityCode
- Use code as name for province/city records
- Add ts-node to production for seed execution
- Run prisma db seed on container startup
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>