Only count adoptions with status='MINING_ENABLED' when calculating:
- Personal adoption count (user list)
- Team adoption count (user list)
- Personal adoption stats (user detail)
- Direct referral adoptions (user detail)
- Team adoptions (user detail)
- Referral tree adoption stats
This fixes incorrect adoption counts that included pending/unconfirmed orders.
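A minimal sketch of the filter now applied to each of these counts, assuming a Prisma client and a SyncedAdoption model with accountSequence/status fields (names are illustrative, not taken from the repo):

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Hypothetical: account sequences belonging to the user's team.
const teamAccountSequences = [1001, 1002, 1003];

const teamAdoptionCount = await prisma.syncedAdoption.count({
  where: {
    accountSequence: { in: teamAccountSequences },
    status: 'MINING_ENABLED', // excludes pending/unconfirmed orders
  },
});
```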
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fix a hydration race condition where the token check ran before localStorage
had been read. The guard now waits for client-side initialization before
deciding whether to redirect to login.
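A minimal sketch of the intended ordering, assuming a React-style client with the token kept in localStorage (hook name, storage key, and redirect callback are illustrative):

```typescript
import { useEffect, useState } from 'react';

function useAuthGuard(redirectToLogin: () => void): boolean {
  const [hydrated, setHydrated] = useState(false);

  // Only after the first client-side render is localStorage guaranteed to be readable.
  useEffect(() => {
    setHydrated(true);
  }, []);

  useEffect(() => {
    if (!hydrated) return; // don't decide anything while still hydrating
    if (!window.localStorage.getItem('token')) redirectToLogin(); // key name assumed
  }, [hydrated, redirectToLogin]);

  return hydrated;
}
```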
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
kafkacat/kcat is not available in the containers. Switch to kafka-console-producer
with the null.marker property to send tombstone messages for Debezium offset deletion.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Debezium Connect container uses OFFSET_STORAGE_TOPIC=debezium_offsets,
not the default connect-offsets. This fix updates the tombstone method
to use the correct topic name.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This fixes an issue where Debezium would skip initial snapshot after
full-reset because the connector offset persisted in Kafka Connect's
internal connect-offsets topic.
The fix adds two strategies to delete connector offsets:
1. REST API method (Kafka Connect 3.6+): DELETE /connectors/<name>/offsets
2. Tombstone method (Kafka Connect 3.5 and earlier): Send NULL messages via kafkacat
Reference: https://debezium.io/documentation/faq/#how_to_remove_committed_offsets_for_a_connector
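A sketch of strategy 1 from a Node script, assuming a local Connect REST endpoint and an illustrative connector name; Kafka Connect requires the connector to be stopped before its offsets can be deleted:

```typescript
const connectUrl = 'http://localhost:8083';         // assumed Connect REST address
const connector = 'contribution-outbox-connector';  // hypothetical connector name

// Stop the connector first (PUT /connectors/<name>/stop, Kafka Connect 3.5+),
// then remove its committed offsets (DELETE /connectors/<name>/offsets, 3.6+).
await fetch(`${connectUrl}/connectors/${connector}/stop`, { method: 'PUT' });
await fetch(`${connectUrl}/connectors/${connector}/offsets`, { method: 'DELETE' });
```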
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Old messages in Kafka topics were corrupting the sync because they contained
outdated data from previous resets. Now we delete the outbox topics during
full-reset to ensure clean re-sync.
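The reset script most likely drives this through the Kafka CLI; the same idea sketched with kafkajs (broker address and topic names are illustrative):

```typescript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['kafka:9092'] });
const admin = kafka.admin();

await admin.connect();
// Dropping the outbox topics discards any stale messages left over from earlier resets.
await admin.deleteTopics({ topics: ['contribution.outbox_events', 'identity.outbox_events'] });
await admin.disconnect();
```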
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The full-reset script was missing the cleanup of the rwa_mining_admin.processed_events
table, so stale idempotency records prevented re-consumption of contribution
outbox events after a reset.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace skeleton blocks with shimmer text placeholders across all pages:
- Asset page: show full UI with flickering numbers during load
- Trading page: show full UI with flickering market data during load
- Contribution page: show full UI with flickering stats during load
- shimmer_loading.dart: add ShimmerText, DataText, AmountText components
This approach shows the complete UI immediately, with only the dynamic
number values flickering while data loads, which is a much better UX than
showing grey skeleton blocks.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Optimize splash page: reduce wait to 500ms, refresh token in background
- Cache SharedPreferences instance to avoid blocking API requests
- Add global 401 handler to auto-redirect to login page
- Create shimmer loading components (ShimmerLoading, ShimmerBox, skeletons)
- Replace CircularProgressIndicator with skeleton screens across all pages
- Add keepAlive + auto-invalidation (5min) to providers to reduce API calls
- Fix trading page: invalidate globalStateProvider after trade for data sync
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The outbox_events table uses outbox_id as its primary key column name
(mapped from id in Prisma). When Debezium captures changes, the message
therefore contains an outbox_id field, not id. This left every event with an
undefined eventId, so duplicate detection treated every event after the
first one as a duplicate.
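A hedged sketch of the extraction fix; the Debezium envelope shape (payload.after) is standard, while the fallback to id is illustrative:

```typescript
function extractEventId(message: { payload?: { after?: Record<string, unknown> } }): string {
  const after = message.payload?.after ?? {};
  // outbox_events maps Prisma `id` to the outbox_id column, so read that name first.
  const eventId = after['outbox_id'] ?? after['id'];
  if (eventId === undefined) {
    throw new Error('CDC outbox message is missing its primary key');
  }
  return String(eventId);
}
```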
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Root cause: calculateForAdoption uses separate DB connections, which
cannot see data the Serializable transaction has not yet committed, causing
"Adoption not found" errors.
Solution (following Kafka Idempotent Consumer best practice):
- Add TransactionalCDCHandlerWithResult<T> type for handlers with return
- Add withIdempotencyAndCallback() wrapper for post-commit callbacks
- Add registerTransactionalHandlerWithCallback() registration method
- AdoptionSyncedHandler.handle() now returns AdoptionSyncResult
- Contribution calculation runs AFTER transaction commits via callback
Reference: Lydtech Consulting - Kafka Idempotent Consumer Pattern
https://www.lydtechconsulting.com/blog/kafka-idempotent-consumer-transactional-outbox
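A sketch of the callback variant, reusing the Prisma setup assumed elsewhere in these commits; the type and function names follow the list above, the surrounding wiring is assumed:

```typescript
import { Prisma, PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

type TransactionalCDCHandlerWithResult<T> = (tx: Prisma.TransactionClient) => Promise<T>;

async function withIdempotencyAndCallback<T>(
  event: { sourceTopic: string; offset: string },
  handler: TransactionalCDCHandlerWithResult<T>,
  afterCommit: (result: T) => Promise<void>,
): Promise<void> {
  const result = await prisma.$transaction(
    async (tx) => {
      await tx.processedCdcEvent.create({ data: event }); // idempotency record, same transaction
      return handler(tx);                                 // e.g. AdoptionSyncedHandler -> AdoptionSyncResult
    },
    { isolationLevel: Prisma.TransactionIsolationLevel.Serializable },
  );

  // The adoption row is committed and visible to other connections by now,
  // so the contribution calculation no longer fails with "Adoption not found".
  await afterCommit(result);
}
```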
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Problem: full-reset script resets Kafka consumer offsets but doesn't
clear the processed_cdc_events table. When migrations run, containers
may start and consume CDC events, inserting records into this table.
After offset reset, the service restarts and detects all events as
"duplicates" because the idempotency records still exist.
Solution: Add TRUNCATE processed_cdc_events step after CDC offset reset
in Step 8, before starting services. This ensures the idempotency table
is in sync with the Kafka offset position.
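The added step reduces to a single statement; sketched here with Prisma, though the reset script presumably shells out to psql:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Empty the idempotency table so it matches the rewound Kafka consumer offsets.
await prisma.$executeRawUnsafe('TRUNCATE TABLE processed_cdc_events;');
```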
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The bcrypt package requires native compilation. Added build dependencies
to the builder stage so npm ci can compile bcrypt successfully.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add comments explaining unique key composition:
- CDC events: (source_topic, offset) = Kafka topic + message offset
- Outbox events: (source_service, event_id) = service name + outbox ID
- Fix contribution-service migration:
- Extend source_service column from VARCHAR(50) to VARCHAR(100)
- Set source_service as NOT NULL to match schema
- Use snake_case for index name consistency
- Clarify that offset/event_id are NOT database auto-increment IDs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update splash_page.dart to orange theme (#FF6B00) matching other pages
- Change app name from "榴莲挖矿" (Durian Mining) to "榴莲生态" (Durian Ecosystem)
- Fix refreshTokenIfNeeded to properly throw on failure instead of
silently calling logout (which caused Riverpod ref errors)
- Clear local storage directly on refresh failure without remote API call
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements 100% exactly-once semantics for CDC events from 1.0 identity-service
(user_accounts table) to auth-service.
Key changes:
- Add ProcessedCdcEvent model with (sourceTopic, offset) unique constraint
- Implement processWithIdempotency() using Serializable transaction isolation
- All database operations now use the transaction client
- Outbox event creation is also within the same transaction
This ensures that:
1. Each CDC event is processed exactly once
2. Idempotency record and business logic are in the same transaction
3. Outbox event publishing is atomic with data sync
4. Any failure causes complete rollback
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements 100% exactly-once semantics for CDC events from 1.0 databases
(identity-service, planting-service, referral-service) to contribution-service.
Key changes:
- Add ProcessedCdcEvent model with (sourceTopic, offset) unique constraint
- Add withIdempotency() wrapper using Serializable transaction isolation
- Add registerTransactionalHandler() for handlers requiring idempotency
- Modify CDC handlers to accept external transaction client
- All database operations now use the passed transaction client
This ensures that:
1. Each CDC event is processed exactly once
2. Idempotency record and business logic are in the same transaction
3. Any failure causes complete rollback
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use Prisma $transaction with Serializable isolation level
- Insert idempotency record FIRST, then execute business logic
- Unique constraint violation (P2002) indicates duplicate event
- All operations atomic - either fully commit or fully rollback
- Modified all handlers to accept transaction client parameter
- Removed old non-atomic isEventProcessed/recordProcessedEvent methods
This ensures 100% data consistency for CDC synchronization, which is
critical for financial data where any error is catastrophic.
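A hedged sketch of the wrapper this describes, with model and field names taken from the related commits (ProcessedCdcEvent with sourceTopic/offset); the exact signature in the codebase may differ:

```typescript
import { Prisma, PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function withIdempotency(
  event: { sourceTopic: string; offset: string },
  handler: (tx: Prisma.TransactionClient) => Promise<void>,
): Promise<void> {
  try {
    await prisma.$transaction(
      async (tx) => {
        // 1. Insert the idempotency record FIRST; a replay violates the unique constraint.
        await tx.processedCdcEvent.create({ data: event });
        // 2. Run the business logic on the same transaction client.
        await handler(tx);
      },
      { isolationLevel: Prisma.TransactionIsolationLevel.Serializable },
    );
  } catch (err) {
    // P2002 = unique constraint violation, i.e. the event was already processed: skip it.
    if (err instanceof Prisma.PrismaClientKnownRequestError && err.code === 'P2002') return;
    throw err; // any other failure rolls back both the record and the business changes
  }
}
```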
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use (sourceTopic, eventId) as composite unique key in processed_events
- Add sourceTopic to ServiceEvent for globally unique idempotency key
- Wrap all handlers with withIdempotency() for duplicate event detection
- Fix ID collision issue between different service outbox tables
This implements the industry-standard CDC exactly-once semantics pattern.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Wrap content in LayoutBuilder + ConstrainedBox to ensure proper
scrolling behavior when content exceeds viewport height.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Changes:
1. Delete Debezium connectors BEFORE dropping databases (releases replication slots)
2. Start services BEFORE registering connectors (ensures tables exist and data is synced)
3. Register connectors AFTER services sync from 1.0 CDC (snapshot.mode=initial captures existing data)
4. Add wait time for connectors to initialize before publishing data
Step order: stop services → delete connectors → drop DBs → create DBs → migrate →
start services → wait for sync → register connectors → publish data
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update login/register pages to use orange color scheme (#FF6B00)
matching the navigation pages design
- Fix SafeArea bottom: false on all navigation pages since MainShell
handles bottom safe area via bottomNavigationBar
- Add AlwaysScrollableScrollPhysics to asset page for consistent scroll
- Increase bottom padding to 100px on all navigation pages to clear
the navigation bar
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The full-reset function was calling the contribution-records/publish-all API,
which caused duplicate records in mining-admin-service because:
- Contribution records are already published to outbox when calculated
- Debezium automatically captures outbox_events and sends to Kafka
- Calling publish-all again creates duplicate events with different IDs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The toDto method was accessing non-existent properties on ContributionAccountAggregate:
- teamLevelContribution -> totalLevelPending
- teamBonusContribution -> totalBonusPending
- totalContribution -> effectiveContribution
- isCalculated -> true (always calculated when account exists)
- lastCalculatedAt -> updatedAt
This caused a "Cannot read properties of undefined (reading 'value')" error
on GET /api/v2/contribution/accounts/{accountSequence}.
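One plausible reading of the corrected mapping, with the value-object shape (.value) inferred from the error message; the types here are stand-ins, not the real aggregate:

```typescript
interface Amount { value: string }

interface ContributionAccountAggregate {
  totalLevelPending: Amount;
  totalBonusPending: Amount;
  effectiveContribution: Amount;
  updatedAt: Date;
}

function toDto(agg: ContributionAccountAggregate) {
  return {
    teamLevelContribution: agg.totalLevelPending.value,
    teamBonusContribution: agg.totalBonusPending.value,
    totalContribution: agg.effectiveContribution.value,
    isCalculated: true,              // an account is always calculated once it exists
    lastCalculatedAt: agg.updatedAt,
  };
}
```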
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Delete old green-themed HomePage and its widgets (asset_card, price_card, quick_actions)
- Remove /home route from router configuration
- Fix SplashPage to redirect to /contribution instead of /home after login
- Now all navigation goes through the new orange-themed UI pages
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Delete incorrect TEAM_BONUS records (where account_sequence !=
source_account_sequence) on each container startup after migrations.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove the UPDATE statement that referenced non-existent columns.
The contribution_accounts table uses a different field structure.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use treeCount * contributionPerTree from the adoption record to calculate
the actual distribution amounts (70%, 12%, 1%, 2%, 15%) instead of
deriving them from the contribution_records table.
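The arithmetic this implies, as a sketch (field names and the example values are assumptions):

```typescript
// Hypothetical adoption record: 10 trees at an assumed per-tree contribution.
const adoption = { treeCount: 10, contributionPerTree: 22617 };

const baseContribution = adoption.treeCount * adoption.contributionPerTree;

const distribution = {
  personal:  baseContribution * 0.70, // 70% to the adopter
  operation: baseContribution * 0.12, // 12% operation
  province:  baseContribution * 0.01, // 1% province
  city:      baseContribution * 0.02, // 2% city
  team:      baseContribution * 0.15, // 15% team
};
```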
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
TEAM_BONUS should only be given to the adopter themselves, not to their
uplines. This migration deletes all TEAM_BONUS records where
account_sequence != source_account_sequence.
Also updates contribution_accounts totals in contribution-service.
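A hedged sketch of the cleanup; the migration itself would be plain SQL, and the table, column, and type names below are assembled from these commits rather than read from the schema:

```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Remove TEAM_BONUS rows that were wrongly credited to uplines instead of the adopter.
await prisma.$executeRawUnsafe(`
  DELETE FROM contribution_records
  WHERE source_type = 'TEAM_BONUS'
    AND account_sequence != source_account_sequence;
`);
```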
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The AuthEventBus was causing a "Cannot use ref functions after the
dependency of a provider changed" error when 401 responses triggered
logout during provider rebuilds.
401 handling now goes through the normal exception flow in the splash page,
and route guards respond to isLoggedInProvider state changes.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Login fixes:
- Add AuthEventBus for global 401 error handling with auto-logout
- Add route guards with GoRouter redirect to protect authenticated routes
- Remove setMockUser() security vulnerability and legacy login() dead code
- Remove unused AuthInterceptor class
Contribution page:
- Add ContributionRecord entity and model for records API
- Connect contribution details card to GET /accounts/{id}/records endpoint
- Display real team stats (direct referrals, unlocked levels/tiers)
- Calculate expiration countdown from actual record data
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add distributionSummary Text column to synced_adoptions table for storing
distribution details calculated during contribution calculation.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Changes:
1. contribution-service:
- Add distributionSummary field to SyncedAdoption schema
- Store distribution summary after contribution calculation
2. mining-admin-service:
- Add distributionSummary field to SyncedAdoption schema
- Calculate actual distribution from contribution_records table
- Return distribution details in planting ledger API
3. mining-admin-web:
- Display distribution details in planting ledger table
- Show: 70%(amount) personal, 12%(amount) operation,
1%(amount) province, 2%(amount) city, 15%(amount) team
- Show team distribution breakdown (distributed vs unallocated)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
TEAM_BONUS (7.5% = 2.5% × 3 tiers) should be given to the adopter themselves,
not to their direct referrer. The unlock conditions are:
- T1 (2.5%): Self adoption (always unlocked when adopting)
- T2 (2.5%): 2+ direct referrals adopted
- T3 (2.5%): 4+ direct referrals adopted
Also fixed findAncestorChain to correctly include the first ancestor
in the chain instead of skipping it.
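A small sketch of the unlock rules above; the function name and input are illustrative:

```typescript
// Returns how many 2.5% bonus tiers the adopter has unlocked (max 3 = 7.5%).
function unlockedBonusTiers(directReferralsWhoAdopted: number): number {
  let tiers = 1;                                   // T1: unlocked by the adopter's own adoption
  if (directReferralsWhoAdopted >= 2) tiers += 1;  // T2: 2+ direct referrals adopted
  if (directReferralsWhoAdopted >= 4) tiers += 1;  // T3: 4+ direct referrals adopted
  return tiers;
}
```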
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Switch to Prisma's "prisma" relation mode to handle CDC event ordering issues.
This mode emulates foreign key relations at the Prisma Client layer instead of
creating database-level FK constraints, which is the recommended approach for
CDC scenarios where event arrival order cannot be guaranteed.
Reference: https://www.prisma.io/docs/orm/prisma-schema/data-model/relations/relation-mode
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
CDC events arrive asynchronously and order is not guaranteed.
Child records (referrals, accounts) may arrive before parent (users).
This follows CDC best practices: destination tables should not have FK constraints.
Reference: https://estuary.dev/blog/cdc-done-correctly/
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add synced_contribution_records table for tracking contribution ledger
- Add synced_network_progress table for tracking network-wide stats
- Revert Dockerfile to use prisma migrate deploy
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Change Dockerfile to use `prisma db push` instead of `migrate deploy`
- Add publish steps for contribution records and network progress in full-reset
- This ensures new tables are created automatically when schema changes
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add isHeartbeatMessage to detect heartbeat messages (only have ts_ms field)
- Skip processing for heartbeat messages to avoid unnecessary warnings
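A minimal sketch of the check described above (Debezium heartbeat payloads carrying only ts_ms); the exact message shape seen by the consumer is assumed:

```typescript
function isHeartbeatMessage(payload: Record<string, unknown>): boolean {
  const keys = Object.keys(payload);
  return keys.length === 1 && keys[0] === 'ts_ms';
}
```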
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add isDebeziumOutboxEvent to detect outbox table CDC messages
- Add handleDebeziumOutboxEvent to extract service event from payload.after
- Fix CDC consumer not recognizing events from contribution-service outbox
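A hedged sketch of how such an envelope might be detected and unwrapped; payload.after and payload.source.table are standard Debezium fields, while the outbox column names here are assumptions:

```typescript
interface DebeziumMessage {
  payload?: {
    source?: { table?: string };
    after?: Record<string, any> | null;
  };
}

function isDebeziumOutboxEvent(msg: DebeziumMessage): boolean {
  return msg.payload?.source?.table === 'outbox_events' && msg.payload?.after != null;
}

function handleDebeziumOutboxEvent(msg: DebeziumMessage) {
  const row = msg.payload!.after!;          // the inserted outbox row
  return {
    eventId: row.outbox_id,                 // primary key column, see the outbox_id fix
    eventType: row.event_type,              // assumed column name
    payload: JSON.parse(row.payload),       // outbox payload assumed to be stored as JSON text
  };
}
```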
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Change saveMany to return saved records with IDs
- Update saveDistributionResult to use saved records for event publishing
- Contribution records are now automatically synced to mining-admin-service
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update ContributionRecord type to match backend response fields (sourceType, baseContribution, distributionRate, etc.)
- Update contribution-records-list component with improved UI showing source type badges, user info, tree count, and expiry status
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix updateContribution to properly update levelXPending and bonusTierXPending fields
- Add NetworkAdoptionProgress and DailyContributionRate tables for tracking contribution coefficient
- Create ContributionRateService for dynamic rate calculation (base 22617, +0.3% per 100 trees after 1000; see the sketch after this list)
- Add ContributionRecordSynced and NetworkProgressUpdated events for CDC sync
- Add admin endpoints for network progress query and contribution records publishing
- Update mining-admin-service to sync contribution records and network progress
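A sketch of the rate rule from the ContributionRateService item above, under the assumption that the +0.3% steps apply per full 100 trees beyond the first 1000 (rounding and any cap are not specified here):

```typescript
function dailyContributionRate(totalNetworkTrees: number): number {
  const BASE_RATE = 22617;
  const extraTrees = Math.max(0, totalNetworkTrees - 1000);
  const steps = Math.floor(extraTrees / 100);   // one step per full 100 trees past 1000
  return BASE_RATE * (1 + 0.003 * steps);       // +0.3% per step
}
```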
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add detailed [CDC] prefixed logs to all CDC handlers:
- cdc-consumer.service.ts: log message parsing, handler dispatch
- user-synced.handler.ts: log user sync operations with field details
- adoption-synced.handler.ts: log adoption sync and contribution calc
- referral-synced.handler.ts: log referral relationship sync
All logs use consistent [CDC] prefix for easy filtering.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove HOT_WALLET and COLD_WALLET from initializeCoreAccounts
- Remove BLOCKCHAIN from counterpartyType union
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>