hailin
17f8a61bcf
feat(contribution-service): sort contribution detail records by L/T in descending order
...
- Downline contribution (TEAM_LEVEL): sorted by levelDepth DESC (L3, L2, L1)
- Upline contribution (TEAM_BONUS): sorted by bonusTier DESC (T3, T2, T1)
- All / personal: sorted by levelDepth DESC, bonusTier DESC, createdAt DESC
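A minimal sketch of how this ordering could be expressed with Prisma (the contributionRecord model and its accountSequence filter are assumptions; the levelDepth/bonusTier/createdAt fields follow the bullets above):
```typescript
import { PrismaClient } from '@prisma/client';

// Hypothetical repository query: list detail records ordered by level, then tier, then creation time.
async function findRecordsSorted(prisma: PrismaClient, accountSequence: string) {
  return prisma.contributionRecord.findMany({
    where: { accountSequence },
    orderBy: [
      { levelDepth: 'desc' }, // L3, L2, L1 for TEAM_LEVEL rows
      { bonusTier: 'desc' },  // T3, T2, T1 for TEAM_BONUS rows
      { createdAt: 'desc' },
    ],
  });
}
```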
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 14:20:03 -08:00
hailin
b1607666a0
fix(contribution): add LEVEL_OVERFLOW reclaim task; fix overflow records never being reclaimed after the level is unlocked
...
When a downline user adopts while the upline's unlocked_level_depth is still insufficient, the level reward is recorded as LEVEL_OVERFLOW (PENDING).
Once the upline later unlocks enough levels, the existing backfill skips these records because its condition expectedLevel > currentLevel is false,
so the PENDING records can never be reclaimed. Add an independent scheduled task that scans for and reclaims them every 10 minutes.
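A minimal sketch of what such a reclaim task could look like in NestJS (the repository port, its method names, and the record shape are assumptions; only the 10-minute cadence and the PENDING LEVEL_OVERFLOW reclaim flow come from this commit):
```typescript
import { Inject, Injectable, Logger } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

export const OVERFLOW_REPOSITORY = Symbol('OVERFLOW_REPOSITORY');

export interface OverflowRecord { id: string; uplineAccountSequence: string; levelDepth: number; }
export interface OverflowRepository {
  findPendingLevelOverflow(): Promise<OverflowRecord[]>;
  getUnlockedLevelDepth(accountSequence: string): Promise<number>;
  reclaim(recordId: string): Promise<void>;
}

@Injectable()
export class LevelOverflowReclaimScheduler {
  private readonly logger = new Logger(LevelOverflowReclaimScheduler.name);

  constructor(@Inject(OVERFLOW_REPOSITORY) private readonly repo: OverflowRepository) {}

  // Every 10 minutes: find PENDING LEVEL_OVERFLOW records whose upline has since
  // unlocked the required level depth, and reclaim them for that upline.
  @Cron(CronExpression.EVERY_10_MINUTES)
  async reclaimUnlockedOverflow(): Promise<void> {
    const pending = await this.repo.findPendingLevelOverflow();
    for (const record of pending) {
      const unlockedDepth = await this.repo.getUnlockedLevelDepth(record.uplineAccountSequence);
      if (unlockedDepth >= record.levelDepth) {
        await this.repo.reclaim(record.id);
        this.logger.log(`Reclaimed LEVEL_OVERFLOW record ${record.id}`);
      }
    }
  }
}
```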
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 10:12:48 -08:00
hailin
817b7d3a9f
fix(contribution): guarantee contribution-allocation ordering + fix a stale read in bonus backfill
...
Four changes to make a wiped-database redeploy plus full re-sync 100% reliable:
1. contribution.scheduler.ts - CDC readiness gate
   Inject CDCConsumerService and add an isCdcReady() check at the start of the three
   scheduled methods processUnprocessedAdoptions / publishRecentlyUpdatedAccounts /
   processContributionBackfill, so processing only starts after the three CDC sync
   phases (users + referrals + adoptions) have all completed (sketched below).
2. contribution-calculation.service.ts - referral-data guard
   In calculateForAdoption(), when userReferral is null, log a warning and return
   without marking the adoption as distributed, so the scheduler retries it on the
   next run. Covers the continuous-mode race where the adoption event arrives before
   the referral event.
3. bonus-claim.service.ts - bonus backfill stale-read fix
   In processBackfillForAccount(), the level transaction's updateAccountUnlockStatus
   calls incrementDirectReferralAdoptedCount(), which mutates both unlockedLevelDepth
   and unlockedBonusTiers, so the bonus branch condition was always false.
   Fix: save originalDirectReferralAdoptedCount and originalUnlockedBonusTiers first,
   and use those original values for the bonus branch's condition and arguments.
4. config.controller.ts - stronger mining-admin sync check
   Add an allAdoptionsProcessed condition (unprocessedAdoptions === 0) to isSynced,
   so mining can only be activated after all adoption allocations and backfills are done.
   Also fix the scope of the data variable (it was declared inside the if block, so
   referencing it outside would throw).
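A minimal sketch of the CDC readiness gate described in item 1 (isCdcReady() and the three scheduler method names come from this commit; the injection token, consumer shape, and cron cadence are assumptions):
```typescript
import { Inject, Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

export const CDC_CONSUMER = Symbol('CDC_CONSUMER');

// Only isCdcReady() is named in the commit; the rest of the interface is assumed.
export interface CdcConsumer { isCdcReady(): boolean; }

@Injectable()
export class ContributionScheduler {
  constructor(@Inject(CDC_CONSUMER) private readonly cdcConsumer: CdcConsumer) {}

  @Cron(CronExpression.EVERY_MINUTE) // cadence is an assumption
  async processUnprocessedAdoptions(): Promise<void> {
    // Gate: skip the run until the users -> referrals -> adoptions CDC phases have all completed.
    if (!this.cdcConsumer.isCdcReady()) {
      return;
    }
    // ... allocate contributions for adoptions that have not been processed yet
  }
}
```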
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 05:54:02 -08:00
hailin
cec98e9d3e
feat(contribution): add a scheduled task to backfill contributions for accounts not fully unlocked
...
Every 10 minutes, scan accounts that have adopted but whose unlock status is incomplete, check the adoption status
of their direct referrals, and automatically backfill level contributions and bonus tiers whenever new unlock
conditions are met (a sketch follows the list below).
- Add the findAccountsWithIncompleteUnlock query method
- Add the findPendingLevelByAccountSequence and claimLevelRecords methods
- Implement the processBackfillForAccount and claimLevelContributions backfill logic
- Add the processContributionBackfill scheduled task (runs every 10 minutes)
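A minimal sketch of how the 10-minute backfill pass could be wired together (findAccountsWithIncompleteUnlock, processBackfillForAccount, processContributionBackfill and the cadence come from this commit; the injection tokens and interface shapes are assumptions):
```typescript
import { Inject, Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';

export const BACKFILL_REPOSITORY = Symbol('BACKFILL_REPOSITORY');
export const BACKFILL_SERVICE = Symbol('BACKFILL_SERVICE');

export interface BackfillRepository {
  findAccountsWithIncompleteUnlock(): Promise<Array<{ accountSequence: string }>>;
}
export interface BackfillService {
  processBackfillForAccount(accountSequence: string): Promise<void>;
}

@Injectable()
export class ContributionBackfillScheduler {
  constructor(
    @Inject(BACKFILL_REPOSITORY) private readonly repo: BackfillRepository,
    @Inject(BACKFILL_SERVICE) private readonly backfill: BackfillService,
  ) {}

  // Every 10 minutes, re-check accounts whose unlock status is incomplete and
  // backfill any level contributions / bonus tiers they have since become eligible for.
  @Cron(CronExpression.EVERY_10_MINUTES)
  async processContributionBackfill(): Promise<void> {
    const accounts = await this.repo.findAccountsWithIncompleteUnlock();
    for (const { accountSequence } of accounts) {
      await this.backfill.processBackfillForAccount(accountSequence);
    }
  }
}
```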
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 06:25:57 -08:00
hailin
37d3300b17
fix(contribution-service): process the CDC planting_orders phase sorted by order_id
...
## Problem background
User D25122700018 has unlocked levels (unlocked_level_depth=5) but is missing TEAM_LEVEL contribution records.
The team contribution from downline user D25122700019 was wrongly allocated to D25122700015 (level 2) instead of D25122700018 (level 1).
## Root-cause analysis
1. The source-system data order is correct:
- D25122700018: order_id=55, created_at=2026-01-09 11:57:01 (adopted first)
- D25122700019: order_id=57, created_at=2026-01-09 12:00:38 (adopted later)
2. The Kafka message order is wrong:
- D25122700019: offset=732, synced_at=10:15:32 (processed first)
- D25122700018: offset=798, synced_at=10:15:41 (processed later)
3. Cause: the Debezium snapshot reads rows in PostgreSQL physical storage order (heap order),
not in primary-key order. Even though the topic has only 1 partition, the message order is still wrong.
4. Consequence: when D25122700019's adoption was processed, D25122700018's unlocked_level_depth was still 0,
so D25122700019's TEAM_LEVEL contribution skipped level 1 and went straight to level 2.
## Solution
Implement a "collect-sort-process" pattern for the planting_orders phase:
1. First collect all messages into an in-memory array (no immediate processing)
2. Sort them by order_id (the source-system primary key) in ascending order
3. Then process them one by one in the sorted order
This ensures that an upstream user's adoption records are processed before their downline's, avoiding mis-allocated contributions.
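A minimal sketch of the collect-sort-process pattern described above (the message shape and the two callbacks are assumptions; the order_id sort key and the three-step flow come from this commit):
```typescript
// Hypothetical CDC message carrying a planting_orders row.
interface PlantingOrderMessage {
  payload: { order_id: number };
}

async function processPlantingOrdersPhase(
  collectMessages: () => Promise<PlantingOrderMessage[]>,
  handleMessage: (msg: PlantingOrderMessage) => Promise<void>,
): Promise<void> {
  // 1. Collect every message into memory first, without processing.
  const messages = await collectMessages();

  // 2. Sort by the source-system primary key so upstream adoptions come first.
  messages.sort((a, b) => a.payload.order_id - b.payload.order_id);

  // 3. Process strictly in sorted order.
  for (const msg of messages) {
    await handleMessage(msg);
  }
}
```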
## Affected users
- Upstream user: D25122700018 (order_id=55)
- Downstream user: D25122700019 (order_id=57, 58, 59)
- Wrong allocation: D25122700019's TEAM_LEVEL went to D25122700015 instead of D25122700018
## How to roll back
To roll back this change, set the condition in consumePhaseToEnd to false:
```typescript
const needsSorting = false; // previously: phase.tableName === 'planting_orders'
```
Or simply revert this commit.
## Risk assessment
- Business logic is completely unchanged; only the processing order changes
- The user_accounts and referral_relationships phases keep their original logic
- Memory overhead is bounded (roughly 5 MB for 10,000 records)
- Sorting overhead is negligible (O(n log n))
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 03:41:14 -08:00
hailin
390cc3131d
fix(contribution): fix T2/T3 backfill records missing treeCount and baseContribution
...
When backfilling bonuses, query the original adoption data from SyncedAdoption
so the backfilled records carry the correct tree count and base contribution.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 19:38:18 -08:00
hailin
f7dbe2f62b
refactor(contribution): merge all migrations into 0001_init
...
- Merge the deleted_at field from 0002_add_soft_delete into 0001_init
- Delete the 0002_add_soft_delete_to_system_contribution_records directory
- Only a single initialization migration file remains
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 04:31:52 -08:00
hailin
21c6c25f7c
refactor(contribution): merge the source_type migration into 0001_init
...
Merge the 0003_add_source_type migration into 0001_init/migration.sql
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 04:28:14 -08:00
hailin
e7260be219
feat(contribution): add a source-type field for system-account contributions
...
- Add a sourceType field to distinguish where a contribution came from:
  - FIXED_RATE: fixed-percentage allocation (OPERATION 12%, PROVINCE 1%, CITY 2%)
  - LEVEL_OVERFLOW: level reward routed to headquarters (the upline has not unlocked that level)
  - LEVEL_NO_ANCESTOR: routed to headquarters because there is no upline at that level
  - BONUS_TIER_1/2/3: team bonus routed to headquarters while still locked
- Add a levelDepth field recording the level depth (levels 1-15)
- Update the frontend table to display a source-type column
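A minimal TypeScript mirror of the new fields (the enum and interface below are illustrative; the value names and the 1-15 level range come from the bullets above, the record shape is an assumption):
```typescript
// Hypothetical TypeScript mirror of the new source-type values.
export enum ContributionSourceType {
  FIXED_RATE = 'FIXED_RATE',               // fixed-percentage allocation (OPERATION 12%, PROVINCE 1%, CITY 2%)
  LEVEL_OVERFLOW = 'LEVEL_OVERFLOW',       // level reward routed to headquarters (level not yet unlocked)
  LEVEL_NO_ANCESTOR = 'LEVEL_NO_ANCESTOR', // no upline exists at that level
  BONUS_TIER_1 = 'BONUS_TIER_1',           // team bonus tiers routed to headquarters while locked
  BONUS_TIER_2 = 'BONUS_TIER_2',
  BONUS_TIER_3 = 'BONUS_TIER_3',
}

export interface SystemContributionRecordFields {
  sourceType: ContributionSourceType;
  levelDepth?: number; // 1-15 when the source is a level reward
}
```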
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 04:23:50 -08:00
hailin
eeaa43e044
feat(contribution): switch system-account detail records to soft delete
...
- Add a deleted_at field to the SystemContributionRecord model
- Change the deleteContributionRecordsByAdoption method to a soft delete (set deleted_at)
- Change the findContributionRecords method to filter out deleted records (deletedAt: null)
- Add the database migration file
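A minimal sketch of the soft-delete pattern these bullets describe (the systemContributionRecord delegate and the adoptionId/systemAccountId filters are assumptions):
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Soft delete: mark records as deleted instead of removing the rows.
async function deleteContributionRecordsByAdoption(adoptionId: string): Promise<void> {
  await prisma.systemContributionRecord.updateMany({
    where: { adoptionId, deletedAt: null },
    data: { deletedAt: new Date() },
  });
}

// Reads always exclude soft-deleted rows.
async function findContributionRecords(systemAccountId: string) {
  return prisma.systemContributionRecord.findMany({
    where: { systemAccountId, deletedAt: null },
  });
}
```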
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 03:09:41 -08:00
hailin
e0eb734196
fix(contribution): reduce HEADQUARTERS contribution and remove detail records when a user claims a bonus
...
- Add a subtractContribution method to reduce a system account's contribution
- Add a deleteContributionRecordsByAdoption method to delete the detail records
- Update HEADQUARTERS accordingly when a bonus is claimed in BonusClaimService
2026-01-21 02:56:58 -08:00
hailin
974b45554d
feat(contribution): create detail records for HEADQUARTERS unallocated contributions
...
- Create a HEADQUARTERS detail record for every unallocated contribution amount
- Publish a SystemContributionRecordCreatedEvent to sync it to mining-admin-service
- Detail records include the source user ID (sourceAccountSequence)
2026-01-21 02:20:36 -08:00
hailin
27a045e082
fix(contribution): add a regionCode field to the contribution detail event
...
Change the SystemContributionRecordCreatedEvent to split systemAccountType into two
independent fields, accountType and regionCode, so mining-admin-service can correctly
sync contribution detail records broken down by province and city.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 01:25:46 -08:00
hailin
96da7518bf
fix(system-accounts): fix the data flow for auto-creating province/city system accounts
...
1. contribution-service: fix the provinceCode mapping for CITY accounts
- Previously a CITY account's provinceCode was mistakenly set to its cityCode
- Now the correct provinceCode is passed so the province can be created
2. mining-wallet-service: fix the topic for system-account creation events
- Previously published to mining-wallet.system-account.created
- Now published to cdc.mining-wallet.outbox so mining-admin-service can sync it
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 00:28:48 -08:00
hailin
40ac037c03
fix(contribution): fix TypeScript type errors for nullable regionCode in system-account queries
...
## Problem
- The Prisma-generated types do not allow passing null in a unique where condition
- addContribution was being called with extra arguments
- The array returned by findByType was being used as a single object
## Fix
- findByTypeAndRegion: use findFirst instead of findUnique
- ensureSystemAccountsExist: use findFirst + create instead of upsert
- addContribution: use findFirst + create/update instead of upsert
- Correct the arguments of the HEADQUARTERS account sync event call
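A minimal sketch of the findFirst-based lookup that replaces findUnique for a nullable regionCode (the systemAccount delegate and its columns are assumptions based on neighbouring commits in this log):
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// findUnique cannot express "regionCode IS NULL" through a compound unique input,
// so a plain findFirst with an explicit null filter is used instead.
async function findByTypeAndRegion(accountType: string, regionCode: string | null) {
  return prisma.systemAccount.findFirst({
    where: { accountType, regionCode },
  });
}

// Create-if-missing without upsert: look the row up first, then create it.
async function ensureSystemAccountExists(accountType: string, regionCode: string | null) {
  const existing = await findByTypeAndRegion(accountType, regionCode);
  if (existing) return existing;
  return prisma.systemAccount.create({ data: { accountType, regionCode } });
}
```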
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 21:37:44 -08:00
hailin
9062346650
refactor(system-accounts): remove the baseType field, use accountType+regionCode as the compound unique key
...
## Main changes
### Data model simplification
- Remove the redundant baseType field; accountType already carries the type information
- Use accountType (OPERATION/PROVINCE/CITY/HEADQUARTERS) + regionCode (province/city code) as the compound unique key
- All queries now go through accountType+regionCode; database auto-increment IDs are fully retired
### contribution-service
- SystemAccount table drops baseType and switches to an accountType+regionCode unique constraint
- Contribution allocation updated so province/city accounts use the matching regionCode
- Published events gain a regionCode field
### mining-service
- SystemMiningAccount table uses an accountType+regionCode unique constraint
- API changed to the /system-accounts/:accountType/records?regionCode=xxx format
- Mining distribution logic supports the province/city breakdown
### mining-admin-service
- SyncedSystemContribution table uses an accountType+regionCode unique constraint
- CDC sync handlers adapted to the new format
- APIs uniformly query by accountType+regionCode
## API examples
- Operation account: GET /admin/system-accounts/OPERATION/records
- Guangdong province: GET /admin/system-accounts/PROVINCE/records?regionCode=440000
- Guangzhou city: GET /admin/system-accounts/CITY/records?regionCode=440100
- Headquarters: GET /admin/system-accounts/HEADQUARTERS/records
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 21:29:01 -08:00
hailin
81b2e7a4c2
refactor(migrations): merge each service's migration files into a single 0001_init
...
Merge each service's multiple migration files into a single initialization migration:
- contribution-service: 3→1 (includes region support)
- mining-service: 4→1 (includes per-second distribution and region support)
- mining-admin-service: 4→1 (includes region and contribution detail sync)
- auth-service: 2→1 (includes CDC idempotency)
- trading-service: 9→1 (includes the burn system / market maker / C2C)
- mining-wallet-service: 2→1 (includes the SHARE_POOL split)
All migrations now consistently use the TEXT type (not VARCHAR)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 20:34:56 -08:00
hailin
9c816266ac
fix(schema): use TEXT consistently instead of VARCHAR
...
Problem:
- The schemas and migrations previously used VARCHAR(n) to cap field lengths
- Prisma's String type maps to PostgreSQL TEXT by default
- VARCHAR and TEXT perform identically in PostgreSQL; the VARCHAR length limit only adds risk
Fix:
1. contribution-service:
- schema: remove @db.VarChar from accountType/baseType/regionCode/name
- migration: VARCHAR -> TEXT
2. mining-service:
- schema: remove @db.VarChar from accountType/baseType/regionCode/name
- migration: VARCHAR -> TEXT
3. mining-admin-service:
- migration: VARCHAR -> TEXT (the schema already uses TEXT)
Principle: use Prisma String as-is, without @db.VarChar()
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 20:24:01 -08:00
hailin
5f2f223f7b
fix(contribution): fix SystemAccountSyncedEvent missing baseType/regionCode arguments
...
Problem:
- The republishSystemAccounts endpoint in admin.controller.ts constructed SystemAccountSyncedEvent
with only 4 arguments, while the constructor requires 6
- The baseType (base type) and regionCode (region code) arguments were missing
Fix:
- Pass account.baseType and account.regionCode as well
- Keeps the call consistent with the one in contribution-calculation.service.ts
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 20:12:49 -08:00
hailin
09b0bc077e
feat(system-accounts): break down system-account contributions and mining distribution by province/city
...
## Core features
### 1. Contributions broken down by province/city
- accountType changes from an enum to a combined-key string: PROVINCE_440000, CITY_440100 (sketched below)
- Add baseType (base type) and regionCode (region code) fields
- On adoption, allocate to the specific province/city account based on selectedProvince/selectedCity
- When no province/city information is available, credit the aggregate account
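A minimal sketch of how the combined account key could be derived (the helper name and the fallback behaviour are assumptions; the PROVINCE_440000 / CITY_440100 formats are from this commit):
```typescript
type BaseType = 'OPERATION' | 'PROVINCE' | 'CITY' | 'HEADQUARTERS';

// Hypothetical helper: build the combined accountType string used as the account key.
function buildSystemAccountKey(baseType: BaseType, regionCode?: string): string {
  if ((baseType === 'PROVINCE' || baseType === 'CITY') && regionCode) {
    return `${baseType}_${regionCode}`; // e.g. PROVINCE_440000, CITY_440100
  }
  // No province/city information: fall back to the aggregate account for that base type.
  return baseType;
}

// Example: an adoption with selectedProvince=440000 and selectedCity=440100
// would credit PROVINCE_440000 and CITY_440100.
```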
### 2. System accounts participate in mining
- Operation, province and city accounts mine according to their own totalContribution
- The headquarters account (HEADQUARTERS) does not mine directly; it receives the earnings of pending (locked) contributions
- Pending contributions participate in mining at 100%, with the earnings going to headquarters
### 3. Contribution source traceability
- New SystemContributionRecord records the source of every contribution
- New SystemContributionRecordCreatedEvent syncs the detail records
- Frontend gains a "contribution sources" tab showing the details
## Services changed
### contribution-service
- schema: SystemAccount gains baseType, regionCode
- contribution-calculator: generate the combined key per province/city
- system-account.repository: support creating province/city accounts on demand
- new SystemContributionRecordCreatedEvent event
### mining-service
- schema: SystemMiningAccount changes from enum to string
- network-sync: handle sync events carrying baseType/regionCode
- mining-distribution: system accounts and pending contributions participate in mining
### mining-admin-service
- schema: new SyncedSystemContributionRecord table
- cdc-sync: handle SystemContributionRecordCreated events
- system-accounts.service: new contribution-source detail and statistics APIs
### mining-admin-web
- new ContributionRecordsTable component
- system-account detail page gains a "contribution sources" tab
- shows the source adoption ID, user, allocation percentage, and amount
## Database migrations
- contribution-service: 20250120000001_add_region_to_system_accounts
- mining-service: 20250120000001_add_region_to_system_mining_accounts
- mining-admin-service: 20250120000001, 20250120000002
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 19:55:14 -08:00
hailin
1d5e3ebff2
fix(contribution): use upsert instead of update to avoid record-not-found errors
...
Change the addContribution method to an upsert: create the system account automatically
when it does not exist, and increment its contribution balance when it does.
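A minimal sketch of the upsert described here (the systemAccount delegate, the contributionBalance column, and accountType being unique on its own at this point are assumptions):
```typescript
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// Create the account with the initial amount if it is missing,
// otherwise atomically increment its balance.
async function addContribution(accountType: string, amount: number) {
  return prisma.systemAccount.upsert({
    where: { accountType },
    create: { accountType, contributionBalance: amount },
    update: { contributionBalance: { increment: amount } },
  });
}
```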
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 08:43:37 -08:00
hailin
5ec310124d
fix(contribution): ensure the HEADQUARTERS account exists before updating its contribution
...
Fix the "Record to update not found" error by calling ensureSystemAccountsExist
before addContribution, so the system account rows already exist.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 08:42:02 -08:00
hailin
d844228711
fix(contribution): aggregate unallocated contributions into the headquarters account (HEADQUARTERS)
...
Previously the HEADQUARTERS account was skipped during contribution allocation and never received the aggregated unallocated contributions.
Now, when unallocated contributions are saved, the HEADQUARTERS account's contributionBalance is updated at the same time,
and a sync event is published for mining-admin-service.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 08:26:53 -08:00
hailin
1c787a22a3
fix(mining): fix mining-service subscribing to the wrong Kafka topic
...
Problem: mining-service was subscribed to cdc.contribution.outbox (the Debezium CDC topic),
but contribution-service uses the Outbox Pattern and publishes directly to contribution.{eventType} topics.
Fix:
- mining-service subscribes to the correct topic list
- Fix the message-parsing logic to support the Outbox Pattern message format
- contribution-service adds a GET /admin/unallocated-contributions endpoint (for debugging)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 03:03:17 -08:00
hailin
e8f3c34723
fix(contribution): show the user's actual effective contribution as the total on adoption records
...
Backend:
- get-planting-ledger.query.ts: add the effectiveContribution field
- Read the user's actual personal contribution (personalContribution) from contributionAccount
Frontend:
- planting_record.dart: add the effectiveContribution field to PlantingSummary
- planting_record_model.dart: parse the effectiveContribution field
- planting_records_page.dart: show effectiveContribution instead of totalAmount as the total contribution
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 08:29:14 -08:00
hailin
0ebb0ad076
fix(contribution): use Symbol token for SYNCED_DATA_REPOSITORY injection
...
The GetTeamTreeQuery was importing SYNCED_DATA_REPOSITORY as a Symbol from
the domain interface, but InfrastructureModule defined its own string token.
This caused NestJS dependency resolution to fail.
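A minimal sketch of sharing a single Symbol token between the domain interface and the module provider (the file layout and class names other than SYNCED_DATA_REPOSITORY and GetTeamTreeQuery are assumptions; getTotalTrees is borrowed from a later commit in this log):
```typescript
import { Inject, Injectable, Module } from '@nestjs/common';

// Domain layer: a single shared Symbol token for the repository port.
export const SYNCED_DATA_REPOSITORY = Symbol('SYNCED_DATA_REPOSITORY');
export interface SyncedDataRepository { getTotalTrees(): Promise<number>; }

// Hypothetical infrastructure implementation.
@Injectable()
class PrismaSyncedDataRepository implements SyncedDataRepository {
  async getTotalTrees(): Promise<number> { return 0; }
}

// The module must provide under the SAME Symbol the consumers import,
// not a locally defined string token, otherwise Nest cannot resolve it.
@Module({
  providers: [{ provide: SYNCED_DATA_REPOSITORY, useClass: PrismaSyncedDataRepository }],
  exports: [SYNCED_DATA_REPOSITORY],
})
export class InfrastructureModule {}

// Consumer side (e.g. GetTeamTreeQuery) injects with the same token.
@Injectable()
export class GetTeamTreeQuery {
  constructor(@Inject(SYNCED_DATA_REPOSITORY) private readonly repo: SyncedDataRepository) {}
}
```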
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 09:33:21 -08:00
hailin
4ec6c9f48b
feat(contribution/mining-app): add team tree API using contribution-service 2.0
...
Add team info and direct referrals endpoints to contribution-service,
using SyncedReferral data synced via CDC. Update mining-app to use the
new v2 contribution API instead of legacy referral-service.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 09:17:18 -08:00
hailin
f22c3efb11
fix: use correct property name 'type' for unallocated contribution
...
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 05:40:40 -08:00
hailin
0241930011
feat(contribution/mining): sync unallocated contributions to mining-service
...
- Add UnallocatedContributionSyncedEvent in contribution-service
- Add event handler in mining-service's contribution-event.handler.ts
- Add handleUnallocatedContributionSynced in network-sync.service.ts
- Add admin endpoint to publish all unallocated contributions
- Sync pending/unallocated contributions to PendingContributionMining table
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 05:29:28 -08:00
hailin
130bf57842
fix(contribution): publish system-account sync events when processing adoptions
...
- Publish a SystemAccountSyncedEvent after saving system-account contributions
- Lets mining-service sync the contributions of the operation/province/city companies
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 05:24:04 -08:00
hailin
962e7874c8
fix(contribution): fix network-wide theoretical contribution not being synced
...
- Call updateNetworkProgress in calculateForAdoption to keep the NetworkAdoptionProgress table up to date
- Previously the totalTreeCount read by publishNetworkProgressEvent was always 0 because the table was never updated
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 04:55:30 -08:00
hailin
bb75ff19a4
feat(contribution): automatically publish a network progress event after processing an adoption
...
- Publish a NetworkProgressUpdatedEvent after each adoption allocation completes
- mining-service receives network-wide theoretical contribution updates in real time via Kafka
- The scheduled sync now runs every 5 minutes as a fallback
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 04:46:12 -08:00
hailin
de5416aee6
feat(mining): let system accounts and pending (locked) contributions participate in mining
...
Breaking changes:
- The mining denominator changes from users' effective contribution to the network-wide theoretical contribution (networkTotalContribution)
- System accounts (operation 12% / province 1% / city 2%) participate in mining with their own mining records
- Pending contributions participate in mining; the earnings go to the headquarters account, and each record carries full source information
New functionality:
- mining-service: system mining account table, pending contribution table, and the related mining record tables
- mining-service: NetworkSyncService to sync network-wide data
- mining-service: /admin/sync-network and /admin/system-accounts endpoints
- contribution-service: /admin/system-accounts endpoint plus an endpoint to publish system-account events
- mining-admin-service: status check returns network-wide theoretical contribution info
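A minimal sketch of the changed denominator (the function and parameter names are assumptions; only the switch to networkTotalContribution as the denominator comes from this commit):
```typescript
// Each participant's share of a mining round is its own contribution divided by the
// network-wide theoretical contribution, instead of the sum of users' effective contribution.
function miningShare(accountContribution: number, networkTotalContribution: number): number {
  if (networkTotalContribution <= 0) return 0;
  return accountContribution / networkTotalContribution;
}
```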
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 03:39:56 -08:00
hailin
b5fca7bb04
fix(mining-admin): fix the API path used by the contribution sync status check
...
- contribution-service: add the @Public() decorator to the /contribution/stats endpoint
- mining-admin-service: correct the API path from api/v1 to api/v2
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 03:09:05 -08:00
hailin
01bd638dbb
fix(contribution-service): add parent .env path for shared config
...
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 07:00:02 -08:00
hailin
cfbf1b21f3
feat(dashboard): add detailed contribution breakdown by category
...
Backend (contribution-service):
- Add getDetailedContributionStats() to repository
- Add getUnallocatedByLevelTier/BonusTier() to repository
- Extend stats API with level/bonus breakdown by tier
- Add getTotalTrees() to synced-data repository
Backend (mining-admin-service):
- Add detailed contribution stats calculation
- Calculate theoretical vs actual values per category
- Return level/bonus breakdown with unlocked/pending amounts
Frontend (mining-admin-web):
- Add ContributionBreakdown component showing:
- Personal (70%), Operation (12%), Province (1%), City (2%)
- Level contribution (7.5%) by tier: 1-5, 6-10, 11-15
- Bonus contribution (7.5%) by tier: T1, T2, T3
- Update DashboardStats type definition
- Integrate breakdown component into dashboard page
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 01:43:37 -08:00
hailin
1f15daa6c5
fix(planting-records): filter only MINING_ENABLED records and fix UI overflow
...
- Backend: Add status filter to getPlantingLedger and getPlantingSummary
- Frontend: Change Row to Wrap for info items to prevent width overflow
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 00:12:07 -08:00
hailin
b310fde426
feat(mining-admin): show pending contribution in dashboard
...
- Add networkPendingContribution and networkBonusPendingContribution to API
- Display combined pending contribution (teamLevel + teamBonus) in stats card
- Replace 'total contribution' with 'pending contribution' in price overview
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 23:46:05 -08:00
hailin
81a58edaca
fix(contribution-service): calculate totalContribution correctly in CDC event
...
Previously, totalContribution was incorrectly set to effectiveContribution.
Now correctly calculated as: personal + teamLevel + teamBonus
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 23:40:50 -08:00
hailin
141db46356
fix(contribution-service): use real contributionPerTree from rate service
...
Previously, adoptions were synced with hardcoded contributionPerTree=1,
resulting in contribution values like 0.7 instead of the expected 15831.9.
Now the handler fetches the actual contribution rate from ContributionRateService
based on the adoption date, storing values like:
- Personal (70%): 22617 × 70% = 15831.9
- Team Level (0.5%): 22617 × 0.5% = 113.085
- Team Bonus (2.5%): 22617 × 2.5% = 565.425
Note: Historical data may need migration to apply the correct multiplier.
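A minimal sketch of the split implied by the figures above (the function shape is an assumption; the 70% / 0.5% / 2.5% percentages and the 22617 example rate are from this commit, with 0.5% read as per level and 2.5% as per tier):
```typescript
// Split one adoption's contribution using the real per-tree rate
// instead of the previously hardcoded contributionPerTree = 1.
function splitContribution(contributionPerTree: number, treeCount: number) {
  const base = contributionPerTree * treeCount;
  return {
    personal: base * 0.7,    // e.g. 22617 * 70%  = 15831.9
    teamLevel: base * 0.005, // e.g. 22617 * 0.5% = 113.085 (per level)
    teamBonus: base * 0.025, // e.g. 22617 * 2.5% = 565.425 (per tier)
  };
}
```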
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 18:01:30 -08:00
hailin
f2692a50ed
fix(contribution-service): fix toRecordDto using wrong property name
...
- Changed `record.finalContribution` to `record.amount` for getting final contribution value
- Added optional chaining to prevent undefined errors
- Added default values for safety
The ContributionRecordAggregate uses `amount` property, not `finalContribution`.
This was causing "Cannot read properties of undefined (reading 'value')" errors.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 08:43:14 -08:00
hailin
0bbb52284c
fix(contribution): avoid nested transaction timeout in BonusClaimService
...
Use unitOfWork.isInTransaction() to detect if already in a transaction
context (called from ContributionCalculationService). If so, reuse the
existing transaction instead of opening a new one, preventing Prisma
interactive transaction timeout errors.
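A minimal sketch of the reuse-or-open decision described here (the UnitOfWork port is an assumption; only isInTransaction() and the reuse behaviour come from this commit):
```typescript
// Hypothetical unit-of-work port: either we are already inside a Prisma interactive
// transaction (opened by ContributionCalculationService) or we open a new one.
interface UnitOfWork {
  isInTransaction(): boolean;
  runInTransaction<T>(work: () => Promise<T>): Promise<T>;
}

async function claimWithinTransaction<T>(uow: UnitOfWork, work: () => Promise<T>): Promise<T> {
  if (uow.isInTransaction()) {
    // Reuse the caller's transaction; opening a nested one here is what
    // caused the interactive-transaction timeouts.
    return work();
  }
  return uow.runInTransaction(work);
}
```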
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 01:02:08 -08:00
hailin
2a79c83715
feat(contribution): implement TEAM_BONUS backfill when unlock conditions met
...
When a user's direct referral count reaches 2 or 4, the system now automatically
backfills previously pending TEAM_BONUS (T2/T3) contributions that were allocated
to headquarters while waiting for unlock conditions.
- Add BonusClaimService for handling bonus backfill logic
- Add findPendingBonusByAccountSequence and claimBonusRecords to repository
- Integrate bonus claim into updateReferrerUnlockStatus flow
- Add BonusClaimed event consumer in mining-wallet-service
- Generate ledger records for backfilled contributions
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 23:58:54 -08:00
hailin
631fe2bf31
fix(contribution-service): reset consumer group offsets to earliest on startup
...
Use admin.resetOffsets({ earliest: true }) before connecting consumer
to ensure CDC sync always starts from the beginning of Kafka topics,
regardless of previously committed offsets.
This fixes the infinite-loop issue where existing consumer groups had already
committed offsets at the high watermark, so eachMessage was never called.
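A minimal sketch of the offset reset using the kafkajs admin client (the group and topic names are placeholders; admin.resetOffsets with earliest: true is the call named in this commit, and it must run before the consumer of that group connects):
```typescript
import { Kafka } from 'kafkajs';

// Reset the consumer group's committed offsets to the earliest position
// before the consumer connects, so the CDC sync always replays from the start.
async function resetGroupToEarliest(kafka: Kafka, groupId: string, topics: string[]) {
  const admin = kafka.admin();
  await admin.connect();
  try {
    for (const topic of topics) {
      await admin.resetOffsets({ groupId, topic, earliest: true });
    }
  } finally {
    await admin.disconnect();
  }
}
```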
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 22:14:51 -08:00
hailin
d968efcad4
fix(contribution): run CDC sync in background to allow API access during sync
...
Change CDC consumer startup from blocking await to non-blocking .then()
so HTTP server starts immediately and /health/cdc-sync API is accessible
for deploy script to poll sync status.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:50:59 -08:00
hailin
5a4970d7d9
Revert "fix(contribution): run CDC sync in background to avoid blocking service startup"
...
This reverts commit 703c12e9f6.
2026-01-13 21:44:18 -08:00
hailin
703c12e9f6
fix(contribution): run CDC sync in background to avoid blocking service startup
...
- Change await to .then() for cdcConsumer.start()
- Allows HTTP endpoints to be accessible during CDC sync
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:44:00 -08:00
hailin
8199bc4d66
feat(contribution): add CDC sync status API and fix deploy script timing
...
- Add initialSyncCompleted flag to track CDC sequential sync completion
- Add getSyncStatus() method to CDCConsumerService
- Add /health/cdc-sync endpoint to expose sync status
- Update deploy-mining.sh to wait for CDC sync completion before calling publish APIs
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:34:58 -08:00
hailin
aef6feb2cd
fix(contribution): use unique consumer group id for each phase
...
The previous consumer group had already consumed the messages, so fromBeginning
had no effect. Now a timestamp-based unique group id is used to ensure
fresh consumption from the beginning each time.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:11:40 -08:00
hailin
22523aba14
revert: restore blocking await for sequential CDC consumption
...
The previous change was wrong: running the sequential consumption in the
background defeats its purpose. The whole point is to ensure the data
dependency order (users -> referrals -> adoptions) is established before
any other operations can proceed.
2026-01-13 21:07:57 -08:00