rwadurian/backend/mpc-system/services/service-party-android/EXACT_CHANGES.md


Exact Change List

1. Removed Content

Deleted file:

  • app/src/main/java/com/durian/tssparty/data/remote/StreamManager.kt (entire file, 282 lines)

Deleted code in TssRepository.kt:

Line 17 - delete the import:

- import com.durian.tssparty.data.remote.StreamManager

Lines 217-242 - delete the StreamManager instance and its comment:

- /**
-  * StreamManager - manages the lifecycle of the gRPC bidirectional streams
-  * ...(long comment block)
-  */
- private val streamManager = StreamManager(grpcClient, repositoryScope)

Lines 293-304 - delete the init block:

- init {
-     repositoryScope.launch {
-         grpcConnectionEvents
-             .filter { it is GrpcConnectionEvent.Reconnected }
-             .collect {
-                 android.util.Log.d("TssRepository", "gRPC reconnected, restarting streams via StreamManager...")
-                 streamManager.restartAllStreams()
-             }
-     }
- }

Lines 511-611 - delete the StreamManager event subscription:

- streamManager.startEventStream(
-     partyId = effectivePartyId,
-     onEvent = { event ->
-         // ... event handling logic ...
-     },
-     onError = { error ->
-         android.util.Log.e("TssRepository", "Event stream error: ${error.message}")
-     }
- )

Lines 2062-2098 - delete the StreamManager message subscription:

- streamManager.startMessageStream(
-     sessionId = sessionId,
-     partyId = effectivePartyId,
-     partyIndex = partyIndex,
-     onMessage = { message ->
-         // ... message handling logic ...
-     },
-     onError = { error ->
-         android.util.Log.e("TssRepository", "Message stream error: ${error.message}")
-     }
- )

2. Added Content

TssRepository.kt line 220 - add a Job name constant:

+ private const val JOB_MESSAGE_SENDING = "message_sending"

Lines 488-496 - add registerParty error checking:

+ // Register with gRPC and check result
+ val registerResult = grpcClient.registerParty(partyId, "temporary", "1.0.0")
+ if (registerResult.isFailure) {
+     val error = registerResult.exceptionOrNull()
+     android.util.Log.e("TssRepository", "Failed to register party: ${error?.message}")
+     throw error ?: Exception("Failed to register party")
+ }
+
+ android.util.Log.d("TssRepository", "Party registered successfully: $partyId")
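
The added check is plain Kotlin `Result` handling. A minimal self-contained sketch of the same isFailure/exceptionOrNull pattern; the `registerParty` stub below is hypothetical, standing in for the gRPC call:

```kotlin
// Hypothetical stand-in for grpcClient.registerParty(...)
fun registerParty(partyId: String): Result<String> =
    if (partyId.isNotBlank()) Result.success(partyId)
    else Result.failure(IllegalArgumentException("partyId must not be blank"))

fun main() {
    val result = registerParty("")
    if (result.isFailure) {
        // Same shape as the new check: inspect the failure, then log and throw
        val error = result.exceptionOrNull()
        println("Failed to register party: ${error?.message}")
    }
    // getOrThrow() collapses the check-and-throw into one call
    println("Party registered successfully: ${registerParty("party-1").getOrThrow()}")
}
```

`getOrThrow()` would shorten the new check further, at the cost of losing the custom log line before the throw.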

Lines 511-577 - restore the simple event subscription (with retryWhen):

+ // Launch via JobManager (automatically cancels any old Job with the same name)
+ // Add Flow.retryWhen for automatic reconnection (per the official gRPC recommendation)
+ jobManager.launch(JOB_SESSION_EVENT) {
+     flow {
+         grpcClient.subscribeSessionEvents(effectivePartyId).collect { event ->
+             emit(event)
+         }
+     }
+     .retryWhen { cause, attempt ->
+         android.util.Log.w("TssRepository", "Event stream failed (attempt ${attempt + 1}), retrying in ${kotlin.math.min(attempt + 1, 30L)}s: ${cause.message}")
+         delay(kotlin.math.min(attempt + 1, 30L) * 1000L)  // linear backoff, capped at 30 s
+         true  // retry forever
+     }
+     .collect { event ->
+         // ... original event handling logic (completely unchanged) ...
+     }
+ }
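
The retry wrapper above can be exercised in isolation. A minimal runnable sketch, with an invented flow that fails twice before emitting; delays here are in milliseconds so the demo finishes quickly, whereas the real code multiplies by 1000:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Re-subscribe with capped linear backoff; failBeforeSuccess simulates dropped streams.
fun collectWithRetry(failBeforeSuccess: Int): List<String> = runBlocking {
    var remaining = failBeforeSuccess
    flow {
        // The flow builder re-runs on every retry, like a re-opened gRPC stream
        if (remaining > 0) {
            remaining--
            throw RuntimeException("stream dropped")
        }
        emit("session_started")
    }
    .retryWhen { cause, attempt ->
        val backoffSeconds = minOf(attempt + 1, 30L)  // linear backoff, capped at 30 s
        println("attempt ${attempt + 1} failed (${cause.message}), retrying in ${backoffSeconds}s")
        delay(backoffSeconds)  // ms in this demo; real code uses backoffSeconds * 1000L
        true  // retry forever
    }
    .toList()
}

fun main() {
    println(collectWithRetry(failBeforeSuccess = 2))  // [session_started]
}
```

Because the flow builder restarts on each retry, this behaves like closing and re-opening the subscription, which is exactly what the stream needs after a disconnect.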

Lines 2043-2087 - refactor message routing (with retryWhen):

+ // Part 1: Collect outgoing messages from TSS and route via gRPC
+ jobManager.launch(JOB_MESSAGE_SENDING) {  // renamed to JOB_MESSAGE_SENDING
+     tssNativeBridge.outgoingMessages.collect { message ->
+         // ... sending logic ...
+     }
+ }
+
+ // Part 2: Subscribe to incoming messages from gRPC and send to TSS
+ // Add Flow.retryWhen for automatic reconnection (per the official gRPC recommendation)
+ jobManager.launch(JOB_MESSAGE_COLLECTION) {
+     flow {
+         grpcClient.subscribeMessages(sessionId, effectivePartyId).collect { message ->
+             emit(message)
+         }
+     }
+     .retryWhen { cause, attempt ->
+         android.util.Log.w("TssRepository", "Message stream failed (attempt ${attempt + 1}), retrying in ${kotlin.math.min(attempt + 1, 30L)}s: ${cause.message}")
+         delay(kotlin.math.min(attempt + 1, 30L) * 1000L)  // linear backoff, capped at 30 s
+         true  // retry forever
+     }
+     .collect { message ->
+         // ... original message handling logic (completely unchanged) ...
+     }
+ }

Line 592 - change the stream-active check:

- val isActive = streamManager.isEventStreamActive()
+ val isActive = jobManager.isActive(JOB_SESSION_EVENT)
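
JobManager itself is not shown in this change list; the sketch below illustrates the contract the calls above assume: at most one live Job per name, `launch` cancels the previous holder, and `isActive` queries it. Class shape and signatures are guesses for illustration only:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.ConcurrentHashMap

// Minimal sketch: launching under an existing name cancels the previous Job.
class JobManager(private val scope: CoroutineScope) {
    private val jobs = ConcurrentHashMap<String, Job>()

    fun launch(name: String, block: suspend CoroutineScope.() -> Unit): Job {
        jobs[name]?.cancel()  // enforce at most one live Job per name
        return scope.launch(block = block).also { jobs[name] = it }
    }

    fun isActive(name: String): Boolean = jobs[name]?.isActive == true
}

fun main() = runBlocking {
    val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)
    val manager = JobManager(scope)
    val first = manager.launch("session_event") { delay(60_000) }
    manager.launch("session_event") { delay(60_000) }  // supersedes `first`
    delay(100)
    println(first.isCancelled)                  // true
    println(manager.isActive("session_event"))  // true
    scope.cancel()
}
```

This is why the init-block reconnect listener could be deleted: re-launching under the same name is already a safe restart.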

3. Unchanged Content

GrpcClient.kt - Keep-Alive configuration (unchanged):

// Lines 143-150 - unchanged
.keepAliveTime(20, TimeUnit.SECONDS)
.keepAliveTimeout(5, TimeUnit.SECONDS)
.keepAliveWithoutCalls(true)
.idleTimeout(Long.MAX_VALUE, TimeUnit.DAYS)

GrpcClient.kt - network monitoring (unchanged):

// Lines 151-183 - unchanged
fun setupNetworkMonitoring(context: Context) {
    val callback = object : ConnectivityManager.NetworkCallback() {
        override fun onAvailable(network: Network) {
            channel?.resetConnectBackoff()
        }
    }
    // ...(registration of the callback elided)
}

TssRepository.kt - event handling logic (unchanged):

// Lines 522-573 - unchanged
when (event.eventType) {
    "session_started" -> {
        // ... original RACE-FIX logic ...
        sessionEventCallback?.invoke(event)
    }
    "party_joined", "participant_joined" -> {
        sessionEventCallback?.invoke(event)
    }
    // ...
}

TssRepository.kt - message handling logic (unchanged):

// Lines 2071-2084 - unchanged
val fromPartyIndex = session?.participants?.find { it.partyId == message.fromParty }?.partyIndex
if (fromPartyIndex != null) {
    tssNativeBridge.sendIncomingMessage(
        fromPartyIndex = fromPartyIndex,
        isBroadcast = message.isBroadcast,
        payload = message.payload
    )
}

TssRepository.kt - markPartyReady retry mechanism (unchanged):

// Around line 2140 - unchanged
repeat(5) { attempt ->
    if (markReadySuccess) return@repeat
    val markReadyResult = grpcClient.markPartyReady(sessionId, partyId)
    if (markReadyResult.isSuccess) {
        markReadySuccess = true
        return@repeat
    }
    delay((attempt + 1) * 500L)
}

4. Core Change Summary

Before (df9f9914):

streamManager.startEventStream(
    partyId = effectivePartyId,
    onEvent = { event -> /* callback */ },
    onError = { error -> /* callback */ }
)

After (bfbd062e):

jobManager.launch(JOB_SESSION_EVENT) {
    flow {
        grpcClient.subscribeSessionEvents(effectivePartyId).collect { emit(it) }
    }
    .retryWhen { cause, attempt ->
        delay(min(attempt + 1, 30L) * 1000L)
        true  // automatic reconnection
    }
    .collect { event ->
        // original event handling logic (completely unchanged)
    }
}

5. Statistics

  • Removed: StreamManager.kt (282 lines) + its references in TssRepository.kt (~66 lines) = 348 lines
  • Added: WORKING_CODE_ANALYSIS.md (269 lines) + REFACTORING_SUMMARY.md (200 lines) + TssRepository.kt changes (45 lines) = 514 lines
  • Net change: +166 lines (mostly documentation)
  • Net code change in TssRepository.kt: -21 lines (the code got simpler)

6. Risk Assessment

Low risk

  1. The event handling logic is completely unchanged (it is only wrapped in retryWhen)
  2. The message handling logic is completely unchanged (it is only wrapped in retryWhen)
  3. The gRPC Keep-Alive configuration stays the same
  4. The network monitoring stays the same
  5. The markPartyReady retry stays the same

Needs testing ⚠️

  1. Does the registerParty error check behave correctly?
  2. Does the retryWhen automatic reconnection work?
  3. Do the streams recover automatically after a network drop?

Risks eliminated

  1. Potential bugs in StreamManager
  2. The complex callback mechanism
  3. The init-block approach to listening for reconnects