Deep Reinforcement Learning (7): Policy Gradients

The goal of policy learning is to solve an optimization problem and learn the optimal policy function, or an approximation of it (such as a policy network).

1. Policy Networks

Suppose the action space is discrete, e.g. $\mathcal A=\{\text{left},\text{right},\text{up}\}$. The policy function $\pi$ is a conditional probability function:
$$\pi(a\mid s)=\mathbb P(A=a\mid S=s)$$
As with DQN, we can use a neural network $\pi(a\mid s;\boldsymbol\theta)$ to approximate the policy function $\pi(a\mid s)$, where $\boldsymbol\theta$ denotes the network parameters to be trained.

Recall that the action-value function is defined as
$$Q_\pi(s_t,a_t)=\mathbb E_{A_{t+1},S_{t+1},\ldots}\left[U_t\mid A_t=a_t,S_t=s_t\right]$$
and the state-value function as
$$V_\pi(s_t)=\mathbb E_{A_t\sim\pi(\cdot\mid s_t;\boldsymbol\theta)}\left[Q_\pi(s_t,A_t)\right]$$
The state value depends both on the current state $s_t$ and on the parameters $\boldsymbol\theta$ of the policy network $\pi$.
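As a tiny numerical illustration (all numbers made up), $V_\pi(s_t)$ is just the $\pi$-weighted average of $Q_\pi(s_t,\cdot)$ over the actions:

```python
# Hypothetical values for one state s_t with three actions.
q  = {"left": 1.0, "right": 3.0, "up": 2.0}   # Q_pi(s_t, a)
pi = {"left": 0.2, "right": 0.5, "up": 0.3}   # pi(a | s_t; theta)

# V_pi(s_t) = E_{A ~ pi(.|s_t)}[ Q_pi(s_t, A) ]
v = sum(pi[a] * q[a] for a in q)  # 0.2*1.0 + 0.5*3.0 + 0.3*2.0 = 2.3
```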

To eliminate the dependence on the state, we take the expectation over $S_t$, obtaining
$$J(\boldsymbol\theta)=\mathbb E_{S_t}\left[V_\pi(S_t)\right]$$
This objective removes the state $S$ from the picture and depends only on the parameters $\boldsymbol\theta$ of the policy network $\pi$; the better the policy, the larger $J$. Policy learning can therefore be stated as the optimization problem
$$\max_{\boldsymbol\theta}\quad J(\boldsymbol\theta)$$
Since this is a maximization problem, we can update $\boldsymbol\theta$ by gradient ascent on $J(\boldsymbol\theta)$; the key is computing the gradient $\nabla_{\boldsymbol\theta}J(\boldsymbol\theta)$.
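Once an estimate $\hat{\boldsymbol g}\approx\nabla_{\boldsymbol\theta}J(\boldsymbol\theta)$ is available, the update itself is a one-liner. A minimal sketch — the flat parameter layout and the learning rate are illustrative, not from the text:

```python
def gradient_ascent_step(theta, g_hat, lr=0.01):
    # theta and g_hat are flat lists of parameters / gradient estimates.
    # Ascent: move *along* the gradient to increase J(theta)
    # (gradient descent would subtract instead).
    return [t + lr * g for t, g in zip(theta, g_hat)]

theta = [0.0, 1.0, -0.5]
g_hat = [1.0, -2.0, 0.5]   # hypothetical gradient estimate
theta_new = gradient_ascent_step(theta, g_hat)
```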

2. Deriving the Policy Gradient Theorem

Theorem (recursive formula, where $S'$ is the state at the next time step):
$$\frac{\partial V_\pi(s)}{\partial\boldsymbol\theta}=\mathbb E_{A\sim\pi(\cdot\mid s;\boldsymbol\theta)}\left[\frac{\partial\ln\pi(A\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(s,A)+\gamma\cdot\mathbb E_{S'\sim p(\cdot\mid s,A)}\left[\frac{\partial V_\pi(S')}{\partial\boldsymbol\theta}\right]\right]\tag{2.1}$$

Proof:
$$\begin{aligned}\frac{\partial V_\pi(s)}{\partial\boldsymbol\theta}&=\frac{\partial}{\partial\boldsymbol\theta}\,\mathbb E_{A\sim\pi(\cdot\mid s;\boldsymbol\theta)}\left[Q_\pi(s,A)\right]\\&=\frac{\partial}{\partial\boldsymbol\theta}\sum_{a\in\mathcal A}\pi(a\mid s;\boldsymbol\theta)\,Q_\pi(s,a)\\&=\sum_{a\in\mathcal A}\left[\frac{\partial\pi(a\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}Q_\pi(s,a)+\pi(a\mid s;\boldsymbol\theta)\frac{\partial Q_\pi(s,a)}{\partial\boldsymbol\theta}\right]\\&=\sum_{a\in\mathcal A}\left[\pi(a\mid s;\boldsymbol\theta)\cdot\frac{\partial\ln\pi(a\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(s,a)+\pi(a\mid s;\boldsymbol\theta)\frac{\partial Q_\pi(s,a)}{\partial\boldsymbol\theta}\right]\\&=\mathbb E_{A\sim\pi(\cdot\mid s;\boldsymbol\theta)}\left[\frac{\partial\ln\pi(A\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(s,A)\right]+\mathbb E_{A\sim\pi(\cdot\mid s;\boldsymbol\theta)}\left[\frac{\partial Q_\pi(s,A)}{\partial\boldsymbol\theta}\right]\\&=\mathbb E_{A\sim\pi(\cdot\mid s;\boldsymbol\theta)}\left[\frac{\partial\ln\pi(A\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(s,A)+\frac{\partial Q_\pi(s,A)}{\partial\boldsymbol\theta}\right]\end{aligned}$$
The second line assumes a discrete action space, and the fourth line uses the log-derivative trick $\partial_{\boldsymbol\theta}\pi=\pi\cdot\partial_{\boldsymbol\theta}\ln\pi$.
It remains to show that $\frac{\partial Q_\pi(s,a)}{\partial\boldsymbol\theta}=\gamma\,\mathbb E_{S'\sim p(\cdot\mid s,a)}\left[\frac{\partial V_\pi(S')}{\partial\boldsymbol\theta}\right]$. The Bellman equation gives
$$\begin{aligned}Q_\pi(s,a)&=\mathbb E_{S'\sim p(\cdot\mid s,a)}\left[R(s,a,S')+\gamma\cdot V_\pi(S')\right]\\&=\sum_{s'\in\mathcal S}p(s'\mid s,a)\cdot\left[R(s,a,s')+\gamma\cdot V_\pi(s')\right]\\&=\sum_{s'\in\mathcal S}p(s'\mid s,a)\cdot R(s,a,s')+\gamma\cdot\sum_{s'\in\mathcal S}p(s'\mid s,a)\cdot V_\pi(s').\end{aligned}$$

Once $s$, $a$, $s'$ are observed, $p(s'\mid s,a)$ and $R(s,a,s')$ are both independent of the policy network $\pi$, so
$$\frac{\partial}{\partial\boldsymbol\theta}\left[p(s'\mid s,a)\cdot R(s,a,s')\right]=0.$$

Hence:
$$\begin{aligned}\frac{\partial Q_\pi(s,a)}{\partial\boldsymbol\theta}&=\sum_{s'\in\mathcal S}\underbrace{\frac{\partial}{\partial\boldsymbol\theta}\left[p(s'\mid s,a)\cdot R(s,a,s')\right]}_{=\,0}+\gamma\cdot\sum_{s'\in\mathcal S}\frac{\partial}{\partial\boldsymbol\theta}\left[p(s'\mid s,a)\cdot V_\pi(s')\right]\\&=\gamma\cdot\sum_{s'\in\mathcal S}p(s'\mid s,a)\cdot\frac{\partial V_\pi(s')}{\partial\boldsymbol\theta}\\&=\gamma\cdot\mathbb E_{S'\sim p(\cdot\mid s,a)}\left[\frac{\partial V_\pi(S')}{\partial\boldsymbol\theta}\right].\end{aligned}$$

This completes the proof.

Define $\boldsymbol g(s,a;\boldsymbol\theta)\triangleq Q_\pi(s,a)\cdot\frac{\partial\ln\pi(a\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}$, and suppose an episode ends after step $n$. Then
$$\begin{aligned}\frac{\partial J(\boldsymbol\theta)}{\partial\boldsymbol\theta}=&\ \mathbb E_{S_1,A_1}\left[\boldsymbol g(S_1,A_1;\boldsymbol\theta)\right]\\&+\gamma\cdot\mathbb E_{S_1,A_1,S_2,A_2}\left[\boldsymbol g(S_2,A_2;\boldsymbol\theta)\right]\\&+\gamma^2\cdot\mathbb E_{S_1,A_1,S_2,A_2,S_3,A_3}\left[\boldsymbol g(S_3,A_3;\boldsymbol\theta)\right]\\&+\cdots\\&+\gamma^{n-1}\cdot\mathbb E_{S_1,A_1,S_2,A_2,\cdots,S_n,A_n}\left[\boldsymbol g(S_n,A_n;\boldsymbol\theta)\right]\end{aligned}\tag{2.2}$$

Proof: By Eq. (2.1),
$$\begin{aligned}\nabla_{\boldsymbol\theta}V_\pi(s_t)&=\mathbb E_{A_t\sim\pi(\cdot\mid s_t;\boldsymbol\theta)}\left[\frac{\partial\ln\pi(A_t\mid s_t;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(s_t,A_t)+\gamma\cdot\mathbb E_{S_{t+1}\sim p(\cdot\mid s_t,A_t)}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+1})\right]\right]\\&=\mathbb E_{A_t\sim\pi(\cdot\mid s_t;\boldsymbol\theta)}\left[\boldsymbol g(s_t,A_t;\boldsymbol\theta)+\gamma\cdot\mathbb E_{S_{t+1}}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+1})\mid A_t,S_t=s_t\right]\right]\\&=\mathbb E_{A_t}\left[\boldsymbol g(s_t,A_t;\boldsymbol\theta)\mid S_t=s_t\right]+\gamma\,\mathbb E_{A_t}\left[\mathbb E_{S_{t+1}}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+1})\mid A_t,S_t=s_t\right]\mid S_t=s_t\right]\\&=\mathbb E_{A_t}\left[\boldsymbol g(s_t,A_t;\boldsymbol\theta)\mid S_t=s_t\right]+\gamma\,\mathbb E_{A_t,S_{t+1}}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+1})\mid S_t=s_t\right]\end{aligned}$$
Applying the same identity one step later gives
$$\nabla_{\boldsymbol\theta}V_\pi(S_{t+1})=\mathbb E_{A_{t+1}}\left[\boldsymbol g(S_{t+1},A_{t+1};\boldsymbol\theta)\mid S_{t+1}\right]+\gamma\,\mathbb E_{A_{t+1},S_{t+2}}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+2})\mid S_{t+1}\right],$$
and substituting this into the previous equation (the Markov property lets us collapse the nested conditional expectations into expectations conditioned on $S_t=s_t$ alone) yields
$$\begin{aligned}\nabla_{\boldsymbol\theta}V_\pi(s_t)=&\ \mathbb E_{A_t}\left[\boldsymbol g(s_t,A_t;\boldsymbol\theta)\mid S_t=s_t\right]+\gamma\,\mathbb E_{A_t,S_{t+1},A_{t+1}}\left[\boldsymbol g(S_{t+1},A_{t+1};\boldsymbol\theta)\mid S_t=s_t\right]\\&+\gamma^2\,\mathbb E_{A_t,S_{t+1},A_{t+1},S_{t+2}}\left[\nabla_{\boldsymbol\theta}V_\pi(S_{t+2})\mid S_t=s_t\right]\end{aligned}$$
Unrolling this recursion repeatedly, we finally obtain
$$\begin{aligned}\frac{\partial V_\pi(S_1)}{\partial\boldsymbol\theta}=&\ \mathbb E_{A_1}\left[\boldsymbol g(S_1,A_1;\boldsymbol\theta)\mid S_1\right]\\&+\gamma\cdot\mathbb E_{A_1,S_2,A_2}\left[\boldsymbol g(S_2,A_2;\boldsymbol\theta)\mid S_1\right]\\&+\gamma^2\cdot\mathbb E_{A_1,S_2,A_2,S_3,A_3}\left[\boldsymbol g(S_3,A_3;\boldsymbol\theta)\mid S_1\right]\\&+\cdots\\&+\gamma^{n-1}\cdot\mathbb E_{A_1,S_2,A_2,\cdots,S_n,A_n}\left[\boldsymbol g(S_n,A_n;\boldsymbol\theta)\mid S_1\right]\\&+\gamma^n\cdot\mathbb E_{A_1,S_2,A_2,\cdots,S_n,A_n,S_{n+1}}\Big[\underbrace{\frac{\partial V_\pi(S_{n+1})}{\partial\boldsymbol\theta}}_{=\,0}\;\Big|\;S_1\Big]\end{aligned}$$
The last term is zero: the episode ends at step $n$, so there are no rewards after time $n+1$, and both the return and the value at time $n+1$ are zero. Finally, by the definition of $J(\boldsymbol\theta)$,
$$\frac{\partial J(\boldsymbol\theta)}{\partial\boldsymbol\theta}=\mathbb E_{S_1}\left[\frac{\partial V_\pi(S_1)}{\partial\boldsymbol\theta}\right]$$
and taking this expectation of the expansion above gives Eq. (2.2).
This completes the proof.
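Equation (2.2) suggests a Monte-Carlo estimator: run one episode, and weight each observed $\boldsymbol g(s_t,a_t;\boldsymbol\theta)$ term by $\gamma^{t-1}$, with $Q_\pi(s_t,a_t)$ replaced by the observed return $u_t$ — that substitution is the REINFORCE idea, not part of the derivation above. A sketch, where `grad_log_pi` is a hypothetical callback supplied by whatever model represents $\pi$:

```python
def policy_gradient_estimate(episode, grad_log_pi, gamma=0.99):
    """One-episode Monte-Carlo estimate in the spirit of Eq. (2.2).

    episode     : list of (state, action, reward) tuples
    grad_log_pi : hypothetical function (s, a) -> list, the gradient
                  of ln pi(a | s; theta) w.r.t. the parameters
    """
    n = len(episode)
    # Discounted returns u_t = sum_{k >= t} gamma^(k-t) * r_k, backwards.
    returns, u = [0.0] * n, 0.0
    for t in reversed(range(n)):
        u = episode[t][2] + gamma * u
        returns[t] = u
    # Accumulate gamma^t * u_t * grad ln pi(a_t | s_t).  (t is 0-based
    # here, matching the gamma^{t-1} weight for 1-based t in Eq. (2.2).)
    dim = len(grad_log_pi(episode[0][0], episode[0][1]))
    g_hat = [0.0] * dim
    for t, (s, a, r) in enumerate(episode):
        glp = grad_log_pi(s, a)
        for i in range(dim):
            g_hat[i] += (gamma ** t) * returns[t] * glp[i]
    return g_hat

# Toy usage: 1-D parameter, grad ln pi == [1.0] everywhere (made up).
episode = [("s0", "a0", 1.0), ("s1", "a1", 1.0)]
g_hat = policy_gradient_estimate(episode, lambda s, a: [1.0], gamma=0.5)
```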

Stationary distribution: a rigorous proof of the policy gradient theorem needs the stationary distribution of a Markov chain. Suppose the state $S'$ is generated as $S\rightarrow A\rightarrow S'$. Recall that the state-transition function $p(S'\mid S,A)$ is a probability mass function. Let $f(S)$ be the probability mass function of the state $S$. The marginal distribution $f(S')$ of $S'$ is then
$$\begin{aligned}f(S')&=\mathbb E_{S,A}\left[p(S'\mid S,A)\right]\\&=\mathbb E_S\left[\mathbb E_A\left[p(S'\mid S,A)\mid S\right]\right]\\&=\mathbb E_S\Big[\sum_{a\in\mathcal A}p(S'\mid S,a)\cdot\pi(a\mid S)\Big]\\&=\sum_{s\in\mathcal S}\sum_{a\in\mathcal A}p(S'\mid s,a)\cdot\pi(a\mid s)\cdot f(s)\end{aligned}$$
If $f(S')$ and $f(S)$ are the same probability mass function, i.e. $f(S)=f(S')$, the Markov chain has reached its steady state, and $f(S)$ is the stationary probability mass function.
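Numerically, the stationary distribution of the chain induced by $\pi$, with transition matrix $P_\pi(s'\mid s)=\sum_a p(s'\mid s,a)\,\pi(a\mid s)$, can be approximated by iterating $f\leftarrow fP_\pi$ until it stops changing. A sketch on a made-up two-state chain:

```python
def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    # P[s][s2] = probability of moving from state s to state s2 under pi.
    n = len(P)
    f = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        # One application of f <- f P (row vector times matrix).
        f_next = [sum(f[s] * P[s][s2] for s in range(n)) for s2 in range(n)]
        if max(abs(a - b) for a, b in zip(f, f_next)) < tol:
            return f_next
        f = f_next
    return f

# Hypothetical 2-state chain; its stationary distribution is (2/3, 1/3).
P = [[0.9, 0.1], [0.2, 0.8]]
f = stationary_distribution(P)
```

For this chain the iteration settles at $f=(2/3,\,1/3)$, which indeed satisfies $f=fP_\pi$.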

Theorem: Let $f(S)$ be the probability mass (or density) function of the Markov chain at steady state. Then for any function $G(S')$,
$$\mathbb E_{S\sim f(\cdot)}\Big[\mathbb E_{A\sim\pi(\cdot\mid S;\boldsymbol\theta)}\big[\mathbb E_{S'\sim p(\cdot\mid S,A)}\left[G(S')\right]\big]\Big]=\mathbb E_{S'\sim f(\cdot)}\left[G(S')\right]\tag{2.3}$$

Proof:
$$\begin{aligned}\mathbb E_{S\sim f(\cdot)}\Big[\mathbb E_{A\sim\pi(\cdot\mid S;\boldsymbol\theta)}\big[\mathbb E_{S'\sim p(\cdot\mid S,A)}\left[G(S')\right]\big]\Big]&=\mathbb E_{S\sim f(\cdot)}\big[\mathbb E_A\left[\mathbb E_{S'}\left[G(S')\mid S,A\right]\mid S\right]\big]\\&=\mathbb E_{S\sim f(\cdot)}\big[\mathbb E_{A,S'}\left[G(S')\mid S\right]\big]\\&=\mathbb E_{S,A,S'}\left[G(S')\right]\\&=\mathbb E_{S'}\left[G(S')\right]\end{aligned}$$
Moreover, since $S$ follows the stationary distribution $f(\cdot)$, so does $S'$, and therefore $\mathbb E_{S'}\left[G(S')\right]=\mathbb E_{S'\sim f(\cdot)}\left[G(S')\right]$.

Theorem (policy gradient theorem): Let the objective be $J(\boldsymbol\theta)=\mathbb E_{S\sim f(\cdot)}\left[V_\pi(S)\right]$, where $f(S)$ is the probability mass (or density) function of the stationary distribution of the Markov chain. Then
$$\frac{\partial J(\boldsymbol\theta)}{\partial\boldsymbol\theta}=\left(1+\gamma+\gamma^2+\cdots+\gamma^{n-1}\right)\cdot\mathbb E_{S\sim f(\cdot)}\left[\mathbb E_{A\sim\pi(\cdot\mid S;\boldsymbol\theta)}\left[\frac{\partial\ln\pi(A\mid S;\boldsymbol\theta)}{\partial\boldsymbol\theta}\cdot Q_\pi(S,A)\right]\right]$$

Proof: Suppose the initial state $S_1$ follows the stationary distribution of the Markov chain, with probability mass function $f(S_1)$. For every $t=1,\cdots,n$, the action $A_t$ is sampled from the policy network:
$$A_t\sim\pi(\cdot\mid S_t;\boldsymbol\theta)$$
For any function $G$, repeatedly applying Eq. (2.3) gives
$$\begin{aligned}\mathbb E_{S_1,A_1,\ldots,A_{t-1},S_t}\left[G(S_t)\right]&=\mathbb E_{S_1\sim f}\big\{\mathbb E_{A_1\sim\pi,\,S_2\sim p}\{\mathbb E_{A_2,S_3,A_3,\cdots,A_{t-1},S_t}\left[G(S_t)\right]\}\big\}\\&=\mathbb E_{S_2\sim f}\big\{\mathbb E_{A_2,S_3,A_3,\cdots,A_{t-1},S_t}\left[G(S_t)\right]\big\}\\&=\mathbb E_{S_2\sim f}\big\{\mathbb E_{A_2\sim\pi,\,S_3\sim p}\{\mathbb E_{A_3,S_4,\cdots,A_{t-1},S_t}\left[G(S_t)\right]\}\big\}\\&=\mathbb E_{S_3\sim f}\big\{\mathbb E_{A_3,S_4,\cdots,A_{t-1},S_t}\left[G(S_t)\right]\big\}\\&\ \ \vdots\\&=\mathbb E_{S_{t-1}\sim f}\big\{\mathbb E_{A_{t-1}\sim\pi,\,S_t\sim p}\{G(S_t)\}\big\}\\&=\mathbb E_{S_t\sim f}\left\{G(S_t)\right\}.\end{aligned}$$
Recall $\boldsymbol g(s,a;\boldsymbol\theta)\triangleq Q_\pi(s,a)\cdot\frac{\partial\ln\pi(a\mid s;\boldsymbol\theta)}{\partial\boldsymbol\theta}$ and suppose the episode ends after step $n$. Combining Eq. (2.2) with the identity above:
$$\begin{aligned}\frac{\partial J(\boldsymbol\theta)}{\partial\boldsymbol\theta}=&\ \mathbb E_{S_1,A_1}\left[\boldsymbol g(S_1,A_1;\boldsymbol\theta)\right]\\&+\gamma\cdot\mathbb E_{S_1,A_1,S_2,A_2}\left[\boldsymbol g(S_2,A_2;\boldsymbol\theta)\right]\\&+\gamma^2\cdot\mathbb E_{S_1,A_1,S_2,A_2,S_3,A_3}\left[\boldsymbol g(S_3,A_3;\boldsymbol\theta)\right]\\&+\cdots\\&+\gamma^{n-1}\cdot\mathbb E_{S_1,A_1,\cdots,S_n,A_n}\left[\boldsymbol g(S_n,A_n;\boldsymbol\theta)\right]\\=&\ \mathbb E_{S_1\sim f(\cdot)}\big\{\mathbb E_{A_1\sim\pi(\cdot\mid S_1;\boldsymbol\theta)}\left[\boldsymbol g(S_1,A_1;\boldsymbol\theta)\right]\big\}\\&+\gamma\cdot\mathbb E_{S_2\sim f(\cdot)}\big\{\mathbb E_{A_2\sim\pi(\cdot\mid S_2;\boldsymbol\theta)}\left[\boldsymbol g(S_2,A_2;\boldsymbol\theta)\right]\big\}\\&+\gamma^2\cdot\mathbb E_{S_3\sim f(\cdot)}\big\{\mathbb E_{A_3\sim\pi(\cdot\mid S_3;\boldsymbol\theta)}\left[\boldsymbol g(S_3,A_3;\boldsymbol\theta)\right]\big\}\\&+\cdots\\&+\gamma^{n-1}\cdot\mathbb E_{S_n\sim f(\cdot)}\big\{\mathbb E_{A_n\sim\pi(\cdot\mid S_n;\boldsymbol\theta)}\left[\boldsymbol g(S_n,A_n;\boldsymbol\theta)\right]\big\}\\=&\ \left(1+\gamma+\gamma^2+\cdots+\gamma^{n-1}\right)\cdot\mathbb E_{S\sim f(\cdot)}\big\{\mathbb E_{A\sim\pi(\cdot\mid S;\boldsymbol\theta)}\left[\boldsymbol g(S,A;\boldsymbol\theta)\right]\big\}.\end{aligned}$$

This completes the proof.
