Importance Sampling (IS) is a commonly used method for estimating the expectation of a function under some probability distribution. This post first gives a brief introduction to IS, and then describes its application to policy evaluation.
Importance Sampling
- Goal: estimate the expectation $\mathbb{E}_{x\sim p}[f(x)]$ of a function $f(x)$ under a probability distribution $p(x)$
- We have data $x_1, x_2, \dots, x_n$ sampled from a different distribution $q(x)$
- Under certain assumptions, these samples yield an unbiased estimate of $\mathbb{E}_{x\sim p}[f(x)]$, since

$$\mathbb{E}_{x\sim p}[f(x)] = \int_x p(x)f(x)\,dx = \int_x q(x)\,\frac{p(x)}{q(x)}\,f(x)\,dx = \mathbb{E}_{x\sim q}\!\left[\frac{p(x)}{q(x)}f(x)\right] \approx \frac{1}{n}\sum_{i=1}^{n}\frac{p(x_i)}{q(x_i)}f(x_i)$$
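As a concrete illustration, here is a minimal numerical sketch of the identity above; the specific choice of $p$, $q$, and $f$ is a hypothetical example, not from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: estimate E_{x~p}[f(x)] for p = N(1, 1),
# using n samples drawn from a different distribution q = N(0, 2).
def f(x):
    return x ** 2

def p_pdf(x):  # target density p(x): Normal(mean=1, std=1)
    return np.exp(-0.5 * (x - 1) ** 2) / np.sqrt(2 * np.pi)

def q_pdf(x):  # sampling density q(x): Normal(mean=0, std=2)
    return np.exp(-0.5 * (x / 2) ** 2) / (2 * np.sqrt(2 * np.pi))

n = 100_000
x = rng.normal(loc=0.0, scale=2.0, size=n)  # x_1, ..., x_n ~ q

# Importance-sampling estimate: (1/n) * sum_i [p(x_i)/q(x_i)] * f(x_i)
weights = p_pdf(x) / q_pdf(x)
estimate = np.mean(weights * f(x))

print(estimate)  # should be close to E_{x~N(1,1)}[x^2] = 1^2 + 1 = 2
```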
Importance Sampling (IS) for Policy Evaluation
Let $h_j$ denote the history of states, actions, and rewards in episode $j$:

$$h_j = \left(s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, \dots, s_{j,L_j}\ (\text{terminal})\right)$$
Then

$$\begin{aligned} p(h_j\mid\pi, s=s_{j,1}) &= p(a_{j,1}\mid s_{j,1})\,p(r_{j,1}\mid s_{j,1},a_{j,1})\,p(s_{j,2}\mid s_{j,1},a_{j,1})\,p(a_{j,2}\mid s_{j,2})\,p(r_{j,2}\mid s_{j,2},a_{j,2})\,p(s_{j,3}\mid s_{j,2},a_{j,2})\cdots \\ &= \prod_{t=1}^{L_j-1} p(a_{j,t}\mid s_{j,t})\,p(r_{j,t}\mid s_{j,t},a_{j,t})\,p(s_{j,t+1}\mid s_{j,t},a_{j,t}) \\ &= \prod_{t=1}^{L_j-1} \pi(a_{j,t}\mid s_{j,t})\,p(r_{j,t}\mid s_{j,t},a_{j,t})\,p(s_{j,t+1}\mid s_{j,t},a_{j,t}) \end{aligned}$$
Now suppose $h_j$ is the history of states, actions, and rewards in episode $j$, where the actions are sampled from a policy $\pi_2$:

$$h_j = \left(s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, \dots, s_{j,L_j}\ (\text{terminal})\right)$$
Then the value of another policy $\pi_1$ can be estimated from these episodes as

$$V^{\pi_1}(s) \approx \frac{1}{n}\sum_{j=1}^{n}\frac{p(h_j\mid\pi_1,s)}{p(h_j\mid\pi_2,s)}\,G(h_j)$$

Because the reward and transition factors in $p(h_j\mid\pi,s)$ do not depend on the policy, they cancel in the ratio, leaving $\frac{p(h_j\mid\pi_1,s)}{p(h_j\mid\pi_2,s)} = \prod_{t=1}^{L_j-1}\frac{\pi_1(a_{j,t}\mid s_{j,t})}{\pi_2(a_{j,t}\mid s_{j,t})}$, so no model of the MDP is needed to compute the weights.
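Below is a minimal sketch of this estimator, assuming episodes are logged as lists of (state, action, reward) triples that all start in the state $s$ of interest, and that both policies are tabular dictionaries mapping (state, action) to probabilities; the function name and data layout are hypothetical:

```python
import numpy as np

def is_value_estimate(episodes, pi1, pi2, gamma=1.0):
    """Ordinary importance-sampling estimate of V^{pi_1}(s) from episodes
    generated by a behavior policy pi_2, all starting in the same state s.

    episodes: list of episodes, each a list of (state, action, reward) triples
    pi1, pi2: dicts mapping (state, action) -> probability of taking the action
    """
    per_episode = []
    for episode in episodes:
        weight = 1.0  # p(h_j | pi_1, s) / p(h_j | pi_2, s)
        ret = 0.0     # G(h_j) = r_1 + gamma * r_2 + gamma^2 * r_3 + ...
        for t, (state, action, reward) in enumerate(episode):
            # Reward and transition factors cancel in the ratio,
            # so only the product of policy ratios remains.
            weight *= pi1[(state, action)] / pi2[(state, action)]
            ret += (gamma ** t) * reward
        per_episode.append(weight * ret)
    # (1/n) * sum_j weight_j * G(h_j)
    return float(np.mean(per_episode))
```

Note that only the two policies are evaluated on the logged state-action pairs; no model of the environment is queried anywhere.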
Importance Sampling (IS) for Policy Evaluation
- Goal: evaluate the value $V^{\pi_1}(s)$ of a policy $\pi_1$, given episodes generated by another policy $\pi_2$
- $s_1, a_1, r_1, s_2, a_2, r_2, \dots$, where the actions are sampled from $\pi_2$
- Have access to the returns $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \gamma^3 r_{t+3} + \dots$ produced by the MDP $M$ under the policy
- Want to estimate $V^{\pi_1}(s) = \mathbb{E}_{\pi_1}[G_t \mid s_t = s]$
- IS = Monte Carlo off-policy estimation from data
- A model-free method
- Does not require the Markov assumption
- Under certain assumptions, an unbiased and consistent estimator of $V^{\pi_1}$
- Can be used to estimate a policy's value while the agent interacts with the environment under a behavior policy other than the one being evaluated
- Can also be used for batch learning
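One way to see the unbiasedness claim (under the coverage assumption that $\pi_2(a\mid s) > 0$ wherever $\pi_1(a\mid s) > 0$) is to take the expectation of a single weighted return under $\pi_2$ and apply the definitions above:

$$\mathbb{E}_{h\sim\pi_2}\!\left[\frac{p(h\mid\pi_1,s)}{p(h\mid\pi_2,s)}\,G(h)\right] = \sum_{h} p(h\mid\pi_2,s)\,\frac{p(h\mid\pi_1,s)}{p(h\mid\pi_2,s)}\,G(h) = \sum_{h} p(h\mid\pi_1,s)\,G(h) = V^{\pi_1}(s)$$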