• The REINFORCE algorithm and the cross-entropy RL algorithm in reinforcement learning


    Note:

    This post does not cover the REINFORCE algorithm itself; it covers the cross-entropy RL algorithm. For the REINFORCE algorithm, see:

    https://www.cnblogs.com/devilmaycry812839668/p/15889282.html

    ==========================================

    Reinforcement learning algorithms can be classified in several ways; one common split is:

    • Value-based. Representative basic algorithms of this type include Q-learning and Sarsa.
    • Policy-gradient-based. Representative basic algorithms of this type include REINFORCE and the cross-entropy RL algorithm.

    This post focuses on the cross-entropy RL algorithm. Unlike REINFORCE, cross-entropy RL does not use the reward values in its loss function. Instead, after collecting a fixed number of episodes from the environment in each iteration, it uses the episode rewards only to select a certain proportion of those episodes, and then computes a cross-entropy loss between the actions taken in the selected episodes and the probabilities the policy assigns to those actions. For example, suppose at some step of a selected episode the available actions are a0, a1, a2, a3, the agent chose a2, and the policy's probability of choosing a2 in that state is p2; the cross-entropy term for that step is -(0*log p0 + 0*log p1 + 1*log p2 + 0*log p3) = -log p2. Episodes are selected by a percentile of the final episode reward, e.g. keeping the best 30% of episodes (in the code below the percentile is set to 70, which keeps the best 30% of the data).
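
    As a quick check of that formula: in PyTorch, nn.CrossEntropyLoss applied to the raw action logits with the chosen action index as the target computes exactly -log p2. A minimal standalone illustration (the logit values below are made up, not part of the original post):

    import torch
    import torch.nn as nn

    # Four action logits for a single step; the agent chose action a2 (index 2).
    logits = torch.tensor([[0.5, -1.0, 2.0, 0.1]])
    target = torch.tensor([2])

    # CrossEntropyLoss applies log-softmax internally, so this equals -log p2.
    loss = nn.CrossEntropyLoss()(logits, target)
    manual = -torch.log_softmax(logits, dim=1)[0, 2]
    print(loss.item(), manual.item())  # both print the same value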

    Note that cross-entropy RL is a very basic RL algorithm with many shortcomings, and it is rarely used in practice today; its main value is educational. One refinement of cross-entropy RL is to keep the episodes that performed well in the past and train on them again together with newly collected data, an operation usually called elite retention (a sketch follows this paragraph).
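
    The post gives no code for elite retention; the sketch below is one hypothetical way to bolt it onto the training loop shown later, reusing the Episode namedtuple and filter_batch from the code below. The retain_elites helper and its keep parameter are assumptions for illustration only.

    # Hypothetical elite-retention helper (not part of the original code):
    # carry the best episodes seen so far into the next training batch.
    def retain_elites(batch, elites, keep=4):
        merged = elites + batch
        merged.sort(key=lambda e: e.reward, reverse=True)  # best episodes first
        return merged, merged[:keep]

    # Assumed usage inside the training loop:
    # elites = []
    # for iter_no, batch in enumerate(iterate_batches(env, net, BATCH_SIZE)):
    #     batch, elites = retain_elites(batch, elites)
    #     obs_v, acts_v, reward_b, reward_m = filter_batch(batch, PERCENTILE)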

    Here is an implementation of the cross-entropy RL algorithm for the CartPole environment (PyTorch):

    import gym
    from collections import namedtuple
    import numpy as np
    from tensorboardX import SummaryWriter
    
    import torch
    import torch.nn as nn
    import torch.optim as optim
    
    
    HIDDEN_SIZE = 128
    BATCH_SIZE = 16
    PERCENTILE = 70
    
    
    class Net(nn.Module):
        def __init__(self, obs_size, hidden_size, n_actions):
            super(Net, self).__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_size, hidden_size),
                nn.ReLU(),
                nn.Linear(hidden_size, n_actions)
            )
    
        def forward(self, x):
            return self.net(x)
    
    
    Episode = namedtuple('Episode', field_names=['reward', 'steps'])
    EpisodeStep = namedtuple('EpisodeStep', field_names=['observation', 'action'])
    
    
    def iterate_batches(env, net, batch_size):
        # Play full episodes with the current policy and yield them in batches
        # of batch_size episodes.
        batch = []
        episode_reward = 0.0
        episode_steps = []
        obs = env.reset()
        sm = nn.Softmax(dim=1)  # converts the network's logits to action probabilities
        while True:
            obs_v = torch.FloatTensor([obs])
            act_probs_v = sm(net(obs_v))
            act_probs = act_probs_v.data.numpy()[0]
            action = np.random.choice(len(act_probs), p=act_probs)  # sample an action from the policy
            next_obs, reward, is_done, _ = env.step(action)
            episode_reward += reward
            step = EpisodeStep(observation=obs, action=action)
            episode_steps.append(step)
            if is_done:
                e = Episode(reward=episode_reward, steps=episode_steps)
                batch.append(e)
                episode_reward = 0.0
                episode_steps = []
                next_obs = env.reset()
                if len(batch) == batch_size:
                    yield batch
                    batch = []
            obs = next_obs
    
    
    def filter_batch(batch, percentile):
        # Keep only the episodes whose total reward is at or above the given
        # reward percentile; return their observations and actions as tensors.
        rewards = list(map(lambda s: s.reward, batch))
        reward_bound = np.percentile(rewards, percentile)
        reward_mean = float(np.mean(rewards))
    
        train_obs = []
        train_act = []
        for reward, steps in batch:
            if reward < reward_bound:
                continue
            train_obs.extend(map(lambda step: step.observation, steps))
            train_act.extend(map(lambda step: step.action, steps))
    
        train_obs_v = torch.FloatTensor(train_obs)
        train_act_v = torch.LongTensor(train_act)
        return train_obs_v, train_act_v, reward_bound, reward_mean
    
    
    if __name__ == "__main__":
        env = gym.make("CartPole-v0")
        # env = gym.wrappers.Monitor(env, directory="mon", force=True)
        obs_size = env.observation_space.shape[0]
        n_actions = env.action_space.n
    
        net = Net(obs_size, HIDDEN_SIZE, n_actions)
        objective = nn.CrossEntropyLoss()  # takes raw logits; applies log-softmax internally
        optimizer = optim.Adam(params=net.parameters(), lr=0.01)
        writer = SummaryWriter(comment="-cartpole")
    
        for iter_no, batch in enumerate(iterate_batches(
                env, net, BATCH_SIZE)):
            obs_v, acts_v, reward_b, reward_m = \
                filter_batch(batch, PERCENTILE)
            optimizer.zero_grad()
            action_scores_v = net(obs_v)
            loss_v = objective(action_scores_v, acts_v)
            loss_v.backward()
            optimizer.step()
            print("%d: loss=%.3f, reward_mean=%.1f, rw_bound=%.1f" % (
                iter_no, loss_v.item(), reward_m, reward_b))
            writer.add_scalar("loss", loss_v.item(), iter_no)
            writer.add_scalar("reward_bound", reward_b, iter_no)
            writer.add_scalar("reward_mean", reward_m, iter_no)
            if reward_m > 199:  # CartPole-v0 episodes are capped at 200 steps
                print("Solved!")
                break
        writer.close()

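    The script above targets the classic Gym API (gym < 0.26), where env.reset() returns just the observation and env.step() returns four values. Newer Gym/Gymnasium releases changed both calls, so the interaction loop would need a small adaptation, roughly as in this sketch (assumes gym >= 0.26 or gymnasium; not part of the original post):

    import gym

    # In gym >= 0.26 / gymnasium, reset() returns (obs, info) and
    # step() returns (obs, reward, terminated, truncated, info).
    env = gym.make("CartPole-v0")
    obs, _ = env.reset()
    next_obs, reward, terminated, truncated, _ = env.step(env.action_space.sample())
    is_done = terminated or truncated  # replaces the old single is_done flag
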
    ============================================

    REINFORCE and the cross-entropy algorithm are fairly basic reinforcement-learning algorithms and are often cited as baselines. For the REINFORCE algorithm, see:

    https://www.cnblogs.com/devilmaycry812839668/p/15889282.html

    ============================================
