A Deep Dive into Reinforcement Learning

Table of Contents

  • Introduction
  • 1. Q learning
  • 2. Sarsa
  • 3. Deep Q Network (DQN)
  • 4. Summary

  • Introduction

    Reinforcement learning is a major branch of machine learning. It lets a machine learn how to earn a high score in an environment and achieve strong performance. Behind that performance lies a lot of hard work: constant trial and error, repeated attempts, accumulating experience, and learning from it.

    Reinforcement learning methods can be divided by whether they model the environment. Methods that do not model the environment, simply taking whatever the environment gives, are called model-free; they include Q learning, Sarsa, Policy Gradients, and others. Methods that build an extra model to represent the environment are called model-based. The OpenAI gym library provides many ready-made interactive environments, and since writing your own environment is time-consuming, none of the following covers environment implementation.
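    The model-free loop — observe a state, act, receive the next state and a reward — can be sketched with a toy, hand-written environment. ToyEnv below is a made-up stand-in, not a gym environment, but it follows the same reset()/step() convention:

```python
class ToyEnv:
    """Hypothetical 1-D world: walk right from position 0 to the goal."""
    def __init__(self, size=4):
        self.size = size          # the goal sits at position size - 1
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):      # action: 0 = left, 1 = right
        move = 1 if action == 1 else -1
        self.pos = max(0, min(self.size - 1, self.pos + move))
        done = self.pos == self.size - 1
        reward = 1 if done else 0
        return self.pos, reward, done

env = ToyEnv()
state = env.reset()
total_reward = 0
while True:
    action = 1                               # trivial fixed policy: always move right
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print(total_reward)                          # 1: the agent reached the goal once
```

    A learning agent replaces the fixed policy with one that is updated from the (state, action, reward, next state) transitions this loop produces.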


    1. Q learning

    Q learning is a model-free method. Its core is a Q table, which records the reward value of taking each action (action) in each state (state). As an example (from Morvan Zhou's Python tutorial), the figure below shows a reinforcement-learning setup with 16 states (positions) and 4 available actions (up, down, left, right): the explorer (red square) learns to navigate a maze, where yellow is heaven (reward 1) and black is hell (reward -1).
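    The Q table itself is just a state-by-action grid of values. A minimal sketch using pandas, mirroring how the code below stores it (the state label "state_0" is an illustrative placeholder):

```python
import pandas as pd

actions = [0, 1, 2, 3]                     # e.g. up, down, left, right
q_table = pd.DataFrame(columns=actions, dtype=float)

# states are stored as row labels; an unseen state starts as an all-zero row
row = pd.Series([0.0] * len(actions), index=q_table.columns, name="state_0")
q_table = pd.concat([q_table, row.to_frame().T])

# looking up Q(s, a) is a plain .loc access
print(q_table.loc["state_0", 0])           # 0.0
```

    Learning then amounts to repeatedly nudging these entries toward better estimates.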

    The Q learning workflow is as follows.

    It repeats three steps:

  • Given the current state s and the Q table, choose an action a with an epsilon-greedy policy
  • Given the current state s and the action a, the environment returns the next state s' and the reward r
  • From s, s', a, and the Q table, compute the updated Q table

    Every update uses both a Q target ("Q reality") and a Q estimate. The appealing part of Q learning is that the target for Q(s1, a2) itself contains the maximum estimate over Q(s2): the discounted maximum estimate for the next step, plus the reward just received, is taken as the target for the current step.
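    That update fits on one line. A minimal numeric sketch (alpha, gamma, and the Q values are made-up illustration numbers):

```python
alpha, gamma = 0.1, 0.9             # learning rate and discount factor

# Q values for the current state s and the next state s_ (illustrative)
q_s = {"left": 0.0, "right": 0.5}
q_s_next = {"left": 0.2, "right": 1.0}

a, r = "right", 0                   # action taken and reward received

# Q target = r + gamma * max_a' Q(s', a');  Q estimate = Q(s, a)
q_target = r + gamma * max(q_s_next.values())
q_s[a] += alpha * (q_target - q_s[a])
print(round(q_s[a], 3))             # 0.5 + 0.1 * (0.9 - 0.5) = 0.54
```

    Note that the max over the next state's actions is taken regardless of which action will actually be chosen next — this is what makes Q learning off-policy.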
    The code is as follows:

    import numpy as np
    import pandas as pd
    
    class QLearningTable:
        def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
            self.actions = actions  # a list
            self.lr = learning_rate
            self.gamma = reward_decay
            self.epsilon = e_greedy
            self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)
    
        def choose_action(self, observation):
            self.check_state_exist(observation)
            # action selection
            if np.random.uniform() < self.epsilon:
                # choose best action
                state_action = self.q_table.loc[observation, :]
            # some actions may have the same value, randomly choose one of them
                action = np.random.choice(state_action[state_action == np.max(state_action)].index)
            else:
                # choose random action
                action = np.random.choice(self.actions)
            return action
    
        def learn(self, s, a, r, s_):
            self.check_state_exist(s_)
            q_predict = self.q_table.loc[s, a]
            if s_ != 'terminal':
                q_target = r + self.gamma * self.q_table.loc[s_, :].max()  # next state is not terminal
            else:
                q_target = r  # next state is terminal
            self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update
    
        def check_state_exist(self, state):
            if state not in self.q_table.index:
                # append the new state to the q table
                # (DataFrame.append was removed in pandas 2.0, so use pd.concat)
                new_state = pd.Series(
                    [0]*len(self.actions),
                    index=self.q_table.columns,
                    name=state,
                )
                self.q_table = pd.concat([self.q_table, new_state.to_frame().T])
    
    
    from maze_env import Maze
    from RL_brain import QLearningTable
    
    
    def update():
        for episode in range(100):
            # initial observation
            observation = env.reset()
    
            while True:
                # fresh env
                env.render()
    
                # RL choose action based on observation
                action = RL.choose_action(str(observation))
    
                # RL take action and get next observation and reward
                observation_, reward, done = env.step(action)
    
                # RL learn from this transition
                RL.learn(str(observation), action, reward, str(observation_))
    
                # swap observation
                observation = observation_
    
                # break while loop when end of this episode
                if done:
                    break
    
        # end of game
        print('game over')
        env.destroy()
    
    if __name__ == "__main__":
        env = Maze()
        RL = QLearningTable(actions=list(range(env.n_actions)))
    
        env.after(100, update)
        env.mainloop()
    

    2. Sarsa

    Sarsa is very similar to Q learning; the difference is that Sarsa is a bit more 'timid' and less willing to take risks. Its workflow is as follows.

    As you can see, it differs from Q learning only in the update step. Specifically:

  • In the current state it has already committed to that state's action, and it has also decided the next state s_ and the next action a_ (Q learning has not yet committed to the next action a_)
  • When updating Q(s, a), it uses the Q(s_, a_) of the action chosen by its epsilon-greedy policy (Q learning uses max Q(s_))

    This difference makes Sarsa more timid than Q learning. Q learning always chases max Q, and that max makes it greedy, ignoring every non-maximal outcome. You can think of Q learning as a greedy, bold, daring algorithm that does not mind mistakes or death, whereas Sarsa is a conservative algorithm that weighs every decision and is sensitive to mistakes and death.
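    The two update targets can be contrasted in a few lines (gamma and the Q values below are made-up illustration numbers):

```python
gamma = 0.9

# Q values of the next state s_ (illustrative numbers)
q_s_next = {"left": 0.2, "right": 1.0}
r = 0                     # reward received for the transition

# Sarsa: the next action a_ has already been chosen by the behavior
# (epsilon-greedy) policy -- suppose here it happened to pick "left"
a_next = "left"
sarsa_target = r + gamma * q_s_next[a_next]

# Q learning: always backs up the greedy (maximal) value instead
qlearning_target = r + gamma * max(q_s_next.values())

print(round(sarsa_target, 2), round(qlearning_target, 2))  # 0.18 0.9
```

    Because Sarsa backs up the value of the action it will actually take, exploratory (possibly bad) actions pull its estimates down, which is exactly the "timid" behavior described above.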
    import numpy as np
    import pandas as pd
    
    
    class RL(object):
        def __init__(self, action_space, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
            self.actions = action_space  # a list
            self.lr = learning_rate
            self.gamma = reward_decay
            self.epsilon = e_greedy
    
            self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)
    
        def check_state_exist(self, state):
            if state not in self.q_table.index:
                # append the new state to the q table
                # (DataFrame.append was removed in pandas 2.0, so use pd.concat)
                new_state = pd.Series(
                    [0]*len(self.actions),
                    index=self.q_table.columns,
                    name=state,
                )
                self.q_table = pd.concat([self.q_table, new_state.to_frame().T])
    
        def choose_action(self, observation):
            self.check_state_exist(observation)
            # action selection
            if np.random.rand() < self.epsilon:
                # choose best action
                state_action = self.q_table.loc[observation, :]
            # some actions may have the same value, randomly choose one of them
                action = np.random.choice(state_action[state_action == np.max(state_action)].index)
            else:
                # choose random action
                action = np.random.choice(self.actions)
            return action
    
        def learn(self, *args):
            pass
    
    
    # off-policy
    class QLearningTable(RL):
        def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
            super(QLearningTable, self).__init__(actions, learning_rate, reward_decay, e_greedy)
    
        def learn(self, s, a, r, s_):
            self.check_state_exist(s_)
            q_predict = self.q_table.loc[s, a]
            if s_ != 'terminal':
                q_target = r + self.gamma * self.q_table.loc[s_, :].max()  # next state is not terminal
            else:
                q_target = r  # next state is terminal
            self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update
    
    
    # on-policy
    class SarsaTable(RL):
    
        def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
            super(SarsaTable, self).__init__(actions, learning_rate, reward_decay, e_greedy)
    
        def learn(self, s, a, r, s_, a_):
            self.check_state_exist(s_)
            q_predict = self.q_table.loc[s, a]
            if s_ != 'terminal':
                q_target = r + self.gamma * self.q_table.loc[s_, a_]  # next state is not terminal
            else:
                q_target = r  # next state is terminal
            self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update
    
    
    from maze_env import Maze
    from RL_brain import SarsaTable
    
    def update():
        for episode in range(100):
            # initialize the environment
            observation = env.reset()
    
            # Sarsa chooses an action based on the state observation
            action = RL.choose_action(str(observation))
    
            while True:
                # refresh the environment
                env.render()
    
                # take the action in the environment; get the next state_ (observation_), reward, and done flag
                observation_, reward, done = env.step(action)
    
                # choose the next action_ based on the next state (observation_)
                action_ = RL.choose_action(str(observation_))
    
                # learn from (s, a, r, s_, a_) and update the Q table ==> Sarsa
                RL.learn(str(observation), action, reward, str(observation_), action_)
    
                # make the next state (observation) and action the current ones for the next step
                observation = observation_
                action = action_
    
                # break the loop at termination
                if done:
                    break
    
        # end of all episodes
        print('game over')
        env.destroy()
    
    if __name__ == "__main__":
        env = Maze()
        RL = SarsaTable(actions=list(range(env.n_actions)))
    
        env.after(100, update)
        env.mainloop()
    

    3. Deep Q Network (DQN)

    DQN is reinforcement learning combined with a neural network. Ordinary reinforcement learning has to build a Q table, and when there are too many states the Q table consumes enormous memory, so DQN replaces the Q table with a neural network. The network takes a state as input and outputs the Q value of every action. Its parameters are updated with RMSProp using the Q estimate and the Q target: the Q estimate is the network's output, and the Q target equals the reward plus the earlier (frozen) model's Q estimate for the next state. The flow chart is as follows:
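    That target computation can be sketched for a small batch with numpy; the numbers here are illustrative (the same kind of example appears in the docstring inside the code below):

```python
import numpy as np

gamma = 0.9

# illustrative batch: 2 samples, 3 actions
q_eval = np.array([[1., 2., 3.],
                   [4., 5., 6.]])          # eval net output for states s
q_next = np.array([[0., 1., 2.],
                   [3., 4., 5.]])          # target net output for states s_
actions = np.array([0, 2])                 # actions actually taken
rewards = np.array([1., -1.])

# start from q_eval so non-taken actions contribute zero error, then
# overwrite each taken action's entry with r + gamma * max_a' q_next
q_target = q_eval.copy()
batch_index = np.arange(len(actions))
q_target[batch_index, actions] = rewards + gamma * q_next.max(axis=1)

print(q_target)
```

    Only the entries of the actions that were actually taken differ between q_target and q_eval, so only those actions receive a gradient when the squared difference is minimized.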

    The whole algorithm looks complicated at first glance, but it becomes simple once we break it down: it is the Q learning framework with a few additions, namely:

  • a replay memory (for repeated learning)
  • a neural network that computes Q values
  • temporarily frozen q_target parameters (to break correlations)

    Concretely, the replay memory keeps transitions in a continuously updated buffer, and training draws random batches from it. The neural network outputs the Q value of each action for an input state. Two networks with identical structure are used, but the q_target network holds the main network's parameters from many steps earlier; this delay breaks the correlation between them.
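    A replay memory on its own can be sketched in a few lines (the class name and sizes are illustrative; the layout matches the [s, a, r, s_] rows used in the code below):

```python
import numpy as np

class ReplayMemory:
    """Fixed-size ring buffer of (s, a, r, s_) transitions."""
    def __init__(self, capacity, n_features):
        self.capacity = capacity
        # each row holds s (n_features), a, r, s_ (n_features)
        self.memory = np.zeros((capacity, n_features * 2 + 2))
        self.counter = 0

    def store(self, s, a, r, s_):
        index = self.counter % self.capacity   # overwrite the oldest slot
        self.memory[index] = np.hstack((s, [a, r], s_))
        self.counter += 1

    def sample(self, batch_size):
        high = min(self.counter, self.capacity)
        idx = np.random.choice(high, size=batch_size)
        return self.memory[idx]

mem = ReplayMemory(capacity=5, n_features=2)
for t in range(7):                  # 7 > capacity, so the oldest two are overwritten
    mem.store([t, t], 0, 1.0, [t + 1, t + 1])
batch = mem.sample(3)
print(batch.shape)                  # (3, 6)
```

    Sampling uniformly from this buffer decorrelates consecutive transitions, which is one of the two stabilizing tricks listed above.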

    import numpy as np
    import pandas as pd
    import tensorflow as tf  # written for TensorFlow 1.x (use tf.compat.v1 under TF 2.x)
    
    np.random.seed(1)
    tf.set_random_seed(1)
    
    
    # Deep Q Network off-policy
    class DeepQNetwork:
        def __init__(
                self,
                n_actions,
                n_features,
                learning_rate=0.01,
                reward_decay=0.9,
                e_greedy=0.9,
                replace_target_iter=300,
                memory_size=500,
                batch_size=32,
                e_greedy_increment=None,
                output_graph=False,
        ):
            self.n_actions = n_actions
            self.n_features = n_features
            self.lr = learning_rate
            self.gamma = reward_decay
            self.epsilon_max = e_greedy
            self.replace_target_iter = replace_target_iter
            self.memory_size = memory_size
            self.batch_size = batch_size
            self.epsilon_increment = e_greedy_increment
            self.epsilon = 0 if e_greedy_increment is not None else self.epsilon_max
    
            # total learning step
            self.learn_step_counter = 0
    
            # initialize zero memory [s, a, r, s_]
            self.memory = np.zeros((self.memory_size, n_features * 2 + 2))
    
            # consist of [target_net, evaluate_net]
            self._build_net()
            t_params = tf.get_collection('target_net_params')
            e_params = tf.get_collection('eval_net_params')
            self.replace_target_op = [tf.assign(t, e) for t, e in zip(t_params, e_params)]
    
            self.sess = tf.Session()
    
            if output_graph:
                # $ tensorboard --logdir=logs
                # tf.train.SummaryWriter soon be deprecated, use following
                tf.summary.FileWriter("logs/", self.sess.graph)
    
            self.sess.run(tf.global_variables_initializer())
            self.cost_his = []
    
        def _build_net(self):
            # ------------------ build evaluate_net ------------------
            self.s = tf.placeholder(tf.float32, [None, self.n_features], name='s')  # input
            self.q_target = tf.placeholder(tf.float32, [None, self.n_actions], name='Q_target')  # for calculating loss
            with tf.variable_scope('eval_net'):
                # c_names(collections_names) are the collections to store variables
                c_names, n_l1, w_initializer, b_initializer = \
                    ['eval_net_params', tf.GraphKeys.GLOBAL_VARIABLES], 10, \
                    tf.random_normal_initializer(0., 0.3), tf.constant_initializer(0.1)  # config of layers
    
                # first layer. collections is used later when assign to target net
                with tf.variable_scope('l1'):
                    w1 = tf.get_variable('w1', [self.n_features, n_l1], initializer=w_initializer, collections=c_names)
                    b1 = tf.get_variable('b1', [1, n_l1], initializer=b_initializer, collections=c_names)
                    l1 = tf.nn.relu(tf.matmul(self.s, w1) + b1)
    
                # second layer. collections is used later when assign to target net
                with tf.variable_scope('l2'):
                    w2 = tf.get_variable('w2', [n_l1, self.n_actions], initializer=w_initializer, collections=c_names)
                    b2 = tf.get_variable('b2', [1, self.n_actions], initializer=b_initializer, collections=c_names)
                    self.q_eval = tf.matmul(l1, w2) + b2
    
            with tf.variable_scope('loss'):
                self.loss = tf.reduce_mean(tf.squared_difference(self.q_target, self.q_eval))
            with tf.variable_scope('train'):
                self._train_op = tf.train.RMSPropOptimizer(self.lr).minimize(self.loss)
    
            # ------------------ build target_net ------------------
            self.s_ = tf.placeholder(tf.float32, [None, self.n_features], name='s_')    # input
            with tf.variable_scope('target_net'):
                # c_names(collections_names) are the collections to store variables
                c_names = ['target_net_params', tf.GraphKeys.GLOBAL_VARIABLES]
    
                # first layer. collections is used later when assign to target net
                with tf.variable_scope('l1'):
                    w1 = tf.get_variable('w1', [self.n_features, n_l1], initializer=w_initializer, collections=c_names)
                    b1 = tf.get_variable('b1', [1, n_l1], initializer=b_initializer, collections=c_names)
                    l1 = tf.nn.relu(tf.matmul(self.s_, w1) + b1)
    
                # second layer. collections is used later when assign to target net
                with tf.variable_scope('l2'):
                    w2 = tf.get_variable('w2', [n_l1, self.n_actions], initializer=w_initializer, collections=c_names)
                    b2 = tf.get_variable('b2', [1, self.n_actions], initializer=b_initializer, collections=c_names)
                    self.q_next = tf.matmul(l1, w2) + b2
    
        def store_transition(self, s, a, r, s_):
            if not hasattr(self, 'memory_counter'):
                self.memory_counter = 0
    
            transition = np.hstack((s, [a, r], s_))
    
            # replace the old memory with new memory
            index = self.memory_counter % self.memory_size
            self.memory[index, :] = transition
    
            self.memory_counter += 1
    
        def choose_action(self, observation):
            # to have batch dimension when feed into tf placeholder
            observation = observation[np.newaxis, :]
    
            if np.random.uniform() < self.epsilon:
                # forward feed the observation and get q value for every actions
                actions_value = self.sess.run(self.q_eval, feed_dict={self.s: observation})
                action = np.argmax(actions_value)
            else:
                action = np.random.randint(0, self.n_actions)
            return action
    
        def learn(self):
            # check to replace target parameters
            if self.learn_step_counter % self.replace_target_iter == 0:
                self.sess.run(self.replace_target_op)
                print('\ntarget_params_replaced\n')
    
            # sample batch memory from all memory
            if self.memory_counter > self.memory_size:
                sample_index = np.random.choice(self.memory_size, size=self.batch_size)
            else:
                sample_index = np.random.choice(self.memory_counter, size=self.batch_size)
            batch_memory = self.memory[sample_index, :]
    
            q_next, q_eval = self.sess.run(
                [self.q_next, self.q_eval],
                feed_dict={
                    self.s_: batch_memory[:, -self.n_features:],  # fixed params
                    self.s: batch_memory[:, :self.n_features],  # newest params
                })
    
            # change q_target w.r.t q_eval's action
            q_target = q_eval.copy()
    
            batch_index = np.arange(self.batch_size, dtype=np.int32)
            eval_act_index = batch_memory[:, self.n_features].astype(int)
            reward = batch_memory[:, self.n_features + 1]
    
            q_target[batch_index, eval_act_index] = reward + self.gamma * np.max(q_next, axis=1)
    
            """
            For example in this batch I have 2 samples and 3 actions:
            q_eval =
            [[1, 2, 3],
             [4, 5, 6]]
            q_target = q_eval =
            [[1, 2, 3],
             [4, 5, 6]]
            Then change q_target with the real q_target value w.r.t the q_eval's action.
            For example in:
                sample 0, I took action 0, and the max q_target value is -1;
                sample 1, I took action 2, and the max q_target value is -2:
            q_target =
            [[-1, 2, 3],
             [4, 5, -2]]
            So the (q_target - q_eval) becomes:
            [[(-1)-(1), 0, 0],
             [0, 0, (-2)-(6)]]
            We then backpropagate this error w.r.t the corresponding action to network,
        leave the other actions with error=0 because we didn't choose them.
            """
    
            # train eval network
            _, self.cost = self.sess.run([self._train_op, self.loss],
                                         feed_dict={self.s: batch_memory[:, :self.n_features],
                                                    self.q_target: q_target})
            self.cost_his.append(self.cost)
    
            # increasing epsilon
            self.epsilon = self.epsilon + self.epsilon_increment if self.epsilon < self.epsilon_max else self.epsilon_max
            self.learn_step_counter += 1
    
        def plot_cost(self):
            import matplotlib.pyplot as plt
            plt.plot(np.arange(len(self.cost_his)), self.cost_his)
            plt.ylabel('Cost')
            plt.xlabel('training steps')
            plt.show()
    
    
    from maze_env import Maze
    from RL_brain import DeepQNetwork
    
    def run_maze():
        step = 0    # controls when learning begins
        for episode in range(300):
            # initialize the environment
            observation = env.reset()
    
            while True:
                # refresh the environment
                env.render()
    
                # DQN chooses an action based on the observation
                action = RL.choose_action(observation)
    
                # the environment returns the next state, reward, and done flag for the action
                observation_, reward, done = env.step(action)
    
                # DQN stores the transition
                RL.store_transition(observation, action, reward, observation_)
    
                # control when learning starts and how often (accumulate some memories first)
                if (step > 200) and (step % 5 == 0):
                    RL.learn()
    
                # make state_ the state for the next iteration
                observation = observation_
    
                # break the loop at termination
                if done:
                    break
                step += 1   # total step count
    
        # end of game
        print('game over')
        env.destroy()
    
    
    if __name__ == "__main__":
        env = Maze()
        RL = DeepQNetwork(env.n_actions, env.n_features,
                          learning_rate=0.01,
                          reward_decay=0.9,
                          e_greedy=0.9,
                          replace_target_iter=200,  # replace target_net parameters every 200 steps
                          memory_size=2000, # memory capacity
                          # output_graph=True   # whether to write a tensorboard file
                          )
        env.after(100, run_maze)
        env.mainloop()
        RL.plot_cost()  # view the network's cost curve
    

    4. Summary

    Reinforcement learning does not depend on deep learning per se; it is more of a mindset: acting in and interacting with an environment yields reward values, which are used to update the Q table (or a neural network serving the same role). There is no single fixed implementation, only a pattern; the concrete code must be written for the actual application and its interactive environment.
