
I've been trying to solve Atari Pong with a DQN, using OpenAI Gym for the environment.

I've made a custom ObservationWrapper, but I'm unable to figure out what's wrong with the reset() method I've overridden.

Error:

Traceback (most recent call last):
  File "C:\Users\berna\Documents\Pytorch Experiment\Torching the Dead Grass\DeepQLearning\training.py", line 123, in <module>
    agent = Agent(env, buffer)
  File "C:\Users\berna\Documents\Pytorch Experiment\Torching the Dead Grass\DeepQLearning\training.py", line 56, in __init__
    self._reset()
  File "C:\Users\berna\Documents\Pytorch Experiment\Torching the Dead Grass\DeepQLearning\training.py", line 59, in _reset
    self.state = env.reset()
  File "C:\Users\berna\AppData\Local\Programs\Python\Python310\lib\site-packages\gym\core.py", line 379, in reset
    obs, info = self.env.reset(**kwargs)
  File "C:\Users\berna\Documents\Pytorch Experiment\Torching the Dead Grass\DeepQLearning\wrappers.py", line 106, in reset
    return self.observation(self.env.reset())
  File "C:\Users\berna\AppData\Local\Programs\Python\Python310\lib\site-packages\gym\core.py", line 379, in reset
    obs, info = self.env.reset(**kwargs)
  File "C:\Users\berna\AppData\Local\Programs\Python\Python310\lib\site-packages\gym\core.py", line 379, in reset
    obs, info = self.env.reset(**kwargs)
ValueError: too many values to unpack (expected 2)

Process finished with exit code 1

and the code:

Agent:

class Agent:
    def __init__(self, env, exp_buffer):
        self.env = env
        self.exp_buffer = exp_buffer
        self._reset()

    def _reset(self):
        self.state = env.reset()
        self.total_reward = 0.0

Wrapper:

class BufferWrapper(gym.ObservationWrapper):
    def __init__(self, env, n_steps, dtype=np.float32):
        super(BufferWrapper, self).__init__(env)
        self.dtype = dtype
        old_space = env.observation_space
        self.observation_space = gym.spaces.Box(old_space.low.repeat(n_steps, axis=0),
                                                old_space.high.repeat(n_steps, axis=0), dtype=dtype)

    def reset(self):
        self.buffer = np.zeros_like(self.observation_space.low, dtype=self.dtype)
        return self.observation(self.env.reset())

    def observation(self, observation):
        self.buffer[:-1] = self.buffer[1:]
        self.buffer[-1] = observation
        return self.buffer

Can someone help me understand why I'm receiving this error?


1 Answer


You have to make two changes in your code.

  1. In the reset method you have to return not just the observation, as you did, but also the info dict: since gym 0.26 (and in Gymnasium), reset() returns the tuple (observation, info). See https://gymnasium.farama.org/api/env/#gymnasium.Env.reset

  2. The reset method should also accept the seed and options arguments. Including **kwargs in the signature covers both.

Your code should be:

import gym
import numpy as np


class BufferWrapper(gym.ObservationWrapper):
    def __init__(self, env, n_steps, dtype=np.float32):
        super(BufferWrapper, self).__init__(env)
        self.dtype = dtype
        old_space = env.observation_space
        self.observation_space = gym.spaces.Box(old_space.low.repeat(n_steps, axis=0),
                                                old_space.high.repeat(n_steps, axis=0), dtype=dtype)

    def reset(self, **kwargs):
        self.buffer = np.zeros_like(self.observation_space.low, dtype=self.dtype)
        # The wrapped env now returns (observation, info); unpack both
        # and forward seed/options through **kwargs.
        obs, info = self.env.reset(**kwargs)
        return self.observation(obs), info

    def observation(self, observation):
        self.buffer[:-1] = self.buffer[1:]
        self.buffer[-1] = observation
        return self.buffer
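
Since reset() now returns a tuple, the caller has to unpack it as well. Note also that your Agent._reset references the global env instead of self.env. A minimal fix, assuming the rest of your Agent stays the same:

def _reset(self):
    self.state, _ = self.env.reset()  # keep the observation, discard the info dict
    self.total_reward = 0.0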

Also, note that if you have a wrapper acting on the step method, you have to update it too: step() now returns the terminated and truncated flags instead of the single done flag. A sketch is below.
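
For example, a pass-through wrapper would now look like this (ClipRewardWrapper is just a hypothetical name to illustrate the new signature):

import gym

class ClipRewardWrapper(gym.Wrapper):
    def step(self, action):
        # step() now returns five values: the old done flag is split
        # into terminated (episode ended) and truncated (time limit hit).
        obs, reward, terminated, truncated, info = self.env.step(action)
        reward = max(-1.0, min(1.0, reward))  # clip the reward to [-1, 1]
        return obs, reward, terminated, truncated, info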
