Simplify the API of TemporalWrapper: remove the feature_extractor and combine parameters, as well as reward shaping support. The reason is that, in the OpenAI Gym "philosophy", these functionalities should be delegated to other Gym wrappers, e.g. an ObservationWrapper for combining the features with the automata states.
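For instance, the feature combination can now be done with a plain Gym ObservationWrapper. Below is a minimal sketch; the tuple layout (features, automata_states) of the wrapped observation is an assumption and may differ from the actual observation format:

```python
import gym
import numpy as np


class CombineObservations(gym.ObservationWrapper):
    """Concatenate the environment features with the automata states.

    Assumes (hypothetically) that the wrapped TemporalWrapper yields
    observations of the form (features, automata_states).
    """

    def observation(self, observation):
        features, automata_states = observation
        return np.concatenate([np.asarray(features), np.asarray(automata_states)])
```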
Remove the flloat dependency. Since TemporalGoal now only requires a pythomata.DFA object, it is up to the user to decide how to generate the reward automaton.
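For example, one can build the automaton directly with pythomata, or compile it from a temporal logic formula with an external tool, and then hand it to TemporalGoal. A minimal sketch follows; the TemporalGoal import path and signature are assumptions, so check them against the library:

```python
from pythomata import SimpleDFA

from temprl.wrapper import TemporalGoal  # hypothetical import path

# A two-state DFA over the fluents {"goal", "neutral"} that accepts
# once the "goal" fluent has been observed at least once.
states = {"q0", "q1"}
alphabet = {"goal", "neutral"}
transitions = {
    "q0": {"goal": "q1", "neutral": "q0"},
    "q1": {"goal": "q1", "neutral": "q1"},
}
dfa = SimpleDFA(states, alphabet, "q0", {"q1"}, transitions)

# Hypothetical signature: pass the automaton and the reward given
# when the DFA reaches an accepting state.
temporal_goal = TemporalGoal(automaton=dfa, reward=10.0)
```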
Update dependencies to their latest versions, e.g. pythomata.
The reset() method of the temporal wrapper now first resets the temporal goals, and then makes a step on each of them according to the fluents extracted from the environment's initial state. This is needed because, otherwise, the initial state of the wrapped environment would be ignored.
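Schematically, the new behaviour looks like the following sketch (illustrative only, not the actual implementation; the temp_goals and fluent_extractor attributes are assumptions):

```python
import gym


class TemporalWrapperSketch(gym.Wrapper):
    """Illustrative sketch of the new reset() order (not the real code)."""

    def reset(self, **kwargs):
        # Reset the wrapped environment first to obtain its initial state.
        initial_state = self.env.reset(**kwargs)
        for tg in self.temp_goals:  # hypothetical attribute
            tg.reset()
            # Step each DFA with the fluents holding in the initial state,
            # so that the initial state is not ignored.
            tg.step(self.fluent_extractor(initial_state))  # hypothetical
        return initial_state
```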
Support for terminating conditions coming from the temporal goals has been removed. Again, this is because the only job of the DFAs is to provide rewards according to the history of the episode; any other customization of the underlying environment, or of the wrapper, must be done by using other wrappers.
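If, say, ending the episode when an automaton reaches a failure state is still needed, it can be reintroduced with a separate wrapper. A minimal sketch, assuming a hypothetical is_failed() predicate on the temporal goals:

```python
import gym


class TerminateOnFailure(gym.Wrapper):
    """End the episode as soon as any temporal goal's DFA fails.

    is_failed() is a hypothetical predicate; adapt it to however the
    automaton state is exposed in your version of temprl.
    """

    def step(self, action):
        observation, reward, done, info = self.env.step(action)
        if any(tg.is_failed() for tg in self.env.temp_goals):
            done = True
        return observation, reward, done, info
```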