Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / danijar/dreamerv2 issues and pull requests

#60 - Question about advantage calculation

Issue - State: open - Opened by leeacord 8 months ago

#58 - Are the actions properly fed into the model?

Issue - State: open - Opened by NagisaZj about 1 year ago

#57 - Cannot reproduce Atari Pong scores

Issue - State: open - Opened by mlinda96 over 1 year ago

#56 - How to reproduce DayDreamer's results in A1 simulator?

Issue - State: open - Opened by Sapio-S over 1 year ago

#55 - Outdated dependencies and broken examples

Issue - State: open - Opened by Namnodorel about 2 years ago - 1 comment

#54 - Fix Docker build settings for January 20, 2023

Pull Request - State: open - Opened by ricrowl about 2 years ago

#52 - Reward different on evaluation

Issue - State: closed - Opened by ipsec about 2 years ago - 1 comment

#51 - the Desire of Hyperparameters of Humanoid-Walk

Issue - State: open - Opened by XueruiSu about 2 years ago

#50 - Understanding re-clipping in Truncated Normal distribution

Issue - State: closed - Opened by pvskand about 2 years ago - 1 comment

#49 - How does dreamerv2 perform on feature-based tasks?

Issue - State: closed - Opened by xlnwel over 2 years ago - 4 comments

#48 - Prediction returning the same action from different observations

Issue - State: closed - Opened by ipsec over 2 years ago

#47 - Update Dockerfile to use tensorflow/tensorflow:2.10.0-gpu

Pull Request - State: open - Opened by ikeyasu over 2 years ago

#46 - Minimal evaluation/example using gym observation

Issue - State: closed - Opened by ipsec over 2 years ago - 4 comments

#45 - ValueError: . Tensor must have rank 4. Received rank 3, shape (208, 64, 64)

Issue - State: closed - Opened by ipsec over 2 years ago - 1 comment

#44 - Why share states across random batches for training the world model?

Issue - State: closed - Opened by sai-prasanna over 2 years ago - 1 comment

#43 - Questions about expl.py and updating the batch dataset

Issue - State: closed - Opened by Ashminator over 2 years ago - 2 comments

#42 - Questions on Imagination MDP and imagination horizon H = 15

Issue - State: closed - Opened by GoingMyWay over 2 years ago

#41 - Why stop-grad on actor's input state in imagine() function ?

Issue - State: closed - Opened by tominku over 2 years ago - 1 comment

#40 - replay data memory usage?

Issue - State: closed - Opened by tominku over 2 years ago - 1 comment

#39 - Can't reproduce riverraid's results

Issue - State: closed - Opened by luizapozzobon over 2 years ago - 2 comments

#38 - Update the way env.py receives RAM state

Pull Request - State: open - Opened by amshin98 almost 3 years ago

#37 - Straight-thru gradients vs Gumbel Softmax

Issue - State: closed - Opened by zplizzi almost 3 years ago - 1 comment

#36 - Should policy state be reset after every episode?

Issue - State: closed - Opened by hueds almost 3 years ago - 1 comment

#35 - Batch size = 16?

Issue - State: closed - Opened by mctigger almost 3 years ago - 1 comment

#34 - Plot.py not working properly

Issue - State: closed - Opened by lcdbezerra almost 3 years ago - 1 comment

#33 - What does "openl" do / mean?

Issue - State: closed - Opened by hueds almost 3 years ago - 1 comment

#32 - Fix sum KL distribution across both latent dims.

Pull Request - State: closed - Opened by fvisin almost 3 years ago - 1 comment

#31 - KeyError: 'dmc' while trying to run walker?

Issue - State: closed - Opened by mrmarten almost 3 years ago - 2 comments

#30 - The result for atari enduro in the paper is not reproduced

Issue - State: closed - Opened by jsikyoon almost 3 years ago - 2 comments

#29 - How many environment steps per update?

Issue - State: closed - Opened by mctigger about 3 years ago - 3 comments

#28 - procgen env

Issue - State: closed - Opened by hlsfin about 3 years ago - 1 comment

#27 - Offsets in actor loss calculation

Issue - State: closed - Opened by mctigger over 3 years ago - 1 comment

#26 - How to save and reload trained dreamerv2 models

Issue - State: closed - Opened by Adaickalavan over 3 years ago - 1 comment

#25 - Lambda Target Equation

Issue - State: closed - Opened by lewisboyd over 3 years ago - 4 comments

#24 - How to run dreamerv2 on atari games

Issue - State: closed - Opened by KimiakiShirahama over 3 years ago - 1 comment

#23 - Question about Plan2explore

Issue - State: closed - Opened by TachikakaMin over 3 years ago - 1 comment

#22 - AssertionError and AttributeError dreamerv2 in jupyter-notebook

Issue - State: closed - Opened by balloch over 3 years ago - 1 comment

#21 - Change `eval_envs` to `num_eval_envs`

Issue - State: closed - Opened by alirahkay over 3 years ago - 1 comment

#20 - Does the actor-critic train using only the stochastic state?

Issue - State: closed - Opened by lewisboyd over 3 years ago - 4 comments

#19 - Skipped short episode of length 10.

Issue - State: closed - Opened by robjlyons over 3 years ago - 1 comment

#18 - Discount predictor invalid log_prob targets?

Issue - State: closed - Opened by niklasdbs over 3 years ago - 1 comment

#17 - Questions about atari evaluation protocol

Issue - State: closed - Opened by jmkim0309 over 3 years ago - 1 comment

#16 - Pickle and shape issues

Issue - State: closed - Opened by robjlyons over 3 years ago - 1 comment

#15 - Tuple Actions Space

Issue - State: closed - Opened by robjlyons over 3 years ago - 1 comment

#14 - Intrinsic Rewards

Issue - State: closed - Opened by robjlyons over 3 years ago - 2 comments

#13 - Render episodes

Issue - State: closed - Opened by robjlyons over 3 years ago - 1 comment

#12 - Setting random seed

Issue - State: closed - Opened by izkula over 3 years ago - 1 comment

#11 - Commented version of the code

Issue - State: closed - Opened by Julian-CF over 3 years ago - 1 comment

#10 - improve installation and usage

Pull Request - State: closed - Opened by AnthonyPoschen over 3 years ago - 2 comments

#9 - Input shape incompatible

Issue - State: closed - Opened by RyanRTJJ over 3 years ago - 3 comments

#8 - No Improvement in Pong Scores after 18M+ Steps

Issue - State: closed - Opened by ghost over 3 years ago - 17 comments

#7 - Default setting doesn't seem to be learning

Issue - State: closed - Opened by nickuncaged1201 over 3 years ago - 7 comments

#6 - no GPU error

Issue - State: closed - Opened by penguincute almost 4 years ago - 2 comments

#5 - TypeError: unsupported operand type(s) for //=: 'str' and 'int'

Issue - State: closed - Opened by Lufffya almost 4 years ago - 3 comments

#4 - Difference in the KL loss terms in the paper and the code

Issue - State: closed - Opened by shivakanthsujit almost 4 years ago - 1 comment

#3 - Docker support

Pull Request - State: closed - Opened by esmanchik almost 4 years ago - 2 comments

#2 - Have you considered using a PPO actor instead of a normal Actor-Critic?

Issue - State: closed - Opened by outdoteth almost 4 years ago - 1 comment

#1 - Two questions about the paper

Issue - State: closed - Opened by xlnwel about 4 years ago - 3 comments