Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / openai/maddpg issues and pull requests
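Since the page describes an open API over this issue and pull request metadata, the listing below can also be consumed programmatically. The following is a minimal sketch: the endpoint path is an assumption based on the host/repository layout shown above, and the record fields (`number`, `title`, `state`, `pull_request`) are illustrative, not a confirmed schema.

```python
from urllib.parse import quote

# Assumed base URL for the ecosyste.ms issues service; verify against the
# service's own API documentation before relying on it.
BASE = "https://issues.ecosyste.ms/api/v1"

def repo_issues_url(host: str, owner: str, repo: str) -> str:
    """Build the (assumed) listing URL for one repository's issues."""
    full_name = quote(f"{owner}/{repo}", safe="")  # encode the "/" in owner/repo
    return f"{BASE}/hosts/{host}/repositories/{full_name}/issues"

def open_pull_requests(records):
    """Filter records like the entries below to open pull requests only."""
    return [r for r in records if r["pull_request"] and r["state"] == "open"]

# Hypothetical records mirroring the first few entries in this listing.
sample = [
    {"number": 82, "title": "MADDPG for custom environment", "state": "open", "pull_request": False},
    {"number": 81, "title": "Transferreg", "state": "open", "pull_request": True},
    {"number": 77, "title": "Update architecture", "state": "closed", "pull_request": True},
]

print(repo_issues_url("GitHub", "openai", "maddpg"))
# -> https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/openai%2Fmaddpg/issues
print([r["number"] for r in open_pull_requests(sample)])
# -> [81]
```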
#82 - MADDPG for custom environment
Issue -
State: open - Opened by kapilgarg7568 5 months ago
#81 - Transferreg
Pull Request -
State: open - Opened by Hasenfus 8 months ago
- 1 comment
#80 - Python 3.5.4 is already in maintenance.
Issue -
State: open - Opened by vducd 8 months ago
#79 - How to save the videos of each 1000th episode ?
Issue -
State: open - Opened by longxilong 10 months ago
#78 - Unit yx ma mu ju co
Pull Request -
State: open - Opened by Hasenfus 10 months ago
#77 - Update architecture
Pull Request -
State: closed - Opened by Boxxfish 12 months ago
#76 - ModuleNotFoundError: No module named 'maddpg.common'
Issue -
State: open - Opened by yanmmmm about 1 year ago
- 1 comment
#75 - Test Pull Request
Pull Request -
State: open - Opened by tetris-4444 about 1 year ago
#74 - Test Pull Request
Pull Request -
State: open - Opened by tetris-4444 about 1 year ago
#72 - Rgb array
Pull Request -
State: closed - Opened by jin749 over 1 year ago
- 1 comment
#71 - DDPG action space vs multi-agent particle environment action space
Issue -
State: open - Opened by medhijk about 2 years ago
#70 - Loading previous state...
Issue -
State: open - Opened by yanwaiwai over 2 years ago
- 2 comments
#69 - An issue about saving the model in train.py
Issue -
State: open - Opened by chengdusunny over 2 years ago
#68 - A question about maddpg.py
Issue -
State: open - Opened by chengdusunny over 2 years ago
#67 - Training EVERY step, not every 100
Issue -
State: open - Opened by eflopez1 about 3 years ago
- 1 comment
#66 - Why ‘u_action_space = spaces.Discrete(world.dim_p * 2 + 1)’?
Issue -
State: open - Opened by ScorpioPeng about 3 years ago
- 2 comments
#65 - Question about the way how to update actor
Issue -
State: open - Opened by choasLC about 3 years ago
- 1 comment
#64 - How to run ddpg
Issue -
State: open - Opened by EvaluationResearch over 3 years ago
#63 - All scenarios fail except the simple scenario
Issue -
State: closed - Opened by EvaluationResearch over 3 years ago
- 1 comment
#62 - Update train.py
Pull Request -
State: closed - Opened by duanzy18 over 3 years ago
- 1 comment
#61 - change '-' in arguments' name to '_'
Pull Request -
State: closed - Opened by duanzy18 over 3 years ago
#60 - Are policy ensembles implemented in this repository?
Issue -
State: open - Opened by 2015211289 over 3 years ago
#59 - ../../..
Issue -
State: open - Opened by devarani10 over 3 years ago
#58 - Using any scenario other than "simple" gives an error when loading the model after training
Issue -
State: closed - Opened by dr-smgad over 3 years ago
#57 - how to evaluate maddpg?
Issue -
State: open - Opened by Xinlei-Ren over 3 years ago
#56 - how to evaluate maddpg?
Issue -
State: open - Opened by Xinlei-Ren over 3 years ago
#55 - How to normalize the data in table of Appendix to obtain Figure 3 in paper?
Issue -
State: open - Opened by flammingRaven almost 4 years ago
- 1 comment
#54 - How to turn continuous action into discrete action
Issue -
State: open - Opened by Sexu121 almost 4 years ago
- 1 comment
#53 - How can I use DDPG to train it?
Issue -
State: closed - Opened by CHH3213 about 4 years ago
#52 - TypeError: must be str, not NoneType
Issue -
State: closed - Opened by CHH3213 about 4 years ago
#51 - reward is too large
Issue -
State: open - Opened by Sherry-97 about 4 years ago
#50 - ImportError: cannot import name 'prng' when run train.py
Issue -
State: closed - Opened by silkyrose about 4 years ago
- 2 comments
#49 - I have trained it successfully, but how do I run it? How can I see the visible result?
Issue -
State: open - Opened by zhijiejia over 4 years ago
- 2 comments
#47 - Episode in cooperative navigation env
Issue -
State: open - Opened by kargarisaac over 4 years ago
#46 - question about p_reg in p_train
Issue -
State: open - Opened by yeshenpy over 4 years ago
#45 - Trying to set the random seeds, any idea how?
Issue -
State: open - Opened by hossein-haeri over 4 years ago
- 1 comment
#44 - Typo in train.py
Issue -
State: open - Opened by opt12 over 4 years ago
#43 - Question regarding the replay buffers and the Critic networks. (duplicates in the state)
Issue -
State: open - Opened by opt12 over 4 years ago
#42 - urls to the code of policy ensemble and estimate
Pull Request -
State: closed - Opened by jxwuyi over 4 years ago
#41 - Spark
Issue -
State: open - Opened by diemanalytics-ewd over 4 years ago
#40 - SoftMultiCategoricalPd
Issue -
State: open - Opened by sandeepnRES almost 5 years ago
- 1 comment
#39 - run code
Issue -
State: open - Opened by lionel-xie almost 5 years ago
- 1 comment
#38 - Animate
Pull Request -
State: closed - Opened by RavenPillmann almost 5 years ago
#37 - Can someone tell me why agents go beyond bounds when testing?
Issue -
State: open - Opened by glong1997 almost 5 years ago
- 1 comment
#36 - Hello! I encountered some problems while running the train.py file under the MADDPG file and would like to seek your help.
Issue -
State: open - Opened by dcy0324 almost 5 years ago
- 1 comment
#35 - what's benchmark used for?
Issue -
State: open - Opened by KK666-AI almost 5 years ago
- 1 comment
#34 - The code does not converge
Issue -
State: open - Opened by sjq19960802 over 5 years ago
- 1 comment
#33 - There is no provision to run ddpg.
Issue -
State: open - Opened by frenzytejask98 over 5 years ago
- 3 comments
#32 - Two problem about update function
Issue -
State: open - Opened by YuanyeMa over 5 years ago
- 5 comments
#31 - Error when setting display to true
Issue -
State: open - Opened by njfdiem over 5 years ago
- 3 comments
#30 - TypeError: set_color() got multiple values for argument 'alpha' in Simple-Crypto
Issue -
State: open - Opened by marwanihab over 5 years ago
- 6 comments
#29 - TypeError: must be str, not NoneType . run train.py
Issue -
State: closed - Opened by SHYang1210 over 5 years ago
- 2 comments
#28 - NoneType flaw in "train.py", line 182
Issue -
State: open - Opened by DailinH almost 6 years ago
- 3 comments
#27 - Can this algorithm be generalised to work with multiple (60) agents competing against each other?
Issue -
State: closed - Opened by alexanderkell almost 6 years ago
- 2 comments
#26 - Cumulative rewards do not improve when using MADDPG
Issue -
State: open - Opened by jhcknzzm almost 6 years ago
#25 - use tf.layers and add gpu_options.allow_growth=True
Pull Request -
State: closed - Opened by GoingMyWay almost 6 years ago
#24 - update README with repo status
Pull Request -
State: closed - Opened by christopherhesse almost 6 years ago
#23 - The results are not as good as the paper showed
Issue -
State: open - Opened by Jarvis-K about 6 years ago
- 3 comments
#22 - Having trouble with import maddpg
Issue -
State: open - Opened by ishanivyas about 6 years ago
- 1 comment
#21 - Please add a description to this repo
Issue -
State: open - Opened by clintonyeb over 6 years ago
#20 - Calculating Success Rate for Physical Deception
Issue -
State: closed - Opened by ZishunYu over 6 years ago
- 1 comment
#19 - Q divergence
Issue -
State: open - Opened by rbrigden over 6 years ago
#18 - How or why does the Gaussian distribution contribute to the training?
Issue -
State: open - Opened by Chen-Joe-ZY over 6 years ago
- 4 comments
#17 - Import Errors
Issue -
State: closed - Opened by murtazarang over 6 years ago
- 6 comments
#16 - How maddpg update actor?
Issue -
State: closed - Opened by newbieyxy over 6 years ago
#15 - Error in scenario simple_reference with gym.spaces.MultiDiscrete
Issue -
State: closed - Opened by hcch0912 over 6 years ago
- 1 comment
#14 - displaying agent behaviors on the screen
Issue -
State: closed - Opened by williamyuanv0 over 6 years ago
- 1 comment
#13 - When I run train.py, it shows "TypeError: Can't convert 'NoneType' object to str implicitly".
Issue -
State: closed - Opened by seahawkk over 6 years ago
- 4 comments
#12 - Cannot reproduce experiment results
Issue -
State: closed - Opened by arbaazkhan2 over 6 years ago
- 3 comments
#11 - The reward and action is nan ?
Issue -
State: closed - Opened by yexm-ze over 6 years ago
- 3 comments
#10 - How can i use it for "simple_world_comm" in MPE? ---- "AssertionError: nvec should be a 1d array (or list) of ints"
Issue -
State: closed - Opened by zimoqingfeng over 6 years ago
- 1 comment
#9 - action exploration & Gumbel-Softmax
Issue -
State: open - Opened by djbitbyte over 6 years ago
- 9 comments
#8 - It seems that you don't use "Policy ensembles" and "Inferring policies of other agent" in this code?
Issue -
State: closed - Opened by pengzhenghao over 6 years ago
- 2 comments
#7 - It seems that the training is decentralized?
Issue -
State: closed - Opened by pengzhenghao over 6 years ago
- 1 comment
#6 - Running train.py doesn't seem to work
Issue -
State: closed - Opened by suryabhupa over 6 years ago
- 5 comments
#5 - When I run train.py, it shows "module 'tensorflow' has no attribute 'float32'"
Issue -
State: closed - Opened by williamyuanv0 over 6 years ago
- 8 comments
#4 - add multiagent-particle-envs to PYTHONPATH
Issue -
State: closed - Opened by djbitbyte over 6 years ago
- 4 comments
#3 - fix some miss
Pull Request -
State: closed - Opened by wwxFromTju over 6 years ago
#2 - rnn_]cell=None is a Syntax Error in both Python 2 and 3
Issue -
State: closed - Opened by cclauss over 6 years ago
- 1 comment
#1 - Remove pycache and add gitignore
Pull Request -
State: closed - Opened by himanshub16 over 6 years ago