GitHub / lucidrains/mlp-mixer-pytorch issues and pull requests
#16 - expansion_factor on tokens should be applied to hidden dim, not to num_patches
Issue - State: open - Opened by ease-zh 8 months ago
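For context (the same point is raised in #11 below): the repo derives the token-mixing MLP's hidden width from `num_patches` times an expansion factor, whereas the paper treats the token-mixing hidden dim `D_S` as a fixed hyperparameter, independent of sequence length. A minimal sketch of the contrast, with illustrative values rather than the repo's actual defaults:

```python
from torch import nn

num_patches, expansion_factor_token = 256, 0.5

# Repo-style token-mixing MLP: hidden width scales with num_patches.
repo_token_mlp = nn.Sequential(
    nn.Linear(num_patches, int(num_patches * expansion_factor_token)),  # 256 -> 128
    nn.GELU(),
    nn.Linear(int(num_patches * expansion_factor_token), num_patches),
)

# Paper-style token-mixing MLP: hidden width D_S is fixed (e.g. 256 for
# Mixer-S/16) and does not change with image resolution.
D_S = 256
paper_token_mlp = nn.Sequential(
    nn.Linear(num_patches, D_S),
    nn.GELU(),
    nn.Linear(D_S, num_patches),
)
```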
#15 - First of all, thank you for your great creation. Is there a 3D data version of MLP-Mixer?
Issue - State: closed - Opened by lxy51 11 months ago - 13 comments
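The 2D patch extraction generalizes directly to volumes; a hypothetical sketch of the 3D rearrange (all names and shapes here are illustrative, not code from this repo):

```python
import torch
from einops import rearrange

video = torch.randn(1, 3, 16, 64, 64)  # (batch, channels, depth, height, width)
patches = rearrange(
    video,
    'b c (d p1) (h p2) (w p3) -> b (d h w) (p1 p2 p3 c)',
    p1 = 4, p2 = 8, p3 = 8,
)
print(patches.shape)  # torch.Size([1, 256, 768]) -> feed into the same mixer layers
```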
#14 - Expansion factor choices
Issue - State: open - Opened by zhaoyanlyu almost 2 years ago
#13 - Question about Parameters
Issue - State: closed - Opened by ChesterXcw over 2 years ago - 3 comments
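For anyone estimating model size, a back-of-the-envelope parameter count for a single mixer layer; the shapes below are assumptions for illustration, not the repo's exact accounting:

```python
# Assumed shapes: dim = 512, num_patches = 256,
# token MLP hidden = 256, channel MLP hidden = 2048.
dim, num_patches = 512, 256
token_hidden, channel_hidden = 256, 2048

# Two Linear layers over the token axis (weights + biases).
token_mlp = (num_patches * token_hidden + token_hidden
             + token_hidden * num_patches + num_patches)
# Two Linear layers over the channel axis (weights + biases).
channel_mlp = (dim * channel_hidden + channel_hidden
               + channel_hidden * dim + dim)
# Two LayerNorms, each with weight and bias of size dim.
layernorms = 2 * (2 * dim)

print(token_mlp + channel_mlp + layernorms)  # 2233344, ~2.23M params per layer
```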
#12 - The order of token-mixer and channel-mixer
Issue - State: closed - Opened by ShijianXu almost 3 years ago
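Per the paper, each block applies token mixing (across patches) first, then channel mixing (per patch), each behind a pre-norm residual. A minimal sketch with illustrative names, not the repo's internal API:

```python
import torch
from torch import nn

class MixerBlock(nn.Module):
    def __init__(self, dim, num_patches, token_hidden, channel_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_patches, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_patches),
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim),
        )

    def forward(self, x):                  # x: (batch, num_patches, dim)
        # Token mixing: transpose so the Linear acts across patches.
        y = self.norm1(x).transpose(1, 2)  # (batch, dim, num_patches)
        x = x + self.token_mlp(y).transpose(1, 2)
        # Channel mixing: Linear acts across the feature dim per patch.
        return x + self.channel_mlp(self.norm2(x))

block = MixerBlock(dim = 512, num_patches = 256, token_hidden = 256, channel_hidden = 2048)
print(block(torch.randn(1, 256, 512)).shape)  # torch.Size([1, 256, 512])
```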
#11 - expansion_factor on tokens is actually a bottleneck in original codebase
Issue - State: closed - Opened by chazzmoney over 3 years ago - 1 comment
#10 - Added ability to work with non-square patches
Pull Request - State: closed - Opened by adelkaiarullin almost 4 years ago - 1 comment
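The gist of such a change: patch height and width become independent factors in the einops rearrange. A sketch under that assumption (names are illustrative, not necessarily the merged API):

```python
import torch
from einops.layers.torch import Rearrange

patch_h, patch_w = 16, 8  # non-square patches
to_patches = Rearrange('b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = patch_h, p2 = patch_w)

img = torch.randn(1, 3, 256, 256)
print(to_patches(img).shape)  # torch.Size([1, 512, 384]): 16*32 patches of 16*8*3 values
```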
#9 - Why are dim, token_dim, and channel_dim initialized to 512, 256, and 2048?
Issue - State: open - Opened by jiantenggei about 4 years ago - 1 comment
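Those values match the Mixer-S/16 configuration from the MLP-Mixer paper (Table 1), where the token MLP is roughly half of `dim` and the channel MLP four times `dim`. Two of the published configurations for reference (dict keys are illustrative):

```python
# Hyperparameters from the MLP-Mixer paper, Table 1.
MIXER_CONFIGS = {
    'S/16': dict(depth = 8,  dim = 512, token_dim = 256, channel_dim = 2048),
    'B/16': dict(depth = 12, dim = 768, token_dim = 384, channel_dim = 3072),
}
```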
#8 - feat(MLPMixer): add image channels argument
Pull Request - State: closed - Opened by mirth about 4 years ago
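Usage with a channels argument, assuming the README-style constructor (keyword names may differ across versions):

```python
import torch
from mlp_mixer_pytorch import MLPMixer

model = MLPMixer(
    image_size = 256,
    channels = 1,        # e.g. grayscale input instead of the RGB default
    patch_size = 16,
    dim = 512,
    depth = 12,
    num_classes = 1000,
)

img = torch.randn(1, 1, 256, 256)
pred = model(img)        # (1, 1000)
```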
#7 - Does performance drop when converting the patch embedding from the CNN version to the rearrange version?
Issue - State: open - Opened by 1338199 about 4 years ago
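Mathematically the two are the same linear map: a Conv2d with kernel size and stride equal to the patch size computes exactly what rearrange-plus-Linear computes, only the weight layout differs. A sanity-check sketch (illustrative, equal up to float tolerance):

```python
import torch
from torch import nn
from einops import rearrange

p, c, dim = 16, 3, 512
conv = nn.Conv2d(c, dim, kernel_size = p, stride = p)
linear = nn.Linear(c * p * p, dim)

# Copy the conv weights into the linear layer with the matching flatten order.
linear.weight.data = rearrange(conv.weight.data, 'd c p1 p2 -> d (p1 p2 c)')
linear.bias.data = conv.bias.data

img = torch.randn(1, c, 224, 224)
out_conv = rearrange(conv(img), 'b d h w -> b (h w) d')
out_lin = linear(rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = p, p2 = p))
print(torch.allclose(out_conv, out_lin, atol = 1e-4))  # True
```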
#6 - What GPU did you use to run the experiment?
Issue - State: closed - Opened by Ha0Tang about 4 years ago
#5 - Question: dynamic size?
Issue - State: closed - Opened by pfeatherstone about 4 years ago - 2 comments
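Short answer from the architecture: the token-mixing weights are sized by `num_patches`, so unlike a fully convolutional net the input resolution is fixed at construction time. A sketch assuming the README-style constructor (exact error behavior may vary by version):

```python
import torch
from mlp_mixer_pytorch import MLPMixer

model = MLPMixer(image_size = 256, channels = 3, patch_size = 16,
                 dim = 512, depth = 12, num_classes = 1000)

model(torch.randn(1, 3, 256, 256))    # OK: 16*16 = 256 patches, as built
# model(torch.randn(1, 3, 320, 320))  # fails: 400 patches != the 256 expected
```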
#4 - How to load a pretrained model?
Issue - State: open - Opened by jikerWRN about 4 years ago - 1 comment
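Loading weights is plain PyTorch state-dict handling; `mixer.pt` below is a hypothetical file, and the constructor arguments must match those used when the checkpoint was saved:

```python
import torch
from mlp_mixer_pytorch import MLPMixer

model = MLPMixer(image_size = 256, channels = 3, patch_size = 16,
                 dim = 512, depth = 12, num_classes = 1000)

state_dict = torch.load('mixer.pt', map_location = 'cpu')  # hypothetical checkpoint
model.load_state_dict(state_dict)
model.eval()
```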
#3 - How about simpler pooling?
Pull Request - State: closed - Opened by arogozhnikov about 4 years ago - 2 comments
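The proposal amounts to mean-pooling over the patch axis with a single einops Reduce layer ahead of the classifier head; a sketch of the idea (illustrative head, not necessarily the merged code):

```python
import torch
from torch import nn
from einops.layers.torch import Reduce

dim, num_classes = 512, 1000
head = nn.Sequential(
    nn.LayerNorm(dim),
    Reduce('b n c -> b c', 'mean'),   # average over the n patch tokens
    nn.Linear(dim, num_classes),
)

tokens = torch.randn(1, 256, dim)     # (batch, num_patches, dim)
print(head(tokens).shape)             # torch.Size([1, 1000])
```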
#2 - Dall-E implementation
Issue - State: closed - Opened by robvanvolt about 4 years ago - 1 comment
#1 - Training script
Issue - State: closed - Opened by fcakyon about 4 years ago - 2 comments
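The repo focuses on the model definition, so training is ordinary supervised PyTorch. A minimal sketch with a stand-in batch; the loader, loss, and hyperparameters below are all placeholders:

```python
import torch
from torch import nn
from mlp_mixer_pytorch import MLPMixer

model = MLPMixer(image_size = 256, channels = 3, patch_size = 16,
                 dim = 512, depth = 12, num_classes = 1000)
optimizer = torch.optim.AdamW(model.parameters(), lr = 3e-4, weight_decay = 0.1)
criterion = nn.CrossEntropyLoss()

# Stand-in for a real torchvision-style DataLoader: one random batch.
dataloader = [(torch.randn(8, 3, 256, 256), torch.randint(0, 1000, (8,)))]

for images, labels in dataloader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```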