Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / lucidrains/linear-attention-transformer issues and pull requests
#21 - [Feature request] Self-attention with Persistent Memory
Issue - State: closed - Opened by MarcusLoppe 5 months ago - 1 comment
#20 - implemented mup
Pull Request - State: closed - Opened by thomasfortin1 7 months ago - 1 comment
#19 - Image linear attention reference
Issue - State: open - Opened by pravn 9 months ago
#18 - Why dim != dim_head * heads?
Issue - State: open - Opened by zzczzc20 9 months ago
#17 - How to perform training?
Issue - State: open - Opened by pangshengwei about 2 years ago
#16 - Is the causal attention really works here?
Issue - State: open - Opened by charlesxu90 over 2 years ago - 2 comments
#15 - Tooooo many functions added, but no annotations
Issue - State: open - Opened by charlesxu90 over 2 years ago
#14 - Scaling factors
Issue - State: closed - Opened by radandreicristian over 2 years ago - 1 comment
#13 - Challenge in replacing SelfAttention with ImageLinearAttention in Vision Transformer
Issue - State: open - Opened by monajalal about 3 years ago
#11 - ImageLinearAttention showcase
Issue - State: closed - Opened by monajalal about 3 years ago
#10 - Causal linear attention from which paper ? please tell me thx
Issue - State: open - Opened by hquzhuguofeng over 3 years ago
#9 - dalle
Issue - State: closed - Opened by adamonkey over 3 years ago - 1 comment
#8 - Questions on the implementation of a linear variant and reference
Issue - State: closed - Opened by scaomath over 3 years ago - 1 comment
#7 - Where does this constant come from?
Issue - State: closed - Opened by aluo-x over 3 years ago - 1 comment
#6 - Loss returns Nan
Issue - State: open - Opened by terencenwz over 3 years ago - 3 comments
#5 - causal = True
Issue - State: open - Opened by wajihullahbaig over 3 years ago - 2 comments
#4 - Autopadder doesn't work with LinearAttentionTransformer
Issue - State: closed - Opened by jamarju about 4 years ago - 1 comment
#3 - Positional encoding?
Issue - State: closed - Opened by matthew-jurewicz over 4 years ago - 2 comments
#2 - [Question] Merging with Trans-XL?
Issue - State: open - Opened by gaceladri over 4 years ago - 40 comments
#1 - seq2seq decoder ids
Issue - State: closed - Opened by ghost over 4 years ago - 4 comments
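Since ecosyste.ms describes itself as an open API service, a listing like the one above can also be retrieved and rendered programmatically. The sketch below is a minimal illustration only: the endpoint URL layout and the JSON field names (`number`, `title`, `state`, `pull_request`) are assumptions, not confirmed by this page, so check the service's own API docs before relying on them.

```python
from urllib.parse import quote

# Assumed base URL of the ecosyste.ms issues API (not confirmed here).
API_BASE = "https://issues.ecosyste.ms/api/v1"

def issues_url(host: str, repo: str) -> str:
    """Build the (assumed) endpoint URL for a repository's issue list."""
    # The repo slug is percent-encoded so "owner/name" stays one path segment.
    return f"{API_BASE}/hosts/{quote(host)}/repositories/{quote(repo, safe='')}/issues"

def summarize(issues: list) -> list:
    """Render issue records in the same style as the listing above.

    Field names are hypothetical; adapt them to the real response schema.
    """
    lines = []
    for item in issues:
        kind = "Pull Request" if item.get("pull_request") else "Issue"
        lines.append(f"#{item['number']} - {item['title']} ({kind}, {item['state']})")
    return lines

# Hand-written sample record mirroring entry #21 above (no network call).
sample = [{
    "number": 21,
    "title": "[Feature request] Self-attention with Persistent Memory",
    "state": "closed",
    "pull_request": False,
}]

print(issues_url("GitHub", "lucidrains/linear-attention-transformer"))
print("\n".join(summarize(sample)))
```

Fetching the URL with any HTTP client and passing the decoded JSON array to `summarize` would reproduce the entries listed on this page, assuming the schema matches.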