Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / Lightning-AI/litgpt issues and pull requests
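The listing below is the kind of data this service exposes over HTTP. As a rough illustration, here is a minimal Python sketch of how the same repository's issues and pull requests might be fetched programmatically; the endpoint path, query parameters, and JSON field names used here are assumptions about the ecosyste.ms issues API rather than a verified contract, so check the service's API documentation before relying on them.

# Minimal sketch: pull issue/PR metadata for Lightning-AI/litgpt from the
# ecosyste.ms issues service. The endpoint route, the `per_page` parameter,
# and the JSON field names (number, title, state, pull_request, comments_count)
# are assumptions -- confirm them against the API docs at issues.ecosyste.ms.
import requests

BASE_URL = "https://issues.ecosyste.ms/api/v1"
REPO = "Lightning-AI/litgpt"  # repository full name, as shown in the page header

response = requests.get(
    f"{BASE_URL}/hosts/GitHub/repositories/{REPO}/issues",  # assumed route
    params={"per_page": 100},  # assumed pagination parameter
    timeout=30,
)
response.raise_for_status()

# Assumes the response body is a JSON list of issue/PR records.
for item in response.json():
    kind = "Pull Request" if item.get("pull_request") else "Issue"
    comments = item.get("comments_count", 0)
    print(f"#{item.get('number')} - {item.get('title')} "
          f"({kind}, {item.get('state')}, {comments} comments)")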
#99 - `max_returned_tokens` as a generate input
Pull Request - State: closed - Opened by carmocca over 1 year ago - 2 comments
#98 - Update howto for finetuning on MPS
Pull Request - State: closed - Opened by carmocca over 1 year ago
#97 - doc: add `pip install huggingface_hub` to howtos
Pull Request - State: closed - Opened by emilesilvis over 1 year ago
#96 - Should `huggingface_hub` be added to requirements.txt?
Issue - State: closed - Opened by emilesilvis over 1 year ago - 5 comments
#95 - Query Regarding Minimum Hardware Requirements for Fine-tuning and Inference
Issue - State: closed - Opened by martinb-ai over 1 year ago - 6 comments
#94 - Be able to set custom accelerator, precision and dtype for MPS accelerators (Apple M1 silicon)
Pull Request - State: closed - Opened by agmo1993 over 1 year ago - 2 comments
#93 - Fix an assertion during training issue #65
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago - 5 comments
#92 - tpu howto timing adjustments
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#91 - Reset all caches on xla devices
Pull Request - State: closed - Opened by gkroiz over 1 year ago - 1 comment
#90 - Adding padding preprocessing
Pull Request - State: closed - Opened by gkroiz over 1 year ago - 3 comments
#89 - Reorder adapter and subclass CausalSelfAttention
Pull Request - State: closed - Opened by carmocca over 1 year ago
#88 - Revert "Adding `vocab_size` to `config.py` (#68)"
Pull Request - State: closed - Opened by carmocca over 1 year ago - 6 comments
#87 - python3 chat.py --checkpoint_dir checkpoints/stabilityai/stablelm-tuned-alpha-7b --quantize "gptq.int4" fails
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 1 comment
#86 - Restructure repo into directories as Lit-LLaMA
Pull Request - State: closed - Opened by carmocca over 1 year ago - 3 comments
#85 - Port Adapter_V2 from lit-llama pull request #303
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago - 5 comments
#84 - Add chat script for adapter checkpoints
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 3 comments
Labels: enhancement, generation
#83 - reset cache after generation
Pull Request - State: closed - Opened by bkiat1123 over 1 year ago - 7 comments
#82 - Add adapter multihead gating
Pull Request - State: closed - Opened by rasbt over 1 year ago - 1 comment
#81 - Updated TPU docs to download nightly torch dependencies
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#80 - Do not use symlinks when downloading for cloud support
Pull Request - State: closed - Opened by carmocca over 1 year ago
#79 - Fix max_iters logic
Pull Request - State: closed - Opened by carmocca over 1 year ago
#78 - Model finetuned using finetune_adapter not directly usable in generaete/chat... How to convert?
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 3 comments
Labels: enhancement, generation
#77 - Caches should not persist across multiple generate.
Issue - State: closed - Opened by bkiat1123 over 1 year ago - 1 comment
Labels: enhancement, generation
#76 - fix typo
Pull Request - State: closed - Opened by bkiat1123 over 1 year ago
#75 - fix unpack issue
Pull Request - State: closed - Opened by bkiat1123 over 1 year ago
#74 - fix unpacking error
Pull Request - State: closed - Opened by bkiat1123 over 1 year ago - 1 comment
#73 - too many values to unpack in Block forward
Issue - State: closed - Opened by bkiat1123 over 1 year ago
#72 - fix quantization error
Pull Request - State: closed - Opened by bkiat1123 over 1 year ago - 2 comments
#71 - Add T_new as an argument in generate, reference in issue #65 #69
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago
#70 - Add T_max as an argument in generate, reference to #65
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago
#69 - Add adapter tests
Issue - State: closed - Opened by carmocca over 1 year ago
#68 - Adding `vocab_size` to `config.py`
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#67 - Update generate.py
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago
#66 - Adding caching for adapter KVs
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#65 - Assert in generate.py needs to go...
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 5 comments
#64 - Set dtype of kv_caches correctly
Pull Request - State: closed - Opened by lantiga over 1 year ago
#63 - Port packed_dataset hotfix from lit-llama
Pull Request - State: closed - Opened by lantiga over 1 year ago
#62 - Fine tune adapter device = 2 deepspeed error
Issue - State: closed - Opened by ArturK-85 over 1 year ago - 1 comment
Labels: bug
#61 - Problem with finetune_adapter.py along with fix
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 8 comments
#60 - Apply proposed changes by @RDouglasSharp from issue #57 #58
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago - 1 comment
#59 - Config cannot be overwritten through kwargs
Issue - State: closed - Opened by bkiat1123 over 1 year ago - 2 comments
#58 - Cached KVs not implemented on Adapter causing errors.
Issue - State: closed - Opened by bkiat1123 over 1 year ago - 2 comments
#57 - /lit_parrot/model.py:201 in forward
Issue - State: closed - Opened by RDouglasSharp over 1 year ago - 2 comments
#56 - Cleanup reply output in chat.py
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#55 - Allow generate seed
Pull Request - State: closed - Opened by Sciumo over 1 year ago
#54 - Generate should allow seed, rather than setting to 1234.
Issue - State: closed - Opened by Sciumo over 1 year ago - 5 comments
#53 - Get Attempting to unscale FP16 gradients while finetune on float16
Issue - State: closed - Opened by bkiat1123 over 1 year ago - 7 comments
#52 - Fix n_embd for larger Pythia models
Pull Request - State: closed - Opened by carmocca over 1 year ago
#51 - Adding cached KVs
Pull Request - State: closed - Opened by gkroiz over 1 year ago - 2 comments
#50 - Pythia embedding dimension mismatch
Issue - State: closed - Opened by rcmalli over 1 year ago - 1 comment
#49 - Fix training argument
Pull Request - State: closed - Opened by lantiga over 1 year ago
#48 - fix unrecognized command error
Pull Request - State: closed - Opened by aniketmaurya over 1 year ago - 3 comments
#47 - Add generation and chat scripts support for XLA devices
Pull Request - State: closed - Opened by gkroiz over 1 year ago
#46 - change checkpoint dir test to renable convert to Parrot (but still check)
Pull Request - State: closed - Opened by Sciumo over 1 year ago - 3 comments
#45 - Restore generation longer than block size
Pull Request - State: closed - Opened by carmocca over 1 year ago
#44 - Download documentation needs updating, --repo_id required
Issue - State: closed - Opened by Sciumo over 1 year ago - 3 comments
#43 - Generation of text that is longer than the context window is no longer possible
Issue - State: closed - Opened by awaelchli over 1 year ago
#42 - Fix model configuration
Pull Request - State: closed - Opened by rcmalli over 1 year ago
#41 - Remove checkpoint_dir check when converting weights
Pull Request - State: closed - Opened by carmocca over 1 year ago
#40 - Minor train_redpajama fixes
Pull Request - State: closed - Opened by carmocca over 1 year ago
#39 - Fix max_new_tokens logic
Pull Request - State: closed - Opened by carmocca over 1 year ago
#38 - Parrot rename
Pull Request - State: closed - Opened by lantiga over 1 year ago
#37 - ERROR: Could not find a version that satisfies the requirement torch>=2.1.0dev
Issue - State: closed - Opened by nestorvw over 1 year ago - 2 comments
#36 - Updated README and cleaned up howto
Pull Request - State: closed - Opened by lantiga over 1 year ago - 1 comment
#35 - Port pre-training scripts
Pull Request - State: closed - Opened by lantiga over 1 year ago
#34 - Update finetune_adapter.py
Pull Request - State: closed - Opened by awaelchli over 1 year ago
#33 - Update howto for finetuning with adapter
Pull Request - State: closed - Opened by awaelchli over 1 year ago
#32 - Update download instructions
Pull Request - State: closed - Opened by awaelchli over 1 year ago
#31 - Porting adapter from lit-llama
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago - 5 comments
#30 - lit-stablelm/finetune_adapter.py
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago
#29 - lit-stablelm/adapter
Pull Request - State: closed - Opened by ArturK-85 over 1 year ago
#28 - Better UX for discovering checkpoints
Pull Request - State: closed - Opened by carmocca over 1 year ago
#27 - Add mascots to README
Pull Request - State: closed - Opened by lantiga over 1 year ago
#26 - Split configs into a separate file
Pull Request - State: closed - Opened by carmocca over 1 year ago
#25 - Add INCITE howto
Pull Request - State: closed - Opened by carmocca over 1 year ago
#24 - Support INCITE checkpoints
Pull Request - State: closed - Opened by carmocca over 1 year ago
#23 - Single checkpoint dir to simplify script arguments
Pull Request - State: closed - Opened by carmocca over 1 year ago - 1 comment
#22 - Add interactive chatting script
Pull Request - State: closed - Opened by carmocca over 1 year ago
#21 - KV cache for faster generation
Issue - State: closed - Opened by carmocca over 1 year ago
Labels: enhancement, generation
#20 - Port finetuning from Lit-LLaMA
Issue - State: closed - Opened by carmocca over 1 year ago - 2 comments
Labels: enhancement, fine-tuning
#19 - Add Pythia howto
Pull Request - State: closed - Opened by carmocca over 1 year ago
#18 - Quantization is not working
Issue - State: closed - Opened by carmocca over 1 year ago - 2 comments
Labels: bug, quantization
#17 - Add gif animation
Pull Request - State: closed - Opened by lantiga over 1 year ago
#16 - Update memory usage in docs
Pull Request - State: closed - Opened by carmocca over 1 year ago
#15 - Add all StableLM and Pythia configs
Pull Request - State: closed - Opened by carmocca over 1 year ago
#14 - Which should be our default model?
Issue - State: closed - Opened by carmocca over 1 year ago - 2 comments
Labels: documentation
#13 - RoPE precision issue
Issue - State: closed - Opened by carmocca over 1 year ago - 3 comments
Labels: bug
#12 - Load smaller model in generation test
Pull Request - State: closed - Opened by carmocca over 1 year ago
#11 - Install torch nightly for flash attention
Pull Request - State: closed - Opened by carmocca over 1 year ago
#10 - Fix CPU OOM on Windows
Issue - State: closed - Opened by carmocca over 1 year ago
#9 - Improve UX for discovering available checkpoints
Issue - State: closed - Opened by carmocca over 1 year ago - 1 comment
Labels: enhancement
#8 - Support all StableLM and Pythia checkpoint configs
Issue - State: closed - Opened by carmocca over 1 year ago
#7 - Write Pythia checkpoint howto
Issue - State: closed - Opened by carmocca over 1 year ago
Labels: documentation
#6 - Support tuned-model mode during generation
Issue - State: closed - Opened by carmocca over 1 year ago - 1 comment
Labels: enhancement
#5 - Support Pythia checkpoints
Pull Request - State: closed - Opened by carmocca over 1 year ago - 3 comments
#4 - Add README
Pull Request - State: closed - Opened by lantiga over 1 year ago
#3 - Fix generate.py on Windows
Pull Request - State: closed - Opened by carmocca over 1 year ago
#2 - Flash attention support
Issue - State: closed - Opened by carmocca over 1 year ago - 2 comments
#1 - Initial commit
Pull Request - State: closed - Opened by carmocca over 1 year ago - 1 comment