Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / huggingface/peft issues and pull requests
#815 - Tst extend unit tests 8bit quantization
Pull Request -
State: closed - Opened by BenjaminBossan about 1 year ago
- 3 comments
#814 - Fixed test issue for SD model with disabled LoRA adapter
Pull Request -
State: closed - Opened by kovalexal about 1 year ago
- 10 comments
#813 - add docs chatbot v0
Pull Request -
State: closed - Opened by pacman100 about 1 year ago
- 6 comments
#812 - Question -- is `TaskType` used for anything?
Issue -
State: closed - Opened by radekosmulski about 1 year ago
- 4 comments
#811 - Prompt tuning: Tensors must have same number of dimensions: got 2 and 1
Issue -
State: closed - Opened by andysingal about 1 year ago
- 1 comment
#810 - Why can't we evaluate wer/cer while training using PEFT?
Issue -
State: closed - Opened by MightyStud about 1 year ago
- 2 comments
#809 - Fix seq2seq prompt tuning (#439)
Pull Request -
State: closed - Opened by glerzing about 1 year ago
- 2 comments
#808 - What is the correct way to apply LoRA on a custom model (not models on HuggingFace)?
Issue -
State: closed - Opened by DtYXs about 1 year ago
- 16 comments
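Issue #808 above asks how to apply LoRA to a custom model. Independent of the peft library's actual API, the core idea of LoRA can be sketched from scratch with NumPy: the frozen weight `W` of any linear layer is augmented with a trainable low-rank product `B @ A`, scaled by `alpha / r`. All names below are illustrative, not peft identifiers:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass of a frozen linear layer W plus a LoRA update.

    W: (d_out, d_in) frozen base weight
    A: (r, d_in)     trainable down-projection
    B: (d_out, r)    trainable up-projection (initialized to zero)
    """
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))  # zero init: the adapter starts as a no-op
x = rng.normal(size=(3, d_in))

# With B at zero, the adapted layer matches the base layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only `A` and `B` are trained, this works for any module with a linear weight, which is why peft only needs to know which submodules to target in a custom model.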
#807 - MNT: Move tuners to subpackages
Pull Request -
State: closed - Opened by BenjaminBossan about 1 year ago
- 4 comments
#806 - HFValidationError on local PeftModel.from_pretrained
Issue -
State: closed - Opened by LazerJesus about 1 year ago
- 2 comments
#804 - Multi adapter weight conflicts when services are concurrent
Issue -
State: closed - Opened by Nipi64310 about 1 year ago
- 22 comments
#803 - Error when sharding model within deepspeed.zero.Init context
Issue -
State: closed - Opened by softsweetengineer about 1 year ago
- 1 comment
#802 - Improve config handling
Issue -
State: closed - Opened by BenjaminBossan about 1 year ago
- 1 comment
#801 - BERT and LoRA BERT Number of Parameters Mismatch
Issue -
State: closed - Opened by uygarkurt about 1 year ago
- 1 comment
#796 - Add support for Pix2Struct
Issue -
State: closed - Opened by NielsRogge about 1 year ago
- 3 comments
#795 - Loading the base model and the PEFT model at the same time
Issue -
State: closed - Opened by CandyPanda-LS about 1 year ago
- 2 comments
#794 - Fix unbound error in ia3.py
Pull Request -
State: closed - Opened by His-Wardship about 1 year ago
- 14 comments
#793 - Lora model (after fine tuning) working exactly the same as base model
Issue -
State: closed - Opened by Luke-4 about 1 year ago
- 25 comments
#792 - multiple adapter load_model fails when adding bias
Issue -
State: closed - Opened by DavidPeleg6 about 1 year ago
- 4 comments
#790 - PR #389 breaks Flash Attention 2 with peft
Issue -
State: closed - Opened by rationalism about 1 year ago
- 11 comments
#785 - GPU memory consumption increases when using a quantized model with the PEFT
Issue -
State: closed - Opened by shamanez about 1 year ago
- 2 comments
#784 - Peft model signature
Pull Request -
State: closed - Opened by kiansierra about 1 year ago
- 20 comments
#783 - Automatic Model Signature for PeftModel
Issue -
State: closed - Opened by kiansierra about 1 year ago
- 1 comment
#782 - It just doesn't calculate the metrics
Issue -
State: closed - Opened by xiehuanyi about 1 year ago
- 2 comments
#781 - Can I apply peft to my own model?
Issue -
State: closed - Opened by moonriver0922 about 1 year ago
- 4 comments
#780 - Add new method: GLoRA
Issue -
State: closed - Opened by HsunGong about 1 year ago
- 3 comments
#777 - Is it possible to have a peft model on a pefted model?
Issue -
State: closed - Opened by 2catycm about 1 year ago
- 3 comments
#776 - Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
Issue -
State: closed - Opened by pranayj77 about 1 year ago
- 4 comments
#774 - Peft Offload_dir should be offload_folder
Issue -
State: closed - Opened by larawehbe about 1 year ago
- 1 comment
#771 - GPTQ Integration
Pull Request -
State: closed - Opened by SunMarc about 1 year ago
- 2 comments
#768 - Improve Training Loss
Issue -
State: closed - Opened by ahmadmustafaanis about 1 year ago
- 2 comments
#766 - LoRA `Linear4bit` is unmergeable
Issue -
State: closed - Opened by SergeyTsimfer about 1 year ago
- 5 comments
#765 - Performance degradation with HF Trainer when training LayoutLMv3
Issue -
State: closed - Opened by anthony2261 about 1 year ago
- 4 comments
#764 - Peft for SEQ_CLS not compatible with BloomForSequenceClassification
Issue -
State: closed - Opened by poedator about 1 year ago
- 1 comment
#763 - Extend AdaptionPrompt and Add Multi-Modal AdaptionPromptV2
Pull Request -
State: closed - Opened by PanQiWei about 1 year ago
- 7 comments
#762 - Size Mismatch Error When Loading Pretrained Model with Expanded Embedding Layer
Issue -
State: closed - Opened by hzphzp about 1 year ago
- 6 comments
#761 - fine-tuning OpenClip with Huggingface's PEFT (such as LoRA)
Issue -
State: open - Opened by KyanChen about 1 year ago
- 48 comments
#760 - Calling prepare_model_for_kbit_training if you aren't quantizing the model can freeze all parameters when doing LoRA training
Pull Request -
State: closed - Opened by njbrake about 1 year ago
- 9 comments
#759 - Mixed-task batching
Issue -
State: closed - Opened by einarbmag about 1 year ago
- 5 comments
#756 - PEFT for Multiple Choice?
Issue -
State: closed - Opened by jacob-morrison about 1 year ago
- 7 comments
#754 - Merging LORA weights and saving safetensors to launch a TGI server
Issue -
State: closed - Opened by Mohamedhabi about 1 year ago
- 2 comments
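Issue #754 concerns merging LoRA weights before serving. Conceptually, merging folds the low-rank product into the base weight so inference needs no extra matmuls: `W' = W + (alpha/r) * B @ A`. A minimal NumPy sketch of that equivalence (an illustration of the math, not peft's `merge_and_unload` implementation):

```python
import numpy as np

def merge_lora(W, A, B, alpha=16, r=4):
    """Fold a LoRA update into the base weight: W' = W + (alpha/r) * B @ A."""
    return W + (alpha / r) * B @ A

rng = np.random.default_rng(1)
d_in, d_out, r = 8, 6, 4
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = rng.normal(size=(d_out, r))
x = rng.normal(size=(5, d_in))

scale = 16 / 4
unmerged = x @ W.T + scale * (x @ A.T) @ B.T  # adapter applied at runtime
merged = x @ merge_lora(W, A, B).T            # adapter folded into W

# The merged weight reproduces the runtime-adapter output exactly.
assert np.allclose(unmerged, merged)
```

After merging, the result is a plain weight matrix, which is why a merged checkpoint can be saved (e.g. as safetensors) and served by a stack like TGI that knows nothing about adapters.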
#752 - specify rank per layer or per tensor type
Issue -
State: closed - Opened by MichelNivard about 1 year ago
- 2 comments
#749 - [`core`] PEFT refactor + introducing `inject_adapter_in_model` public method
Pull Request -
State: closed - Opened by younesbelkada about 1 year ago
- 2 comments
#748 - Feature for using onnx model as base model when using lora weight
Issue -
State: closed - Opened by bohyunshin about 1 year ago
- 5 comments
#746 - Please fix Lora model resume in transformers when using DeepSpeed
Issue -
State: closed - Opened by lucasjinreal over 1 year ago
- 9 comments
#739 - CUDA out of memory. Is it a matter of data size? Always stop at the same step!
Issue -
State: closed - Opened by dsj96 about 1 year ago
- 2 comments
#738 - Method to unload an adapter, to allow the memory to be freed
Issue -
State: closed - Opened by uyhcire about 1 year ago
- 3 comments
Labels: solved
#735 - Add dense layers to TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING
Issue -
State: closed - Opened by BramVanroy about 1 year ago
- 2 comments
#732 - [minor bugfix] prevent lora Linear4bit from sneakily changing compute_dtype to float32
Pull Request -
State: closed - Opened by justheuristic about 1 year ago
- 6 comments
#725 - Use device_map to load adapters weight onto specified device
Pull Request -
State: closed - Opened by zhangyilun about 1 year ago
- 7 comments
#723 - quantize open-Flamingo error : init reverted
Issue -
State: closed - Opened by YerongLi about 1 year ago
- 2 comments
#720 - Use LoRA and prompt tuning at the same time?
Issue -
State: closed - Opened by chengyin38 about 1 year ago
- 3 comments
#719 - [WIP] Add functionality to support AdaMix
Pull Request -
State: closed - Opened by rishabbala about 1 year ago
- 21 comments
#717 - Release version 0.5.0.dev0
Pull Request -
State: closed - Opened by pacman100 about 1 year ago
- 1 comment
#715 - FIX: Removes warnings about unknown pytest marker
Pull Request -
State: closed - Opened by BenjaminBossan about 1 year ago
- 1 comment
#714 - `AutoTokenizer` bug when using `LlamaTokenizer`
Issue -
State: closed - Opened by sidnb13 about 1 year ago
- 1 comment
#713 - Can prefix tuning be used for multi-query model like bigcode/starcoder?
Issue -
State: closed - Opened by ainilian about 1 year ago
- 4 comments
#711 - How to change the location of soft tokens in prompt tuning
Issue -
State: closed - Opened by XueTianci about 1 year ago
- 4 comments