Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / huggingface/peft issues and pull requests
#1375 - Fix LoRA module mapping for Phi models
Pull Request - State: closed - Opened by arnavgarg1 9 months ago - 1 comment
#1374 - How to activate, and keep frozen, multiple adapters?
Issue - State: closed - Opened by EricLBuehler 9 months ago - 9 comments
#1373 - [docs] IA3
Pull Request - State: closed - Opened by stevhliu 9 months ago - 2 comments
#1372 - Fix for "leaf Variable that requires grad" Error in In-Place Operation
Pull Request - State: closed - Opened by DopeorNope-Lee 9 months ago - 28 comments
#1371 - [docs] Lora-like guides
Pull Request - State: closed - Opened by stevhliu 9 months ago - 1 comment
#1370 - account for the new merged/unmerged weight to perform the quantization again
Pull Request - State: closed - Opened by pacman100 9 months ago - 2 comments
#1368 - Add support for layer replication in LoRA
Pull Request - State: closed - Opened by siddartha-RE 9 months ago - 24 comments
#1367 - Handle resizing of embedding layers for AutoPeftModel
Pull Request - State: closed - Opened by pacman100 9 months ago - 1 comment
#1366 - deepspeed zero3 context is not invoked when creating new layers, so adapter layers are not wrapped by deepspeed
Issue - State: closed - Opened by haixpham 9 months ago - 7 comments
#1365 - Added missing getattr dunder methods for mixed model
Pull Request - State: closed - Opened by kovalexal 9 months ago - 2 comments
#1364 - Add new merging methods
Pull Request - State: closed - Opened by pacman100 9 months ago - 20 comments
#1363 - Error while fetching adapter layer from huggingface library
Issue - State: closed - Opened by Muskanb 9 months ago - 2 comments
#1362 - Potentially a bug? Dimension mismatch when using adaption_prompt in Llama2
Issue - State: closed - Opened by LEE94-1 9 months ago - 8 comments
#1361 - NotImplementedError: Requested bias: None, is not implemented.
Issue - State: closed - Opened by KaifAhmad1 9 months ago
#1360 - RuntimeError: Caught RuntimeError in replica 0 on device 0. on distribution training
Issue - State: closed - Opened by Aisuko 9 months ago - 2 comments
#1359 - Remove unnecessary code `len(unique_adapters)`
Pull Request - State: closed - Opened by TaeWoo21 9 months ago - 1 comment
#1358 - AttributeError: 'LoraModel' object has no attribute 'prepare_inputs_for_generation'
Issue - State: closed - Opened by belegubesa 9 months ago - 6 comments
#1357 - Improve documentation for the `all-linear` flag
Pull Request - State: closed - Opened by SumanthRH 9 months ago - 1 comment
#1356 - [docs] Docstring link
Pull Request - State: closed - Opened by stevhliu 9 months ago - 1 comment
#1355 - FIX: Make merging of adapter weights idempotent
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1354 - DOC Add PeftMixedModel to API docs
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1353 - [Docs] make add_weighted_adapter example clear in the docs.
Pull Request - State: closed - Opened by sayakpaul 9 months ago - 1 comment
#1352 - fix `prepare_inputs_for_generation` logic for Prompt Learning methods
Pull Request - State: closed - Opened by pacman100 9 months ago - 5 comments
#1351 - ImportError: cannot import name 'is_g2p_en_available' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py)
Issue - State: closed - Opened by kli017 9 months ago - 2 comments
#1349 - 'PeftModelForCausalLM' object has no attribute 'base_model' for a deepspeed stage 3 model
Issue - State: closed - Opened by olegsinavski 9 months ago - 3 comments
#1348 - New transformers caching ETA now v4.38
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1347 - FIX Setting active adapter for quantized layers
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 2 comments
#1346 - Zero Trainable parameters when using get_peft_model() and a custom adapter name
Issue - State: closed - Opened by psych0v0yager 9 months ago - 1 comment
#1345 - the generation results contain some null samples
Issue - State: closed - Opened by FayeXXX 9 months ago - 1 comment
#1344 - LoftQ: update example files and README
Pull Request - State: closed - Opened by yxli2123 9 months ago - 4 comments
#1343 - DOC: Update docstring for the config classes
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1342 - Training a 4-bit quantized model directly without Peft when model is loaded with torch_dtype=torch.bfloat16
Issue - State: closed - Opened by alexcpn 9 months ago - 5 comments
#1341 - Doc about AdaLoraModel.update_and_allocate
Pull Request - State: closed - Opened by kuronekosaiko 9 months ago - 2 comments
#1340 - Training a PeftModel loaded with `from_pretrained` causes a grad-related RuntimeError
Issue - State: closed - Opened by EricLBuehler 9 months ago - 2 comments
#1339 - MistralForCasualLM object has no attribute merge_and_upload
Issue - State: closed - Opened by giux78 9 months ago - 2 comments
#1338 - fix some args desc
Pull Request - State: closed - Opened by zspo 9 months ago - 1 comment
#1337 - There is no direct support for loading those old adapters. One thing you could do which may work: Download the repo to a local folder, edit the `adapter_config.json` and remove the line `"enable_lora": null,`, then try to load the model from your local folder instead of from HF. If that works, you can create your own HF repo and upload the "fixed" version there for the future.
Issue - State: closed - Opened by Clement-Lelievre 9 months ago - 7 comments
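The workaround quoted in the #1337 entry above can be scripted. A minimal sketch using only the standard library, with a temporary folder standing in for the locally downloaded adapter repo (the folder path and config contents here are illustrative, not from the issue):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the folder where the adapter repo was downloaded.
folder = Path(tempfile.mkdtemp())
config_path = folder / "adapter_config.json"

# Example of an old-style config carrying the stale key.
config_path.write_text(json.dumps({
    "peft_type": "LORA",
    "r": 8,
    "enable_lora": None,
}))

# Drop the key that newer PEFT versions no longer accept, then rewrite the file.
config = json.loads(config_path.read_text())
config.pop("enable_lora", None)
config_path.write_text(json.dumps(config, indent=2))
```

After rewriting the config, the adapter can be loaded from the local folder instead of the Hub, as the quoted comment suggests.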
#1336 - DOC Troubleshooting for unscaling error with fp16
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 5 comments
#1335 - DOC Extending the vocab and storing embeddings
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 2 comments
#1334 - when we use inject_adapter_in_model method to inject the adapters directly into a PyTorch model, how to merge the Lora weight with the base model in the inference stage?
Issue - State: closed - Opened by mikiyukio 9 months ago - 2 comments
#1333 - Fix bug when load the prompt tuning in inference.
Pull Request - State: closed - Opened by yileld 9 months ago - 1 comment
#1332 - [docs] Task guides
Pull Request - State: closed - Opened by stevhliu 9 months ago - 2 comments
#1331 - Setting LORA parameters trainable= False
Issue - State: closed - Opened by MahavirDabas18 9 months ago - 5 comments
#1330 - ENH: Add attribute to show targeted module names
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1329 - Lora converted with convert_peft_sd_lora_to_kohya_ss is unusable
Issue - State: closed - Opened by Dentoty 9 months ago - 5 comments
#1328 - PeftModel failing to load after finetuning. Size Mismatch Error
Issue - State: closed - Opened by tanmaylaud 9 months ago - 6 comments
#1327 - AdaLora rank_pattern is None, Docs and Warnings and Integrations should be improved up on.
Issue - State: closed - Opened by kuronekosaiko 9 months ago - 6 comments
#1326 - Adding BOFT: Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization
Pull Request - State: closed - Opened by yfeng95 9 months ago - 43 comments
#1325 - Add PeftModel.generate
Pull Request - State: closed - Opened by EricLBuehler 9 months ago
#1324 - [WIP] Add LoRA multihead attention module
Pull Request - State: open - Opened by BenjaminBossan 9 months ago - 25 comments
#1323 - _IncompatibleKeys returned from PeftModel.load_adapter
Issue - State: closed - Opened by EricLBuehler 9 months ago - 17 comments
#1322 - The question about LoRA structure after load from peft.
Issue - State: closed - Opened by fxb392 9 months ago - 1 comment
#1321 - Can we load different lora in the same time?
Issue - State: closed - Opened by fxb392 9 months ago
#1320 - FIX Use torch.long instead of torch.int in LoftQ for PyTorch versions <2.x
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1319 - Refactor dispatching logic of LoRA layers
Pull Request - State: closed - Opened by BenjaminBossan 9 months ago - 1 comment
#1318 - QOL improvements and doc updates
Pull Request - State: closed - Opened by pacman100 9 months ago - 1 comment
#1317 - fix diffusers tests
Pull Request - State: closed - Opened by pacman100 9 months ago - 1 comment
#1316 - Mistral IA3 config defaults
Pull Request - State: closed - Opened by pacman100 9 months ago - 1 comment
#1314 - fix the embedding saving for adaption prompt
Pull Request - State: closed - Opened by pacman100 9 months ago - 2 comments
#1313 - loaded state dict has a different number of parameter groups when optimizer.load_state_dict(saved_lora/optimizer.pt)
Issue - State: closed - Opened by kuronekosaiko 9 months ago - 1 comment
#1312 - Clone for 8bit
Pull Request - State: closed - Opened by athatheo 9 months ago - 2 comments
#1310 - What is the best way for the inference process in LORA in PEFT approach
Issue - State: closed - Opened by pradeepdev-1995 9 months ago - 5 comments
#1309 - AdaLora SVDLayer dtype error in inference
Issue - State: closed - Opened by yileld 9 months ago - 2 comments
#1308 - How to check the gradients of lora layers when training a peft model
Issue - State: closed - Opened by stardusts-hj 9 months ago - 2 comments
#1307 - An error occurs when using LoftQ. IndexError
Issue - State: closed - Opened by qwopqwop200 9 months ago - 8 comments
#1306 - Does peft support to save and load lora weights for a customed model
Issue - State: closed - Opened by stardusts-hj 9 months ago - 5 comments
#1305 - [BNB] fix dockerfile for single gpu
Pull Request - State: closed - Opened by SunMarc 9 months ago - 1 comment
#1304 - Lora resume training from checkpoint error
Issue - State: closed - Opened by nuoma 9 months ago - 2 comments
#1303 - training a multi-adapter model
Issue - State: closed - Opened by denizyuret 9 months ago - 12 comments
#1302 - fix fsdp auto wrap policy
Pull Request - State: closed - Opened by pacman100 9 months ago - 1 comment
#1301 - Add LayerNorm tuning model
Pull Request - State: closed - Opened by DTennant 9 months ago - 45 comments
#1300 - Regarding bugs generated by tasks that require the addition of custom tokens.
Issue - State: closed - Opened by SatireY 9 months ago - 4 comments
#1299 - IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT
Issue - State: closed - Opened by MahavirDabas18 9 months ago - 5 comments
#1298 - [Question] What is the main difference between "modules_to_save" and "target_modules"?
Issue - State: closed - Opened by SatireY 9 months ago - 5 comments
#1297 - Add cpoly
Pull Request - State: closed - Opened by TaoSunVoyage 9 months ago - 1 comment
#1296 - using LORA with multi-gpu
Issue - State: closed - Opened by wxyzjp123 9 months ago - 4 comments
#1295 - Add an option 'ALL' to include all linear layers as target modules
Pull Request - State: closed - Opened by SumanthRH 9 months ago - 5 comments
#1294 - Fix bnb lora layers not setting active adapter
Pull Request - State: closed - Opened by tdrussell 9 months ago - 2 comments
#1293 - support 2bit quip# method
Issue - State: closed - Opened by Minami-su 9 months ago - 25 comments
Labels: PRs welcome to address this
#1292 - Selecting LoRA layers from PeftModel
Issue - State: closed - Opened by PCanavelli 9 months ago - 2 comments
#1291 - [BNB] Fix bnb dockerfile for latest version
Pull Request - State: closed - Opened by SunMarc 9 months ago - 1 comment
#1290 - DOC: Improve target modules description
Pull Request - State: closed - Opened by BenjaminBossan 10 months ago - 1 comment
#1289 - What's the default setting for "target_modules" for TaskType.CAUSAL_LM (LoraConfig)
Issue - State: closed - Opened by suchunxie 10 months ago - 4 comments
#1288 - Cleanup the README
Pull Request - State: closed - Opened by BlGene 10 months ago - 2 comments
#1287 - [`bnb-nightly`] Address final comments
Pull Request - State: closed - Opened by younesbelkada 10 months ago - 2 comments
#1286 - Experiment fast lora train
Pull Request - State: closed - Opened by BenjaminBossan 10 months ago - 2 comments
#1285 - Only use trainable params in prompt tuning optimizer
Pull Request - State: closed - Opened by Ben-Epstein 10 months ago - 4 comments
#1284 - Prompt Tuning documentation trains all parameters
Issue - State: closed - Opened by Ben-Epstein 10 months ago - 7 comments
#1283 - Can not import in python file.
Issue - State: closed - Opened by aaghiijnnuz 10 months ago - 1 comment
#1282 - [`bnb`] Add bnb nightly workflow
Pull Request - State: closed - Opened by younesbelkada 10 months ago - 5 comments
#1281 - Fixed several errors in StableDiffusion adapter conversion script
Pull Request - State: closed - Opened by kovalexal 10 months ago - 1 comment
#1280 - How can we enable continuous learning with the LLM model ?
Issue - State: closed - Opened by TapendraBaduwal 10 months ago - 8 comments
#1279 - TST: Enable LoftQ 8bit tests
Pull Request - State: closed - Opened by BenjaminBossan 10 months ago - 1 comment
#1278 - How to add trainable parameters? (bugs in 'modules_to_save')
Issue - State: closed - Opened by shawnricecake 10 months ago - 5 comments
#1277 - Mistral: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:7! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
Issue - State: closed - Opened by amritgupta98 10 months ago - 11 comments
#1276 - LoftQ: edit README.md and example files
Pull Request - State: closed - Opened by yxli2123 10 months ago - 1 comment
#1275 - [`Tests`] Add bitsandbytes installed from source on new docker images
Pull Request - State: closed - Opened by younesbelkada 10 months ago - 3 comments
#1274 - TST: Extend LoftQ tests to check CPU initialization
Pull Request - State: closed - Opened by BenjaminBossan 10 months ago - 1 comment
#1272 - Adding LoRA like additional trainable parameters for nn.Parameter() or nn.Embedding()
Issue - State: closed - Opened by lawrence-cj 10 months ago - 8 comments
#1271 - remove a duplicated description in peft BaseTuner
Pull Request - State: closed - Opened by butyuhao 10 months ago - 1 comment