Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / huggingface/peft issues and pull requests
#1586 - Error in LoraModel docstring
Issue - State: closed - Opened by brynhayder 6 months ago - 3 comments

#1584 - [feat] Add `lru_cache` to `import_utils` calls that did not previously have it
Pull Request - State: closed - Opened by tisles 6 months ago - 3 comments

#1583 - cannot import name 'prepare_model_for_int8_training' from 'peft'
Issue - State: closed - Opened by Eyict 6 months ago - 8 comments

#1582 - FIX / Docs: Fix doc link for layer replication
Pull Request - State: closed - Opened by younesbelkada 6 months ago - 1 comment

#1581 - FIX Minor issues in docs, re-raising exception
Pull Request - State: closed - Opened by BenjaminBossan 6 months ago - 1 comment

#1580 - PeftModel is_trainable=True causes generate output to be garbage.
Issue - State: closed - Opened by o1lo01ol1o 6 months ago - 6 comments

#1579 - error merge_and_unload for adapter with a prefix
Issue - State: closed - Opened by afalf 6 months ago - 23 comments

#1578 - Bump version to 0.10.1.dev0
Pull Request - State: closed - Opened by BenjaminBossan 6 months ago - 1 comment

#1577 - element 0 of tensors does not require grad and does not have a grad_fn
Issue - State: closed - Opened by mxjyst 6 months ago - 4 comments

#1576 - Loading LORA weights in `diffusers` with a `peft` backend increases in latency as more paths are added to `PYTHONPATH`
Issue - State: closed - Opened by tisles 6 months ago - 4 comments
#1575 - MPS: Cannot add LoRA to Unet (LoftQ)
Issue - State: closed - Opened by bghira 7 months ago - 6 comments

#1574 - 'set_adapter()' throws "ValueError: Adapter not found in odict_keys" after 'load_adapter()'
Issue - State: closed - Opened by YnezT0311 7 months ago - 11 comments

#1573 - Release: v0.10.0
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 2 comments

#1572 - PP or TP supported for multi-node training?
Issue - State: closed - Opened by mxjyst 7 months ago - 3 comments

#1571 - Custom Training for multiple LoRAs for the same model.
Issue - State: closed - Opened by bellos1203 7 months ago - 4 comments

#1569 - Error saving LoRaModel with Wav2vec2forCTC basemodel
Issue - State: closed - Opened by geoffvdr 7 months ago - 7 comments

#1568 - When peft>=0.7.0, fine-tuning ChatGLM3-6B causes the model to become dumb with a loss of 0
Issue - State: closed - Opened by Tangent-90C 7 months ago - 13 comments

#1567 - Base Model Revision
Issue - State: closed - Opened by mnoukhov 7 months ago - 8 comments

#1566 - cannot load int8/4 model with deepspeed zero3
Issue - State: closed - Opened by mxjyst 7 months ago - 5 comments

#1565 - Update style with ruff 0.2.2
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment
#1564 - Adds Vera (Vector Based Random Matrix Adaption) #2
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 11 comments

#1563 - Add a new (not so new and it is typical) fine-tuning method called VPT
Issue - State: closed - Opened by 2catycm 7 months ago - 3 comments

#1562 - How do I easily inherit and register the new method?
Issue - State: closed - Opened by mrwu-mac 7 months ago - 2 comments

#1561 - error using load_lora_weights with dora
Issue - State: open - Opened by SlZeroth 7 months ago - 11 comments

#1560 - Add a new fine-tuning method called Conv-Lora
Issue - State: closed - Opened by 2catycm 7 months ago - 2 comments

#1558 - FEAT Mixing different LoRA adapters in same batch
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 2 comments

#1556 - TST Report slowest tests
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 2 comments

#1555 - How to properly set the parameters of add_weighted_adapter()
Issue - State: open - Opened by zhengzehao123 7 months ago - 1 comment

#1554 - size mismatch of embedding layer without adding any token
Issue - State: closed - Opened by rangehow 7 months ago - 1 comment

#1553 - adding lora adapter to embedding layer while using bitsandbytes and mixed precision training gives "RuntimeError: a leaf Variable that requires grad is being used in an in-place operation."
Issue - State: closed - Opened by cai-rishabh 7 months ago - 7 comments
#1552 - MNT: Use BitsAndBytesConfig as load_in_* is deprecated
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 2 comments

#1551 - FIX: Make adaptation prompt CI happy for transformers 4.39.0
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 1 comment

#1550 - Changes to support fsdp+qlora and dsz3+qlora
Pull Request - State: closed - Opened by pacman100 7 months ago - 2 comments

#1549 - adapter_config.json is saved as "loftq_config": { "loftq_bits": "4bit", ... }
Issue - State: closed - Opened by lottopotato 7 months ago - 1 comment

#1548 - Update prompt_based_methods.md
Pull Request - State: closed - Opened by insist93 7 months ago - 1 comment

#1545 - How to use lora finetune moe model
Issue - State: closed - Opened by Minami-su 7 months ago - 2 comments

#1544 - Fail to use multi-GPUs with peft model
Issue - State: closed - Opened by TheoDpPro 7 months ago - 3 comments

#1543 - More convenient way to initialize LoftQ
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 2 comments

#1542 - Fixed minor grammatical and code bugs
Pull Request - State: closed - Opened by gremlin97 7 months ago - 1 comment

#1541 - CUSTOM_TOKEN Tuner
Pull Request - State: closed - Opened by marcusinthesky 7 months ago - 4 comments
#1540 - FIX Allow AdaLoRA rank to be 0
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1539 - load adalora weights error in resize_modules_by_rank_pattern;r=0
Issue - State: closed - Opened by AEProgrammer 7 months ago - 4 comments

#1538 - [Feature request] Support LoftQ with CPU offloading
Issue - State: closed - Opened by peterjc123 7 months ago - 2 comments

#1537 - Remove future annotation for LoraConfig to fix compatibility with `HfArgumentParser`
Pull Request - State: closed - Opened by DengYiping 7 months ago - 4 comments

#1536 - AQLM with LORA too slow
Issue - State: closed - Opened by DavidAkinpelu 7 months ago - 6 comments

#1535 - FIX [`CI`] Fix test docker CI
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 4 comments

#1534 - CI: temporary disable workflow
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 1 comment

#1533 - Chore: rework the workflow for testing docker image builds.
Pull Request - State: closed - Opened by sayakpaul 7 months ago - 2 comments

#1532 - Fix LoftQ docs and tests
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 3 comments

#1531 - docs: highlight difference between num_parameters() and get_nb_trainable_parameters() in PEFT
Pull Request - State: closed - Opened by kmehant 7 months ago - 1 comment
#1530 - Expose bias attribute on tuner layers
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 3 comments

#1529 - FIX [`Docs`/ `bnb` / `DeepSpeed`] Add clarification on bnb + PEFT + DS compatibilities
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 1 comment

#1528 - How do I unfreeze the base model parameters while fine-tuning?
Issue - State: closed - Opened by estuday 7 months ago

#1527 - Optimize levenshtein_distance algorithm in peft_lora_seq2seq_accelera…
Pull Request - State: closed - Opened by SUNGOD3 7 months ago - 9 comments

#1526 - Inconsistency between get_nb_trainable_parameters and num_parameters(only_trainable=True) for prompt tuning
Issue - State: closed - Opened by kmehant 7 months ago - 9 comments

#1525 - LoftQ does not seem to quantify the base model
Issue - State: closed - Opened by Mr-KenLee 7 months ago - 4 comments

#1524 - `lora.Linear.bias` should point to `lora.Linear.base_layer.bias` as the `lora.Linear.weight` does
Issue - State: closed - Opened by ArthurZucker 7 months ago - 1 comment

#1523 - Possible to build a LoRA that doesn't inject into the transformer?
Issue - State: closed - Opened by AngledLuffa 7 months ago - 37 comments

#1522 - Different versions seem to have an impact on the results
Issue - State: closed - Opened by passby111 7 months ago - 24 comments

#1521 - Use peft_doc as notebook path as doc-builder expects <package>_doc for the Google Colab and AWS Studio links
Pull Request - State: closed - Opened by DriesVerachtert 7 months ago - 5 comments
#1520 - integrate ResLoRA
Issue - State: closed - Opened by hllj 7 months ago - 2 comments

#1519 - fix: fail when required args not passed when prompt_tuning_init==TEXT
Pull Request - State: closed - Opened by kmehant 7 months ago - 7 comments

#1518 - QDoRA: Support DoRA with BnB quantization
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 3 comments

#1517 - Bump version to 0.9.1.dev0
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1516 - Feat: add support for Conv2D DoRA
Pull Request - State: closed - Opened by sayakpaul 7 months ago - 8 comments

#1515 - QLoRA bf16 + model.generate() in TrainerCallback: "RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16"
Issue - State: closed - Opened by geronimi73 7 months ago - 2 comments

#1514 - Tinyllama with Lora consumes more memory than full-finetuning
Issue - State: closed - Opened by rohitgr7 7 months ago - 23 comments

#1513 - Some problems arise when finetune large language models
Issue - State: open - Opened by zhengzehao123 7 months ago - 5 comments

#1512 - Bump versions for release 0.9.0
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1511 - TypeError: LoraConfig.__init__() got an unexpected keyword argument 'use_original_init'
Issue - State: closed - Opened by moghadas76 7 months ago - 4 comments
#1510 - Implementing GLora
Issue - State: closed - Opened by aravindhv10 7 months ago - 3 comments

#1509 - Add lora+ implentation
Pull Request - State: closed - Opened by moghadas76 7 months ago - 40 comments

#1508 - AttributeError: 'PromptEncoder' object has no attribute 'mlp_head'
Issue - State: closed - Opened by linguoqi 7 months ago - 1 comment

#1507 - StackLlaMa 2 dpo train with deepspeed oom
Issue - State: closed - Opened by fancyerii 7 months ago - 3 comments

#1506 - Example Notebook: Semantic Segmentation : Metrics show extremely low accuracies, and NaN results, RuntimeError in divide
Issue - State: closed - Opened by cleong110 7 months ago - 5 comments

#1505 - FIX Safe merging with LoHa and LoKr
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1504 - Feature Request: Integrate Lora+/different learning rates for adapter matrices A and B
Issue - State: closed - Opened by cleong110 7 months ago - 22 comments

#1503 - ENH: [`Docker`] Notify us when docker build pass or fail
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 4 comments

#1502 - FIX Bug in prompt learning after disabling adapter
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1501 - PeftModel.disable_adapter bug
Issue - State: closed - Opened by yanadranker 7 months ago - 1 comment
#1500 - I have a llama2-7b model and a checkpoint fine-tuned using p-tuning. How do I load the base model and checkpoint using PEFT for inference?
Issue - State: closed - Opened by linguoqi 7 months ago - 6 comments

#1499 - Add default LoRA and IA3 target modules for Gemma
Pull Request - State: closed - Opened by arnavgarg1 7 months ago - 1 comment

#1498 - make `all-linear` as default for `target_modules`
Pull Request - State: closed - Opened by pacman100 7 months ago - 3 comments

#1497 - Cannot load adapters from Peft.from_pretrained
Issue - State: closed - Opened by dame-cell 7 months ago

#1496 - Raise error on wrong type for to modules_to_save
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1495 - covert SVDLinear dtype
Pull Request - State: closed - Opened by PHOSPHENES8 7 months ago - 1 comment

#1494 - Update peft_bnb_whisper_large_v2_training.ipynb: Fix a typo
Pull Request - State: closed - Opened by martin0258 7 months ago - 2 comments

#1493 - FIX: [`CI` / `Adaptation Prompt`] Fix CI on transformers main
Pull Request - State: closed - Opened by younesbelkada 7 months ago - 1 comment

#1492 - ModulesToSaveWrapper not working with ModulesDict dictionary methods
Issue - State: closed - Opened by SamGalanakis 7 months ago - 9 comments

#1491 - Integrate X-LoRA
Pull Request - State: closed - Opened by EricLBuehler 7 months ago - 69 comments
#1490 - Fix issue with unloading double wrapped modules
Pull Request - State: closed - Opened by BenjaminBossan 7 months ago - 1 comment

#1489 - add example and update deepspeed/FSDP docs
Pull Request - State: closed - Opened by pacman100 7 months ago - 3 comments

#1488 - HF Pipeline support for PeftModelForSequenceClassification
Issue - State: closed - Opened by ddofer 7 months ago - 7 comments

#1487 - FIX [`CI` / `Docker`] Follow up from #1481
Pull Request - State: closed - Opened by younesbelkada 8 months ago - 2 comments

#1486 - Save label information for seq/token classification
Issue - State: closed - Opened by nbroad1881 8 months ago - 2 comments

#1485 - all-linear + classification models have double-wrapped linear layers
Issue - State: closed - Opened by nbroad1881 8 months ago - 1 comment

#1484 - FIX [`PromptTuning`] Simple fix for transformers >= 4.38
Pull Request - State: closed - Opened by younesbelkada 8 months ago - 1 comment

#1483 - IA3 with decoder-only LLMs containing "query_key_value" parameters
Issue - State: closed - Opened by ospanbatyr 8 months ago - 2 comments

#1482 - ENH [`CI`] Run tests only when relevant files are modified
Pull Request - State: closed - Opened by younesbelkada 8 months ago - 1 comment

#1481 - ENH: [`CI` / `Docker`]: Create a workflow to temporarly build docker images in case dockerfiles are modified
Pull Request - State: closed - Opened by younesbelkada 8 months ago - 4 comments
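A plain-text listing like the one above can be turned into structured records for further analysis (e.g. counting open issues or grouping by author). The following is a minimal sketch, not part of the ecosyste.ms API: the `Entry` dataclass, the function name `parse_listing`, and the regexes are all illustrative choices based only on the entry format visible in this page.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entry:
    number: int
    title: str
    kind: Optional[str] = None    # "Issue" or "Pull Request"
    state: Optional[str] = None   # "open" or "closed"
    author: Optional[str] = None
    comments: int = 0             # entries with no comments line stay at 0

# A new record starts at each "#NNNN - title" line.
TITLE_RE = re.compile(r"^#(\d+) - (.+)$")
# Metadata may be on one line or wrapped across several.
META_RE = re.compile(r"State: (open|closed) - Opened by (\S+)")
COMMENTS_RE = re.compile(r"(\d+) comments?")

def parse_listing(text: str) -> list[Entry]:
    """Scan the listing line by line, attaching metadata to the most recent entry."""
    entries: list[Entry] = []
    for line in text.splitlines():
        m = TITLE_RE.match(line.strip())
        if m:
            entries.append(Entry(number=int(m.group(1)), title=m.group(2)))
            continue
        if not entries:
            continue  # skip header lines before the first entry
        cur = entries[-1]
        if "Pull Request" in line:
            cur.kind = "Pull Request"
        elif line.strip().startswith("Issue"):
            cur.kind = "Issue"
        m = META_RE.search(line)
        if m:
            cur.state, cur.author = m.group(1), m.group(2)
        m = COMMENTS_RE.search(line)
        if m:
            cur.comments = int(m.group(1))
    return entries
```

The parser is deliberately tolerant: it accepts both single-line metadata ("Issue - State: closed - Opened by … - 3 comments") and the wrapped fragments seen in raw scrapes, since each regex only needs to fire on whichever line carries its field.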