Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / ericlbuehler/candle-lora - issues and pull requests
#23 - Flux Model
Issue - State: open - Opened by super-fun-surf about 2 months ago - 1 comment

#22 - Fine-tuning Llama guide/example?
Issue - State: closed - Opened by Tameflame 2 months ago - 1 comment

#21 - Bert model doesn't seem to instantiate with lora weights
Issue - State: open - Opened by jcrist1 5 months ago - 2 comments

#20 - Bump candle version to 0.6.0
Pull Request - State: closed - Opened by EricLBuehler 6 months ago

#19 - Is config.rank <= 0 ever true?
Issue - State: open - Opened by def-roth 6 months ago - 1 comment

#18 - Installation does not work with newer version of candle (0.6.0)
Issue - State: closed - Opened by czonios 7 months ago

#17 - Could we have a written walkthrough of finetuning llama/mistral with this?
Issue - State: open - Opened by Tameflame 9 months ago - 4 comments

#16 - Bump candle version
Pull Request - State: closed - Opened by EricLBuehler 10 months ago

#15 - Updated candle-core, candle-nn [0.5.0] release breaks installation of candle-lora and candle-lora-macro dependencies
Issue - State: closed - Opened by Andycharalambous 10 months ago

#14 - In Llama model, only the embedding layer is converted to lora layer.
Issue - State: open - Opened by Adamska1008 10 months ago - 5 comments

#13 - Support saving LoRA layers
Pull Request - State: closed - Opened by EricLBuehler 10 months ago
Labels: candle-lora-macro, candle-lora

#12 - Is there any way to save lora-converted model?
Issue - State: closed - Opened by Adamska1008 10 months ago - 5 comments
Labels: candle-lora

#11 - update candle to 0.4.1
Pull Request - State: closed - Opened by getong 11 months ago
Labels: candle-lora-macro, candle-lora, candle-lora-transformers
#10 - How to use candle_lora model with rust axum web server
Issue - State: closed - Opened by arthasyou 11 months ago - 4 comments
Labels: candle-lora-macro
#9 - Can lora also be implemented with stable diffusion?
Issue - State: closed - Opened by staru09 11 months ago - 9 comments

#8 - error[E0277]: expected a `Fn<(&candle_core::Tensor,)>` closure, found `BatchNorm`
Issue - State: closed - Opened by EricLBuehler about 1 year ago
Labels: bug, help wanted, candle-lora-transformers

#7 - any example for llama_lora training
Issue - State: closed - Opened by arthasyou about 1 year ago - 2 comments

#6 - Model Merging
Issue - State: closed - Opened by okpatil4u about 1 year ago - 6 comments

#5 - replace_layer_fields and AutoLoraConvert not working as expected
Issue - State: closed - Opened by edwin0cheng about 1 year ago - 1 comment
Labels: bug

#4 - Add more LoRA transformers
Issue - State: closed - Opened by EricLBuehler over 1 year ago
Labels: enhancement, good first issue

#3 - QA-LoRA Implementation and Review
Issue - State: closed - Opened by okpatil4u over 1 year ago - 3 comments

#2 - Examples for Llama model architecture
Issue - State: closed - Opened by okpatil4u over 1 year ago - 5 comments

#1 - Question: Could we use the same mechanism for Quantization?
Issue - State: closed - Opened by LLukas22 over 1 year ago - 4 comments
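
A recurring theme across these entries (#11, #15, #16, #18, #20) is keeping candle-lora's candle dependency in lockstep with the candle-core/candle-nn versions an application uses. A minimal Cargo.toml sketch of one way to do that, assuming the 0.6.0 version referenced in #18 and #20 and the git-dependency form; the crate names and repository URL are taken from this listing, but verify versions against the repository's current Cargo.toml before relying on them:

```toml
[package]
name = "candle-lora-demo"
version = "0.1.0"
edition = "2021"

[dependencies]
# candle-core and candle-nn must match the version candle-lora was built
# against; mismatches produce the build failures reported in #15 and #18.
candle-core = "0.6.0"
candle-nn = "0.6.0"
# Pulling candle-lora from git picks up version-bump PRs such as #16 and #20
# as soon as they are merged, rather than waiting for a crates.io release.
candle-lora = { git = "https://github.com/EricLBuehler/candle-lora.git" }
candle-lora-macro = { git = "https://github.com/EricLBuehler/candle-lora.git" }
```

Pinning all three candle-related dependencies to a single source avoids mixing two copies of candle-core in one build, which is the usual cause of "expected X, found X" trait errors like the one in #8.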