Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / lxe/simple-llm-finetuner issues and pull requests
#63 - How to work with vLLM such as LLAVA
Issue - State: open - Opened by dinhquangsonimip 6 months ago
#62 - will this work with quantized GGUF files?
Issue - State: open - Opened by sengiv about 1 year ago
#61 - [Request] Retrain adapter from checkpoint?
Issue - State: open - Opened by zaanind about 1 year ago
#60 - About llama-2-70B fine-tuning
Issue - State: open - Opened by RickMeow about 1 year ago
#59 - Getting the repo id error from the web interface
Issue - State: open - Opened by matt7salomon over 1 year ago
#58 - AMD GPU compability or CPU
Issue - State: open - Opened by LeLaboDuGame over 1 year ago
#57 - M1/M2 Metal support?
Issue - State: open - Opened by itsPreto over 1 year ago
#56 - [Request] Mac ARM support
Issue - State: open - Opened by voidcenter over 1 year ago
#55 - [Request] QLoRA support
Issue - State: open - Opened by CoolOppo over 1 year ago
#54 - RuntimeError: unscale_() has already been called on this optimizer since the last update().
Issue - State: closed - Opened by junxu-ai over 1 year ago - 3 comments
#53 - In trainer.py, ignore the last token is not suitable for all situations.
Issue - State: open - Opened by HCTsai over 1 year ago
#52 - RuntimeError: expected scalar type Half but found Float
Issue - State: open - Opened by jasperan over 1 year ago - 3 comments
#51 - Multi GPU running
Issue - State: open - Opened by Shashika007 over 1 year ago
#48 - Allow specifying custom hostname and port
Pull Request - State: closed - Opened by recursionbane over 1 year ago - 1 comment
#47 - Performance after FineTuning
Issue - State: open - Opened by Datta0 over 1 year ago - 3 comments
#46 - Getting OOM
Issue - State: open - Opened by alior101 over 1 year ago - 2 comments
#45 - Resolve Error: Adapter lora/{MODE_NAME}-{ADAPTER_NAME} not found.
Pull Request - State: closed - Opened by 64-bit over 1 year ago
#44 - Error: Adapter lora/decapoda-research_llama-{ADAPTER_NAME} not found.
Issue - State: closed - Opened by 64-bit over 1 year ago
#43 - Progressive output and cancel button for 'Inference' tab
Pull Request - State: open - Opened by 64-bit over 1 year ago - 2 comments
#42 - Issue in train in colab
Issue - State: open - Opened by fermions75 over 1 year ago - 7 comments - Labels: colab
#41 - How do I merge trained Lora an Llama7b weight?
Issue - State: open - Opened by Gitterman69 over 1 year ago - 2 comments
#40 - "The tokenizer class you load from this checkpoint is 'LLaMATokenizer'."
Issue - State: closed - Opened by Gitterman69 over 1 year ago - 3 comments
#39 - Verbose function to find out what leads to crash during training?
Issue - State: closed - Opened by Gitterman69 over 1 year ago - 1 comment
#38 - How should I prepare the dataset for generative question answering on the private documents?
Issue - State: open - Opened by AayushSameerShah over 1 year ago - 50 comments
#37 - Full rework: Version 2 release
Pull Request - State: closed - Opened by lxe over 1 year ago
#36 - AttributeError: type object 'Dataset' has no attribute 'from_list'
Issue - State: open - Opened by Datta0 over 1 year ago - 3 comments
#35 - add cpu support
Pull Request - State: open - Opened by swap357 over 1 year ago - 3 comments
#34 - add cpu training support using main-cpu.py
Pull Request - State: closed - Opened by swap357 over 1 year ago
#33 - How to use CPU instead of GPU
Issue - State: open - Opened by Shreyas-ITB over 1 year ago - 2 comments
#32 - Suggestion to improve UX
Issue - State: open - Opened by ch3rn0v over 1 year ago - 3 comments
#30 - how to finetune with 'system information'
Issue - State: open - Opened by mhyeonsoo over 1 year ago - 1 comment
#29 - "error" in training - AttributeError: 'CastOutputToFloat' object has no attribute 'weight', RuntimeError: Only Tensors of floating point and complex dtype can require gradients
Issue - State: open - Opened by GreenTeaBD over 1 year ago - 5 comments - Labels: bug
#28 - Attempting to use 13B in the simple tuner -
Issue - State: open - Opened by Atlas3DSS over 1 year ago - 2 comments - Labels: bug
#27 - How the finetuning output looks like?
Issue - State: closed - Opened by mhyeonsoo over 1 year ago - 1 comment
#26 - Not a problem - but like people should know
Issue - State: open - Opened by Atlas3DSS over 1 year ago - 2 comments - Labels: documentation
#25 - Error during Training RuntimeError: mat1 and mat2 shapes cannot be multiplied (511x2 and 3x4096)
Issue - State: open - Opened by kasakh over 1 year ago - 2 comments - Labels: bug
#24 - Training using long stories instead of question/response
Issue - State: open - Opened by leszekhanusz over 1 year ago - 3 comments - Labels: question
#23 - Question: Native windows support
Issue - State: closed - Opened by Paillat-dev over 1 year ago - 3 comments
#22 - `LLaMATokenizer` vs `LlamaTokenizer` class names
Issue - State: open - Opened by vadi2 over 1 year ago - 5 comments - Labels: question
#21 - Clarify that 16GB VRAM in itself is enough
Pull Request - State: closed - Opened by vadi2 over 1 year ago
#20 - question: could the model trained be used for alpaca.cpp?
Issue - State: open - Opened by goog over 1 year ago - 2 comments - Labels: question
#19 - Host on Hugging Face Spaces
Issue - State: closed - Opened by osanseviero over 1 year ago - 4 comments - Labels: enhancement
#18 - Add A cli version
Pull Request - State: open - Opened by HackerAIOfficial over 1 year ago - 2 comments
#17 - Slow generation speed: around 10 minutes / loading forever on rtx3090 with 64gb ram....
Issue - State: open - Opened by Gitterman69 over 1 year ago - 3 comments - Labels: bug
#16 - How can I use the finetuned model with text-generation-webui or KoboldAI?
Issue - State: open - Opened by Gitterman69 over 1 year ago - 4 comments - Labels: question
#15 - Finetuning in unsupported language
Issue - State: open - Opened by jumasheff over 1 year ago - 2 comments - Labels: question
#14 - Set allowed minimum on temperature
Pull Request - State: closed - Opened by vadi2 over 1 year ago - 1 comment
#13 - (WSL2) - No GPU / Cuda detected....
Issue - State: closed - Opened by Gitterman69 over 1 year ago - 6 comments
#12 - Inference works just once
Issue - State: closed - Opened by vadi2 over 1 year ago - 12 comments - Labels: bug
#11 - Examples to get started with
Issue - State: open - Opened by vadi2 over 1 year ago - 4 comments - Labels: enhancement
#10 - Inference doesn't work after training
Issue - State: closed - Opened by vadi2 over 1 year ago - 2 comments
#9 - Document Python 3.10 and conda create
Pull Request - State: closed - Opened by vadi2 over 1 year ago - 1 comment
#8 - Is CUDA 12.0 supported?
Issue - State: open - Opened by vadi2 over 1 year ago - 1 comment - Labels: question
#7 - Can Nivdia 3090 with 24G video memory support finetune?
Issue - State: open - Opened by pczzy over 1 year ago - 4 comments - Labels: question
#6 - Traceback during inference.
Issue - State: open - Opened by Hello1024 over 1 year ago - 8 comments - Labels: bug
#4 - Question: Is fine tuning suitable for factual answers from custom data, or is it better to use vector databases and use only the relevant chunk in the prompt for factual answers?
Issue - State: closed - Opened by petrbrzek over 1 year ago - 2 comments - Labels: question
#3 - Where are the downloaded ".bin" files for the llama model stored on the disk?
Issue - State: closed - Opened by ashishb over 1 year ago - 2 comments
#2 - Collecting info on memory requirements
Issue - State: open - Opened by jmiskovic over 1 year ago - 1 comment - Labels: question
#1 - Inference output text keeps running on...
Issue - State: open - Opened by lxe over 1 year ago - 1 comment - Labels: bug, question