Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / lxe/simple-llm-finetuner issues and pull requests
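The service above exposes issue and pull request metadata as JSON. Below is a minimal sketch of filtering such records client-side; the record shape (`number`, `state`, `pull_request` fields) is an assumption for illustration, not the documented Ecosyste.ms schema:

```python
# Hypothetical records mirroring the listing below; field names are assumed.
issues = [
    {"number": 63, "title": "How to work with vLLM such as LLAVA",
     "state": "open", "pull_request": False},
    {"number": 48, "title": "Allow specifying custom hostname and port",
     "state": "closed", "pull_request": True},
]

def open_issues(records):
    """Return records that are plain issues (not PRs) and still open."""
    return [r for r in records if r["state"] == "open" and not r["pull_request"]]

print([r["number"] for r in open_issues(issues)])  # → [63]
```

The same filter applies unchanged to a full API response, since each entry in the listing carries exactly this kind of state and type metadata.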

#63 - How to work with vLLM such as LLAVA

Issue - State: open - Opened by dinhquangsonimip 4 months ago

#62 - will this work with quantized GGUF files?

Issue - State: open - Opened by sengiv 10 months ago

#61 - [Request] Retrain adapter from checkpoint?

Issue - State: open - Opened by zaanind 11 months ago

#60 - About llama-2-70B fine-tuning

Issue - State: open - Opened by RickMeow about 1 year ago

#59 - Getting the repo id error from the web interface

Issue - State: open - Opened by matt7salomon about 1 year ago

#58 - AMD GPU compatibility or CPU

Issue - State: open - Opened by LeLaboDuGame about 1 year ago

#57 - M1/M2 Metal support?

Issue - State: open - Opened by itsPreto about 1 year ago

#56 - [Request] Mac ARM support

Issue - State: open - Opened by voidcenter about 1 year ago

#55 - [Request] QLoRA support

Issue - State: open - Opened by CoolOppo about 1 year ago

#52 - RuntimeError: expected scalar type Half but found Float

Issue - State: open - Opened by jasperan over 1 year ago - 3 comments

#51 - Multi GPU running

Issue - State: open - Opened by Shashika007 over 1 year ago

#48 - Allow specifying custom hostname and port

Pull Request - State: closed - Opened by recursionbane over 1 year ago - 1 comment

#47 - Performance after FineTuning

Issue - State: open - Opened by Datta0 over 1 year ago - 3 comments

#46 - Getting OOM

Issue - State: open - Opened by alior101 over 1 year ago - 2 comments

#45 - Resolve Error: Adapter lora/{MODE_NAME}-{ADAPTER_NAME} not found.

Pull Request - State: closed - Opened by 64-bit over 1 year ago

#43 - Progressive output and cancel button for 'Inference' tab

Pull Request - State: open - Opened by 64-bit over 1 year ago - 2 comments

#42 - Issue in train in colab

Issue - State: open - Opened by fermions75 over 1 year ago - 7 comments
Labels: colab

#41 - How do I merge trained Lora and Llama7b weight?

Issue - State: open - Opened by Gitterman69 over 1 year ago - 2 comments

#40 - "The tokenizer class you load from this checkpoint is 'LLaMATokenizer'."

Issue - State: closed - Opened by Gitterman69 over 1 year ago - 3 comments

#39 - Verbose function to find out what leads to crash during training?

Issue - State: closed - Opened by Gitterman69 over 1 year ago - 1 comment

#37 - Full rework: Version 2 release

Pull Request - State: closed - Opened by lxe over 1 year ago

#36 - AttributeError: type object 'Dataset' has no attribute 'from_list'

Issue - State: open - Opened by Datta0 over 1 year ago - 3 comments

#35 - add cpu support

Pull Request - State: open - Opened by swap357 over 1 year ago - 3 comments

#34 - add cpu training support using main-cpu.py

Pull Request - State: closed - Opened by swap357 over 1 year ago

#33 - How to use CPU instead of GPU

Issue - State: open - Opened by Shreyas-ITB over 1 year ago - 2 comments

#32 - Suggestion to improve UX

Issue - State: open - Opened by ch3rn0v over 1 year ago - 3 comments

#30 - how to finetune with 'system information'

Issue - State: open - Opened by mhyeonsoo over 1 year ago - 1 comment

#28 - Attempting to use 13B in the simple tuner -

Issue - State: open - Opened by Atlas3DSS over 1 year ago - 2 comments
Labels: bug

#27 - What does the finetuning output look like?

Issue - State: closed - Opened by mhyeonsoo over 1 year ago - 1 comment

#26 - Not a problem - but like people should know

Issue - State: open - Opened by Atlas3DSS over 1 year ago - 2 comments
Labels: documentation

#25 - Error during Training RuntimeError: mat1 and mat2 shapes cannot be multiplied (511x2 and 3x4096)

Issue - State: open - Opened by kasakh over 1 year ago - 2 comments
Labels: bug

#24 - Training using long stories instead of question/response

Issue - State: open - Opened by leszekhanusz over 1 year ago - 3 comments
Labels: question

#23 - Question: Native windows support

Issue - State: closed - Opened by Paillat-dev over 1 year ago - 3 comments

#22 - `LLaMATokenizer` vs `LlamaTokenizer` class names

Issue - State: open - Opened by vadi2 over 1 year ago - 5 comments
Labels: question

#21 - Clarify that 16GB VRAM in itself is enough

Pull Request - State: closed - Opened by vadi2 over 1 year ago

#20 - question: could the model trained be used for alpaca.cpp?

Issue - State: open - Opened by goog over 1 year ago - 2 comments
Labels: question

#19 - Host on Hugging Face Spaces

Issue - State: closed - Opened by osanseviero over 1 year ago - 4 comments
Labels: enhancement

#18 - Add a CLI version

Pull Request - State: open - Opened by HackerAIOfficial over 1 year ago - 2 comments

#17 - Slow generation speed: around 10 minutes / loading forever on rtx3090 with 64gb ram....

Issue - State: open - Opened by Gitterman69 over 1 year ago - 3 comments
Labels: bug

#16 - How can I use the finetuned model with text-generation-webui or KoboldAI?

Issue - State: open - Opened by Gitterman69 over 1 year ago - 4 comments
Labels: question

#15 - Finetuning in unsupported language

Issue - State: open - Opened by jumasheff over 1 year ago - 2 comments
Labels: question

#14 - Set allowed minimum on temperature

Pull Request - State: closed - Opened by vadi2 over 1 year ago - 1 comment

#13 - (WSL2) - No GPU / Cuda detected....

Issue - State: closed - Opened by Gitterman69 over 1 year ago - 6 comments

#12 - Inference works just once

Issue - State: closed - Opened by vadi2 over 1 year ago - 12 comments
Labels: bug

#11 - Examples to get started with

Issue - State: open - Opened by vadi2 over 1 year ago - 4 comments
Labels: enhancement

#10 - Inference doesn't work after training

Issue - State: closed - Opened by vadi2 over 1 year ago - 2 comments

#9 - Document Python 3.10 and conda create

Pull Request - State: closed - Opened by vadi2 over 1 year ago - 1 comment

#8 - Is CUDA 12.0 supported?

Issue - State: open - Opened by vadi2 over 1 year ago - 1 comment
Labels: question

#7 - Can Nvidia 3090 with 24G video memory support finetune?

Issue - State: open - Opened by pczzy over 1 year ago - 4 comments
Labels: question

#6 - Traceback during inference.

Issue - State: open - Opened by Hello1024 over 1 year ago - 8 comments
Labels: bug

#5 - typo

Pull Request - State: closed - Opened by SimoMay over 1 year ago

#3 - Where are the downloaded ".bin" files for the llama model stored on the disk?

Issue - State: closed - Opened by ashishb over 1 year ago - 2 comments

#2 - Collecting info on memory requirements

Issue - State: open - Opened by jmiskovic over 1 year ago - 1 comment
Labels: question

#1 - Inference output text keeps running on...

Issue - State: open - Opened by lxe over 1 year ago - 1 comment
Labels: bug, question