Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / Alpha-VLLM/LLaMA2-Accessory issues and pull requests

#204 - Multi-Modal Full-Parameter Finetuning

Issue - State: open - Opened by hekkang about 1 month ago

#192 - The SPHINX discussion-group QR code cannot be scanned to join directly

Issue - State: open - Opened by fyting 5 months ago - 2 comments

#100 - The Link in the SPHINX readme leads to a file that no longer exists

Issue - State: closed - Opened by StrangeTcy 10 months ago - 3 comments

#99 - saving and loading multiple lora weights

Issue - State: closed - Opened by wj210 10 months ago - 1 comment

#98 - Can SPHINX be fine-tuned?

Issue - State: closed - Opened by xulinui 10 months ago - 2 comments

#97 - NonDynamicallyQuantizableLinear object has no attribute 'weight'

Issue - State: closed - Opened by Keeo 10 months ago - 3 comments

#96 - Citation information for SPHINX

Issue - State: closed - Opened by taesiri 10 months ago - 3 comments

#95 - Great work! Can I know the difference between SPHINX and long SPHINX? Thanks

Issue - State: closed - Opened by WilTay1 11 months ago - 1 comment

#94 - Can't find the SPHINX model on Hugging Face

Issue - State: closed - Opened by yjhdhr 11 months ago - 3 comments

#92 - multi_turn_mm_box not working for Sphinx

Issue - State: open - Opened by saffie91 11 months ago - 12 comments

#91 - full parameter finetuning on A100 40G

Issue - State: closed - Opened by ZhenYangIACAS 11 months ago - 3 comments

#90 - [Question] SPHINX: In-context learning

Issue - State: open - Opened by baptistecolle 11 months ago - 1 comment

#89 - MLM pretraining objective

Issue - State: closed - Opened by wj210 11 months ago - 1 comment

#88 - MLM pretraining objective.

Issue - State: closed - Opened by wj210 11 months ago

#87 - How can we transform the Hugging Face format weights into the Meta format?

Issue - State: closed - Opened by ZhenYangIACAS 11 months ago - 1 comment

#86 - Label

Issue - State: closed - Opened by yeonju7kim 11 months ago - 3 comments

#85 - FusedAdam

Issue - State: closed - Opened by yeonju7kim 11 months ago - 6 comments

#84 - Failed to convert to HF

Issue - State: closed - Opened by arbindpd96 11 months ago - 4 comments

#83 - Warning instead of Error

Issue - State: closed - Opened by yeonju7kim 11 months ago - 1 comment

#82 - How can I change model tensor type to float16?

Issue - State: closed - Opened by yeonju7kim 11 months ago - 2 comments

#80 - How to run `single_turn.py` without distributed mode?

Issue - State: closed - Opened by EricWiener 12 months ago - 1 comment

#79 - Light eval

Pull Request - State: closed - Opened by HelanHu 12 months ago

#78 - InternLM inference and training are problematic

Issue - State: closed - Opened by June01 12 months ago - 2 comments

#77 - significant difference between median and global averaged loss

Issue - State: closed - Opened by ZhenYangIACAS 12 months ago - 1 comment

#75 - Parquet Files - Pretraining

Issue - State: closed - Opened by gian-g3dai 12 months ago - 1 comment

#74 - Does the stage_2 checkpoint include the QFormer part?

Issue - State: open - Opened by WeiXuanLi-1024 12 months ago - 3 comments

#73 - add evaluation code for LLaMA2-Accessory

Pull Request - State: closed - Opened by void721 12 months ago

#72 - model.LLM.llama_qformerv2_peft.Transformer can not load

Issue - State: closed - Opened by WeiXuanLi-1024 almost 1 year ago - 6 comments

#71 - Finetuning MM results in `RuntimeError: CUDA error: invalid device ordinal`

Issue - State: closed - Opened by lukszam about 1 year ago - 3 comments

#70 - Could you please provide script to convert huggingface InternLM model to .pth?

Issue - State: closed - Opened by June01 about 1 year ago - 3 comments

#69 - Finetuning with raw text?

Issue - State: closed - Opened by wj210 about 1 year ago - 1 comment

#68 - Input tokens for the generate function

Issue - State: closed - Opened by jblamare about 1 year ago - 2 comments

#67 - Finetuning using quantized models

Issue - State: closed - Opened by wj210 about 1 year ago - 7 comments

#66 - Improved param group support

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#65 - 7B pretraining results in OOM; model_parallel=2 can't load official 7B ckpt

Issue - State: closed - Opened by miokomioko about 1 year ago - 2 comments

#64 - webbook documentation is riddled with errors

Issue - State: closed - Opened by HLearning about 1 year ago - 1 comment

#63 - LLaMA2-Adaptor x Region Demo setting

Issue - State: closed - Opened by erjui about 1 year ago - 5 comments

#62 - Can I use the language-only parameter-efficient fine-tuning for multi-turn?

Issue - State: closed - Opened by qkrtnskfk23 about 1 year ago - 1 comment

#61 - Config of Two-Stage Training of Multi-Modal LLaMA2

Issue - State: closed - Opened by CaraJ7 about 1 year ago - 2 comments

#60 - Support lazy model init

Pull Request - State: open - Opened by linziyi96 about 1 year ago

#59 - update Quantization doc

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#58 - sync with main

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#57 - change dataset config in yaml files to dictionary format

Pull Request - State: closed - Opened by ChrisLiu6 about 1 year ago

#56 - ERROR:torch.distributed.elastic.multiprocessing.api:failed

Issue - State: open - Opened by stwrd about 1 year ago - 3 comments

#55 - sphinx-style document

Pull Request - State: closed - Opened by ChrisLiu6 about 1 year ago

#54 - Failed to fine-tune

Issue - State: closed - Opened by qkrtnskfk23 about 1 year ago - 4 comments

#53 - fix quant lora error

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#52 - sync with main

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#51 - Add the tool to convert weights to huggingface format

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#50 - about demos/single_turn_mm.py

Issue - State: closed - Opened by 2201957 about 1 year ago - 2 comments

#49 - Which Pre-trained Path to Use When

Issue - State: closed - Opened by qihan96 about 1 year ago - 4 comments

#48 - Loss Value Range for Reasonable Output

Issue - State: closed - Opened by qihan96 about 1 year ago - 2 comments

#47 - Can I load instructblip and finetune?

Issue - State: open - Opened by hubei-peng about 1 year ago - 1 comment

#46 - fix typos

Pull Request - State: open - Opened by omahs about 1 year ago

#45 - Update requirements.txt

Pull Request - State: open - Opened by nbardy about 1 year ago

#44 - trouble downloading llama2-qformer-peft-13B delta weights

Issue - State: closed - Opened by qihan96 about 1 year ago - 2 comments

#43 - attention mask not used

Issue - State: closed - Opened by wj210 about 1 year ago - 2 comments

#42 - Finetuning with quant (Integer parameters are unsupported)

Issue - State: closed - Opened by qkrtnskfk23 about 1 year ago - 2 comments

#41 - script to merge the result of main_finetune.py with llama2 weights

Issue - State: closed - Opened by qihan96 about 1 year ago - 3 comments

#40 - Pre-training code

Issue - State: closed - Opened by Rajratnpranesh about 1 year ago - 3 comments

#39 - update Quantization doc

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#38 - sync with main

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#37 - 7B Multimodal checkpoint

Issue - State: closed - Opened by kai-wen-yang about 1 year ago - 8 comments

#36 - sync with main

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#35 - fix non-diff download

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#34 - merge my own fork to this quantization branch

Pull Request - State: closed - Opened by kriskrisliu about 1 year ago

#33 - Where is the ImageBind support?

Issue - State: closed - Opened by svjack about 1 year ago - 1 comment

#32 - Embedding Concatenation in forward() Function

Issue - State: closed - Opened by qihan96 about 1 year ago - 5 comments

#31 - Out of memory caused by NativeScaler

Issue - State: closed - Opened by ZhenYangIACAS about 1 year ago - 5 comments

#30 - Discrepancy in Weights Output from main_finetune.py

Issue - State: closed - Opened by qihan96 about 1 year ago - 2 comments

#29 - Add quantization support

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#28 - Cannot replicate the fine-tuning results on llava_instruct_150k.

Issue - State: closed - Opened by liminghao0914 about 1 year ago - 2 comments

#27 - Where is the codes of Flash Attention 2 and QLoRA?

Issue - State: closed - Opened by shushengyuan about 1 year ago - 6 comments

#26 - llama2 13b out of memory on A800

Issue - State: closed - Opened by xdhhh about 1 year ago - 3 comments

#25 - Compatibility Issue with NO-SHARD/Single GPU training

Issue - State: closed - Opened by qihan96 about 1 year ago - 2 comments

#24 - which model provides the ability of text output with object detection result?

Issue - State: closed - Opened by Kanon777 about 1 year ago - 1 comment

#23 - How to transform the weights into the consolidated version?

Issue - State: closed - Opened by ZhenYangIACAS about 1 year ago - 10 comments

#22 - The model outputs meaningless content

Issue - State: closed - Opened by altqxd about 1 year ago - 2 comments

#21 - Hardware requirement for continuing the pretraining

Issue - State: closed - Opened by Kefan-pauline about 1 year ago - 1 comment

#20 - About the model scale used for single-modal and multi-modal data

Issue - State: closed - Opened by junwenxiong about 1 year ago - 3 comments

#19 - Pass args.max_words during model creation

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#18 - alpaca dataset support jsonl format

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#17 - Reference Inference Code

Issue - State: closed - Opened by rsomani95 about 1 year ago - 2 comments

#15 - Refactor dtype / device management code

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#14 - fix typo in misc.py

Pull Request - State: closed - Opened by eltociear about 1 year ago

#13 - quick fix of demo memory consumption

Pull Request - State: closed - Opened by linziyi96 about 1 year ago

#12 - Limited models for fine-tuning

Issue - State: closed - Opened by kb-open about 1 year ago - 1 comment

#11 - update forward definition syntax to match `llama_peft.py`

Pull Request - State: closed - Opened by tmm1 about 1 year ago

#10 - Minor typo fix

Pull Request - State: closed - Opened by tmm1 about 1 year ago

#9 - Multimodal finetuning code?

Issue - State: closed - Opened by waybarrios about 1 year ago - 6 comments

#6 - Minor issues about model save

Issue - State: open - Opened by linziyi96 about 1 year ago

#5 - support Llama2-chat fine-tuning

Pull Request - State: closed - Opened by csuhan about 1 year ago - 1 comment

#4 - Discussion about fixing dtype incompatibility during inference

Issue - State: closed - Opened by linziyi96 about 1 year ago

#3 - [WIP] remove redundant / unused code

Pull Request - State: open - Opened by linziyi96 about 1 year ago