Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / OpenGVLab/LLaMA-Adapter issues and pull requests
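The listing below can also be fetched programmatically from the ecosyste.ms API. As a minimal sketch only: the base URL and path scheme (`hosts/{host}/repositories/{repo}/issues`) are assumptions for illustration, not confirmed by this page — check the service's own API documentation before relying on them.

```python
import urllib.parse

# ASSUMED base URL and path layout for the ecosyste.ms Issues API;
# verify against the official API docs before use.
BASE = "https://issues.ecosyste.ms/api/v1"

def issues_url(host: str, repo: str, page: int = 1) -> str:
    """Build a (hypothetical) URL listing a repository's issues."""
    # The repo name contains a slash, so it must be percent-encoded
    # to form a single path segment (safe="" encodes "/" as "%2F").
    encoded = urllib.parse.quote(repo, safe="")
    return f"{BASE}/hosts/{host}/repositories/{encoded}/issues?page={page}"

print(issues_url("GitHub", "OpenGVLab/LLaMA-Adapter"))
```

The response for each issue would then carry the same fields shown in the listing (number, title, state, author, comment count).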
#153 - created a model on colab but cannot load for inference
Issue - State: open - Opened by sedici16 4 months ago
#130 - I have problem with downloading 7B_chinese in imagebind_LLM.
Issue - State: closed - Opened by JINAILAB 12 months ago - 1 comment
#101 - AttributeError: module 'clip' has no attribute 'load'
Issue - State: closed - Opened by parasmech about 1 year ago - 2 comments
#100 - error when pretraining the llama-adapterv2-multimodal
Issue - State: open - Opened by adda1221 about 1 year ago - 2 comments
#99 - the format of dataset for finetuning on llama_adapter_v2_multimodal
Issue - State: open - Opened by adda1221 about 1 year ago - 4 comments
#98 - Code result not the same as in the gradio GUI.
Issue - State: open - Opened by Practicing7 about 1 year ago - 3 comments
#97 - Support for llama-2 70B
Issue - State: open - Opened by qizzzh about 1 year ago - 7 comments
#96 - Reproduce problems with the model llama-adapter-multimodal-v2
Issue - State: open - Opened by dongzhiwu about 1 year ago - 7 comments
#95 - alpaca_finetuning_v1 does not support llama2 checkpoint
Issue - State: open - Opened by GentleZhu about 1 year ago - 2 comments
#94 - Training data for audio model?
Issue - State: open - Opened by jpgard about 1 year ago - 1 comment
#93 - Pretrain model
Issue - State: open - Opened by yuntaodu over 1 year ago - 1 comment
#92 - Error when inference
Issue - State: closed - Opened by sci-m-wang over 1 year ago - 3 comments
#91 - Error when running example of imagebind_LLM
Issue - State: closed - Opened by basteran over 1 year ago - 7 comments
#90 - Learning rate and batch size
Issue - State: open - Opened by dhyani15 over 1 year ago - 3 comments
#89 - llamav2 base Chinese multimodal
Issue - State: open - Opened by lucasjinreal over 1 year ago
#88 - training time
Issue - State: open - Opened by cissoidx over 1 year ago - 2 comments
#87 - How many training epochs to replicate scienceQA result using llama v1
Issue - State: open - Opened by dhyani15 over 1 year ago - 12 comments
#86 - Question: How to convert weights to required format (.bin --> .pth)
Issue - State: closed - Opened by r3shma over 1 year ago - 1 comment
#84 - Could not replicate the same result.
Issue - State: open - Opened by poonehmousavi over 1 year ago - 1 comment
#83 - How long does it take to train v2-multimodal model on Image-Text-V1 (a concatenation of LAION400M, COYO, MMC4, SUB, Conceptual Captions, and COCO) via 8 A100 (80GB)?
Issue - State: open - Opened by ifshinelx over 1 year ago
#82 - Does it support multi image input?
Issue - State: open - Opened by hangzeli08 over 1 year ago - 1 comment
#81 - Fine Tune on AMD GPU
Issue - State: open - Opened by LeLaboDuGame over 1 year ago
#80 - LLaMA-Adapter-v2-multimodal evaluation code?
Issue - State: open - Opened by heliossun over 1 year ago - 1 comment
#79 - Two different sets of llama module
Issue - State: closed - Opened by dhyani15 over 1 year ago - 1 comment
#78 - Example and label for finetuning
Issue - State: closed - Opened by thao9611 over 1 year ago
#77 - Questions for reproducing llama-adapter v1 (finetuned on COCO Captions)
Issue - State: open - Opened by miso-choi over 1 year ago
#76 - No sympy found
Issue - State: closed - Opened by dhyani15 over 1 year ago - 2 comments
#75 - Save the model weights in a few hundred megabytes size like the BIAS-7B.pth provided by the official.
Pull Request - State: closed - Opened by Enderfga over 1 year ago - 4 comments
#74 - The code is inconsistent with v2 paper
Issue - State: open - Opened by merlinarer over 1 year ago - 4 comments
#73 - how to reproduce coco caption result in paper LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention ?
Issue - State: open - Opened by baiyuting over 1 year ago - 5 comments
#72 - llama-adapterV2 multi modal demo error
Issue - State: open - Opened by dongzhiwu over 1 year ago - 5 comments
#71 - Some questions about imagebind_llm code
Issue - State: closed - Opened by keke-220 over 1 year ago - 7 comments
#70 - fine-tuning of image?
Issue - State: open - Opened by web3creator over 1 year ago - 4 comments
#69 - multi-scale CLIP feature
Issue - State: closed - Opened by qyx1121 over 1 year ago - 1 comment
#68 - Question for v2 finetuning on coco caption
Issue - State: open - Opened by simplewhite9 over 1 year ago - 5 comments
#67 - NaN Loss while training on a custom instruction dataset
Issue - State: closed - Opened by Sleepyhead01 over 1 year ago - 5 comments
#66 - Clarification on Pretrained Weights
Issue - State: closed - Opened by keke-220 over 1 year ago - 7 comments
#65 - Inference speed very slow
Issue - State: closed - Opened by Hsn37 over 1 year ago - 5 comments
#64 - On the 80k data to train chatbot
Issue - State: open - Opened by John-Ge over 1 year ago - 1 comment
#63 - model weights
Issue - State: open - Opened by 1zhangtianqing over 1 year ago - 7 comments
#62 - Code for Multi-modal Reasoning on ScienceQA.
Issue - State: open - Opened by kai-wen-yang over 1 year ago - 4 comments
#61 - A lot of files not found. Will they be released again?
Issue - State: open - Opened by Practicing7 over 1 year ago - 2 comments
#60 - Update misc.py
Pull Request - State: open - Opened by Nikhil-Paleti over 1 year ago
#59 - loss is nan
Issue - State: open - Opened by 1zhangtianqing over 1 year ago - 1 comment
#58 - LLaMA-Adapter V3?
Issue - State: closed - Opened by gusye1234 over 1 year ago - 1 comment
#57 - Question : How to effectively add training to specific domain ?
Issue - State: open - Opened by x4080 over 1 year ago - 2 comments
#56 - Where is the instruction/finetuning data for 3d?
Issue - State: open - Opened by mu-cai over 1 year ago - 2 comments
#55 - Finetuning Of V2?
Issue - State: open - Opened by poonehmousavi over 1 year ago - 3 comments
#54 - Add LangChain integration doc and demo for v1 and v2
Pull Request - State: closed - Opened by SiyuanHuang95 over 1 year ago - 1 comment
#53 - Can this adapter be run with OpenLLaMA?
Issue - State: open - Opened by jawhster over 1 year ago - 2 comments
#52 - Add ImageBind-LLM with 3D point cloud modality
Pull Request - State: closed - Opened by ZiyuGuo99 over 1 year ago
#51 - What dataset do you use for training the (released) multimodal adapter?
Issue - State: closed - Opened by dcahn12 over 1 year ago - 2 comments
#50 - Is it normal to have outputs for example.py that is different from yours?
Issue - State: open - Opened by yxchng over 1 year ago - 1 comment
#49 - Is the GPL License correct?
Issue - State: closed - Opened by RonanKMcGovern over 1 year ago - 4 comments
#48 - Question: How to save checkpoints after every epoch?
Issue - State: closed - Opened by salehshadi over 1 year ago - 2 comments
#47 - Adapter: v1 vs v2 implementation
Issue - State: closed - Opened by Andrei-Aksionov over 1 year ago - 2 comments
#46 - how to merge adapter to original weights
Issue - State: open - Opened by LiuPearl1 over 1 year ago - 8 comments
#45 - llama_adapter v1 script can be trained on v100 32G?
Issue - State: open - Opened by zh25714 over 1 year ago - 1 comment
#44 - Why use 512 as the max sequence length for fine tuning alpaca?
Issue - State: open - Opened by tetratorus over 1 year ago - 2 comments
#43 - Loss does not decrease during v1 finetuning.
Issue - State: closed - Opened by Luzzer over 1 year ago - 2 comments
#42 - A question about adapter code
Issue - State: open - Opened by TheShy-Dream over 1 year ago - 1 comment
#41 - Runtime Error when running fine tuning script.
Issue - State: open - Opened by seelenbrecher over 1 year ago - 1 comment
#40 - look up the implementation of multimodal adapter
Issue - State: closed - Opened by TheShy-Dream over 1 year ago - 6 comments
#39 - should we always take tanh on gate
Issue - State: open - Opened by dingran over 1 year ago - 3 comments
#38 - Questions about implementation of llama-adapter-v2's multi-modal ability and training
Issue - State: closed - Opened by PanQiWei over 1 year ago - 10 comments
#37 - KeyError: 'adapter_query.weight' on finetuned adaptor
Issue - State: open - Opened by dittops over 1 year ago - 2 comments
#36 - Inconsistency of learnable scale parameter between code and paper
Issue - State: closed - Opened by theAdamColton over 1 year ago - 2 comments
#35 - Fix hardcoded model embedding size in v1
Pull Request - State: open - Opened by Ar-Kareem over 1 year ago
#34 - Computing output likelihoods?
Issue - State: open - Opened by vishaal27 over 1 year ago - 1 comment
#33 - What is the minimum VRAM of a GPU required for training a model?
Issue - State: open - Opened by xuantoan0406 over 1 year ago - 1 comment
#32 - Finetune using V2
Issue - State: open - Opened by huzerD over 1 year ago - 1 comment
#31 - Update packaging
Pull Request - State: closed - Opened by jxtngx over 1 year ago - 1 comment
#30 - Proper comparison between adapter-tuning, lora-tuning, prompt-tuning, and prefix-tuning?
Issue - State: closed - Opened by jzhang38 over 1 year ago - 2 comments
#29 - potential avenues of size reduction.
Issue - State: open - Opened by Alignment-Lab-AI over 1 year ago - 2 comments
#28 - Quantization support
Issue - State: open - Opened by neuhaus over 1 year ago - 4 comments
#27 - Getting an OOM error when running on a 2xA100-80gb machine
Issue - State: open - Opened by Jameshskelton over 1 year ago - 1 comment
#26 - Can this work on a consumer GPU?
Issue - State: open - Opened by davyuan over 1 year ago - 2 comments
#25 - Mac M1 Pro GPU compatibility
Issue - State: open - Opened by kirrukirru over 1 year ago - 1 comment
#24 - Visual Instruction model
Issue - State: closed - Opened by remixer-dec over 1 year ago - 10 comments
#23 - Chat 65b demo
Pull Request - State: closed - Opened by linziyi96 over 1 year ago
#22 - Errors thrown when finetuning
Issue - State: open - Opened by Petrichoeur over 1 year ago - 4 comments
#21 - which GPU could I use if I want to do alpaca_finetuning_v1 on a single GPU?
Issue - State: open - Opened by pengwei-iie over 1 year ago - 2 comments
#20 - Issues with fine-tuning on the ScienceQA dataset.
Issue - State: open - Opened by Gary3410 over 1 year ago - 5 comments
#19 - How to fine-tune with the trained alpaca adaptor as the starting point?
Issue - State: closed - Opened by alpayariyak over 1 year ago - 2 comments
#18 - Multi-image inputs to the model
Issue - State: open - Opened by vishaal27 over 1 year ago - 1 comment
#17 - what about GPU during training? I use 16G*4, batch size to 1, and then OOM
Issue - State: open - Opened by pengwei-iie over 1 year ago - 2 comments
#16 - multi-gpu
Issue - State: open - Opened by pengwei-iie over 1 year ago - 1 comment
#15 - Alpaca finetuning issues
Issue - State: open - Opened by Gary3410 over 1 year ago - 3 comments
#14 - Error when running example.py
Issue - State: open - Opened by reddiamond1234 over 1 year ago - 3 comments
#13 - Training error occurred
Issue - State: closed - Opened by laosuan over 1 year ago - 3 comments
#12 - Question: CUDA synchronize when training
Issue - State: open - Opened by Jack-ZC8 over 1 year ago
#11 - Question about pytorch and cudatool version
Issue - State: closed - Opened by wendyhan2020 over 1 year ago - 2 comments
#10 - Results on more multimodal datasets
Issue - State: open - Opened by sachit-menon over 1 year ago - 1 comment
#9 - Question about the initialization of Adapter.
Issue - State: open - Opened by TitleZ99 over 1 year ago - 7 comments
#8 - Error while inference
Issue - State: open - Opened by satani99 over 1 year ago - 4 comments
#7 - When Training code will be Released? I have prepared the japanese Version after training on my dataset, I would like to see its efficiency.
Issue - State: open - Opened by ankur92009 over 1 year ago - 4 comments
#6 - Code for reproducing evaluation results on ScienceQA
Issue - State: open - Opened by TJKlein over 1 year ago - 9 comments
#5 - HuggingFace Models
Issue - State: open - Opened by slavakurilyak over 1 year ago - 4 comments
#4 - Question: Can the number of "adaptable" layers be significantly reduced? Can training be optimized for "extremely consumer" grade GPUs like the 3060 Ti (8GB VRAM)?
Issue - State: closed - Opened by Andrey36652 over 1 year ago - 3 comments
#3 - some questions about LoRA
Issue - State: open - Opened by suc16 over 1 year ago - 3 comments