Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / QwenLM/Qwen2.5 issues and pull requests
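The metadata listed below can also be retrieved programmatically from the Issues API. The snippet below is a minimal sketch: the endpoint path, query parameters, and JSON field names (number, title, state, pull_request, comments_count) are assumptions, not confirmed API details; consult https://issues.ecosyste.ms for the exact routes and schema.

    # Minimal sketch: list issue/PR metadata for QwenLM/Qwen2.5 via the
    # ecosyste.ms Issues API. Endpoint path, parameters, and field names
    # are assumptions; verify against the service's API documentation.
    import requests

    BASE = "https://issues.ecosyste.ms/api/v1"
    # Assumed route: host name, then the URL-encoded "owner/repo" slug.
    url = f"{BASE}/hosts/GitHub/repositories/QwenLM%2FQwen2.5/issues"

    resp = requests.get(url, params={"per_page": 50}, timeout=30)
    resp.raise_for_status()

    for item in resp.json():
        kind = "Pull Request" if item.get("pull_request") else "Issue"
        print(f"#{item.get('number')} - {item.get('title')} "
              f"({kind}, {item.get('state')}, {item.get('comments_count')} comments)")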
#1055 - [Bug]: Use of the term "open source" to describe qwen when the training data is not open
Issue -
State: open - Opened by phly95 11 days ago
- 1 comment
#1052 - [Bug]: Model name error in vllm deployment
Issue -
State: closed - Opened by JulioZhao97 12 days ago
- 4 comments
#1049 - [Bug]: AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet.
Issue -
State: open - Opened by yananchen1989 15 days ago
- 1 comment
#1040 - [REQUEST]:
Issue -
State: closed - Opened by DAAworld 19 days ago
- 1 comment
#1031 - [Bug]: When serving the model with vllm, exceeding a certain context length causes the model to give answers unrelated to the question
Issue -
State: closed - Opened by Ave-Maria 24 days ago
- 3 comments
#1023 - [Bug]: Deploying qwen2.5-32b-instruct-gptq-int4 with lmdeploy on a machine with 4x 16GB V100 GPUs, the maximum output speed is only 80 token/s; is this speed normal?
Issue -
State: open - Opened by SolomonLeon 30 days ago
- 3 comments
#1015 - [Bug]: With vllm serving, function calling through OpenAI's swarm does not work correctly
Issue -
State: open - Opened by 18600709862 about 1 month ago
- 2 comments
#1006 - [Bug]: Qwen2.5 72B GPTQ-Int8 inference on Nvidia L20 does not behave as expected
Issue -
State: open - Opened by renne444 about 1 month ago
- 1 comment
#1005 - [Badcase]: Extra output when using Qwen2-7b to translate Chinese
Issue -
State: open - Opened by cjjjy about 1 month ago
- 1 comment
#998 - [Bug]: After deploying with vllm, calling the official example raises openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "name 'Extension' is not defined", 'type': 'BadRequestError', 'param': None, 'code': 400}
Issue -
State: open - Opened by 1gst about 2 months ago
- 2 comments
#997 - [Bug]: Document QA ignores part of the data, e.g. the certificate number is 12345 but the answer gives 2345
Issue -
State: open - Opened by daimashenjing about 2 months ago
- 3 comments
#996 - docs: Add OpenLLM
Pull Request -
State: open - Opened by Sherlock113 about 2 months ago
#994 - Question about the format of the function parameters
Issue -
State: closed - Opened by XuyangHao123 about 2 months ago
#992 - [Badcase]: qwen2.5-72b inference results on Ascend 910 do not match expectations
Issue -
State: open - Opened by tianshiyisi about 2 months ago
- 6 comments
Labels: help wanted
#991 - [Badcase]: Abnormal tokens (iNdEx) appear during function calling
Issue -
State: open - Opened by abiaoa1314 about 2 months ago
#990 - Correct mlx-lm documentation
Pull Request -
State: open - Opened by gringocl about 2 months ago
#986 - [Badcase]: Qwen2.5-72B-Instruct-GPTQ-Int4 input_size_per_partition
Issue -
State: open - Opened by hyliush about 2 months ago
- 5 comments
#985 - [Badcase]: Qwen2.5 14B Instruct can't stop generation
Issue -
State: open - Opened by Jeremy-Hibiki about 2 months ago
- 1 comment
Labels: enhancement
#957 - [Badcase]: qwen2.5 instruct 14B produces repetitive decoding after SFT
Issue -
State: open - Opened by 520jefferson about 2 months ago
- 12 comments
#956 - [Question]: Can I use the QWEN2.5 model to generate sentence vectors?
Issue -
State: closed - Opened by trundleyrg about 2 months ago
- 3 comments
#945 - [Badcase]: Model inference Qwen2.5-32B-Instruct-GPTQ-Int4 appears as garbled text !!!!!!!!!!!!!!!!!!
Issue -
State: open - Opened by zhanaali about 2 months ago
- 12 comments
#935 - [Badcase]: With the same data, the fine-tuning loss on the qwen2.5 72B pretrained model is 3x that of qwen2 72B; apart from more training data, what else is different in 2.5?
Issue -
State: closed - Opened by boundles about 2 months ago
- 14 comments
#927 - [Bug]: The eos_token of the Qwen 2.5 base model is inconsistent between config.json and tokenizer_config.json.
Issue -
State: closed - Opened by Songjw133 about 2 months ago
- 9 comments
#921 - [Badcase]: Loss does not drop when using Liger Kernel at Qwen2.5
Issue -
State: open - Opened by Se-Hun about 2 months ago
- 2 comments
#918 - [Question]: Problems using qwen2.5 as an agent
Issue -
State: closed - Opened by lonngxiang about 2 months ago
- 8 comments
#890 - TensorRT Qwen2-72B-Instruct-GPTQ-Int4 can be converted and built into the engine normally, but the inference results are garbled. Have you ever encountered this?
Issue -
State: open - Opened by tianzuishiwo 3 months ago
- 3 comments
#888 - Which GPU chips can the qwen models run on?
Issue -
State: closed - Opened by GhostISTA 3 months ago
- 2 comments
Labels: inactive
#885 - About the vocabulary inconsistency
Issue -
State: closed - Opened by patrick-tssn 3 months ago
- 3 comments
Labels: inactive
#883 - how to fine-tune the qwen2 instruct model with long context
Issue -
State: closed - Opened by ben-8878 3 months ago
- 1 comment
Labels: inactive
#879 - How can the tokenizer vocabulary be extended for Qwen2?
Issue -
State: open - Opened by LarryLong45 3 months ago
- 4 comments
#876 - Hello, for Qwen2-72B-Instruct's public leaderboard results, how many shots were used to evaluate each of the following benchmarks?
Issue -
State: closed - Opened by 13416157913 3 months ago
- 2 comments
Labels: inactive
#874 - When fine-tuning qwen2, the outputs of flash attn and core attn differ significantly; for tokens where attn_mask is false, flash attn outputs an all-zero vector while core attn outputs a normal vector
Issue -
State: open - Opened by seanM29 3 months ago
- 9 comments
Labels: inactive
#872 - When calling Qwen2 via transformer.pipeline, how can the probability of each token be output?
Issue -
State: closed - Opened by xin0623 3 months ago
- 1 comment
Labels: inactive
#871 - [TGI] Problems deploying the qwen2-7b inference service on V100
Issue -
State: closed - Opened by charosen 3 months ago
- 2 comments
Labels: inactive
#870 - How to fine tune qwen2?
Issue -
State: closed - Opened by yangxue-1 3 months ago
- 1 comment
Labels: inactive
#867 - [QUESTION] Is SWA used in Qwen2 long context pretraining?
Issue -
State: closed - Opened by KKCDD 3 months ago
- 2 comments
#863 - Parallel inference is wrong but single card inference is correct. (can't be solved by updating nvidia driver like #331)
Issue -
State: closed - Opened by realCattleya 3 months ago
- 5 comments
Labels: inactive
#860 - Fine-tuning qwen1.5 fails with ValueError: expected sequence of length 289 at dim 1 (got 291)
Issue -
State: closed - Opened by lijiayi980130 3 months ago
- 1 comment
Labels: inactive
#859 - Hello, after SFT of qwen2-7b on my own dataset, I ran into problems when evaluating on data with the same structure
Issue -
State: closed - Opened by xiaomao19970819 3 months ago
- 1 comment
Labels: inactive
#855 - How to set the parameters so that the answer is identical on every run
Issue -
State: closed - Opened by xiangxinhello 3 months ago
- 4 comments
Labels: inactive
#852 - [Ascend 910B + Ascend vLLM] Why does Qwen2-7B-Instruct inference always prepend a link to the output?
Issue -
State: closed - Opened by zhufeizzz 3 months ago
- 10 comments
#835 - Error when saving during fine-tuning of Qwen2-7B-Instruct
Issue -
State: closed - Opened by JHaoGao 3 months ago
- 3 comments
Labels: inactive
#826 - Does the qwen2 model support FP16 deployment? Activation values overflow during inference on V100; does qwen2 support inference on V100?
Issue -
State: closed - Opened by zhougekaibenchi 4 months ago
- 4 comments
Labels: inactive
#820 - Using the official fine-tuning tools finetune.py and finetune.sh, the loss is always 0
Issue -
State: closed - Opened by paulxs 4 months ago
- 4 comments
Labels: inactive
#806 - About stress testing and long-text testing of qwen2-72B-instruct-int4-gptq after fine-tuning
Issue -
State: closed - Opened by lxb0425 4 months ago
- 7 comments
Labels: inactive
#791 - Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4 model takes too long to load (nearly 2 hours)
Issue -
State: closed - Opened by TimVan1596 4 months ago
- 3 comments
Labels: inactive
#717 - Error when running with the supported 128K context
Issue -
State: closed - Opened by lonngxiang 5 months ago
- 34 comments
Labels: inactive
#662 - Function calling support of openai style api for Qwen1.5 and Qwen2 model
Pull Request -
State: open - Opened by Stephen-SMJ 5 months ago
- 7 comments
#576 - Repetitive generation when deploying qwen2-72b-instruct with vllm
Issue -
State: closed - Opened by Kk1984up 5 months ago
- 10 comments
Labels: inactive
#387 - Local vllm deployment of Qwen1.5-72B-chat shows a length-limit-exceeded message when generating long text in a single pass
Issue -
State: closed - Opened by JSLW 6 months ago
- 4 comments
Labels: enhancement
#368 - error on provided sample code: RuntimeError: CUDA error: device-side assert triggered
Issue -
State: closed - Opened by Raychanan 7 months ago
- 6 comments
#145 - [BUG] Running openai_api.py throws an error, while cli_demo.py and web_demo.py both start normally
Issue -
State: closed - Opened by 2277419213 9 months ago
- 11 comments
Labels: inactive
#99 - [BUG] self.labels in SupervisedDataset
Issue -
State: closed - Opened by gqc666 9 months ago
- 1 comment
#98 - Fix model path in vllm.rst
Pull Request -
State: closed - Opened by Michaelvll 9 months ago
#97 - [QA] Number of training tokens
Issue -
State: closed - Opened by RicardoDominguez 9 months ago
- 6 comments
Labels: inactive
#96 - Update index.rst
Pull Request -
State: closed - Opened by ganeshkrishnan1 9 months ago
#95 - Fix minor streamer typo
Pull Request -
State: closed - Opened by osanseviero 9 months ago
#94 - The model parameter names of Qwen1.5 differ from Qwen; would you consider releasing the model architecture code?
Issue -
State: closed - Opened by chenzhenbupt 9 months ago
- 2 comments
#93 - After reusing qwen1's LoRA code and data with qwen1.5, capability degrades noticeably
Issue -
State: closed - Opened by fanbooo 9 months ago
- 8 comments
#92 - qwen1.5-7b inference error with chat_stream
Issue -
State: closed - Opened by wxchjay 9 months ago
- 1 comment
#91 - Update README.md
Pull Request -
State: closed - Opened by yijia2413 9 months ago
- 1 comment
#90 - Error when loading the model via the ModelScope pipeline
Issue -
State: closed - Opened by chowsu 9 months ago
- 2 comments
#89 - Is there standalone inference code like Qwen's?
Issue -
State: closed - Opened by njhouse365 9 months ago
- 2 comments
#88 - How much GPU memory does Qwen1.5-72B-Chat need at minimum to run?
Issue -
State: closed - Opened by 1920853199 9 months ago
- 6 comments
#87 - The vllm example does not run; it fails with {"object":"error","message":"The model `Qwen/Qwen1.5-7B-Chat` does not exist.","type":"NotFoundError","param":null,"code":404}
Issue -
State: closed - Opened by LHB-kk 9 months ago
- 2 comments
#86 - Qwen's ability to recognize and generate Traditional Chinese
Issue -
State: closed - Opened by ACBBZ 9 months ago
- 1 comment
#85 - How well does this model support Traditional Chinese
Issue -
State: closed - Opened by Jack-devnlp 9 months ago
- 1 comment
#84 - fine-tuning with SFTTrainer
Issue -
State: closed - Opened by shao-shuai 9 months ago
- 2 comments
#83 - [BUG] HuggingFace Inference Endpoints throws an error
Issue -
State: closed - Opened by lkthomas 9 months ago
- 1 comment
#82 - How can stop_words be set to "Observation:" with the current model.generate approach?
Issue -
State: closed - Opened by fataldemon 9 months ago
- 2 comments
#81 - Confusion about how targets are generated in the repository's bundled Finetune code
Issue -
State: closed - Opened by TankNee 9 months ago
- 3 comments
#80 - Can't find 'adapter_config.json': after training with the project's fine-tuning code, the output files include adapter_config.json, yet loading keeps reporting that adapter_config.json cannot be found. The loss decreased normally during training
Issue -
State: closed - Opened by zwt0204 9 months ago
- 5 comments
Labels: inactive
#79 - GPTQ quantization method
Issue -
State: closed - Opened by su-zelong 9 months ago
- 1 comment
#78 - Qwen2: PAD token = EOS token?
Issue -
State: closed - Opened by Qubitium 9 months ago
- 2 comments
#77 - Qwen1.5-0.5b-chat errors out when using fintune.py from the examples
Issue -
State: closed - Opened by 128Ghe980 9 months ago
- 5 comments
#76 - vllm results differ significantly from hf results
Issue -
State: closed - Opened by Nipi64310 9 months ago
- 16 comments
#75 - When I call the '/v1/chat/completions' API of the FastChat API Server to access qwen1.5-72b-chat, it responds with incomplete results, but qwen-72b-chat responds with complete results
Issue -
State: closed - Opened by coreyho 9 months ago
- 2 comments
Labels: inactive
#74 - When chatting through the API service, it reports: TypeError: 'NoneType' object is not iterable
Issue -
State: closed - Opened by syusama 9 months ago
- 5 comments
#73 - Training problems when reusing qwen1's LoRA fine-tuning script
Issue -
State: closed - Opened by fanbooo 9 months ago
- 8 comments
#72 - vllm deploy with stop_token_ids
Issue -
State: closed - Opened by geasyheart 9 months ago
- 5 comments