Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / luchangli03/export_llama_to_onnx issues and pull requests
#20 - After converting the Llama model to ONNX, can the input be input_embeds?
Issue - State: closed - Opened by OswaldoBornemann 3 months ago - 2 comments
#19 - Which transformers version corresponds to llama3?
Issue - State: closed - Opened by sihouzi21c 3 months ago - 1 comment
#18 - "Please uninstall/disable FlashAttention (and maybe xformers) before model conversion." Does this mean the model must be retrained without FlashAttention before conversion?
Issue - State: closed - Opened by PeterXingke 4 months ago - 1 comment
#17 - Qwen conversion error: RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 28 but got size 4 for tensor number 1 in the list.
Issue - State: open - Opened by yanxiao1930 4 months ago - 5 comments
#16 - Why does running export_llama3.py export a large number of intermediate files (parameters)?
Issue - State: closed - Opened by Lingzzyy 5 months ago - 3 comments
#15 - Can you give example code showing how to run inference with the ONNX model after converting Qwen to ONNX?
Issue - State: open - Opened by Pengjie-W 6 months ago - 1 comment
#14 - Is there a script for exporting the Qwen-VL 7B model to ONNX?
Issue - State: open - Opened by chantjhang 8 months ago
#13 - Error when loading the converted ONNX model with the onnx library
Issue - State: closed - Opened by L1-M1ng 9 months ago - 7 comments
#12 - Error converting QWen-7B
Issue - State: closed - Opened by L1-M1ng 9 months ago - 2 comments
#11 - Converting llama also fails: AttributeError: 'tuple' object has no attribute 'get_usable_length'
Issue - State: open - Opened by louwangzhiyuY 11 months ago - 1 comment
#10 - When converting the qwen model, an atten_mask:5 error is reported.
Issue - State: open - Opened by louwangzhiyuY 11 months ago - 2 comments
#9 - OOM when exporting QWen-7b on a 3090.
Issue - State: open - Opened by linthy94 about 1 year ago - 3 comments
#8 - OOM on a single a6000 with 50g of memory
Issue - State: open - Opened by 77281900000 about 1 year ago - 1 comment
#7 - Is there an ONNX inference script for llama?
Issue - State: open - Opened by hujuntao123 about 1 year ago - 3 comments
#6 - OOM when exporting 7b and 13b llama2 on a 3090
Issue - State: closed - Opened by hujuntao123 about 1 year ago - 1 comment
#5 - Increased GPU memory usage
Issue - State: closed - Opened by cdxzyc about 1 year ago - 1 comment
#4 - How to correctly run inference with the fp16 ONNX chatglm2-6b-32k model exported with cuda?
Issue - State: open - Opened by yuunnn-w about 1 year ago - 1 comment
#2 - Polish ChatGLM2 conversion
Pull Request - State: closed - Opened by duanqn about 1 year ago
#1 - Question about converting Qwen
Issue - State: closed - Opened by OneStepAndTwoSteps over 1 year ago - 4 comments