Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / luchangli03/export_llama_to_onnx issues and pull requests

#20 - After converting the Llama model to ONNX, can the input be input_embeds?

Issue - State: closed - Opened by OswaldoBornemann 3 months ago - 2 comments

#19 - Which transformers version corresponds to llama3?

Issue - State: closed - Opened by sihouzi21c 3 months ago - 1 comment

#14 - Is there a script for exporting the Qwen-VL 7B model to ONNX?

Issue - State: open - Opened by chantjhang 8 months ago

#13 - Error when reading the converted ONNX model with the onnx library

Issue - State: closed - Opened by L1-M1ng 9 months ago - 7 comments
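
Errors like the one reported in #13 can often be narrowed down by validating the exported graph with the onnx checker before any runtime is involved. A minimal sketch, assuming a hypothetical output file named llama.onnx:

```python
import onnx

# Load the exported graph; for LLM-sized exports the weights are usually stored
# as external data files next to the .onnx file.
model = onnx.load("llama.onnx")  # hypothetical file name

# Structural validation; raises onnx.checker.ValidationError if the graph is malformed.
# For models larger than 2 GB, pass the file path string instead of the loaded proto.
onnx.checker.check_model(model)

# List the declared inputs to confirm names and shapes before wiring up inference.
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```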

#12 - Error when converting QWen-7B

Issue - State: closed - Opened by L1-M1ng 9 months ago - 2 comments

#10 - "atten_mask:5 error" reported when converting the qwen model

Issue - State: open - Opened by louwangzhiyuY 11 months ago - 2 comments

#9 - OOM when exporting QWen-7b on a 3090

Issue - State: open - Opened by linthy94 about 1 year ago - 3 comments
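
For context on this report and #8 below, the numbers are consistent with simple parameter arithmetic: a 7B-parameter model in fp16 already occupies roughly 7e9 × 2 B ≈ 14 GB, and the export path typically holds extra copies of weights and traced activations, so peak usage can plausibly exceed a 3090's 24 GB.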

#8 - OOM on a single a6000 50g

Issue - State: open - Opened by 77281900000 about 1 year ago - 1 comment

#7 - Is there an ONNX inference script for llama?

Issue - State: open - Opened by hujuntao123 about 1 year ago - 3 comments
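
Since #7 asks for an inference script that the repository does not appear to ship, here is a minimal onnxruntime sketch; the file name and the input names (input_ids, attention_mask) are assumptions and must be matched to whatever the export actually declares.

```python
import numpy as np
import onnxruntime as ort

# Open a CPU session on the exported graph; the file name is hypothetical.
session = ort.InferenceSession("llama.onnx", providers=["CPUExecutionProvider"])

# Print the real input names rather than guessing them.
print([i.name for i in session.get_inputs()])

# Assumed input names and dummy token ids; adjust to the names printed above.
input_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)
attention_mask = np.ones_like(input_ids)

outputs = session.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})
print(outputs[0].shape)  # typically logits of shape (batch, seq_len, vocab_size)
```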

#6 - OOM when exporting 7b and 13b llama2 on a 3090

Issue - State: closed - Opened by hujuntao123 about 1 year ago - 1 comment

#5 - Increased GPU memory usage

Issue - State: closed - Opened by cdxzyc about 1 year ago - 1 comment

#4 - How to correctly run inference with the fp16 ONNX chatglm2-6b-32k model exported with cuda?

Issue - State: open - Opened by yuunnn-w about 1 year ago - 1 comment
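
For the fp16/CUDA question in #4, the same onnxruntime approach applies, just with the GPU execution provider and float16 feeds for any floating-point inputs. A sketch under the assumption that the exported file is named chatglm2-6b-32k_fp16.onnx:

```python
import numpy as np
import onnxruntime as ort

# CUDAExecutionProvider requires the onnxruntime-gpu package and a matching
# CUDA/cuDNN installation; CPU is listed as a fallback.
session = ort.InferenceSession(
    "chatglm2-6b-32k_fp16.onnx",  # hypothetical file name
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Integer inputs (token ids) stay int64 even in an fp16 graph; any floating-point
# inputs must be fed as np.float16 to match the exported tensor dtypes.
for i in session.get_inputs():
    print(i.name, i.type, i.shape)

# Feed shown only for the assumed single-input case; real exports may declare
# more inputs (see the names printed above).
input_ids = np.array([[1, 2, 3]], dtype=np.int64)  # dummy token ids
outputs = session.run(None, {"input_ids": input_ids})
print([o.dtype for o in outputs])
```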

#3 - Scope of applicability

Issue - State: open - Opened by hardlipay about 1 year ago - 1 comment

#2 - Polish ChatGLM2 conversion

Pull Request - State: closed - Opened by duanqn about 1 year ago

#1 - Question about converting Qwen

Issue - State: closed - Opened by OneStepAndTwoSteps over 1 year ago - 4 comments