Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / xusenlinzy/api-for-open-llm issues and pull requests
#311 - Request errors when running glm4v
Issue -
State: open - Opened by 760485464 15 days ago
- 2 comments
#310 - Error when running streamlit_app.py
Issue -
State: open - Opened by louan1998 15 days ago
#309 - sglang backend not supported
Issue -
State: open - Opened by colinsongf 21 days ago
#308 - In TASKS=llm,rag mode, a threading error occurs: RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Issue -
State: open - Opened by syusama 30 days ago
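The RuntimeError quoted in #308 (and again in #305) already names the remedy: CUDA cannot be re-initialized in a process created with the default 'fork' start method, so the child must be started with 'spawn'. A minimal standalone sketch of that fix using Python's multiprocessing module (the `worker` function is a hypothetical stand-in, not code from this project):

```python
import multiprocessing as mp

def worker(x):
    # In real use this function would touch CUDA; with 'spawn' the child
    # starts as a fresh interpreter instead of inheriting the parent's
    # already-initialized CUDA state.
    return x * x

if __name__ == "__main__":
    # Request a 'spawn' context instead of the platform default ('fork' on Linux).
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))  # [1, 4, 9]
```

The `if __name__ == "__main__"` guard is required under 'spawn', because the child re-imports the main module.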
#307 - Error calling the rerank endpoint after deploying gte-qwen2-1.5b-instruct
Issue -
State: open - Opened by cowcomic about 1 month ago
#306 - vllm interface: support vision (minicpm-v)
Pull Request -
State: open - Opened by baisong666 about 1 month ago
#305 - Error when running in Docker: multiproc_worker_utils.py:226] RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Issue -
State: closed - Opened by syusama about 1 month ago
- 3 comments
#304 - python: can't open file '/workspace/api/server.py': [Errno 2] No such file or directory; error deploying Qwen2-72B-Instruct-GPTQ-Int4 via docker-compose on Ubuntu
Issue -
State: closed - Opened by syusama about 1 month ago
#303 - Problem with the Qwen2-7B-Instrut model when using vLLM
Issue -
State: closed - Opened by Empress7211 about 1 month ago
- 3 comments
#302 - RuntimeError: CUDA error: device-side assert triggered
Issue -
State: closed - Opened by ChaoPeng13 about 1 month ago
#301 - 💡 [REQUEST] - Could China Telecom's Telechat LLM be supported? The pipeline runs end to end, but the reply content gets truncated
Issue -
State: open - Opened by Song345381185 about 2 months ago
- 9 comments
Labels: question
#300 - 💡 [REQUEST] - Could China Telecom's Telechat LLM be supported? The pipeline runs end to end, but the reply content gets truncated
Issue -
State: closed - Opened by Song345381185 about 2 months ago
Labels: question
#299 - FileNotFoundError when using doc chat: Table does not exist. Please first call db.create_table(, data)
Issue -
State: closed - Opened by Weiqiang-Li about 2 months ago
- 1 comment
#297 - [embedding] Are the latest SOTA models unsupported? KeyError: 'Could not automatically map text2vec-base-multilingual to a tokeniser.
Issue -
State: closed - Opened by ForgetThatNight 2 months ago
- 2 comments
#296 - llama3-8B keeps talking to itself after answering and does not stop
Issue -
State: open - Opened by yd9038074 2 months ago
- 1 comment
#292 - minicpm starts fine, but inference requests return errors
Issue -
State: open - Opened by 760485464 3 months ago
- 2 comments
#291 - glm-4v starts normally, but inference requests return errors
Issue -
State: open - Opened by 760485464 3 months ago
- 10 comments
#288 - glm4 cannot trigger tool use after being connected to dify
Issue -
State: open - Opened by he498 3 months ago
- 1 comment
#189 - Suggest intercepting and aborting execution when ChatGLM3 input length exceeds 8k
Issue -
State: closed - Opened by lzhfe 10 months ago
- 3 comments
#100 - Model outputs <|im_start|> <|im_start|>
Issue -
State: closed - Opened by bh4ffu about 1 year ago
- 2 comments
#99 - Error calling baichuan 13b via langchain after deployment
Issue -
State: closed - Opened by zhouzhou0322 about 1 year ago
- 5 comments
#98 - Startup error: TypeError: issubclass() arg 1 must be a class
Issue -
State: closed - Opened by zhouzhou0322 about 1 year ago
- 5 comments
#97 - codellama-34b-instruct-hf responses get truncated
Issue -
State: closed - Opened by anyshu about 1 year ago
- 3 comments
#96 - Fixed baremetal startup process
Pull Request -
State: closed - Opened by wey-gu about 1 year ago
#95 - In streaming output, role is null
Issue -
State: closed - Opened by bh4ffu about 1 year ago
#94 - Startup is now configured via environment variables; how do I start two model instances on one machine?
Issue -
State: closed - Opened by anyshu about 1 year ago
- 2 comments
Labels: question
#93 - Startup is now configured via environment variables; how do I start two model instances on one machine?
Issue -
State: closed - Opened by anyshu about 1 year ago
#92 - Quantized loading of the 13B model reports GPU out of memory
Issue -
State: closed - Opened by chelovek21 about 1 year ago
- 2 comments
#91 - Error running the new code
Issue -
State: closed - Opened by xsun15 about 1 year ago
- 2 comments
#90 - [help] Could you consider supporting codellama?
Issue -
State: closed - Opened by bh4ffu about 1 year ago
- 4 comments
Labels: enhancement
#89 - ValueError: Out of range float values are not JSON compliant
Issue -
State: closed - Opened by 143heyan about 1 year ago
- 4 comments
#88 - usage.first_tokens = content["usage"].get("first_tokens", None)
Issue -
State: closed - Opened by uulichen about 1 year ago
- 5 comments
#87 - While running sudo docker build -f docker/Dockerfile -t llm-api:pytorch ., connecting to the remote repository fails
Issue -
State: closed - Opened by xxs980 about 1 year ago
- 8 comments
#86 - For the qwen model, a problem with the decode parameter when using the completion endpoint
Issue -
State: closed - Opened by nlfiasel about 1 year ago
- 1 comment
#85 - Unable to find image 'llm-api:vllm' locally
Issue -
State: closed - Opened by xiechengmude about 1 year ago
- 1 comment
#84 - "GET /v1 HTTP/1.1" 404 Not Found
Issue -
State: closed - Opened by YunFenLei about 1 year ago
- 1 comment
#83 - Qwen model outputs differ between the vLLM server and the normal server.
Issue -
State: closed - Opened by monksgoyal about 1 year ago
- 4 comments
#82 - 💡 [REQUEST] - Does chatglm2 support multi-turn conversation?
Issue -
State: closed - Opened by huanglx27 about 1 year ago
- 1 comment
Labels: question
#81 - Update requirements.txt
Pull Request -
State: closed - Opened by luchenwei9266 about 1 year ago
#80 - When the client stops receiving before the model finishes output, the server throws an error.
Issue -
State: closed - Opened by jinghai about 1 year ago
- 10 comments
#79 - starchat model produces garbled output under vllm inference
Issue -
State: closed - Opened by skingko about 1 year ago
- 3 comments
#78 - Tried it and GPU memory usage increased noticeably with Qwen-7B-chat; not sure why
Issue -
State: closed - Opened by bh4ffu about 1 year ago
- 2 comments
#77 - Error after running python api/server.py: ModuleNotFoundError: No module named 'api.config'
Issue -
State: closed - Opened by luchenwei9266 about 1 year ago
- 7 comments
#76 - Could you set up a chat group for discussion?
Issue -
State: closed - Opened by queensking about 1 year ago
- 2 comments
#75 - Error running Qwen with vLLM
Issue -
State: closed - Opened by jinghai about 1 year ago
- 8 comments
#74 - 💡 [REQUEST] - Suggest adding a concurrency configuration option
Issue -
State: closed - Opened by jinghai about 1 year ago
Labels: question
#73 - Building the vllm image hangs at 'Installing build dependencies: started'
Issue -
State: closed - Opened by bh4ffu about 1 year ago
- 4 comments
#72 - 💡 [REQUEST] - Can the embedding model be configured separately to use CPU resources?
Issue -
State: closed - Opened by jinghai about 1 year ago
- 2 comments
Labels: question
#71 - Can a public API be provided for other machines to use?
Issue -
State: closed - Opened by 15899885850 about 1 year ago
- 2 comments
#70 - Question about asynchronous model invocation
Issue -
State: closed - Opened by Isfate about 1 year ago
- 2 comments
#69 - Error starting Qwen-7B-Chat under vllm
Issue -
State: closed - Opened by xcpuma about 1 year ago
- 6 comments
#68 - 💡 [REQUEST] - QWen model streaming Q&A output never stops; not sure what the problem is
Issue -
State: closed - Opened by jinghai about 1 year ago
- 20 comments
Labels: question
#67 - Merge pull request #1 from xusenlinzy/master
Pull Request -
State: closed - Opened by xysnqdd about 1 year ago
#66 - 💡 [REQUEST] - Is there a way to expand the embedding model's output to 1536 dimensions? Some systems are fixed at 1536 dimensions; how can this be made compatible?
Issue -
State: closed - Opened by jinghai about 1 year ago
- 11 comments
Labels: question
#65 - Error running starchat inference with vllm; the error is as follows:
Issue -
State: closed - Opened by foxxxx001 about 1 year ago
- 11 comments
#64 - Dev
Pull Request -
State: closed - Opened by xusenlinzy about 1 year ago
#63 - Qwen-7B on Windows: ValueError: Unrecognized configuration class <class 'transformers_modules.Qwen-7B-Chat.configuration_qwen.QWenConfig'> for this kind of AutoModel: AutoModel.
Issue -
State: closed - Opened by luckfu about 1 year ago
- 1 comment
#62 - Merge pull request #61 from xusenlinzy/master
Pull Request -
State: closed - Opened by xusenlinzy about 1 year ago
#61 - Fix protocol, improve react code, support stream mode for function call
Pull Request -
State: closed - Opened by xusenlinzy about 1 year ago
#60 - Error adding an embedding model when starting via vllm
Issue -
State: closed - Opened by youzhonghui about 1 year ago
- 1 comment
Labels: bug, help wanted
#59 - Installed successfully, but errors appear after a few calls.
Issue -
State: closed - Opened by Hkaisense about 1 year ago
- 2 comments
Labels: environment
#58 - Torch not compiled with CUDA enabled
Issue -
State: closed - Opened by happyfire about 1 year ago
- 3 comments
Labels: environment
#57 - My Qwen LLM startup failed
Issue -
State: closed - Opened by Hkaisense about 1 year ago
- 7 comments
Labels: solved
#56 - Help: error running Baichuan-13b-chat
Issue -
State: closed - Opened by happyfire about 1 year ago
- 2 comments
#55 - After connecting to dify, the model does not appear on the dify page; how should it be configured so it shows up?
Issue -
State: closed - Opened by link-king about 1 year ago
- 1 comment
#54 - Does tiktoken.model.encoding_for_model require internet access?
Issue -
State: closed - Opened by TheBobbyliu about 1 year ago
- 5 comments
#53 - Suggest adding bge-large-zh as an embedding model
Issue -
State: closed - Opened by berwinjoule about 1 year ago
- 1 comment
Labels: solved
#52 - Function call is incompatible with stream mode
Issue -
State: closed - Opened by zhengxiang5965 about 1 year ago
- 4 comments
#51 - Why is the answer different every time?
Issue -
State: closed - Opened by lucheng07082221 about 1 year ago
- 1 comment
#50 - Docker starts without errors, but the port is not listening and cannot be reached
Issue -
State: closed - Opened by zengzhenhui about 1 year ago
- 2 comments
#49 - Hi, could you provide a setup procedure for non-Docker environments? Our server cannot install Docker, so many commands cannot be run. Thanks.
Issue -
State: closed - Opened by wenyu332 about 1 year ago
- 3 comments
#48 - Request: handle multiple requests concurrently
Issue -
State: closed - Opened by Huangyajuan-123 about 1 year ago
- 3 comments
Labels: enhancement
#47 - Should Qwen's acceleration dependencies be added to the Docker image?
Issue -
State: closed - Opened by jinghai about 1 year ago
- 2 comments
#46 - Error after updating to the latest code
Issue -
State: closed - Opened by jinghai about 1 year ago
- 5 comments
#45 - Suggest adding function-call support for the llama2 model family
Issue -
State: closed - Opened by skingko about 1 year ago
- 3 comments
Labels: enhancement
#44 - Update prompt_adapter.py to fix a Qwen bug
Pull Request -
State: closed - Opened by markliuyuxiang about 1 year ago
- 1 comment
#43 - Results from the chat/completions and completions endpoints differ significantly
Issue -
State: closed - Opened by TheBobbyliu about 1 year ago
- 4 comments
#42 - Question about inference speed (very fast)
Issue -
State: closed - Opened by onlyfish79 about 1 year ago
- 8 comments
#41 - Can llama2-hf be supported?
Issue -
State: closed - Opened by Smile-L about 1 year ago
- 2 comments
#40 - Calling baichuan through this service gives poor, often unusable output; very different from calling the original directly
Issue -
State: closed - Opened by askintution about 1 year ago
- 4 comments
#39 - Support new hope model
Issue -
State: closed - Opened by 2214962083 about 1 year ago
- 2 comments
#38 - GPU memory usage doubles when loading a local firefly-baichuan13b model
Issue -
State: closed - Opened by bswaterb about 1 year ago
- 3 comments
#37 - After proxying the API with chatgpt.py, is streaming output no longer supported? My frontend is next-web
Issue -
State: closed - Opened by tonyliu088 about 1 year ago
- 15 comments
#36 - Why are Word files not supported?
Issue -
State: closed - Opened by lucheng07082221 about 1 year ago
- 2 comments
#35 - Does the knowledge base support uploading folders?
Issue -
State: closed - Opened by lucheng07082221 about 1 year ago
- 2 comments
#34 - Access error
Issue -
State: closed - Opened by lucheng07082221 about 1 year ago
- 2 comments
#33 - fix: change the default of the context_len command-line argument from 2048 to None; otherwise context_len from the model config is not read
Pull Request -
State: closed - Opened by calehh about 1 year ago
#32 - How can a specific GPU be selected?
Issue -
State: closed - Opened by jinghai about 1 year ago
- 1 comment
#31 - Questions about the baichuan 13b chat code.
Issue -
State: closed - Opened by askintution about 1 year ago
- 3 comments
#30 - Suggest adding support for the commercial Wenxin Yiyan (ERNIE Bot) and Xinghuo (iFLYTEK Spark) models
Issue -
State: closed - Opened by jinghai about 1 year ago
- 1 comment
#29 - Error loading files into the vector store: "zipfile.BadZipFile: File is not a zip file"
Issue -
State: closed - Opened by TheBobbyliu about 1 year ago
- 1 comment
#28 - Error after adding a knowledge base
Issue -
State: closed - Opened by whh1009 about 1 year ago
- 4 comments
#27 - Request for adding Aquila Family
Issue -
State: closed - Opened by myweihp about 1 year ago
- 3 comments
#26 - Suggest supporting starcoder
Issue -
State: closed - Opened by praguepp about 1 year ago
- 1 comment
Labels: enhancement
#25 - baichuan-13b-chat 500 Internal Server Error
Issue -
State: closed - Opened by anyshu about 1 year ago
- 2 comments
#24 - Question about deploying multiple LLMs simultaneously
Issue -
State: closed - Opened by myweihp about 1 year ago
- 2 comments
#23 - bug fix: specifying a GPU had no effect; if the torch module is imported first, variables set via os.environ are ignored, so the visible-GPU variable must be set before importing torch; also renamed the app file so the original import path is unchanged
Pull Request -
State: closed - Opened by gsy44355 about 1 year ago
#22 - bug fix: outside the Docker image, the --gpus parameter has no effect when loading a model
Issue -
State: closed - Opened by gsy44355 about 1 year ago
- 3 comments
#21 - What is the base image for the baichuan model service?
Issue -
State: closed - Opened by Smile-L about 1 year ago
- 1 comment
#20 - Unable to load the m3e-base model
Issue -
State: closed - Opened by 760485464 about 1 year ago
- 1 comment