Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / THUDM/GLM-4 issues and pull requests
#451 - Rust Candle Framework Support
Issue -
State: closed - Opened by donjuanplatinum 3 months ago
#447 - GLM-4 and Dify Response Reception Issue
Issue -
State: closed - Opened by ZYMCCX 3 months ago
- 2 comments
#443 - Bug: GLM-4: The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Issue -
State: closed - Opened by 2662007798 3 months ago
- 18 comments
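The warning in #443 is what `transformers` emits when the pad and eos tokens share an id. A minimal pure-Python sketch of why the mask becomes ambiguous in that case — `infer_attention_mask` is a hypothetical helper for illustration, not the library's actual code; the real fix is to pass the tokenizer's `attention_mask` tensor explicitly:

```python
# Why an attention mask can't be inferred when pad_token == eos_token
# (the situation behind issue #443). Plain lists stand in for tensors.

def infer_attention_mask(input_ids, pad_token_id, eos_token_id):
    """Guess a mask by treating pad-token positions as padding.

    Returns None when pad and eos share an id: a trailing run of that
    id could be genuine eos output or padding, so inference is ambiguous
    and the caller must supply the mask itself.
    """
    if pad_token_id == eos_token_id:
        return None  # ambiguous -- explicit attention_mask required
    return [0 if tok == pad_token_id else 1 for tok in input_ids]

# Distinct ids: the mask is recoverable from the ids alone.
print(infer_attention_mask([5, 6, 7, 0, 0], pad_token_id=0, eos_token_id=2))
# -> [1, 1, 1, 0, 0]
# Shared id: inference is refused, mirroring the library warning.
print(infer_attention_mask([5, 6, 7, 2, 2], pad_token_id=2, eos_token_id=2))
# -> None
```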
#442 - Why is the system message sent twice?
Issue -
State: open - Opened by ciaoyizhen 3 months ago
- 5 comments
#439 - TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'
Issue -
State: closed - Opened by Mellow156 3 months ago
- 10 comments
#438 - Fine-tuning error: ValueError: 151337 is not in list
Issue -
State: open - Opened by TTXS123OK 3 months ago
- 10 comments
#437 - How do I load a model fine-tuned with P-Tuning? As shown in the screenshot, it reports that prompt learning is not supported, yet the same code can load a LoRA fine-tuned model, and the code is the loading code you provide.
Issue -
State: closed - Opened by LeeGitHub1 3 months ago
- 3 comments
#433 - [glm-4v-9b] About padding during eval.
Issue -
State: closed - Opened by marko1616 3 months ago
- 2 comments
#424 - ChatGLM4 padding details during batch inference
Issue -
State: closed - Opened by geekchen007 4 months ago
- 2 comments
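The padding questions in #424 and #433 come up because decoder-only chat models are usually batched with left padding, so the last position of every row is a real token for generation to continue from. A sketch under that assumption — `pad_batch` is a hypothetical helper, not GLM-4 code:

```python
# Left vs. right padding of a ragged id batch (the detail behind
# #424/#433). Plain lists stand in for tensors.

def pad_batch(seqs, pad_id, side="left"):
    """Pad variable-length token-id lists to a rectangular batch."""
    width = max(len(s) for s in seqs)
    rows = []
    for s in seqs:
        pad = [pad_id] * (width - len(s))
        rows.append(pad + s if side == "left" else s + pad)
    return rows

batch = [[11, 12], [21, 22, 23, 24]]
# Left padding keeps every row's final token real:
print(pad_batch(batch, pad_id=0, side="left"))   # [[0, 0, 11, 12], [21, 22, 23, 24]]
# Right padding would leave pad ids at the generation position:
print(pad_batch(batch, pad_id=0, side="right"))  # [[11, 12, 0, 0], [21, 22, 23, 24]]
```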
#408 - Continued fine-tuning of GLM-4V-9B errors out with out-of-GPU-memory
Issue -
State: closed - Opened by HouYueJie 4 months ago
- 5 comments
#395 - Running basic_demo errors out: The attention mask is not set and cannot be inferred from input
Issue -
State: closed - Opened by WENBO-Z-H 4 months ago
- 5 comments
#390 - GLM4V occasionally loops during inference, repeatedly outputting the same text
Issue -
State: closed - Opened by simoncai519 4 months ago
- 3 comments
#386 - Fine-tuning glm-4v9b with the provided script fails
Issue -
State: closed - Opened by chenyangMl 4 months ago
- 15 comments
#381 - Model inference with vLLM reports an error
Issue -
State: closed - Opened by shatang123 4 months ago
- 1 comment
#375 - LoRA fine-tuning on four 3090s fails: OutOfMemoryError: CUDA out of memory. Tried to allocate 214.00 MiB. GPU
Issue -
State: closed - Opened by ShepherdX 4 months ago
- 10 comments
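The OOM reports around this point in the list (#375, and #322/#324 below) line up with back-of-envelope memory arithmetic: for a 9B-parameter model, the bf16 weights alone approach a 24 GB card's capacity before activations, gradients, or optimizer states are counted. A rough rule-of-thumb sketch, not a measurement:

```python
# Back-of-envelope GPU memory arithmetic for a 9B-parameter model.

def weights_gib(n_params, bytes_per_param):
    """Memory occupied by the raw weights, in GiB."""
    return n_params * bytes_per_param / 2**30

bf16 = weights_gib(9e9, 2)        # bf16: 2 bytes per parameter
adam_extra = weights_gib(9e9, 8)  # fp32 Adam moments: ~8 extra bytes/param
print(f"bf16 weights:          {bf16:.1f} GiB")        # ~16.8 GiB
print(f"Adam optimizer states: {adam_extra:.1f} GiB")  # ~67.1 GiB more
```

Which is why full fine-tuning blows past consumer cards while LoRA (optimizer states only for the small adapter) can fit, provided activation memory is also kept in check.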
#362 - What differs between the chatglm3 and glm4 tokenizers? chatglm3 works with outlines, but glm4 raises an error
Issue -
State: closed - Opened by Mewral 4 months ago
- 3 comments
#333 - Garbled output after running GLM-4-9b with Ollama
Issue -
State: closed - Opened by lalahaohaizi 4 months ago
- 20 comments
#324 - GPU memory explodes when fine-tuning on a single A40 (single-node, single-GPU)
Issue -
State: closed - Opened by NewDingW 4 months ago
- 6 comments
#323 - GLM4 deployed with Ollama outputs long runs of "G" on information-extraction tasks
Issue -
State: closed - Opened by letdo1945 4 months ago
- 11 comments
#322 - Two 3090s, but GPU memory fills up and fine-tuning is impossible
Issue -
State: closed - Opened by ATIpiu 4 months ago
- 4 comments
#318 - Inference error with the official inference script (only the model path changed): RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
Issue -
State: closed - Opened by HouChenXD 4 months ago
- 8 comments
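The `probability tensor contains either inf, nan or element < 0` error in #318 (and #76 below) surfaces at sampling time: generation draws from `softmax(logits)`, and a single non-finite logit poisons the whole probability vector. A plain-Python illustration of the mechanism, not the actual torch code path:

```python
# How one nan logit invalidates an entire sampling distribution.
import math

def softmax(logits):
    """Naive softmax over a list of floats."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([1.0, 2.0, 3.0]))          # a valid distribution summing to 1
bad = softmax([1.0, float("nan"), 3.0])  # nan propagates through exp and the sum
print(bad)
print(any(math.isnan(p) for p in bad))   # True -> the sampler rejects it
```

Upstream causes reported in such threads are typically numerical (wrong dtype, corrupted weights, or device-placement bugs) rather than anything in the sampling code itself.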
#281 - CUDA running out of memory for a very small dataset (7 sample training data)
Issue -
State: closed - Opened by theharshithh 4 months ago
- 21 comments
#278 - Suggestion: provide a requirements.txt
Issue -
State: closed - Opened by robator0127 4 months ago
- 2 comments
#271 - sft.yaml hits RuntimeError: 'weight' must be 2-D
Issue -
State: closed - Opened by zzx528 5 months ago
- 7 comments
#248 - LoRA fine-tuning: eval fails with "'NoneType' object has no attribute 'to'"
Issue -
State: closed - Opened by Text2-m 5 months ago
- 8 comments
#220 - I cannot use ChatGLMForSequenceClassification for classification
Issue -
State: closed - Opened by Mr-Lnan 5 months ago
- 5 comments
#219 - Model license
Issue -
State: closed - Opened by Pickpate 5 months ago
- 4 comments
#216 - How to stop generated answers from containing emoji
Issue -
State: closed - Opened by kawayi12318 5 months ago
- 4 comments
#148 - Could you provide a multi-GPU openai_api_server deployment that does not use vLLM?
Issue -
State: closed - Opened by a624090359 5 months ago
- 11 comments
#105 - Batch inference with GLM4V errors out
Issue -
State: closed - Opened by wciq1208 5 months ago
- 3 comments
#104 - BUG in openai_api_server.py
Issue -
State: closed - Opened by yanjian1978 5 months ago
- 2 comments
#103 - GPU memory issue with multi-GPU GLM-4V inference on 24 GB 3090s
Issue -
State: closed - Opened by caojialun 5 months ago
- 6 comments
#102 - How to deploy and serve a LoRA fine-tuned model with vLLM
Issue -
State: closed - Opened by dannypei 5 months ago
- 2 comments
#101 - Async thread exception when using openai_api_server.py
Issue -
State: closed - Opened by yellowaug 5 months ago
- 1 comment
#100 - In composite_demo, the GLM-4 Demo document-interpretation chat errors out; full-precision and Int4 both raise the same error.
Issue -
State: closed - Opened by wikeeyang 5 months ago
- 6 comments
#99 - Questions about GLM4 training memory usage and network structure
Issue -
State: closed - Opened by Tendo33 5 months ago
- 1 comment
#98 - Thanks for open-sourcing. Will GLM4 provide the CPU-accelerated inference demo that CHATGLM3 had?
Issue -
State: closed - Opened by RichardFans 5 months ago
- 1 comment
#97 - Multi-GPU fine-tuning errors out
Issue -
State: closed - Opened by chengfusheng-code 5 months ago
- 1 comment
#96 - Does glm4 fine-tuning require the bf16 dtype? Can another dtype be used?
Issue -
State: closed - Opened by Alexzhibin 5 months ago
- 2 comments
#95 - GLM4 model testing errors out
Issue -
State: closed - Opened by chenjf2015103095 5 months ago
- 13 comments
#94 - Thanks for open-sourcing. I don't see a glm-4v-9b-int4 quantized model on HF or ModelScope; is it unreleased, or should we quantize it ourselves?
Issue -
State: closed - Opened by heuet 5 months ago
- 6 comments
#93 - Running streamlit run src/main.py, then using multimodal Q&A, fails with: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Issue -
State: closed - Opened by qihanghou726 5 months ago
- 3 comments
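The device-mismatch error in #93 (and #32 at the end of this list) is raised by ops like `torch.cat` when a model is sharded across GPUs and one input tensor is left on the wrong card. A torch-free toy model of the check, purely to show the failure shape — `cat` here is a hypothetical stand-in, not PyTorch's implementation:

```python
# Toy model of a same-device check: dicts stand in for tensors,
# with a "device" tag and a "data" payload.

def cat(tensors):
    """Concatenate toy tensors, refusing mixed-device inputs."""
    devices = {t["device"] for t in tensors}
    if len(devices) > 1:
        raise RuntimeError(
            "Expected all tensors to be on the same device, "
            f"but found {sorted(devices)}")
    return {"device": devices.pop(),
            "data": sum((t["data"] for t in tensors), [])}

a = {"device": "cuda:0", "data": [1, 2]}
b = {"device": "cuda:0", "data": [3]}
print(cat([a, b]))  # same device: fine

c = {"device": "cuda:1", "data": [4]}
try:
    cat([a, c])     # mixed devices: the error the issues report
except RuntimeError as e:
    print(e)
```

In the real threads, the usual remedies are moving the stray tensor with `.to(model.device)` or letting `device_map="auto"` place everything consistently.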
#92 - Does GLM-4V-9B support multi-turn dialogue with multiple uploaded images?
Issue -
State: closed - Opened by Sisi0518 5 months ago
- 1 comment
#91 - Add a block for setting the system prompt
Pull Request -
State: closed - Opened by ztxtech 5 months ago
- 7 comments
#90 - Add info of SWIFT
Pull Request -
State: closed - Opened by tastelikefeet 5 months ago
#89 - How do I get started with GLM-4 9B from scratch?
Issue -
State: closed - Opened by Micla-SHL 5 months ago
- 2 comments
#88 - Running `basic_demo/trans_web_demo.py` fails: model path not found
Issue -
State: closed - Opened by fjqz177 5 months ago
- 5 comments
#87 - Official GLM-4-9B example base_demo/openai_api_request.py: streaming output is empty
Issue -
State: closed - Opened by zhoumz123 5 months ago
- 10 comments
#86 - Multi-GPU loading has problems; updated modeling chatglm as instructed, but it still fails
Issue -
State: closed - Opened by xbsdsongnan 5 months ago
- 3 comments
#85 - vLLM Docker deployment: model won't stop generating and produces nonsense
Issue -
State: closed - Opened by sycamore792 5 months ago
- 2 comments
#84 - Does GLM-4V support multiple images?
Issue -
State: closed - Opened by EthanLeo-LYX 5 months ago
- 3 comments
#83 - Could the bug reported in the glm4v Hugging Face issue be fixed?
Issue -
State: closed - Opened by lucasjinreal 5 months ago
- 11 comments
#82 - Could the team release an openai_api_server_2 demo program that supports int4 and is not built on vLLM? Thanks
Issue -
State: closed - Opened by triumph 5 months ago
- 8 comments
#81 - lm_eval testing errors out
Issue -
State: closed - Opened by SefaZeng 5 months ago
- 3 comments
#80 - Slack link no longer active
Issue -
State: closed - Opened by CharlieJCJ 5 months ago
- 1 comment
#79 - Proper prompt to invoke tool use.
Issue -
State: closed - Opened by Fanjia-Yan 5 months ago
- 4 comments
#78 - After running the main.py file via stream, clicking the link raises match mode: ^ SyntaxError: invalid syntax
Issue -
State: closed - Opened by yangpyoung 5 months ago
- 1 comment
#77 - When will THUDM/glm-4v-9b support fine-tuning? Thanks!
Issue -
State: closed - Opened by ljch2018 5 months ago
- 2 comments
#76 - `inf`, `nan` or element < 0
Issue -
State: closed - Opened by cmx4869 5 months ago
- 4 comments
#75 - Bug handling for direct multi-GPU inference
Issue -
State: closed - Opened by cbigeyes 5 months ago
- 5 comments
#74 - composite_demo/main.py has a bug handling docx files
Issue -
State: closed - Opened by inorixu 5 months ago
- 1 comment
#73 - GLM-4V-9B quantized with bitsandbytes: input error
Issue -
State: closed - Opened by JoeAu 5 months ago
- 10 comments
#72 - Cannot deploy GLM-4 locally; many dependency packages fail to install
Issue -
State: closed - Opened by diypyh 5 months ago
- 2 comments
#71 - self-llm (《开源大模型食用指南》, the open-source LLM cookbook) has updated its deployment and fine-tuning tutorials for the GLM-4-9B-chat model!
Issue -
State: closed - Opened by KMnO4-zx 5 months ago
- 1 comment
#70 - Fix openai_api_server request_id issue
Pull Request -
State: closed - Opened by T-Atlas 5 months ago
#69 - fix: first system prompt not worked
Pull Request -
State: closed - Opened by liuzhenghua 5 months ago
- 2 comments
#68 - GLM-4-9B-chat fails at runtime: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasGemmStridedBatchedExFix`
Issue -
State: closed - Opened by sly123197811 5 months ago
- 4 comments
#67 - Running trans_cli_vision_demo.py on multiple GPUs errors out
Issue -
State: closed - Opened by YiXinChenChen 5 months ago
- 5 comments
#66 - Formatted output for function calls
Issue -
State: closed - Opened by cxjtju 5 months ago
- 1 comment
#65 - First system prompt not working (need to add an empty system prompt first).
Issue -
State: closed - Opened by liuzhenghua 5 months ago
- 1 comment
#64 - Could openai_api_server.py be upgraded to an API compatible with GPT-4's tools calling?
Issue -
State: closed - Opened by anming81 5 months ago
- 4 comments
#63 - requirements installation fails
Issue -
State: closed - Opened by tiandaoyuxi 5 months ago
- 2 comments
#62 - Using the fine-tuned model errors out, in both inference.py and vllm.py
Issue -
State: closed - Opened by JohnnyBoyzzz 5 months ago
- 2 comments
#61 - Example of how to call tools
Issue -
State: closed - Opened by Awyshw 5 months ago
- 1 comment
#60 - Zhipu streaming output returns empty results
Issue -
State: closed - Opened by chwang589 5 months ago
#59 - Running the multimodal model with composite_demo raises RuntimeError: view size is not compatible with input tensor's size....
Issue -
State: closed - Opened by kailiu9237 5 months ago
- 8 comments
#58 - glm4 9b 1m fails on startup
Issue -
State: closed - Opened by brightzhu2020 5 months ago
- 12 comments
#57 - request for gradio demo for GLM4-vision
Issue -
State: closed - Opened by MontaEllis 5 months ago
- 2 comments
#55 - Asking about memory requirements at different context lengths
Issue -
State: closed - Opened by mojerro 5 months ago
- 2 comments
#54 - Any forward plan to support llama.cpp?
Issue -
State: closed - Opened by SolomonLeon 5 months ago
- 1 comment
#53 - Possible issue with how the low_cpu_mem_usage=True parameter is used
Issue -
State: closed - Opened by dogvane 5 months ago
- 3 comments
#52 - For multimodal tasks, is GLM4 or CogVLM2 better?
Issue -
State: closed - Opened by wciq1208 5 months ago
- 6 comments
#51 - Service started by basic_demo/openai_api_server.py returns no results when called via the streaming endpoint
Issue -
State: closed - Opened by YassinYin 5 months ago
- 1 comment
#50 - Search and long-document features are broken
Issue -
State: closed - Opened by lingyezhixing 5 months ago
- 8 comments
#49 - Hello, per the provided fine-tuning code, the observation part is shown as requiring loss computation during tool fine-tuning? Is it certain that loss shouldn't be computed there, or did I get a calculation wrong?
Issue -
State: closed - Opened by zywang-work 5 months ago
- 2 comments
#48 - Asking why tiktoken replaced sentencepiece
Issue -
State: closed - Opened by akiragy 5 months ago
- 1 comment
#47 - vllm_cli_demo errors out
Issue -
State: closed - Opened by dannypei 5 months ago
- 3 comments
#46 - About loss computation for multi-turn dialogue fine-tuning
Issue -
State: closed - Opened by RyanOvO 5 months ago
- 4 comments
#45 - OCR quality
Issue -
State: closed - Opened by buptlihang 5 months ago
- 7 comments
#44 - GLM-4V-9B support in llama.cpp
Issue -
State: closed - Opened by thesby 5 months ago
- 3 comments
#43 - remove peft in demo
Pull Request -
State: closed - Opened by arkohut 5 months ago
- 3 comments
#42 - fix composite_demo/readme
Pull Request -
State: closed - Opened by SkyFlap 5 months ago
#39 - any plan for releasing the tech report?
Issue -
State: closed - Opened by bpwl0121 5 months ago
- 1 comment
#38 - TypeError: Fraction.__new__() got an unexpected keyword argument '_normalize'
Issue -
State: closed - Opened by ArlanCooper 5 months ago
- 2 comments
#37 - fix: add missing dependency peft
Pull Request -
State: closed - Opened by arkohut 5 months ago
- 2 comments
#36 - Suggestion: improve the openai_api_server code so function_calling responses match the OpenAI format
Issue -
State: closed - Opened by Mars-1990 5 months ago
- 4 comments
#35 - tokenization_chatglm.py errors out
Issue -
State: closed - Opened by Ksuriuri 5 months ago
- 5 comments
#34 - Does composite_demo All Tools mode support int4? If so, what needs changing? Thanks
Issue -
State: closed - Opened by triumph 5 months ago
- 2 comments
#33 - AsyncLLMEngine.generate() got an unexpected keyword argument 'inputs'
Issue -
State: closed - Opened by zky001 5 months ago
- 4 comments
#32 - Running python trans_web_demo.py on multiple GPUs raises RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
Issue -
State: closed - Opened by fredliu168 5 months ago
- 8 comments