Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
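Since the page describes an open API service for issue and PR metadata, a minimal sketch of fetching such data programmatically is shown below. The endpoint path, URL structure, and response field names are assumptions based on common REST conventions, not confirmed from this page.

```python
# Sketch: fetching issue metadata for a repository from the
# ecosyste.ms Issues service. The endpoint path and response
# fields below are illustrative assumptions.
import json
import urllib.request

BASE = "https://issues.ecosyste.ms/api/v1"  # assumed base URL


def build_issues_url(host: str, owner: str, repo: str) -> str:
    """Build the (assumed) issues endpoint for one repository."""
    return f"{BASE}/hosts/{host}/repositories/{owner}%2F{repo}/issues"


def fetch_issues(host: str, owner: str, repo: str) -> list:
    """Fetch and decode the JSON issue list (requires network access)."""
    with urllib.request.urlopen(build_issues_url(host, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical usage against the repository listed on this page.
    for issue in fetch_issues("GitHub", "ModelTC", "lightllm"):
        print(issue.get("number"), issue.get("title"), issue.get("state"))
```

The fetch is kept behind `__main__` so the URL construction can be exercised without a network connection.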
GitHub / ModelTC/lightllm issues and pull requests
#31 - generate text garbled
Issue -
State: closed - Opened by ChristineSeven over 1 year ago
- 14 comments
Labels: help wanted
#30 - Refactoring code for readability
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#29 - Add error reports from sub process
Pull Request -
State: closed - Opened by llehtahw over 1 year ago
#28 - OOM when prompt length exceeds 1020.
Issue -
State: closed - Opened by tonylin52 over 1 year ago
- 5 comments
#27 - add nccl_port to startup args
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#26 - LLama model support question
Issue -
State: closed - Opened by freeliuzc over 1 year ago
- 6 comments
#25 - Can nccl init port address be exposed to command line interface?
Issue -
State: closed - Opened by haoranchen06 over 1 year ago
- 2 comments
#24 - Implementation of Positional Interpolation (PI) Feature
Pull Request -
State: closed - Opened by andy-yang-1 over 1 year ago
#23 - Encounter error when serving with vicuna-13b-v1.3.
Issue -
State: closed - Opened by shanshanpt over 1 year ago
- 9 comments
#22 - Auto-set value for api server arg batch_max_tokens, to avoid the program getting stuck
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#21 - llama2-70B: service fails to start after loading completes
Issue -
State: closed - Opened by zackdist over 1 year ago
- 7 comments
#20 - benchmark stuck
Issue -
State: closed - Opened by leiwen83 over 1 year ago
- 18 comments
#19 - triton kernel compile error
Issue -
State: closed - Opened by leiwen83 over 1 year ago
- 7 comments
#18 - Install lightllm in dockerfile
Pull Request -
State: closed - Opened by hamelsmu over 1 year ago
- 1 comment
#17 - Stream output
Issue -
State: closed - Opened by tonylin52 over 1 year ago
- 6 comments
#16 - Error when calling the API
Issue -
State: open - Opened by xxm1668 over 1 year ago
- 14 comments
#15 - I tried test_llama.py, but.... help... T^T
Issue -
State: open - Opened by MissQueen over 1 year ago
- 4 comments
#14 - Usage instructions for the server parameter
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#13 - my A800 80G*8
Issue -
State: open - Opened by weisihao over 1 year ago
- 7 comments
#12 - Docker images
Issue -
State: closed - Opened by Vincent131499 over 1 year ago
- 1 comment
Labels: enhancement
#11 - Optimize test code to reduce duplicate code.
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#10 - provide some examples
Issue -
State: closed - Opened by lucasjinreal over 1 year ago
- 1 comment
#9 - [feature request] add prompt styles support
Issue -
State: open - Opened by jiacheo over 1 year ago
- 4 comments
Labels: enhancement
#8 - Comparison with deepspeed inference?
Issue -
State: closed - Opened by allanj over 1 year ago
- 1 comment
#7 - the stream output is same to OpenAI?
Issue -
State: closed - Opened by moseshu over 1 year ago
- 1 comment
#6 - Fix filter batch
Pull Request -
State: closed - Opened by llehtahw over 1 year ago
#5 - Reduce some gpu ops
Pull Request -
State: closed - Opened by llehtahw over 1 year ago
#4 - why not support the n parameter?
Issue -
State: closed - Opened by BaiMoHan over 1 year ago
- 2 comments
#3 - add dep package "safetensors" in setup.py and requirements.txt for feature "support llama model load from safetensor format"
Pull Request -
State: closed - Opened by hiworldwzj over 1 year ago
#2 - Llama2 and llama-30b does not work
Issue -
State: closed - Opened by sureshbhusare over 1 year ago
- 26 comments
#1 - [feat]: support safetensors for llama && llama2
Pull Request -
State: closed - Opened by XFPlus over 1 year ago
- 1 comment