GitHub / abetlen/llama-cpp-python issues and pull requests
Labelled with: documentation
#897 - [BUG] Server cant handle two streaming connections in same time
Issue - State: open - Opened by ArtyomZemlyak about 2 years ago - 5 comments
Labels: bug, documentation
#853 - Unable to install llama-cpp-python Package in Python - Wheel Building Process gets Stuck
Issue - State: open - Opened by Illanser about 2 years ago - 5 comments
Labels: bug, documentation, build
#847 - (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) on M2
Issue - State: open - Opened by alessandropaticchio about 2 years ago - 19 comments
Labels: bug, documentation, build
#664 - how to download the model?
Issue - State: closed - Opened by futianfan over 2 years ago - 10 comments
Labels: documentation, question
#487 - Roadmap for v0.2
Issue - State: open - Opened by abetlen over 2 years ago - 1 comment
Labels: documentation, enhancement
#386 - rand=-1 to use a random seed, not rand=0 as documented
Issue - State: closed - Opened by rmngllnn over 2 years ago - 10 comments
Labels: bug, documentation
#250 - How to install with GPU support via cuBLAS and CUDA
Issue - State: closed - Opened by DavidBurela over 2 years ago - 9 comments
Labels: documentation, enhancement
#202 - Add docs for build and model settings
Issue - State: closed - Opened by abetlen over 2 years ago
Labels: documentation, enhancement
#69 - Improve error message when model file is not found
Issue - State: closed - Opened by abetlen over 2 years ago
Labels: documentation, enhancement
#26 - Add performance optimization example
Issue - State: closed - Opened by abetlen over 2 years ago - 6 comments
Labels: documentation, enhancement
#11 - Possible Typo in fastapi_server.py: n_ctx vs n_batch
Issue - State: closed - Opened by MillionthOdin16 over 2 years ago - 1 comment
Labels: bug, documentation
#7 - Interactive mode/Session
Issue - State: closed - Opened by BlackLotus over 2 years ago - 7 comments
Labels: documentation, enhancement