GitHub / abetlen/llama-cpp-python issues and pull requests
Labelled with: llama.cpp
#628 - GGUF Support
Issue - State: closed - Opened by abetlen over 2 years ago - 3 comments
Labels: enhancement, high-priority, llama.cpp
#603 - Models not loaded into RAM on CPU-Only setup. Is the library using the disk as RAM?
Issue - State: closed - Opened by jacmkno over 2 years ago - 3 comments
Labels: wontfix, llama.cpp
#546 - Insufficent memory pool on GPU
Issue - State: closed - Opened by DenizK7 over 2 years ago - 7 comments
Labels: llama.cpp
#509 - LLama cpp problem ( gpu support)
Issue - State: open - Opened by xajanix over 2 years ago - 25 comments
Labels: bug, hardware, llama.cpp