Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / ollama/ollama issues and pull requests
#5399 - Please support models of rerank type
Issue - State: open - Opened by yushengliao 3 months ago - 7 comments
Labels: model request
#5395 - CUBLAS_STATUS_ALLOC_FAILED with deepseek-coder-v2:16b
Issue - State: open - Opened by hgourvest 3 months ago - 10 comments
Labels: bug, nvidia, memory
#5394 - Ollama loads gemma2 27b with --ctx-size 16384
Issue - State: closed - Opened by chigkim 3 months ago - 1 comment
Labels: bug
#5393 - Inject beginning of message for assistant like "Certainly," in chat.
Issue - State: closed - Opened by chigkim 3 months ago - 6 comments
Labels: feature request
#5381 - Add Nosia to Community Integrations
Pull Request - State: open - Opened by cbldev 3 months ago - 2 comments
#5366 - docs: Add configuration docs (env vars)
Pull Request - State: closed - Opened by sammcj 3 months ago - 2 comments
#5365 - convert gemma2
Pull Request - State: open - Opened by mxyng 3 months ago - 1 comment
#5360 - Support for Snapdragon X Elite NPU & GPU
Issue - State: open - Opened by flyfox666 3 months ago - 22 comments
Labels: feature request, windows
#5356 - allow for num_ctx parameter in the openai API compatibility
Issue - State: closed - Opened by PabloRMira 3 months ago - 6 comments
Labels: feature request
#5348 - Enable grammar and JSON Schema support
Pull Request - State: open - Opened by mitar 3 months ago - 9 comments
#5341 - Gemma 2 9B and 27B is not behaving right
Issue - State: closed - Opened by jayakumark 3 months ago - 20 comments
Labels: bug
#5335 - Double text during installation
Issue - State: open - Opened by DuckyBlender 3 months ago - 1 comment
Labels: bug, wsl
#5328 - Add a `stop model` command to CLI.
Pull Request - State: closed - Opened by asdf93074 3 months ago - 3 comments
#5320 - Update faq.md
Pull Request - State: closed - Opened by Dino-Burger 3 months ago - 1 comment
#5319 - Fine-tuned model responding incorrectly to my prompts
Issue - State: open - Opened by giannisak 3 months ago - 3 comments
Labels: bug
#5306 - Do not reinstall the CLI tools if they are already installed on macOS
Pull Request - State: closed - Opened by seanchristians 3 months ago - 2 comments
#5305 - Application should skip the CLI tool install page during first run if they have already been installed. (macOS)
Issue - State: open - Opened by seanchristians 3 months ago - 1 comment
Labels: bug
#5298 - Internal error at url manifests/sha256:
Issue - State: open - Opened by alexeu1994 3 months ago - 6 comments
Labels: bug
#5287 - llama: Support both old and new runners with a toggle with release build rigging
Pull Request - State: closed - Opened by dhiltgen 3 months ago - 1 comment
#5275 - ROCm on WSL
Issue - State: open - Opened by justinkb 3 months ago - 12 comments
Labels: feature request, amd, wsl
#5268 - Add Windows on ARM64 build instructions
Pull Request - State: open - Opened by hmartinez82 3 months ago
#5245 - Allow importing multi-file GGUF models
Issue - State: open - Opened by jmorganca 3 months ago - 2 comments
Labels: bug
#5211 - fp16 shows `quantization unknown` when running `ollama show`
Issue - State: closed - Opened by jmorganca 3 months ago - 2 comments
Labels: bug
#5210 - [email protected] - Add LTO
Pull Request - State: closed - Opened by cabelo 3 months ago
#5207 - add insert support to generate endpoint
Pull Request - State: closed - Opened by mxyng 3 months ago - 5 comments
#5200 - Add support for stream_options
Issue - State: open - Opened by igo 3 months ago - 2 comments
Labels: feature request
#5193 - Correct Ollama Show Precision of Parameter
Pull Request - State: open - Opened by royjhan 3 months ago - 2 comments
#5190 - Remove Quotes from Parameters in Ollama Show
Pull Request - State: closed - Opened by royjhan 3 months ago - 1 comment
#5186 - AMD Ryzen NPU support
Issue - State: open - Opened by ivanbrash 3 months ago - 10 comments
Labels: feature request, amd
#5185 - florance vision model
Issue - State: open - Opened by iplayfast 3 months ago - 5 comments
Labels: model request
#5168 - Models don't respond and ollama gets stuck after long time
Issue - State: closed - Opened by luisgg98 3 months ago - 5 comments
Labels: bug
#5143 - AMD iGPU works in docker with override but not on host
Issue - State: open - Opened by smellouk 3 months ago - 21 comments
Labels: bug, amd
#5139 - Update requirements.txt
Pull Request - State: closed - Opened by dcasota 3 months ago
#5130 - add MiniCPM-Llama3-V 2.5 muiltmodal model
Issue - State: closed - Opened by green-dalii 3 months ago - 2 comments
Labels: model request
#5091 - KV Cache Quantization
Issue - State: open - Opened by sammcj 3 months ago - 8 comments
Labels: feature request
#5068 - please add nvidia/Nemotron-4-340B-Instruct
Issue - State: open - Opened by gileneusz 3 months ago - 9 comments
Labels: model request
#5059 - Add Vulkan support to ollama
Pull Request - State: open - Opened by pufferffish 3 months ago - 26 comments
#5054 - Windows - `go generate` failing on build_cpu
Issue - State: open - Opened by JerrettDavis 3 months ago - 3 comments
Labels: bug, windows
#5049 - Cuda v12
Pull Request - State: closed - Opened by dhiltgen 3 months ago - 9 comments
#5040 - chore: add openapi 3.1 spec for public api
Pull Request - State: open - Opened by JerrettDavis 3 months ago - 1 comment
#5034 - Re-introduce the `llama` package
Pull Request - State: open - Opened by jmorganca 3 months ago
#5030 - Update README.md
Pull Request - State: open - Opened by Drlordbasil 3 months ago - 3 comments
#5026 - Can I customize OLLAMA_TMPDIR ?
Issue - State: closed - Opened by prince21000 3 months ago - 4 comments
Labels: question
#5021 - Some APIs in registry.ollama returns 404
Issue - State: open - Opened by stonezdj 3 months ago - 2 comments
Labels: bug
#4995 - Ollama GPU not loding properly
Issue - State: closed - Opened by tankvpython 3 months ago - 8 comments
Labels: question, performance
#4977 - qwen2-72b start to output gibberish at some point if i set num_ctx to 8192
Issue - State: open - Opened by Mikhael-Danilov 3 months ago - 4 comments
Labels: bug
#4958 - Cuda 12 runner
Issue - State: closed - Opened by jmorganca 3 months ago
Labels: feature request, nvidia
#4955 - Ollama should error with insufficient system memory and VRAM
Issue - State: closed - Opened by jmorganca 3 months ago - 7 comments
Labels: bug
#4943 - enable flash attention by default
Pull Request - State: closed - Opened by jmorganca 3 months ago
#4933 - Error: Pull Model Manifest - Timeout
Issue - State: closed - Opened by ulhaqi12 3 months ago - 2 comments
Labels: bug
#4920 - Update docs/tutorials/windows.md for Windows Uninstall
Issue - State: closed - Opened by Suvoo 3 months ago
Labels: documentation
#4917 - convert bert model from safetensors
Pull Request - State: open - Opened by mxyng 3 months ago - 1 comment
#4900 - MiniCPM-Llama3-V-2_5
Issue - State: open - Opened by kotaxyz 3 months ago - 19 comments
Labels: model request
#4895 - Add "use_mmap" to environment variable
Issue - State: open - Opened by sisi399 3 months ago - 1 comment
Labels: feature request
#4861 - Jetson - "ollama run" command loads until timeout
Issue - State: open - Opened by Vassar-HARPER-Project 4 months ago - 8 comments
Labels: bug, nvidia
#4836 - llama runner process has terminated: exit status 127
Issue - State: closed - Opened by kruimol 4 months ago - 7 comments
Labels: bug, amd, needs more info, gpu
#4834 - Cannot pull models when http_proxy/HTTP_PROXY are set.
Issue - State: closed - Opened by janukarhisa 4 months ago - 1 comment
Labels: bug
#4828 - Ability to choose different installation location in Windows
Issue - State: closed - Opened by nviraj 4 months ago - 3 comments
Labels: feature request, windows
#4806 - codegemma broken on releases after v0.1.39
Issue - State: open - Opened by evertjr 4 months ago - 16 comments
Labels: bug
#4798 - The rocm driver rx7900xtx has been installed but cannot be used normally.
Issue - State: closed - Opened by HaoZhang66 4 months ago - 4 comments
Labels: amd, needs more info, gpu
#4771 - Ignoring env, being weird with env
Issue - State: closed - Opened by RealMrCactus 4 months ago - 1 comment
Labels: bug
#4764 - ollama stop [id of running model]
Issue - State: closed - Opened by mrdev023 4 months ago - 2 comments
Labels: feature request
#4732 - Unable to Change Ollama Models Directory on Linux (Rocky 9)
Issue - State: open - Opened by pykeras 4 months ago - 21 comments
Labels: bug
#4730 - llama3:8b-instruct performs much worse than llama3-8b-8192 on groq
Issue - State: open - Opened by mitar 4 months ago - 7 comments
Labels: bug
#4729 - dolphin-2.9.2-mixtral-8x22b
Issue - State: closed - Opened by psyv282j9d 4 months ago - 2 comments
Labels: model request
#4724 - empty response
Issue - State: closed - Opened by themw123 4 months ago - 5 comments
Labels: bug
#4710 - s390x build ollama : running gcc failed
Issue - State: open - Opened by woale 4 months ago - 8 comments
Labels: bug
#4701 - Quick model updates with `ollama pull`
Issue - State: closed - Opened by LaurentBonnaud 4 months ago - 5 comments
Labels: feature request
#4700 - please support minicpmv2.5
Issue - State: closed - Opened by chaoqunxie 4 months ago - 1 comment
Labels: model request
#4698 - ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
Issue - State: closed - Opened by uzumakinaruto19 4 months ago - 3 comments
#4695 - codeqwen 7b q8 and fp16
Issue - State: closed - Opened by StefanIvovic 4 months ago - 1 comment
Labels: bug
#4693 - Add binary support for Nvidia Jetson Xavier- JetPack 5
Issue - State: open - Opened by ZanMax 4 months ago - 6 comments
Labels: bug, nvidia
#4670 - llama3 8b BF16 error
Issue - State: closed - Opened by ccbadd 4 months ago - 4 comments
Labels: bug, needs more info, importing
#4643 - Llama.cpp now supports distributed inference across multiple machines.
Issue - State: open - Opened by AncientMystic 4 months ago - 13 comments
Labels: feature request
#4632 - make cache_prompt as an option
Pull Request - State: open - Opened by Windfarer 4 months ago - 13 comments
#4625 - server/download.go: Fix downloading with too much EOF error
Pull Request - State: open - Opened by coolljt0725 4 months ago - 5 comments
#4601 - Error: llama runner process has terminated: signal: segmentation fault
Issue - State: closed - Opened by guiniao 4 months ago - 8 comments
Labels: bug
#4591 - Phi-3 Vision
Issue - State: closed - Opened by ddpasa 4 months ago - 8 comments
Labels: model request
#4545 - Ollama stops serving requests after 10-15 minutes
Issue - State: open - Opened by iganev 4 months ago - 33 comments
Labels: bug
#4521 - implement tunable registry defaults for registry and update mirrors
Pull Request - State: closed - Opened by ghost 4 months ago
#4517 - Enhanced GPU discovery and multi-gpu support with concurrency
Pull Request - State: closed - Opened by dhiltgen 4 months ago - 2 comments
#4510 - Would it be possible for Ollama to support re-rank models?
Issue - State: open - Opened by lyfuci 4 months ago - 16 comments
Labels: feature request
#4499 - paligemma
Issue - State: open - Opened by wwjCMP 4 months ago - 7 comments
Labels: model request
#4498 - Add option to disable Autoupdate
Issue - State: closed - Opened by Moulick 4 months ago - 4 comments
Labels: feature request, macos
#4494 - How to load a model from local disk path?
Issue - State: closed - Opened by quzhixue-Kimi 4 months ago - 4 comments
Labels: feature request
#4486 - Not compiled with GPU offload support
Issue - State: closed - Opened by oldgithubman 4 months ago - 20 comments
Labels: bug, needs more info
#4476 - langchain-python-rag-privategpt "Cannot submit more than 5,461 embeddings at once"
Issue - State: closed - Opened by dcasota 4 months ago - 3 comments
Labels: bug