Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / ollama/ollama issues and pull requests
#6382 - cuda error out of memory
Issue -
State: open - Opened by qazimurtazafair about 1 month ago
- 12 comments
Labels: bug, nvidia, memory
#6381 - fix: Add tooltip to system tray icon
Pull Request -
State: closed - Opened by eust-w about 1 month ago
- 1 comment
#6380 - Hangs after 20-30 minutes; a periodic restart of the ollama service is required
Issue -
State: open - Opened by itinance about 1 month ago
- 2 comments
Labels: bug
#6379 - Separate ARM64 CPU builds from x64 CPU builds and use Clang instead
Pull Request -
State: closed - Opened by hmartinez82 about 1 month ago
- 1 comment
#6378 - Add confichat to README.md
Pull Request -
State: open - Opened by 1runeberg about 1 month ago
#6377 - Full(er) JSON Schema support for tool calling
Issue -
State: open - Opened by mitar about 1 month ago
- 2 comments
Labels: feature request
#6376 - Dynamic Functions Load
Issue -
State: open - Opened by ivostoykov about 1 month ago
Labels: feature request
#6375 - Update Dockerfile
Pull Request -
State: closed - Opened by kallados about 1 month ago
- 2 comments
#6374 - feat: support for markdown rendering in the cli - wip
Pull Request -
State: open - Opened by ukd1 about 1 month ago
- 1 comment
#6373 - The layer of a model created by a Modelfile has 600 permissions
Issue -
State: closed - Opened by zwwhdls about 1 month ago
- 6 comments
Labels: bug
#6372 - System tray icon is empty
Issue -
State: closed - Opened by Biyakuga about 1 month ago
- 1 comment
Labels: bug, windows
#6371 - Modelfile created by importing a GGUF does not seem to do template detection
Issue -
State: closed - Opened by jparismorgan about 1 month ago
- 8 comments
Labels: bug
#6370 - Error: llama runner process no longer running: -1
Issue -
State: closed - Opened by josephyuzb about 1 month ago
- 3 comments
Labels: bug, needs more info
#6369 - Ubuntu 22.04 - Warning: could not connect to a running Ollama instance
Issue -
State: closed - Opened by ACodingfreak about 1 month ago
- 3 comments
Labels: bug
#6368 - Please add a SOCKS5 proxy option for downloading models, because the internet in China is not accessible
Issue -
State: closed - Opened by icetech233 about 1 month ago
- 2 comments
Labels: feature request
#6367 - Please add support for Audio models such as Qwen2-Audio
Issue -
State: closed - Opened by carlosforster about 1 month ago
- 2 comments
Labels: feature request
#6366 - Unable to Pull Model Manifest - "Get https://registry.ollama.ai/v2/library/llama3/manifests/latest: EOF"
Issue -
State: closed - Opened by uestcxt about 1 month ago
- 4 comments
Labels: bug, needs more info
#6365 - Moondream fails at some images, unexpected output/messages?
Issue -
State: closed - Opened by carlosforster about 1 month ago
- 12 comments
Labels: bug
#6364 - docker container can't detect Nvidia GPU
Issue -
State: open - Opened by fahadshery about 1 month ago
- 26 comments
Labels: bug, nvidia, docker
#6363 - fix: noprune on pull
Pull Request -
State: closed - Opened by mxyng about 1 month ago
#6358 - Segmentation fault
Issue -
State: closed - Opened by yicheng-2019 about 1 month ago
- 4 comments
Labels: bug, needs more info
#6357 - Error: unknown data type: U8
Issue -
State: closed - Opened by YaBoyBigPat about 1 month ago
- 26 comments
#6356 - AMD Multiple GPU support
Issue -
State: open - Opened by VitalickS about 1 month ago
- 2 comments
Labels: bug, windows, amd
#6354 - Embedding interface routing
Issue -
State: closed - Opened by xuzeyu91 about 1 month ago
- 2 comments
Labels: feature request
#6353 - Very slow API generate endpoint
Issue -
State: closed - Opened by mann1x about 1 month ago
- 9 comments
Labels: bug, nvidia, needs more info
#6352 - The quality of answer significantly deteriorates after Automatic Quantization
Issue -
State: open - Opened by garyyang85 about 1 month ago
Labels: bug
#6351 - ollama tool input: a string containing a newline character (\n) is cut off
Issue -
State: open - Opened by remon-rakibul about 1 month ago
- 2 comments
Labels: bug
#6350 - Is this wrong in https://ollama.com/blog/gemma2
Issue -
State: closed - Opened by wonpn about 1 month ago
- 2 comments
Labels: bug
#6349 - add `CONTRIBUTING.md`
Pull Request -
State: closed - Opened by jmorganca about 1 month ago
#6348 - Mistral 7B, running on CPU only - can't fix it
Issue -
State: closed - Opened by openSourcerer9000 about 1 month ago
- 2 comments
Labels: bug
#6347 - server: reduce max connections used in download
Pull Request -
State: closed - Opened by bmizerany about 1 month ago
- 5 comments
#6346 - lint
Pull Request -
State: closed - Opened by mxyng about 1 month ago
#6345 - Update openai.md to remove extra checkbox for vision
Pull Request -
State: closed - Opened by pamelafox about 1 month ago
#6344 - update chatml template format to latest
Pull Request -
State: closed - Opened by BruceMacD about 1 month ago
#6343 - Go back to a pinned Go version
Pull Request -
State: closed - Opened by dhiltgen about 1 month ago
#6342 - Windows Defender
Issue -
State: closed - Opened by Eniti-Codes about 1 month ago
- 1 comment
Labels: bug
#6341 - Llama 3.1 70B high-quality HQQ quantized model - 99%+ quality of fp16
Issue -
State: open - Opened by gileneusz about 1 month ago
- 1 comment
Labels: model request
#6340 - Add new chat app LLMChat.co
Pull Request -
State: open - Opened by deep93333 about 1 month ago
- 3 comments
#6339 - ollama - default tool support
Issue -
State: open - Opened by Kreijstal about 1 month ago
Labels: feature request
#6338 - ollama slower than llama.cpp
Issue -
State: open - Opened by phly95 about 1 month ago
- 8 comments
Labels: bug, performance, windows, nvidia
#6337 - Why is GPU (NVIDIA T2000) utilization low for my Llama 3 model, with computation falling back to the CPU instead?
Issue -
State: open - Opened by pewjs about 1 month ago
- 6 comments
Labels: bug
#6336 - AMD Discrete GPU Version info not found
Issue -
State: open - Opened by safe049 about 1 month ago
- 2 comments
Labels: bug
#6335 - Bug in Continuous Questioning and Output Content on Windows
Issue -
State: open - Opened by Lucas-SJY about 1 month ago
- 1 comment
Labels: bug
#6334 - `./ollama-linux-amd64 pull llama3.1:405b` downloads very slowly, only a few KB/s
Issue -
State: closed - Opened by EGOIST5 about 1 month ago
- 2 comments
Labels: bug
#6333 - "couldn't remove unused layers: invalid character '\x00' looking for beginning of value"
Issue -
State: closed - Opened by FellowTraveler about 1 month ago
- 5 comments
Labels: bug
#6332 - Error: pull model manifest:
Issue -
State: closed - Opened by w123456789zy about 1 month ago
- 1 comment
Labels: bug
#6331 - llama: initial vision support for `runner`
Pull Request -
State: open - Opened by jmorganca about 1 month ago
#6330 - Finetuned LLAMA 3.1 8B Instruct is giving random output
Issue -
State: closed - Opened by krisbianprabowo about 1 month ago
- 2 comments
Labels: bug
#6329 - Change log for updated models on website?
Issue -
State: open - Opened by coodoo about 1 month ago
Labels: feature request
#6327 - convert safetensor adapters into GGUF
Pull Request -
State: closed - Opened by pdevine about 1 month ago
#6325 - Load Embedding Model on Empty Input
Pull Request -
State: closed - Opened by royjhan about 1 month ago
#6322 - Why must role be "system", "user", or "assistant"? How can I add a custom role like "tool"?
Issue -
State: closed - Opened by zhangsheng377 about 1 month ago
- 12 comments
#6318 - ollama.app cannot open on my MacBook Pro with M3 Pro
Issue -
State: open - Opened by Spockkk0225 about 1 month ago
- 6 comments
Labels: bug
#6317 - Feature request: Tool support for Qwen2
Issue -
State: open - Opened by trinhkiet0105 about 1 month ago
- 4 comments
Labels: feature request
#6316 - ollama create uses a large amount of disk space in /tmp
Issue -
State: closed - Opened by garyyang85 about 1 month ago
- 3 comments
Labels: bug
#6315 - Sharing computing power in a decentralized P2P network
Issue -
State: open - Opened by trymeouteh about 1 month ago
- 3 comments
Labels: feature request
#6314 - Better guidance for using `with_structured_output` with `ChatOllama`
Issue -
State: closed - Opened by GuyPaddock about 1 month ago
- 1 comment
Labels: feature request
#6313 - openbmb / MiniCPM-Llama3-V-2_5
Issue -
State: closed - Opened by chigkim about 1 month ago
- 2 comments
Labels: model request
#6312 - how to force ollama to use different cpu runners / how to compile windows avx512 runner?
Issue -
State: open - Opened by AncientMystic about 1 month ago
- 15 comments
Labels: feature request, nvidia
#6311 - Error: no suitable llama servers found
Issue -
State: closed - Opened by vagitablebirdcode about 1 month ago
- 7 comments
Labels: bug
#6310 - llama3.1 8b template seems to be different from that in huggingface
Issue -
State: open - Opened by fzyzcjy about 1 month ago
- 2 comments
Labels: bug
#6309 - Added a go example for mistral's native function calling
Pull Request -
State: open - Opened by Binozo about 1 month ago
#6308 - Getting `Error: unexpected status code 200` when pulling a model from an internal registry v0.3.1 and above
Issue -
State: open - Opened by killerwhile about 1 month ago
- 2 comments
Labels: bug
#6307 - add MiniCPM-V-2_5
Issue -
State: closed - Opened by Forevery1 about 1 month ago
- 5 comments
Labels: model request
#6306 - Running ollama on island device with no Internet connection
Issue -
State: closed - Opened by whatdhack about 1 month ago
- 11 comments
Labels: feature request
#6305 - add integration obook-summary
Pull Request -
State: closed - Opened by cognitivetech about 1 month ago
#6304 - Latest version (0.3.4) not detecting AMD GPUs (Instinct MI210)
Issue -
State: open - Opened by aimanyounises1 about 1 month ago
- 5 comments
Labels: bug, linux, amd