Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / ollama/ollama issues and pull requests
#6734 - Error: pull model manifest: file does not exist (again)
Issue - State: closed - Opened by jamiejackherer 9 days ago - 2 comments
Labels: bug
#6733 - curl
Issue - State: closed - Opened by ayttop 9 days ago - 1 comment
Labels: bug
#6732 - add *_proxy to env map for debugging
Pull Request - State: closed - Opened by mxyng 9 days ago
#6731 - error while install on openSUSE Leap 15.6
Issue - State: closed - Opened by kc8pdr205 9 days ago - 5 comments
Labels: bug
#6730 - api/embed return 404
Issue - State: closed - Opened by Maluuck 9 days ago - 2 comments
Labels: bug
#6729 - Feature: Add Support for Distributed Inferencing
Pull Request - State: open - Opened by ecyht2 9 days ago - 1 comment
#6728 - Add alias of /quit and /exit for /bye.
Issue - State: open - Opened by bulrush15 9 days ago
Labels: feature request
#6727 - Does ollama check for free disk space BEFORE pulling a new model?
Issue - State: closed - Opened by bulrush15 9 days ago - 1 comment
Labels: feature request
#6726 - "No Healthy Upstream" Error on Multiple Networks and Devices
Issue - State: closed - Opened by shake-hakobyan-rau 9 days ago - 1 comment
Labels: bug
#6725 - Incorrect AppDir when creating banner script (Preview)
Issue - State: open - Opened by DJStompZone 9 days ago
Labels: windows
#6724 - Tools Tag with "ollama show" command
Issue - State: closed - Opened by LilPiep 9 days ago - 2 comments
Labels: feature request
#6723 - How to change the system memory folder?
Issue - State: closed - Opened by mdabir1203 9 days ago - 2 comments
Labels: bug
#6722 - MiniCPM3 support
Issue - State: open - Opened by IuvenisSapiens 9 days ago - 1 comment
Labels: model request
#6721 - Error loading model architecture for miniCPM3-4B: Unknown architecture 'minicpm3'
Issue - State: closed - Opened by ChuiyuWang1 9 days ago - 2 comments
Labels: model request
#6720 - Can you specify a graphics card in the ollama deployment model?
Issue - State: open - Opened by LIUKAI0815 9 days ago - 1 comment
Labels: feature request
#6719 - (111) Connection refused
Issue - State: closed - Opened by SheltonLiu-N 9 days ago - 4 comments
Labels: bug
#6718 - docs: update llama3 to llama3.1
Pull Request - State: closed - Opened by jmorganca 9 days ago
#6717 - Bubble up cuda library error codes with some retries
Pull Request - State: open - Opened by dhiltgen 9 days ago
#6716 - Quiet down Docker's new lint warnings
Pull Request - State: closed - Opened by dhiltgen 9 days ago
#6715 - PC (GPU) crashing continuously with ollama and deepseek
Issue - State: open - Opened by pitziro 9 days ago - 2 comments
Labels: bug
#6714 - catch when model vocab size is set correctly
Pull Request - State: closed - Opened by pdevine 9 days ago
#6713 - Talking to Mistral-Nemo via OpenAI tool calling - fails
Issue - State: open - Opened by ChristianWeyer 10 days ago - 5 comments
Labels: bug
#6712 - 400 Bad Request when running behind Nginx Proxy Manager
Issue - State: open - Opened by Joly0 10 days ago - 12 comments
Labels: bug
#6710 - Docker: P8 State Power Usage double with 0.3.8+
Issue - State: closed - Opened by t3chn0m4g3 10 days ago - 7 comments
Labels: bug, nvidia
#6709 - ERROR unable to locate llm runner directory. Set OLLAMA_RUNNERS_DIR to the location of 'ollama/runners'
Issue - State: closed - Opened by Harsha0056 10 days ago - 3 comments
Labels: bug, windows
#6708 - Support tool/tool call ids when multiple tool calls are requested.
Issue - State: open - Opened by ggozad 10 days ago - 6 comments
Labels: bug
#6707 - Generate endpoint intermittently misses final token before done
Issue - State: closed - Opened by tarbard 10 days ago - 6 comments
Labels: bug, nvidia
#6706 - Reflection 70B has significant issue with the weights
Issue - State: closed - Opened by gileneusz 10 days ago - 4 comments
Labels: model request
#6704 - ollama model not support tool calling
Issue - State: open - Opened by sunshine19870316 10 days ago - 8 comments
Labels: feature request
#6703 - ollama and curl
Issue - State: closed - Opened by ayttop 11 days ago - 3 comments
Labels: bug, question
#6701 - Windows app gets confused if wsl2 based server is still running
Issue - State: closed - Opened by ares0027 11 days ago - 4 comments
Labels: feature request, windows, wsl
#6700 - MiniCPM3 not supported
Issue - State: closed - Opened by sataliulan 11 days ago - 3 comments
Labels: bug
#6697 - IGPUMemLimit/rocmMinimumMemory are undefined
Issue - State: closed - Opened by wangzd0209 11 days ago - 1 comment
Labels: question
#6694 - A mixture of experts model
Issue - State: closed - Opened by iplayfast 11 days ago - 1 comment
Labels: model request
#6692 - [Feature request] compatibility with vm balloon ram
Issue - State: open - Opened by Xyz00777 11 days ago - 2 comments
Labels: feature request
#6691 - Is everything fine with `phi3` model?
Issue - State: open - Opened by eirnym 12 days ago - 5 comments
Labels: bug
#6689 - Reflection 70B fix?
Issue - State: open - Opened by gileneusz 12 days ago - 2 comments
Labels: model request
#6688 - Align OpenAI Chat option processing with Completion option processing
Pull Request - State: closed - Opened by rick-github 12 days ago
#6687 - Align OpenAI Chat option processing with Completion option processing
Pull Request - State: closed - Opened by rick-github 12 days ago
#6686 - Model shows wrong date.
Issue - State: open - Opened by ghaisasadvait 12 days ago - 1 comment
Labels: bug
#6685 - AMD 7900XTX fails with `"Could not initialize Tensile host: No devices found"`
Issue - State: closed - Opened by svaningelgem 12 days ago - 48 comments
Labels: bug, docker
#6684 - Deepseek v2.5 sha256 digest mismatch
Issue - State: open - Opened by mintisan 12 days ago
Labels: bug
#6683 - docs: improve linux install documentation
Pull Request - State: closed - Opened by jmorganca 12 days ago
#6682 - Remove go server debug logging
Pull Request - State: closed - Opened by jessegross 12 days ago
#6681 - readme: add Plasmoid Ollama Control to community integrations
Pull Request - State: closed - Opened by imoize 13 days ago
#6680 - adding Archyve to community integrations list
Pull Request - State: closed - Opened by nickthecook 13 days ago
#6679 - HTTP_PROXY Not Being Used in Model Requests
Issue - State: open - Opened by cmilhaupt 13 days ago - 21 comments
Labels: bug
#6678 - OLLAMA_LOAD_TIMEOUT env variable not being applied
Issue - State: closed - Opened by YetheSamartaka 13 days ago - 7 comments
Labels: bug
#6677 - VG
Issue - State: closed - Opened by vioricavg 13 days ago - 2 comments
#6676 - on ollama.com, the centrate new profile picture page, looked on andro chrome canary, out of bound
Issue - State: open - Opened by fxmbsw7 13 days ago
Labels: bug, ollama.com
#6675 - Bugfix for #6656 (Fixed redirect check if direct URL is already Present)
Pull Request - State: open - Opened by Tobix99 13 days ago - 2 comments
#6674 - Simplify the input of multi-line text
Issue - State: open - Opened by linkoog 13 days ago - 1 comment
Labels: feature request
#6673 - Ollama-rocm on Kubernetes with shared AMD GPU seems to have problems allocating vram
Issue - State: closed - Opened by kubax 13 days ago - 4 comments
Labels: bug
#6672 - Inconsistent `prompt_eval_count` for Large Prompts in Ollama Python Library
Issue - State: closed - Opened by surajyadav91 13 days ago - 1 comment
Labels: bug
#6671 - Reflection 70B NEED Tools
Issue - State: closed - Opened by xiaoyu9982 13 days ago - 4 comments
Labels: model request
#6670 - expose slots data through API
Issue - State: closed - Opened by aiseei 13 days ago - 1 comment
Labels: feature request
#6669 - Ubuntu GPU not used
Issue - State: closed - Opened by Andrii-suncor 13 days ago - 1 comment
Labels: bug
#6668 - Every installed model disappeared
Issue - State: closed - Opened by yilmaz08 13 days ago - 6 comments
Labels: bug
#6667 - (Rebased) Add Braina AI as an Ollama Desktop GUI #2
Pull Request - State: open - Opened by wallacelance 13 days ago - 1 comment
#6666 - Improve logging on GPU too small
Pull Request - State: closed - Opened by dhiltgen 13 days ago
#6665 - Fix "presence_penalty_penalty" typo, add test.
Pull Request - State: closed - Opened by rick-github 13 days ago - 3 comments
#6664 - Reflection 70B model request
Issue - State: closed - Opened by gileneusz 13 days ago - 2 comments
Labels: model request
#6663 - Document uninstall on windows
Pull Request - State: closed - Opened by dhiltgen 13 days ago
#6658 - openai: support for structured outputs
Pull Request - State: open - Opened by iscy 14 days ago - 2 comments
#6654 - Multi-instance seems not working
Issue - State: closed - Opened by bigsausage 14 days ago - 4 comments
Labels: feature request
#6652 - Add Dracarys-Llama-3.1-70B-Instruct support
Issue - State: open - Opened by LSeu-Open 14 days ago - 2 comments
Labels: model request
#6649 - Intel GPU - model > 4b nonsense?
Issue - State: open - Opened by cyear 14 days ago - 5 comments
Labels: bug, intel
#6646 - POST /v1/chat/completions returns 404 not 400 for model not found
Issue - State: closed - Opened by codefromthecrypt 14 days ago - 3 comments
Labels: bug
#6645 - Fix gemma2 2b conversion
Pull Request - State: closed - Opened by pdevine 14 days ago
#6640 - OpenAI endpoint JSON output malformed
Issue - State: closed - Opened by defaultsecurity 15 days ago - 4 comments
Labels: bug
#6638 - Llama 3.1 8b not generating answers since past few days
Issue - State: open - Opened by ToshiKBhat 15 days ago - 8 comments
Labels: bug
#6637 - cuda device unavailable error results in failed memory update leading to concurrent model load when no space actually available
Issue - State: open - Opened by iplayfast 15 days ago - 8 comments
Labels: bug, nvidia
#6631 - Add model Phi3-Vision
Issue - State: open - Opened by asmit203 15 days ago
Labels: model request
#6630 - docs(integrations): add claude-dev
Pull Request - State: open - Opened by sammcj 15 days ago
#6629 - Fail to Convert Huggingface Llama3.1 with ollama create
Issue - State: closed - Opened by YueChenkkk 15 days ago - 4 comments
Labels: bug
#6628 - no space left on device - ubuntu
Issue - State: open - Opened by fahadshery 15 days ago
Labels: bug
#6627 - Add preliminary support for riscv64
Pull Request - State: open - Opened by mengzhuo 15 days ago - 2 comments
#6626 - unable to load cuda driver library. symbol lookup for cuCtxCreate_v3 failed
Issue - State: closed - Opened by Wangzg97 15 days ago - 1 comment
Labels: bug
#6625 - Support for HuatuoGPT-Vision-7B
Issue - State: open - Opened by Chuyun-Shen 15 days ago
Labels: model request
#6624 - Update README.md with PyOllaMx
Pull Request - State: closed - Opened by kspviswa 15 days ago - 1 comment
#6623 - nvidia/NV-Embed-v2 support
Issue - State: open - Opened by youxiaoxing 15 days ago - 4 comments
Labels: model request
#6622 - [Bug] open-webui integration error when ui docker listen on 11434
Issue - State: open - Opened by zydmtaichi 15 days ago - 1 comment
Labels: bug
#6621 - llama: sync llama.cpp to commit 8962422
Pull Request - State: open - Opened by jmorganca 15 days ago
#6620 - Use cuda v11 for driver 525 and older
Pull Request - State: closed - Opened by dhiltgen 15 days ago
#6619 - Go Server Health Reporting
Pull Request - State: closed - Opened by jessegross 15 days ago
#6618 - llm: update llama.cpp commit to 8962422
Pull Request - State: closed - Opened by jmorganca 15 days ago - 1 comment
#6617 - Log system memory at info
Pull Request - State: closed - Opened by dhiltgen 15 days ago
#6616 - A100 shared GPU - Server not responding (always after some time where it works)
Issue - State: closed - Opened by Ida-Ida 15 days ago - 12 comments
Labels: bug, performance, nvidia
#6615 - api: add Client.BaseURL method
Pull Request - State: open - Opened by presbrey 15 days ago
#6614 - Update README.md
Pull Request - State: open - Opened by cfjedimaster 16 days ago
#6607 - docker image for rocm-3.5.1 to run on older AMD gpus
Issue - State: closed - Opened by drhboss 16 days ago - 1 comment
Labels: feature request
#6604 - `your nvidia driver is too old or missing` error
Issue - State: open - Opened by my106 16 days ago - 1 comment
Labels: nvidia, needs more info
#6601 - when i try to visit https://xxxxxxxx.com/api/chat, it is very slow
Issue - State: closed - Opened by lessuit 16 days ago - 3 comments
Labels: needs more info
#6600 - In ollama, do these llama3.1 models refer to the pretrained basic models or instruction tuned models?
Issue - State: closed - Opened by icecream-and-tea 16 days ago - 2 comments
Labels: question
#6596 - Unloading a model
Issue - State: closed - Opened by tallesl 17 days ago - 4 comments
Labels: feature request
#6595 - 4 AMD GPUs with mixed VRAM sizes: layer predictions incorrect leads to runner crash
Issue - State: open - Opened by MikeLP 17 days ago - 13 comments
Labels: bug, amd, memory
#6594 - Please fix Linux installer, so any Environment in /etc/systemd/system/ollama.service isn't overwritten
Issue - State: closed - Opened by nightness 17 days ago - 1 comment
Labels: bug
#6593 - Get supported models with API
Issue - State: open - Opened by angelozerr 17 days ago
Labels: feature request
#6592 - Model whitelisting for generate endpoint
Issue - State: open - Opened by JTHesse 17 days ago
Labels: feature request
#6591 - Ollama failing with `CUDA error: PTX JIT compiler library not found`
Issue - State: closed - Opened by leobenkel 17 days ago - 2 comments
Labels: bug