Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / ollama/ollama issues and pull requests

#5905 - Forcing Ollama to bind to 0.0.0.0 instead of localhost

Issue - State: closed - Opened by MrLinks75 about 2 months ago - 4 comments
Labels: feature request
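
The fix discussed here is configuration rather than code: Ollama's documented way to listen on all interfaces is to set the OLLAMA_HOST environment variable (for example to 0.0.0.0) before starting the server, e.g. via a systemd override on Linux. Below is a minimal client-side sketch in Python, assuming a reachable server and a pulled model named llama3.1 (both assumptions), that simply targets whatever address the server was bound to:

```python
import os
import requests  # third-party HTTP client: pip install requests

# The server's bind address is controlled by OLLAMA_HOST (e.g. 0.0.0.0:11434);
# the client just needs an address the server is actually listening on.
# Assumes a plain host:port form with no URL scheme.
host = os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")

resp = requests.post(
    f"http://{host}/api/generate",
    json={"model": "llama3.1", "prompt": "Say hello", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```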

#5898 - server: speed up single gguf creates

Pull Request - State: open - Opened by joshyan1 about 2 months ago - 2 comments

#5896 - Linear-time chat API

Issue - State: closed - Opened by MostAwesomeDude about 2 months ago - 2 comments
Labels: feature request

#5894 - feat: K/V cache quantisation (massive vRAM improvement!)

Pull Request - State: closed - Opened by sammcj about 2 months ago - 45 comments
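
To see why quantising the K/V cache frees so much VRAM, here is a back-of-the-envelope sketch with illustrative model dimensions (32 layers, 8 KV heads, head dimension 128, 8192-token context; not taken from the PR), using the standard GGML block sizes of roughly 8.5 bits per element for q8_0 and 4.5 bits for q4_0:

```python
# Rough KV-cache size estimate for a hypothetical llama-style model.
n_layers, n_kv_heads, head_dim, n_ctx = 32, 8, 128, 8192

def kv_cache_bytes(bytes_per_element: float) -> float:
    # 2x for the K and V tensors; one entry per layer, token, KV head and head dim.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_element

for name, bpe in [("f16", 2.0), ("q8_0", 1.0625), ("q4_0", 0.5625)]:
    print(f"{name:5s} ~{kv_cache_bytes(bpe) / 2**30:.2f} GiB")
```

For these numbers the cache drops from about 1 GiB at f16 to roughly half that at q8_0 and a quarter at q4_0; the actual savings depend on the model and context length.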

#5889 - Please provide Q_2 for Llama 3.1 405B

Issue - State: closed - Opened by gileneusz about 2 months ago - 25 comments
Labels: model request

#5887 - cmd/server: utilizing OS copy to transfer blobs if the server is local

Pull Request - State: open - Opened by joshyan1 about 2 months ago - 1 comment

#5882 - Generate actionable error message when a model meets insufficient GPU memory or RAM

Issue - State: closed - Opened by sagarrandive about 2 months ago
Labels: feature request

#5881 - Is llama 3.1 already supported (on 2.8) or should we wait another update?

Issue - State: closed - Opened by Qualzz about 2 months ago - 20 comments
Labels: bug

#5877 - Ollama API not seeing messages provided in conversation_history

Issue - State: closed - Opened by barclaybrown about 2 months ago - 4 comments
Labels: bug
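
For context, the documented /api/chat request carries prior turns in a "messages" array; a conversation_history field is not part of the documented request body, which is the most likely source of confusion behind reports like this one. A minimal multi-turn sketch (the model name is an assumption):

```python
import requests  # pip install requests

messages = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
    {"role": "user", "content": "What is my name?"},  # answer depends on the earlier turns
]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "llama3.1", "messages": messages, "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```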

#5876 - Allow file-only access

Issue - State: open - Opened by QuickHare about 2 months ago - 1 comment
Labels: feature request

#5875 - Wrong template for NuExtract model

Issue - State: closed - Opened by adam-the about 2 months ago - 3 comments
Labels: bug

#5872 - [Ascend] add Ascend NPU support

Pull Request - State: open - Opened by zhongTao99 about 2 months ago - 24 comments

#5871 - fine-tuned model can't run in ollama

Issue - State: open - Opened by DanielSunHub about 2 months ago - 4 comments
Labels: bug

#5867 - Nvidia Minitron Please!

Issue - State: open - Opened by txhno about 2 months ago - 1 comment
Labels: model request

#5860 - auth: update auth

Pull Request - State: open - Opened by joshyan1 about 2 months ago - 1 comment

#5851 - Lowercase hostname for CORS.

Pull Request - State: open - Opened by rick-github about 2 months ago

#5847 - Reduce docker image size

Pull Request - State: open - Opened by yeahdongcn about 2 months ago

#5841 - Manage internlm2 models

Issue - State: open - Opened by RunningLeon about 2 months ago
Labels: bug, model request

#5835 - orian-ollama-webui

Issue - State: closed - Opened by werruww about 2 months ago - 2 comments
Labels: feature request

#5831 - Please add a tip

Issue - State: closed - Opened by commitcompanion 2 months ago - 6 comments
Labels: feature request

#5816 - orian ollama webui

Issue - State: closed - Opened by werruww 2 months ago - 32 comments
Labels: feature request

#5814 - Always output GGGGGGG when encountering problems that will not occur...

Issue - State: closed - Opened by enryteam 2 months ago - 8 comments
Labels: bug

#5810 - Tinyllama has issues understanding the Modelfile

Issue - State: closed - Opened by DuilioPerez 2 months ago - 2 comments
Labels: bug

#5808 - please add `https://huggingface.co/nvidia/Nemotron-4-340B-Instruct` to `https://ollama.com/library`

Issue - State: closed - Opened by hemangjoshi37a 2 months ago - 3 comments
Labels: model request

#5806 - allowing ollama 3 to access local txt files for a larger memory?

Issue - State: closed - Opened by boba1234567890 2 months ago - 3 comments
Labels: model request
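
Ollama's API does not read local files itself; a common client-side workaround is to read the file and include its contents in the prompt (or move to a RAG setup for anything large). A small sketch, where the file path and model name are both assumptions:

```python
from pathlib import Path
import requests  # pip install requests

notes = Path("notes.txt").read_text(encoding="utf-8")  # hypothetical local file

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": f"Using these notes:\n\n{notes}\n\nSummarise the key points.",
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```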

#5800 - Enable using llama.cpp's --model-draft <model> feature for speculative decoding

Issue - State: open - Opened by sammcj 2 months ago - 4 comments
Labels: feature request, performance

#5797 - support for arm linux

Issue - State: closed - Opened by olumolu 2 months ago - 4 comments
Labels: feature request

#5796 - Streaming for tool calls is unsupported

Issue - State: open - Opened by vertrue 2 months ago - 17 comments
Labels: bug
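
At the time of this report, tool calls generally had to be requested with streaming disabled. The sketch below follows the documented /api/chat tool format; the tool definition, model choice and question are all illustrative assumptions:

```python
import requests  # pip install requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",  # assumes a tool-capable model is pulled
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        "stream": False,  # streaming plus tools was the unsupported combination
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"].get("tool_calls"))
```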

#5788 - Support LoRA GGUF Adapters

Issue - State: closed - Opened by suncloudsmoon 2 months ago - 1 comment
Labels: feature request

#5787 - ollama run deepseek-coder-v2 creates gibberish output

Issue - State: closed - Opened by flo-ivar 2 months ago - 8 comments
Labels: bug

#5786 - Request to add support for InternVL-2 model

Issue - State: open - Opened by CNEA-lw 2 months ago - 4 comments
Labels: model request

#5785 - add GraphRAG

Issue - State: closed - Opened by tqangxl 2 months ago - 2 comments
Labels: model request

#5784 - How to Deploy LLM Based on ollama in an offline environment?

Issue - State: closed - Opened by RyanOvO 2 months ago - 15 comments

#5781 - Error 500 on `/api/embed`

Issue - State: closed - Opened by jmorganca 2 months ago - 3 comments
Labels: bug
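
For anyone reproducing this, /api/embed is the batch embeddings endpoint (its "input" field accepts a string or a list of strings), distinct from the older /api/embeddings endpoint that takes a single "prompt". A minimal sketch, assuming an embedding model such as nomic-embed-text has been pulled:

```python
import requests  # pip install requests

resp = requests.post(
    "http://localhost:11434/api/embed",
    json={
        "model": "nomic-embed-text",  # assumed embedding model
        "input": ["first sentence", "second sentence"],  # a single string also works
    },
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]
print(len(embeddings), "vectors of dimension", len(embeddings[0]))
```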

#5772 - Add Verbis project to README.md

Pull Request - State: closed - Opened by alexmavr 2 months ago - 1 comment

#5760 - Make llama.cpp's cache_prompt parameter configurable

Pull Request - State: open - Opened by sayap 2 months ago - 2 comments

#5744 - Model Cold Storage and user manual management possibility

Issue - State: open - Opened by nikhil-swamix 2 months ago - 5 comments
Labels: feature request

#5740 - support minicpm language model

Issue - State: closed - Opened by LDLINGLINGLING 2 months ago - 2 comments
Labels: model request

#5737 - Releases page: please also generate an archive with dependencies

Issue - State: closed - Opened by vitaly-zdanevich 2 months ago - 1 comment
Labels: feature request, linux

#5736 - bug: Open WebUI RAG Malfunction with Ollama Versions Post 0.2.1

Issue - State: closed - Opened by silentoplayz 2 months ago - 15 comments
Labels: bug

#5731 - SmolLM family

Issue - State: closed - Opened by DuckyBlender 2 months ago - 8 comments
Labels: model request

#5728 - Prompt Tokens for Image Chat

Issue - State: closed - Opened by royjhan 2 months ago - 2 comments
Labels: bug
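
This issue concerns how prompt token counts are reported for image inputs. Below is a sketch of a multimodal request that reads the reported counts back from the response; the image path and the vision model are assumptions:

```python
import base64
from pathlib import Path
import requests  # pip install requests

image_b64 = base64.b64encode(Path("photo.png").read_bytes()).decode()  # hypothetical image

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llava",  # assumes a vision-capable model is pulled
        "messages": [{
            "role": "user",
            "content": "Describe this image.",
            "images": [image_b64],
        }],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
body = resp.json()
print(body["message"]["content"][:80])
print("prompt_eval_count:", body.get("prompt_eval_count"))
```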

#5725 - Mistral Codestral Mamba 7B

Issue - State: open - Opened by lestan 2 months ago - 11 comments
Labels: model request

#5712 - Add Windows arm64 support to official builds

Pull Request - State: open - Opened by dhiltgen 2 months ago - 14 comments

#5700 - zfs ARC leads to incorrect system memory prediction and refusal to load models that could work

Issue - State: open - Opened by arthurmelton 2 months ago - 4 comments
Labels: feature request

#5698 - add support MiniCPM-Llama3-V-2_5

Issue - State: closed - Opened by LDLINGLINGLING 2 months ago - 2 comments
Labels: model request

#5693 - Per-Model Concurrency

Issue - State: closed - Opened by ProjectMoon 2 months ago - 3 comments
Labels: feature request

#5689 - System wide old version of cuda v11 used instead of bundled version - runner fails to start due to missing symbols

Issue - State: closed - Opened by hljhyb 2 months ago - 23 comments
Labels: bug, windows, nvidia

#5681 - Adding instructions when user doesn't have sudo privileges

Pull Request - State: open - Opened by Ivanknmk 2 months ago - 1 comment

#5673 - Ollama spins up USB HDD

Issue - State: open - Opened by bkev 2 months ago - 1 comment
Labels: bug, needs more info

#5668 - Glm4 in ollama v0.2.3 still returns gibberish G's

Issue - State: closed - Opened by loveyume520 2 months ago - 17 comments
Labels: bug

#5664 - Fix sprintf to snprintf

Pull Request - State: open - Opened by FellowTraveler 2 months ago

#5656 - llm: reorder gguf tensors

Pull Request - State: closed - Opened by joshyan1 2 months ago - 1 comment

#5654 - Failure to Generate Response After Model Unloading

Issue - State: open - Opened by NWBx01 2 months ago - 2 comments
Labels: bug

#5631 - Refactor linux packaging

Pull Request - State: closed - Opened by dhiltgen 2 months ago - 3 comments

#5629 - Crashing or gibberish output on 3x Radeon GPUs

Issue - State: open - Opened by darwinvelez58 2 months ago - 18 comments
Labels: bug, amd

#5624 - Make full use of all GPU resources for inference

Issue - State: closed - Opened by HeroSong666 2 months ago - 8 comments
Labels: nvidia, needs more info

#5622 - ollama run glm4 error - `CUBLAS_STATUS_NOT_INITIALIZED`

Issue - State: closed - Opened by SunMacArenas 2 months ago - 10 comments
Labels: bug, nvidia, memory

#5615 - cmd/llama.cpp: quantize progress

Pull Request - State: open - Opened by joshyan1 2 months ago

#5610 - /clear - clears the terminal

Issue - State: closed - Opened by dannyoo 2 months ago - 7 comments
Labels: feature request

#5601 - Configure the systemd service via a separate file.

Pull Request - State: open - Opened by ykhrustalev 2 months ago - 5 comments

#5600 - What is "Error: unsupported content type: text/plain; charset=utf-8"?

Issue - State: open - Opened by k2rw 2 months ago - 5 comments
Labels: bug
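
This error typically points to a request body that was not declared as application/json. With Python's requests library, passing json=... sets the correct Content-Type automatically, whereas hand-serialising the body and sending it as text/plain is likely to reproduce the message. A hedged sketch of the difference (model name assumed):

```python
import json
import requests  # pip install requests

url = "http://localhost:11434/api/generate"
payload = {"model": "llama3.1", "prompt": "hi", "stream": False}

# Likely to reproduce "unsupported content type: text/plain; charset=utf-8":
# the body is valid JSON but is declared as plain text.
bad = requests.post(url, data=json.dumps(payload),
                    headers={"Content-Type": "text/plain; charset=utf-8"})
print("text/plain       ->", bad.status_code)

# The json= keyword serialises the payload and sets Content-Type: application/json.
good = requests.post(url, json=payload, timeout=120)
print("application/json ->", good.status_code)
```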

#5593 - Support intel igpus

Pull Request - State: open - Opened by zhewang1-intc 2 months ago - 50 comments

#5587 - Update README.md

Pull Request - State: open - Opened by emrgnt-cmplxty 2 months ago - 1 comment

#5575 - Update README.md

Pull Request - State: open - Opened by elearningshow 2 months ago - 1 comment

#5572 - Create SECURITY.md

Pull Request - State: closed - Opened by Senipostol 2 months ago - 1 comment

#5563 - glm-4-9b-chat not responding correctly

Issue - State: closed - Opened by loveyume520 2 months ago - 8 comments
Labels: bug

#5561 - Ollama working issue

Issue - State: closed - Opened by noisyboy22 2 months ago - 1 comment
Labels: bug

#5556 - feat: Support Moore Threads GPU

Pull Request - State: open - Opened by yeahdongcn 2 months ago - 5 comments

#5528 - Error Pulling Manifest MacOSX

Issue - State: closed - Opened by Moonlight1220 2 months ago - 2 comments
Labels: bug

#5527 - Add Environment Variable For Row Split

Pull Request - State: open - Opened by datacrystals 2 months ago - 1 comment

#5524 - allow converting adapters from npz

Pull Request - State: closed - Opened by pdevine 2 months ago - 1 comment

#5519 - Ultraslow Inference on Chromebook

Issue - State: closed - Opened by MeDott29 2 months ago - 8 comments
Labels: bug, needs more info

#5509 - usage templating

Pull Request - State: open - Opened by mxyng 3 months ago

#5499 - Error Pull Model Manifest

Issue - State: closed - Opened by Moonlight1220 3 months ago - 6 comments
Labels: needs more info

#5494 - H100s (via Vast.ai) generate GPU warning + fetching/loading models appears very slow

Issue - State: open - Opened by wkoszek 3 months ago - 11 comments
Labels: bug, nvidia

#5493 - unable to load nvcuda

Issue - State: closed - Opened by yake-cyber 3 months ago - 7 comments
Labels: bug

#5471 - Available memory calculation on AMD APU no longer takes GTT into account

Issue - State: open - Opened by Ph0enix89 3 months ago - 2 comments
Labels: bug, amd, gpu

#5464 - `Ollama` fails to work with `CUDA` after `Linux` suspend/resume, unlike other `CUDA` services

Issue - State: open - Opened by bwnjnOEI 3 months ago - 6 comments
Labels: bug, nvidia

#5453 - ollama does not work on GPU

Issue - State: closed - Opened by tianfan007 3 months ago - 21 comments
Labels: bug, nvidia

#5446 - update faq

Pull Request - State: closed - Opened by mxyng 3 months ago

#5443 - add conversion for microsoft phi 3 mini/medium 4k, 128k

Pull Request - State: closed - Opened by mxyng 3 months ago

#5441 - cmd: createBlob with copy on disk if local server

Pull Request - State: closed - Opened by joshyan1 3 months ago - 2 comments

#5426 - Enable AMD iGPU 780M in Linux, Create amd-igpu-780m.md

Pull Request - State: open - Opened by alexhegit 3 months ago - 6 comments

#5415 - [Feat] Support Api key for ollama apis

Pull Request - State: closed - Opened by bugaosuni59 3 months ago - 1 comment

#5412 - Update README.md: Add Ollama-GUI to web & desktop

Pull Request - State: open - Opened by chyok 3 months ago

#5399 - Please support models of rerank type

Issue - State: open - Opened by yushengliao 3 months ago - 9 comments
Labels: model request