Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / ollama/ollama issues and pull requests

#6841 - Add python examples for `bespoke-minicheck`

Pull Request - State: open - Opened by RyanMarten about 9 hours ago

#6840 - no nvidia devices detected

Issue - State: open - Opened by deardeer7 about 11 hours ago
Labels: bug

#6839 - ollama request to llama3.1 fails

Issue - State: open - Opened by microbitcswcss about 12 hours ago
Labels: bug

#6838 - Old Context Information fetched

Issue - State: open - Opened by atul-siriusai about 13 hours ago - 2 comments
Labels: bug

#6837 - Add support for the ppc64le architecture

Pull Request - State: open - Opened by mkumatag about 16 hours ago - 1 comment

#6836 - CUDA error

Issue - State: open - Opened by harshallakare about 19 hours ago
Labels: bug

#6834 - CI: dist directories no longer present

Pull Request - State: closed - Opened by dhiltgen 1 day ago

#6833 - make patches git am-able

Pull Request - State: open - Opened by mxyng 1 day ago

#6832 - CI: clean up naming, fix tagging latest

Pull Request - State: closed - Opened by dhiltgen 1 day ago

#6831 - cache: Clear old KV cache entries when evicting a slot

Pull Request - State: closed - Opened by jessegross 1 day ago

#6830 - llama: doc: explain golang objc linker warning

Pull Request - State: closed - Opened by dhiltgen 1 day ago

#6828 - fix typo in import docs

Pull Request - State: closed - Opened by pdevine 1 day ago

#6827 - GPU Support for older CPUs lacking AVX

Issue - State: closed - Opened by kaleid1337 1 day ago - 3 comments
Labels: feature request

#6826 - Massive performance regression on 0.1.32 -> GGML_CUDA_FORCE_MMQ: (SET TO NO, after 0.1.31)

Issue - State: open - Opened by jsa2 1 day ago - 3 comments
Labels: bug, performance, nvidia

#6825 - LLava:13B Model Outputting ############### After Period of Inactivity

Issue - State: open - Opened by Atharvaaat 1 day ago - 1 comment
Labels: bug

#6824 - How to remove this

Issue - State: closed - Opened by lezi-fun 1 day ago
Labels: bug

#6822 - Why is the model no longer loaded automatically on api calls after creating it with a Modelfile?

Issue - State: open - Opened by czhcc 2 days ago - 1 comment
Labels: bug

#6821 - Loading a local gguf file with a Modelfile produces garbled output

Issue - State: closed - Opened by czhcc 2 days ago - 1 comment
Labels: bug

#6820 - Typo in Gemma 2 model card

Issue - State: closed - Opened by nonetrix 2 days ago - 2 comments

#6819 - Solar Pro

Issue - State: open - Opened by nonetrix 2 days ago
Labels: model request

#6818 - Add vim-intelligence-bridge to Terminal section in README

Pull Request - State: closed - Opened by pepo-ec 2 days ago

#6817 - llama 3.1 8b params downloaded from huggingface, strange num_ctx behavior

Issue - State: open - Opened by akseg73 2 days ago - 3 comments
Labels: bug

#6816 - High GPU and CPU usage

Issue - State: closed - Opened by akseg73 2 days ago - 4 comments
Labels: bug

#6815 - Idea: Model Pre-Pulling on Startup

Issue - State: open - Opened by adrianliechti 2 days ago - 1 comment
Labels: feature request

#6814 - Multi-user installation for Ollama on macOS

Issue - State: open - Opened by davidrpugh 3 days ago - 1 comment
Labels: feature request

#6813 - How do I install the model to the D drive?

Issue - State: closed - Opened by 0bubble0 3 days ago - 4 comments
Labels: feature request

#6811 - iiiorg/piiranha-v1-detect-personal

Issue - State: open - Opened by myrulezzz 3 days ago - 1 comment
Labels: model request

#6810 - Create docker-image.yml

Pull Request - State: closed - Opened by liufriendd 3 days ago - 1 comment

#6807 - Slow model load and cache RAM does not free

Issue - State: open - Opened by pisoiu 3 days ago - 4 comments
Labels: bug, needs more info

#6806 - slow

Issue - State: open - Opened by ayttop 3 days ago - 9 comments
Labels: bug, needs more info

#6803 - Support AMD RX580 graphics card

Issue - State: open - Opened by Tamila-2017 3 days ago - 2 comments
Labels: feature request, amd, gpu

#6801 - Ollama can't update the binary

Issue - State: open - Opened by suizideFloat 4 days ago - 3 comments
Labels: bug, needs more info

#6800 - About running ollama on Jetson AGX Orin 64G

Issue - State: open - Opened by cplasfwst 4 days ago
Labels: bug

#6799 - Is it possible to configure ollama deployed in docker?

Issue - State: open - Opened by wizounovziki 4 days ago - 1 comment
Labels: feature request

#6798 - Running the model under jetpack6 failed

Issue - State: open - Opened by litao-zhx 4 days ago - 1 comment

#6797 - Add the ability to remove a parameter using a Modelfile

Issue - State: closed - Opened by dpkirchner 4 days ago - 6 comments
Labels: feature request

#6796 - Model Library per api call

Issue - State: closed - Opened by Leon-Sander 4 days ago - 2 comments
Labels: feature request

#6794 - Wrong response at math question!

Issue - State: closed - Opened by lsalamon 5 days ago - 2 comments
Labels: bug

#6793 - yi-coder:9b-chat-fp16 cannot stop output when I use it in Zed

Issue - State: open - Opened by wwjCMP 5 days ago - 1 comment
Labels: bug

#6792 - The system parameter OLLAMA_NUM_PARALLEL is invalid for the embedding model

Issue - State: open - Opened by black-fox-user 5 days ago - 3 comments
Labels: bug

#6790 - openai tools streaming support coming soon?

Issue - State: closed - Opened by LuckLittleBoy 5 days ago - 1 comment
Labels: feature request

#6788 - add Agents-Flex Libraries in README.md

Pull Request - State: closed - Opened by yangfuhai 5 days ago

#6787 - Support Google's new "DataGemma" model

Issue - State: open - Opened by muehlburger 5 days ago
Labels: model request

#6786 - Isn't it time to move onto Omni models?

Issue - State: open - Opened by Meshwa428 5 days ago
Labels: feature request

#6785 - Ollama custom model download directory not working

Issue - State: closed - Opened by aksk01 5 days ago - 3 comments
Labels: bug

#6784 - openai: support include_usage stream option to return final usage chunk

Pull Request - State: open - Opened by anuraaga 5 days ago - 1 comment

#6782 - Windows Portable Mode

Issue - State: open - Opened by SmilerRyan 5 days ago
Labels: feature request

#6781 - ollama minicpm-v refused to deal with images

Issue - State: closed - Opened by colin4k 5 days ago
Labels: bug

#6780 - Fix incremental builds on linux

Pull Request - State: closed - Opened by dhiltgen 5 days ago - 2 comments

#6779 - Use GOARCH for build dirs

Pull Request - State: closed - Opened by dhiltgen 5 days ago

#6778 - Would be nice to have a "continue last message" option with the `/api/chat` endpoint

Issue - State: closed - Opened by hammer-ai 5 days ago - 2 comments
Labels: feature request

#6777 - Attribute about model's tool use capability in model_info

Issue - State: open - Opened by StarPet 5 days ago - 1 comment
Labels: feature request

#6776 - Pixtral model request

Issue - State: closed - Opened by iplayfast 5 days ago - 3 comments
Labels: model request

#6775 - Notify systemd that ollama server is ready

Pull Request - State: open - Opened by JingWoo 6 days ago

#6774 - Add Tokenizer functionality to API

Issue - State: open - Opened by Master-Pr0grammer 6 days ago - 1 comment
Labels: feature request

#6773 - ROCm 6.2 upgrade?

Issue - State: open - Opened by svaningelgem 6 days ago
Labels: feature request

#6771 - Inconsistent Responses from Identical Models

Issue - State: open - Opened by wahidur028 6 days ago - 1 comment
Labels: bug

#6770 - Library missing from ollama when running it in Docker

Issue - State: closed - Opened by factor3 6 days ago - 3 comments
Labels: question, linux, nvidia, docker

#6769 - OLLAMA_FLASH_ATTENTION regression on 0.3.10?

Issue - State: open - Opened by coodoo 6 days ago - 2 comments
Labels: bug

#6768 - Model update history on ollama.com

Issue - State: open - Opened by vYLQs6 6 days ago - 1 comment
Labels: feature request

#6767 - runner: Flush pending responses before returning

Pull Request - State: closed - Opened by jessegross 6 days ago

#6766 - documentation for stopping a model

Pull Request - State: open - Opened by pdevine 6 days ago

#6765 - Flush pending responses before returning (#6707)

Pull Request - State: closed - Opened by jessegross 6 days ago

#6764 - llama3.1:70B 16fp not working on nvidia H100

Issue - State: open - Opened by AliAhmedNada 6 days ago - 3 comments
Labels: bug, nvidia, needs more info, memory

#6763 - `ollama show` displays context length in scientific notation

Issue - State: closed - Opened by jmorganca 6 days ago
Labels: bug, good first issue

#6762 - refactor show output

Pull Request - State: closed - Opened by mxyng 6 days ago

#6760 - IBM granite/granitemoe architecture support

Pull Request - State: open - Opened by gabe-l-hart 6 days ago

#6759 - reflection

Issue - State: closed - Opened by ayttop 6 days ago - 6 comments
Labels: bug

#6758 - Model Request: Reader-LM

Issue - State: closed - Opened by Xe 6 days ago - 1 comment
Labels: model request

#6757 - DO NOT MERGE - ci testing

Pull Request - State: closed - Opened by dhiltgen 6 days ago

#6756 - Yet another "segmentation fault" issue with AMD GPU

Issue - State: open - Opened by remon-nashid 6 days ago - 16 comments
Labels: bug, amd

#6755 - Add support for TPUs accelerators such as Coral.AI TPUs.

Issue - State: closed - Opened by gtherond 6 days ago - 2 comments
Labels: feature request

#6754 - Added QodeAssist link to README.md

Pull Request - State: closed - Opened by Palm1r 7 days ago

#6753 - `image_url` support for vision models

Issue - State: open - Opened by madroidmaq 7 days ago
Labels: feature request

#6752 - Update README.md

Pull Request - State: closed - Opened by rapidarchitect 7 days ago

#6751 - encountered an error while using the new model minicpm-v

Issue - State: closed - Opened by supersaiyan2019 7 days ago - 6 comments
Labels: bug

#6750 - mattw/loganalyzer cannot be run with `ollama run`

Issue - State: closed - Opened by syuan-Boom 7 days ago - 4 comments
Labels: model request

#6749 - Add version when the docker container is starting

Issue - State: closed - Opened by svaningelgem 7 days ago - 2 comments
Labels: feature request

#6748 - Support Mistral's new visual model: Pixtral-12b-240910

Issue - State: open - Opened by awaescher 7 days ago - 6 comments
Labels: model request

#6747 - ERROR: llama runner process has terminated: error loading model vocabulary: _Map_base::at

Issue - State: closed - Opened by CjhHa1 7 days ago - 4 comments
Labels: bug

#6746 - add support for Reflection-Llama-3.1

Issue - State: closed - Opened by clipsheep6 7 days ago - 2 comments
Labels: model request

#6744 - Polish loganalyzer example

Pull Request - State: closed - Opened by codefromthecrypt 7 days ago

#6742 - Add OLMoE 1b-7b

Issue - State: open - Opened by Meshwa428 7 days ago - 1 comment
Labels: model request

#6741 - Llama 3.1 70b 128k context not fitting 96Gb

Issue - State: open - Opened by dmatora 7 days ago - 2 comments
Labels: bug, nvidia, memory

#6740 - `ollama show` spaces out everything with empty lines for custom Modelfile

Issue - State: closed - Opened by songyang-dev 7 days ago - 1 comment
Labels: bug

#6739 - add "stop" command

Pull Request - State: closed - Opened by pdevine 7 days ago - 1 comment

#6738 - It is recommended to add a stop to a running model

Issue - State: closed - Opened by mrhuangyong 7 days ago
Labels: feature request

#6737 - Model loses modelfile context

Issue - State: open - Opened by kayloren 7 days ago - 1 comment
Labels: bug

#6736 - Verify permissions for AMD GPU

Pull Request - State: closed - Opened by dhiltgen 7 days ago - 1 comment

#6735 - runner.go: Prompt caching

Pull Request - State: closed - Opened by jessegross 7 days ago