Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

GitHub / ollama/ollama issues and pull requests
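
The entries listed below can also be fetched programmatically from the service's REST API. A minimal sketch in Python, assuming the issues.ecosyste.ms API exposes repository issues at /api/v1/hosts/GitHub/repositories/ollama/ollama/issues and returns JSON objects with number, title, state, pull_request, and labels fields (both the endpoint path and the payload shape are assumptions based on the service's conventions; check the API documentation before relying on them):

    # Minimal sketch (endpoint path and field names are assumptions,
    # not verified against the live API).
    import requests

    url = ("https://issues.ecosyste.ms/api/v1"
           "/hosts/GitHub/repositories/ollama/ollama/issues")  # assumed path
    resp = requests.get(url, params={"per_page": 50}, timeout=30)
    resp.raise_for_status()

    for item in resp.json():
        # Distinguish pull requests from issues and print a summary line
        # similar to the listing below; label entries may be plain strings
        # or objects, so handle both defensively.
        kind = "Pull Request" if item.get("pull_request") else "Issue"
        labels = ", ".join(
            l.get("name", "") if isinstance(l, dict) else str(l)
            for l in item.get("labels") or []
        )
        print(f"#{item.get('number')} - {item.get('title')}")
        print(f"{kind} - State: {item.get('state')}"
              + (f" - Labels: {labels}" if labels else ""))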

#6482 - passthrough OLLAMA_HOST path to client

Pull Request - State: closed - Opened by mxyng 27 days ago

#6481 - gork2

Issue - State: closed - Opened by olumolu 27 days ago - 2 comments
Labels: model request

#6480 - Ensure driver version set before variant

Pull Request - State: closed - Opened by dhiltgen 27 days ago

#6479 - v0.3.7-rc5 no longer uses multiple GPUs for a single model

Issue - State: closed - Opened by Maltz42 27 days ago - 7 comments
Labels: bug

#6478 - Add linux start command to docs

Issue - State: closed - Opened by bdytx5 27 days ago - 5 comments
Labels: feature request

#6476 - fix: remove duplicated func call

Pull Request - State: open - Opened by alwqx 27 days ago

#6475 - The issue of high CPU utilization in Ollama

Issue - State: closed - Opened by fenggaobj 27 days ago - 2 comments
Labels: bug, nvidia

#6474 - Phi3.5 broken behaviour

Issue - State: open - Opened by derluke 27 days ago - 1 comment
Labels: bug

#6473 - OpenAI Structured Output Compatability

Issue - State: open - Opened by jd-solanki 27 days ago - 3 comments
Labels: feature request

#6472 - 404 one download

Issue - State: open - Opened by vorticalbox 27 days ago - 4 comments
Labels: bug

#6471 - Issue when running smollm:360m and also smollm:135m

Issue - State: open - Opened by NEWbie0709 27 days ago - 4 comments
Labels: bug

#6470 - registry.ollama.ai: returning text/plain for manifest requests

Issue - State: closed - Opened by codefromthecrypt 27 days ago - 2 comments
Labels: bug

#6469 - Link Time Optimization - [email protected]

Pull Request - State: open - Opened by cabelo 27 days ago - 1 comment

#6468 - bug: Nested model in registry - cannot access model settings on my own model at https://ollama.com/

Issue - State: closed - Opened by BobMerkus 28 days ago - 3 comments
Labels: bug, ollama.com

#6467 - Fix embeddings memory corruption

Pull Request - State: closed - Opened by dhiltgen 28 days ago - 2 comments

#6466 - I can not push 8g model to Ollama

Issue - State: open - Opened by hololeo 28 days ago - 9 comments
Labels: bug

#6465 - Adding 'Ollama App' as community integrations

Pull Request - State: open - Opened by JHubi1 28 days ago

#6464 - Error: unsupported content type: unknown

Issue - State: closed - Opened by CorrectPath 28 days ago - 8 comments
Labels: bug

#6463 - Unable to Access Linux Package for Installation

Issue - State: closed - Opened by f1mahesh 28 days ago - 7 comments
Labels: bug

#6462 - Make tool call response compatible with OpenAI format

Issue - State: closed - Opened by eliasfroehner 28 days ago - 2 comments
Labels: feature request

#6461 - "/clear" command is not clearing history

Issue - State: closed - Opened by devstefancho 28 days ago - 2 comments
Labels: bug

#6460 - glm-4v-9b

Issue - State: open - Opened by sdcb 28 days ago - 3 comments
Labels: model request

#6459 - Add autogpt integration to list of community integrations

Pull Request - State: open - Opened by aarushik93 28 days ago - 1 comment

#6457 - Request official guidelines

Issue - State: open - Opened by qzc438 28 days ago
Labels: feature request

#6456 - Ollama not using 20GB of VRAM from Tesla P40 card

Issue - State: open - Opened by Happydragun4now 28 days ago - 5 comments
Labels: bug

#6455 - Align cmake define for cuda no peer copy

Pull Request - State: closed - Opened by dhiltgen 28 days ago - 1 comment

#6453 - Inconsistent GPU Usage

Issue - State: closed - Opened by gru3zi 29 days ago - 4 comments
Labels: bug

#6452 - feat: function calling on stream

Pull Request - State: open - Opened by venjiang 29 days ago - 3 comments

#6451 - cannot unmarshal array into Go struct field ChatRequest.messages of type string

Issue - State: closed - Opened by McCannDahl 29 days ago - 2 comments
Labels: bug

#6450 - WSL 2 is not an upgrade, it's a different type

Pull Request - State: open - Opened by erkinalp 30 days ago

#6449 - Microsoft Phi-3.5 models

Issue - State: open - Opened by animaldomestico 30 days ago - 10 comments
Labels: model request

#6448 - snowflake-arctic-embed:22m model cause an error on loading

Issue - State: closed - Opened by Abdulrahman392011 30 days ago - 41 comments
Labels: bug

#6447 - Ollama instance restart when using Mistral Nemo, tried different mistral nemo models

Issue - State: open - Opened by Hyphaed 30 days ago - 1 comment
Labels: bug, needs more info

#6446 - Model Library: Ability to update model manifest via editor

Issue - State: open - Opened by MaxJa4 30 days ago
Labels: feature request

#6445 - Update manual instructions with discrete ROCm bundle

Pull Request - State: closed - Opened by dhiltgen 30 days ago

#6444 - Update model parameters for SmolLM (and other models)

Issue - State: closed - Opened by DuckyBlender 30 days ago - 2 comments
Labels: bug

#6443 - Error: llama runner process no longer running: -1

Issue - State: closed - Opened by ZINE-KHER 30 days ago - 6 comments
Labels: bug

#6442 - How to start and stop Ollama in the process?

Issue - State: closed - Opened by qzc438 30 days ago - 3 comments
Labels: bug

#6441 - Please add socks5 proxy func to download models !Because the internet in China is not accessible

Issue - State: closed - Opened by Akagisaunchigo 30 days ago - 4 comments
Labels: feature request

#6440 - Model architecture Gemma2ForCausalLm

Issue - State: closed - Opened by luisgg98 30 days ago - 2 comments
Labels: bug

#6439 - How to load multiple but same species models on different GPUs?

Issue - State: open - Opened by EGOIST5 about 1 month ago - 12 comments
Labels: feature request

#6438 - LlaVA OneVision

Issue - State: open - Opened by ddpasa about 1 month ago - 1 comment
Labels: model request

#6437 - how to use batch when using llm

Issue - State: open - Opened by PassStory about 1 month ago - 1 comment
Labels: feature request

#6435 - 0.3.6 /api/embed return 500 if more items are provided in input

Issue - State: closed - Opened by davidliudev about 1 month ago - 5 comments
Labels: bug

#6433 - Manage output length?

Issue - State: closed - Opened by nic0711 about 1 month ago - 1 comment
Labels: question

#6432 - Split rocm back out of bundle

Pull Request - State: closed - Opened by dhiltgen about 1 month ago

#6431 - GLM4 tools support

Issue - State: open - Opened by napa3um about 1 month ago
Labels: feature request

#6430 - Linux Doc cosmetic fixes.

Pull Request - State: open - Opened by fujitatomoya about 1 month ago

#6429 - CI: remove directories from dist dir before upload step

Pull Request - State: closed - Opened by dhiltgen about 1 month ago - 1 comment

#6428 - Runner.go Context Window Shifting

Pull Request - State: closed - Opened by jessegross about 1 month ago - 2 comments

#6427 - CI: handle directories during checksum

Pull Request - State: closed - Opened by dhiltgen about 1 month ago

#6426 - convert: vocab conversion incorrect

Issue - State: closed - Opened by jmorganca about 1 month ago - 1 comment
Labels: bug

#6425 - waiting forever running llama3.1:405b

Issue - State: open - Opened by fabiounixpi about 1 month ago - 13 comments
Labels: bug

#6424 - Fix overlapping artifact name on CI

Pull Request - State: closed - Opened by dhiltgen about 1 month ago

#6423 - Running on MI300X via Docker fails with `rocBLAS error: Could not initialize Tensile host: No devices found`

Issue - State: closed - Opened by peterschmidt85 about 1 month ago - 9 comments
Labels: bug, amd, docker

#6422 - ollama golang client hides API errors

Issue - State: open - Opened by dcarrier about 1 month ago
Labels: bug

#6420 - Is the speed of the Olama running model related to the CUDA version?

Issue - State: open - Opened by TianWuYuJiangHenShou about 1 month ago - 2 comments
Labels: bug, nvidia, needs more info, gpu

#6419 - Ollama Tools - random results without providing tools in second call

Issue - State: closed - Opened by jprogramista about 1 month ago - 2 comments
Labels: bug

#6418 - Everytime -d doesnot work

Issue - State: closed - Opened by Sakethsreeram7 about 1 month ago - 4 comments
Labels: bug

#6417 - MiniCPM-V 2.6

Issue - State: open - Opened by enryteam about 1 month ago - 12 comments
Labels: model request

#6416 - Computer crashes after switching several Ollama models in a relatively short amount of time

Issue - State: closed - Opened by elsatch about 1 month ago - 5 comments
Labels: bug

#6415 - Feature Request: Adding FalconMamba 7B Instruct in `ollama`

Issue - State: open - Opened by younesbelkada about 1 month ago
Labels: model request

#6414 - Ollama embedding is slow

Issue - State: closed - Opened by yuanjie-ai about 1 month ago - 2 comments
Labels: feature request

#6413 - Ollama Docker Rocm facing constant issues

Issue - State: closed - Opened by harshb20 about 1 month ago - 7 comments
Labels: bug

#6411 - server: limit upload parts to 16

Pull Request - State: closed - Opened by jmorganca about 1 month ago

#6410 - How can I check model's default temperature in ollama

Issue - State: open - Opened by xugy16 about 1 month ago - 6 comments

#6409 - End and Home buttons don't work in ollama in tmux

Issue - State: open - Opened by yurivict about 1 month ago - 5 comments
Labels: bug

#6408 - 404 POST "/api/chat"

Issue - State: closed - Opened by turndown about 1 month ago - 10 comments
Labels: bug, needs more info

#6407 - Please add an easy way to automatically load only layers that can fit into the GPU

Issue - State: open - Opened by yurivict about 1 month ago - 6 comments
Labels: feature request

#6406 - Ollama (WindowsSetup) fail to access from external ip

Issue - State: closed - Opened by MorrisLu-Taipei about 1 month ago - 3 comments
Labels: bug

#6405 - Implement layer-by-layer paging from CPU RAM into GPU for large models.

Issue - State: open - Opened by Speedway1 about 1 month ago - 1 comment
Labels: feature request

#6403 - feature: simple webclient

Pull Request - State: open - Opened by TecDroiD about 1 month ago

#6402 - Override numParallel in pickBestPartialFitByLibrary() only if unset.

Pull Request - State: closed - Opened by rick-github about 1 month ago

#6401 - embeddings models keep_alive

Issue - State: closed - Opened by Abdulrahman392011 about 1 month ago - 2 comments
Labels: feature request

#6400 - Add arm64 cuda jetpack variants

Pull Request - State: open - Opened by dhiltgen about 1 month ago - 2 comments

#6399 - IMPROVE: add ultra ai library

Pull Request - State: open - Opened by VaibhavAcharya about 1 month ago

#6398 - When running ollama via docker, it won't respond to any request by API-call or python-client-library

Issue - State: closed - Opened by itinance about 1 month ago - 25 comments
Labels: bug

#6395 - Make new tokenizer logic conditional

Pull Request - State: closed - Opened by dhiltgen about 1 month ago - 4 comments

#6393 - Paligemma Support

Pull Request - State: open - Opened by royjhan about 1 month ago

#6391 - doc: fixed spelling error

Pull Request - State: open - Opened by Carter907 about 1 month ago

#6390 - model xe/hermes3 doesn't correctly parse tool call tokens

Issue - State: open - Opened by Xe about 1 month ago - 5 comments
Labels: bug

#6389 - OLLAMA_ORIGINS environment variables appends instead of sets

Issue - State: open - Opened by saddy001 about 1 month ago
Labels: bug

#6388 - The Hermes 3 Series of Models

Issue - State: open - Opened by tomasmcm about 1 month ago - 9 comments
Labels: model request

#6387 - Add MiniCPM_V Model

Issue - State: closed - Opened by xiaopa233 about 1 month ago - 3 comments
Labels: model request

#6386 - fix: chmod new layer to 0o644 when creating it

Pull Request - State: open - Opened by zwwhdls about 1 month ago

#6385 - Significant Drop in Prompt Adherence in Updated Gemma2 Model

Issue - State: open - Opened by shzhou12 about 1 month ago - 1 comment
Labels: bug

#6384 - Open WebUI: Server Connection Error

Issue - State: open - Opened by ChaoYue97 about 1 month ago - 2 comments
Labels: bug

#6383 - update to CUDA v12.2 libraries in docker container?

Issue - State: closed - Opened by juancaoviedo about 1 month ago - 8 comments
Labels: feature request, nvidia, docker