Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / ollama/ollama issues and pull requests
#6590 - couldn't remove unused layers: invalid character '\x00' looking for beginning of value
Issue - State: open - Opened by hhhaiai 17 days ago - 2 comments
Labels: bug
#6589 - Can this be used with "LM Studio" to share models? If so, how can it be modified?
Issue - State: open - Opened by Willy-Shenn 17 days ago
#6588 - Intel ARC PRO not working on Windows install.
Issue - State: closed - Opened by Solaris17 17 days ago - 1 comment
Labels: bug
#6587 - Update faq.md
Pull Request - State: open - Opened by SnoopyTlion 17 days ago - 1 comment
#6586 - Expose Tokenize and Detokenize API
Pull Request - State: open - Opened by Yurzs 18 days ago
#6585 - function_name = function_json_output["function"] KeyError: 'function' on MemGPT but Ollama not formating the response that's MemGPT
Issue - State: open - Opened by vivienneanthony 18 days ago
Labels: bug
#6584 - Add serve step to quickstart
Pull Request - State: open - Opened by anitagraser 18 days ago
#6583 - Update README.md
Pull Request - State: open - Opened by jonathanhecl 18 days ago
#6582 - adding Archyve to community integrations list
Pull Request - State: closed - Opened by nickthecook 18 days ago - 1 comment
#6581 - Add findutils to base images
Pull Request - State: closed - Opened by dhiltgen 19 days ago
#6580 - phi3.5:3.8b-mini-instruct is missing parameters on Ollama's website.
Issue - State: open - Opened by vYLQs6 19 days ago - 1 comment
Labels: bug
#6579 - fix(cmd): show info may have nil ModelInfo
Pull Request - State: closed - Opened by vimalk78 19 days ago - 1 comment
#6578 - `/show info` panics on nil ModelInfo
Issue - State: closed - Opened by vimalk78 19 days ago
Labels: bug
#6577 - Update documentation: Change .bin to .gguf in GGUF file and adapter examples
Pull Request - State: closed - Opened by rayfiyo 19 days ago
#6576 - libllama.so and libggml.so missing in v0.3.8 ollama-linux-arm64.tgz
Issue - State: closed - Opened by zhongTao99 19 days ago - 1 comment
Labels: bug
#6575 - no way
Issue - State: closed - Opened by Klgor1803 19 days ago - 1 comment
Labels: bug
#6574 - [Windows] Select installation location
Issue - State: closed - Opened by hecode 19 days ago - 1 comment
Labels: feature request
#6573 - Getting Error: llama runner process has terminated: exit status 127
Issue - State: closed - Opened by Blasserman 19 days ago - 2 comments
Labels: bug
#6572 - Ollama States Not Enough Video Memory When It Detects Enough
Issue - State: closed - Opened by czhang03 20 days ago - 19 comments
Labels: needs more info, memory
#6571 - Impossible to connect to ollama locally from another pc
Issue - State: closed - Opened by Wilnox23 20 days ago - 4 comments
Labels: bug
#6570 - llama: opt-in at build time
Pull Request - State: closed - Opened by dhiltgen 20 days ago - 1 comment
#6569 - TensorRT Support
Issue - State: open - Opened by JonahMMay 20 days ago - 1 comment
Labels: feature request
#6568 - Error: llama runner process has terminated: exit status 127
Issue - State: closed - Opened by MiloDev123 20 days ago - 5 comments
Labels: bug
#6567 - Improve error reporting with old or missing AMD driver on windows (unable to load amdhip64_6.dll)
Issue - State: open - Opened by Jiefei-Wang 20 days ago - 2 comments
Labels: feature request, windows, amd
#6566 - Ollama can't import safetensor of mistral 7B v0.1
Issue - State: closed - Opened by ZhoraZhang 20 days ago - 6 comments
Labels: bug
#6565 - Does ollma have the feature to save model response in log file?
Issue - State: open - Opened by keezen 20 days ago - 2 comments
Labels: feature request
#6564 - add Qwen2-VL
Issue - State: open - Opened by FelisDwan 20 days ago - 43 comments
Labels: model request
#6563 - ollama with text file?
Issue - State: closed - Opened by ayttop 20 days ago - 1 comment
Labels: feature request
#6562 - remove any unneeded build artifacts
Pull Request - State: closed - Opened by mxyng 21 days ago
#6561 - Inconsistent API Behavior
Issue - State: open - Opened by negaralizadeh 21 days ago - 2 comments
Labels: bug
#6560 - Logging final input after prompting specified in model file as a debug flag
Issue - State: closed - Opened by adela185 21 days ago - 2 comments
Labels: feature request
#6559 - Go server command line options support
Pull Request - State: closed - Opened by jessegross 21 days ago
#6558 - Multiple GPU´s Nvidia 56GB VRAM gemma2:27b
Issue - State: closed - Opened by paulopais 21 days ago - 15 comments
Labels: bug
#6557 - Where are /save and /load models saved and loaded? Which directory?
Issue - State: closed - Opened by bitcoinmeetups 21 days ago - 2 comments
#6556 - cuda_v12 returns poor results or crashes for Driver Version: 525.147.05
Issue - State: closed - Opened by rick-github 21 days ago
Labels: bug, nvidia
#6555 - /api/embed returns empty embeddings in docker environment
Issue - State: closed - Opened by smoothdvd 21 days ago - 1 comment
Labels: bug
#6554 - Error: llama runner process has terminated: exit status 0xc0000135
Issue - State: closed - Opened by balaji1732000 21 days ago - 4 comments
Labels: bug
#6553 - Cannot set custom folder for storing models
Issue - State: closed - Opened by anonymux1 21 days ago - 2 comments
Labels: bug
#6552 - Ollama run codestral gives Error: llama runner process has terminated
Issue - State: open - Opened by anonymux1 21 days ago - 5 comments
Labels: bug, amd
#6551 - Need cli ollama stop
Issue - State: closed - Opened by HomunMage 21 days ago - 2 comments
Labels: feature request
#6550 - Cannot download models behind a proxy in docker ollama.
Issue - State: closed - Opened by lakshmikanthgr 21 days ago - 7 comments
Labels: bug
#6549 - Ollama v0.3.8 Restart Loop
Issue - State: closed - Opened by romayojr 21 days ago - 14 comments
Labels: bug, nvidia, docker
#6548 - update the openai docs to explain how to set the context size
Pull Request - State: closed - Opened by pdevine 21 days ago
#6547 - Optimize container images for startup
Pull Request - State: open - Opened by dhiltgen 21 days ago - 1 comment
#6546 - fix(test): do not clobber models directory
Pull Request - State: closed - Opened by mxyng 22 days ago
#6545 - add llama3.1 chat template
Pull Request - State: closed - Opened by pdevine 22 days ago
#6544 - Specifying options via openai client extra_body are not handled by ollama
Issue - State: open - Opened by gaardhus 22 days ago - 3 comments
Labels: bug
#6543 - Failed to start docker without `root` access
Issue - State: open - Opened by leobenkel 22 days ago - 2 comments
Labels: bug
#6542 - Update README.md
Pull Request - State: open - Opened by rapidarchitect 22 days ago
#6541 - llama runner process has terminated: exit status127
Issue - State: closed - Opened by sosojust1984 22 days ago - 21 comments
Labels: bug
#6540 - actively retrieves the content returned from the web page
Issue - State: open - Opened by Nurburgring-Zhang 22 days ago - 1 comment
Labels: feature request
#6539 - fix: validate modelpath
Pull Request - State: closed - Opened by mxyng 22 days ago
#6538 - throw an error when encountering unsupport tensor sizes
Pull Request - State: closed - Opened by pdevine 22 days ago
#6537 - Add metrics endpoint and basic request metrics otel based
Pull Request - State: open - Opened by amila-ku 22 days ago - 2 comments
#6536 - Embeddings fixes
Pull Request - State: closed - Opened by jessegross 22 days ago
#6535 - Move ollama executable out of bin dir
Pull Request - State: closed - Opened by dhiltgen 22 days ago
#6534 - update templates to use messages
Pull Request - State: closed - Opened by mxyng 22 days ago
#6533 - /api/embeddings returning 404
Issue - State: closed - Opened by jwstanwick 22 days ago - 2 comments
Labels: bug
#6532 - add safetensors to the modelfile docs
Pull Request - State: closed - Opened by pdevine 22 days ago
#6531 - Prebuilt `ollama-linux-amd64.tgz` without cuda libs, please?
Issue - State: open - Opened by sevaseva 23 days ago - 2 comments
Labels: feature request
#6530 - fix: comment typo
Pull Request - State: closed - Opened by seankhatiri 23 days ago
#6529 - Ollama will stop using GPU when the total graphics memory usage exceeds the dedicated graphics memory size
Issue - State: closed - Opened by UserGzy 23 days ago - 2 comments
Labels: bug
#6528 - Fix import image width
Pull Request - State: closed - Opened by pdevine 23 days ago
#6527 - stella_en_400M_v5 model request
Issue - State: open - Opened by raymond-infinitecode 23 days ago - 1 comment
Labels: model request
#6526 - database modify capability
Issue - State: closed - Opened by nRanzo 23 days ago - 3 comments
Labels: feature request
#6525 - ollama collapses CPU
Issue - State: closed - Opened by Hyphaed 23 days ago - 9 comments
Labels: bug
#6524 - server: clean up route names for consistency
Pull Request - State: closed - Opened by jmorganca 23 days ago
#6523 - llama: clean up sync
Pull Request - State: closed - Opened by jmorganca 23 days ago
#6522 - detect chat template from configs that contain lists
Pull Request - State: closed - Opened by mxyng 23 days ago - 1 comment
#6521 - Go Server Fixes
Pull Request - State: closed - Opened by jessegross 23 days ago
#6520 - environmental variable not passed to service
Issue - State: closed - Opened by N4S4 24 days ago - 4 comments
Labels: bug
#6519 - Installer: Linux : Ask if unit/systemd file needs to be recreated or left alone
Issue - State: closed - Opened by jonz-secops 24 days ago - 1 comment
Labels: feature request
#6518 - Unable to run on tcp4/ipv4 on Lambda Labs instance
Issue - State: open - Opened by bayadyne 24 days ago - 2 comments
Labels: bug, needs more info
#6517 - Is Fine-Tuning Supported in Ollama?
Issue - State: closed - Opened by parthipan76 24 days ago - 1 comment
Labels: bug
#6516 - OLLAMA_NUM_PARALLEL with Gemma-2-9B model
Issue - State: closed - Opened by lihkinVerma 24 days ago - 3 comments
Labels: bug
#6515 - "ollama run qwen2" return "the resource allocation failed"
Issue - State: closed - Opened by fenggaobj 24 days ago - 6 comments
Labels: bug, nvidia
#6514 - Implicit openai model parameter multiplication disabled
Pull Request - State: closed - Opened by yaroslavyaroslav 24 days ago - 6 comments
#6513 - magnum-v2.5-12b-kto and magnum-v2-12b not running on ollama
Issue - State: open - Opened by Tuxaios 24 days ago - 8 comments
Labels: bug
#6512 - Error downloading the ollama tgz file
Issue - State: closed - Opened by CjhHa1 24 days ago - 4 comments
Labels: bug
#6510 - Performing GET request to registry.ollama.ai/v2/ returns 404 page not found
Issue - State: open - Opened by yeahdongcn 24 days ago - 3 comments
Labels: bug
#6507 - Create Blob API returned nothing
Issue - State: closed - Opened by cool-firer 24 days ago - 1 comment
Labels: bug
#6506 - Ollama with RX6600, Openwebui and win 11 support
Issue - State: closed - Opened by kevinleijh 24 days ago - 2 comments
Labels: feature request, windows, amd
#6505 - glm4 model function call support
Issue - State: open - Opened by xiaopa233 25 days ago - 10 comments
Labels: feature request, model request
#6504 - openai: increase context window when max_tokens is provided
Pull Request - State: open - Opened by jmorganca 25 days ago - 1 comment
#6503 - add integration: py-gpt
Pull Request - State: open - Opened by szczyglis-dev 25 days ago - 1 comment
#6502 - ONNX backend runtime support to simplify HW support?
Issue - State: open - Opened by TheSpaceGod 25 days ago - 3 comments
Labels: feature request
#6497 - Sync master for support MiniCPM-V 2.5 and 2.6
Pull Request - State: closed - Opened by tc-mb 25 days ago - 2 comments
#6495 - Detect running in a container
Pull Request - State: open - Opened by dhiltgen 25 days ago
#6494 - igpu
Issue - State: closed - Opened by ayttop 25 days ago - 6 comments
Labels: feature request
#6493 - Scheduler should respect main_gpu on multi-gpu setup
Issue - State: open - Opened by henryclw 26 days ago - 3 comments
Labels: bug
#6492 - Models drastically quality drop on `chat/completions` gateway
Issue - State: closed - Opened by yaroslavyaroslav 26 days ago - 7 comments
Labels: bug
#6491 - Jamba 1.5 Model
Issue - State: open - Opened by sanjibnarzary 26 days ago - 2 comments
Labels: model request
#6490 - WISPER
Issue - State: closed - Opened by DewiarQR 26 days ago - 2 comments
Labels: model request
#6489 - Error 403 occurs when I call ollama's api
Issue - State: closed - Opened by brownplayer 26 days ago - 10 comments
Labels: bug
#6488 - Ollama serve crashes with llama3.1:70b
Issue - State: open - Opened by remco-pc 26 days ago - 6 comments
Labels: bug, needs more info
#6487 - When invoked from the command line in an active conversation session, missing model for `/load` shouldn't be fatal error
Issue - State: open - Opened by erkinalp 26 days ago
Labels: bug
#6486 - add LongWrtier Llama3.1 8b and LongWrtier GLM4 9b
Issue - State: open - Opened by Willian7004 26 days ago - 5 comments
Labels: model request
#6485 - Optimize container images for startup
Pull Request - State: closed - Opened by dhiltgen 26 days ago - 3 comments
#6484 - Only enable numa on CPUs
Pull Request - State: closed - Opened by dhiltgen 26 days ago
#6483 - gpu: Group GPU Library sets by variant
Pull Request - State: closed - Opened by dhiltgen 27 days ago
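Since Ecosyste.ms exposes this listing through an open API, the records above can also be fetched and filtered programmatically. A minimal sketch follows; the endpoint layout (`/api/v1/hosts/{host}/repositories/{repo}/issues` on `issues.ecosyste.ms`) and the record shape are assumptions to verify against the live API documentation:

```python
from urllib.parse import quote

# Assumed base URL for the Ecosyste.ms issues API; confirm against the docs.
BASE = "https://issues.ecosyste.ms/api/v1"

def issues_url(host: str, repo: str) -> str:
    """Build the (assumed) issues endpoint URL for one repository.

    The repo slug is percent-encoded so "ollama/ollama" becomes
    "ollama%2Follama" in the path, matching the breadcrumb above.
    """
    return f"{BASE}/hosts/{quote(host)}/repositories/{quote(repo, safe='')}/issues"

def filter_by_label(issues: list[dict], label: str) -> list[dict]:
    """Keep only records carrying the given label."""
    return [i for i in issues if label in i.get("labels", [])]

# Illustrative records mirroring entries in the listing above;
# the actual API response fields may differ.
sample = [
    {"number": 6590, "state": "open", "labels": ["bug"]},
    {"number": 6589, "state": "open", "labels": []},
    {"number": 6574, "state": "closed", "labels": ["feature request"]},
]

print(issues_url("GitHub", "ollama/ollama"))
print([i["number"] for i in filter_by_label(sample, "bug")])
```

A real client would paginate the response and respect the service's rate limits; the filtering step works the same either way.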