Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / oobabooga/text-generation-webui issues and pull requests
#6276 - Make compress_pos_emb float
Pull Request - State: closed - Opened by hocjordan 4 months ago - 1 comment
#6273 - Render the output without refocusing the cursor on the screen
Issue - State: open - Opened by srevenant 4 months ago - 3 comments - Labels: enhancement
#6272 - Saving a character causes the character selector to become blank and the UI fails to save logs correctly as a result
Issue - State: closed - Opened by sinful-developer 4 months ago - 2 comments - Labels: bug
#6270 - Update dependencies for getting LLAMA 3.1 to work
Issue - State: open - Opened by reuschling 4 months ago - 11 comments - Labels: enhancement
#6269 - Getting argument device not found error from utils.py
Issue - State: closed - Opened by JulienBeck 4 months ago - 2 comments - Labels: bug
#6267 - load Mistral-Nemo-Instruct-2407(exl2) fail
Issue - State: open - Opened by turandot2017 4 months ago - 1 comment - Labels: bug
#6266 - Request to Update Exllamav2 Module to version 0.1.8
Issue - State: closed - Opened by GrennKren 4 months ago - 3 comments - Labels: enhancement
#6264 - update llama.cpp
Issue - State: open - Opened by imilli 4 months ago - 5 comments - Labels: enhancement
#6256 - Truncate prompt value from settings
Issue - State: open - Opened by mykeehu 4 months ago - 1 comment - Labels: enhancement
#6255 - FileNotFoundError: Could not find module 'J:\AI\text-generation-webui\installer_files\env\Lib\site-packages\llama_cpp_cuda\lib\llama.dll'
Issue - State: closed - Opened by allrobot 4 months ago - 3 comments - Labels: bug
#6253 - [WinError 126] Error loading "backend_with_compiler.dll" Intel ARC
Issue - State: open - Opened by Magenta-Flutist 4 months ago - 5 comments - Labels: bug
#6250 - ExLlamav2 won't unload model commit 0315122c
Issue - State: open - Opened by Remowylliams 4 months ago - 1 comment - Labels: bug
#6246 - API Request not working correctly
Issue - State: open - Opened by Chepko932 4 months ago - 1 comment - Labels: bug
#6237 - All characters replying with 'Char:' in this version
Issue - State: closed - Opened by spike4379 4 months ago - 2 comments - Labels: bug
#6235 - Exception: Cannot import 'llama-cpp-cuda' because 'llama-cpp' is already imported. Switching to a different version of llama-cpp-python currently requires a server restart.
Issue - State: open - Opened by dark-passages 4 months ago - 4 comments - Labels: bug
#6225 - RuntimeWarning: Detected duplicate leading "<|begin_of_text|>" in prompt
Issue - State: open - Opened by Kaszebe 4 months ago - 4 comments - Labels: bug
#6210 - Illegal instruction (core dumped) after update
Issue - State: closed - Opened by NXTler 4 months ago - 4 comments - Labels: bug
#6209 - Oobabooga login not working through reverse proxy
Issue - State: open - Opened by binkleym 4 months ago - 2 comments - Labels: bug
#6207 - Saving the interface theme is missing
Issue - State: closed - Opened by Alkohole 4 months ago - Labels: bug
#6204 - Issue installing
Issue - State: open - Opened by jaimedeoya 4 months ago - 2 comments - Labels: bug
#6202 - Llama-cpp-python 0.2.81 'already loaded' fails to load models
Issue - State: open - Opened by Patronics 4 months ago - 18 comments - Labels: bug
#6189 - Whisper Fix, js replacement of the gradio element to prevent browser crash.
Pull Request - State: closed - Opened by RandomInternetPreson 4 months ago - 3 comments
#6176 - Resolving the 'LlamaCppModel' Object Missing 'Device' Attribute Error in OpenAI LogitsBiasProcessor function - Ubuntu Linux
Issue - State: open - Opened by BlueprintCoding 5 months ago - 2 comments - Labels: bug
#6169 - AMDGPU is broken on 1.8
Issue - State: open - Opened by pl752 5 months ago - 6 comments - Labels: bug
#6168 - Add Q4/Q8 cache for llama.cpp
Issue - State: open - Opened by GodEmperor785 5 months ago - 4 comments - Labels: enhancement
#6144 - Impossible to load DeepSeek-Coder-V2-Instruct.gguf
Issue - State: open - Opened by narikm 5 months ago - 10 comments - Labels: bug
#6138 - Update script will redownload from the beginning all files in temp_requirements.txt if there's any download problem, even in the last file.
Issue - State: open - Opened by CalculonPrime 5 months ago - 2 comments - Labels: bug
#6112 - Add support for Nvidia Optimum
Issue - State: open - Opened by Iaotle 5 months ago - 1 comment - Labels: enhancement
#6102 - start_macos.sh fails on unabling to find torch version
Issue - State: open - Opened by sadovsf 5 months ago - 1 comment - Labels: bug
#6085 - In-progress chat session resets / disappears when returning to chat after a time away
Issue - State: closed - Opened by MovingSymbols 5 months ago - 12 comments - Labels: bug
#6081 - hi
Issue - State: closed - Opened by ngaile 5 months ago - Labels: enhancement
#6036 - Can't connect api to SillyTavern or Agnai
Issue - State: open - Opened by alferitu 6 months ago - 2 comments - Labels: bug
#6028 - Full use of dual GPU
Issue - State: open - Opened by Skit5 6 months ago - 6 comments - Labels: enhancement
#6021 - Batched/multi replies
Issue - State: closed - Opened by Beinsezii 6 months ago - 5 comments - Labels: enhancement
#6019 - one click not working
Issue - State: open - Opened by pro9code 6 months ago - 7 comments - Labels: bug
#6003 - Multi-GPU cannot load transformers on a single card
Issue - State: closed - Opened by Urammar 6 months ago - 2 comments - Labels: bug, stale
#5998 - How can I change web port 7860 to another port?
Issue - State: closed - Opened by union-cmd 6 months ago - 2 comments - Labels: enhancement, stale
#5997 - Since some update min_token_length or minimum length has disapeared
Issue - State: closed - Opened by iboyles 6 months ago - 3 comments - Labels: bug, stale
#5995 - Failed to build the chat prompt.
Issue - State: open - Opened by thejohnd0e 6 months ago - 3 comments - Labels: bug
#5994 - Support for real-time TTS!
Pull Request - State: open - Opened by czuzu 6 months ago - 4 comments
#5986 - ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
Issue - State: open - Opened by nanshaws 6 months ago - 3 comments - Labels: bug
#5985 - RuntimeError: FlashAttention only supports Ampere GPUs or newer.
Issue - State: open - Opened by linzm1007 6 months ago - 3 comments - Labels: bug
#5983 - Incorrect LLaMA 3 Instruct tokenization with llama.cpp loader
Issue - State: closed - Opened by JohannesGaessler 6 months ago - 1 comment - Labels: bug, stale
#5981 - break external extentions
Issue - State: closed - Opened by kalle07 6 months ago - 1 comment - Labels: bug, stale
#5980 - character user
Issue - State: closed - Opened by kalle07 6 months ago - 9 comments - Labels: bug, stale
#5979 - CSS loading from incorrect endpoint
Issue - State: open - Opened by bars0um 6 months ago - 2 comments - Labels: bug
#5977 - Cant load llama 3 safetensor model
Issue - State: open - Opened by Bedoshady 6 months ago - 7 comments - Labels: bug
#5974 - not a directory
Issue - State: closed - Opened by moonciest 6 months ago - 1 comment - Labels: bug, stale
#5972 - Conda environment is empty.
Issue - State: open - Opened by wbrunoFC 7 months ago - 1 comment - Labels: bug
#5971 - Mismatch GPU architecture causes excessive vram usage for exl2
Issue - State: closed - Opened by lawallet 7 months ago - 1 comment - Labels: bug, stale
#5968 - I do not know how to use some llm models with gpu some deprecated stuff
Issue - State: closed - Opened by makrse 7 months ago - 3 comments - Labels: bug, stale
#5966 - The same model, with the same prompt has different performance in webui and LM Studio
Issue - State: open - Opened by AndreyRGW 7 months ago - 2 comments - Labels: bug
#5965 - Run Llama 3 70b locally combining ram and vram like with other apps?
Issue - State: closed - Opened by 311-code 7 months ago - 2 comments - Labels: enhancement, stale
#5956 - powershell install script
Issue - State: closed - Opened by Xcertik-Realist 7 months ago - 1 comment - Labels: enhancement, stale
#5952 - Fix async events for OpenAI API extension + Other small fixes
Pull Request - State: open - Opened by Artificiangel 7 months ago
#5950 - Request: Stop current generation in API
Issue - State: closed - Opened by Wladastic 7 months ago - 1 comment - Labels: enhancement, stale
#5949 - Low performance - need help
Issue - State: closed - Opened by RndUsr123 7 months ago - 4 comments - Labels: bug, stale
#5948 - if you have python installed already, you can just install the requiremenst_amd.txt and that would get you started.
Issue - State: closed - Opened by cgwers 7 months ago - 2 comments - Labels: stale
#5930 - PHI-3 128K GGUF - Model Fails to Load
Issue - State: closed - Opened by dmsweetser 7 months ago - 5 comments - Labels: bug, stale
#5928 - Unable to train LoRA
Issue - State: closed - Opened by Cohejh 7 months ago - 3 comments - Labels: bug, stale
#5925 - install error
Issue - State: closed - Opened by VladTheDestructor 7 months ago - 8 comments - Labels: bug, stale
#5919 - The checksum verification for miniconda_installer.exe has failed
Issue - State: open - Opened by cgwers 7 months ago - 5 comments - Labels: bug
#5914 - Can no longer start: AttributeError: module 'lib' has no attribute 'X509_V_FLAG_NOTIFY_POLICY'. Did you mean: 'X509_V_FLAG_EXPLICIT_POLICY'?
Issue - State: closed - Opened by thelabcat 7 months ago - 9 comments - Labels: bug, stale
#5902 - Update API documentation with examples to list/load models
Pull Request - State: closed - Opened by joachimchauvet 7 months ago
#5896 - requirements conflict between text-generation-webui, superbooga and coqui_tts
Issue - State: closed - Opened by bridgesense 7 months ago - 2 comments - Labels: bug, stale
#5894 - Auto-Split Error for GPTQ on Nvidia
Issue - State: closed - Opened by Zaxl445 7 months ago - 9 comments - Labels: bug, stale
#5888 - can't load models
Issue - State: closed - Opened by MenrichKoller 7 months ago - 7 comments - Labels: bug, stale
#5882 - error when start_windows.bat run
Issue - State: closed - Opened by YondSun 7 months ago - 2 comments - Labels: bug, stale
#5861 - rope_freq_base > 1,000,000
Issue - State: closed - Opened by oldgithubman 7 months ago - 4 comments - Labels: enhancement, stale
#5845 - when trying to use pygmalion-7b-4bit-128g-cuda-2048Token it gives a missing keyword error
Issue - State: closed - Opened by paulpackgithub 7 months ago - 3 comments - Labels: bug, stale
#5765 - Loading Model Error
Issue - State: open - Opened by MateuszUlan 8 months ago - 9 comments - Labels: bug
#5764 - loading model errors
Issue - State: closed - Opened by ARCIST-AI 8 months ago - 5 comments - Labels: bug, stale
#5736 - AWQ model error: ERROR Failed to load the model. NotImplementedError: Cannot copy out of meta tensor; no data!
Issue - State: closed - Opened by guispfilho 8 months ago - 3 comments - Labels: bug, stale
#5734 - AttributeError: 'function' object has no attribute '__wrapped__'
Issue - State: closed - Opened by varkappadev 8 months ago - 3 comments - Labels: bug, stale
#5705 - UserWarning: 1Torch was not compiled with flash attention.
Issue - State: open - Opened by capactiyvirus 8 months ago - 24 comments - Labels: bug
#5702 - EXL2 formatting is busted through the openai like API
Issue - State: closed - Opened by MrMojoR 8 months ago - 3 comments - Labels: bug, stale
#5689 - ERROR text-generation-webui???
Issue - State: closed - Opened by kangz543g 8 months ago - 4 comments - Labels: bug, stale
#5627 - Use webui with external OpenAI compatible model server
Issue - State: closed - Opened by StableLlama 8 months ago - 6 comments - Labels: enhancement, stale
#5603 - Multimodal for llama.cpp GGUF and Llava 1.6
Issue - State: closed - Opened by Dartvauder 9 months ago - 3 comments - Labels: enhancement
#5562 - Add support for Google Gemma Model
Issue - State: closed - Opened by shreyanshsaha 9 months ago - 13 comments - Labels: enhancement, stale
#5532 - Ollama Integration
Issue - State: closed - Opened by CHesketh76 9 months ago - 5 comments - Labels: enhancement
#5501 - Memory Management BSOD after loading model with exllamav2
Issue - State: closed - Opened by Norwaere 9 months ago - 6 comments - Labels: bug, stale
#5474 - Fixed, improved, and unified Docker environment
Pull Request - State: open - Opened by StefanDanielSchwarz 9 months ago - 2 comments
#5457 - Docker Install / Transformer Cache issue
Issue - State: closed - Opened by tilllt 9 months ago - 12 comments - Labels: bug, stale
#5408 - RuntimeError: Failed to import transformers.models.llama.modeling_llama
Issue - State: closed - Opened by HeroMines 10 months ago - 13 comments - Labels: bug, stale
#5381 - Swap to huggingface_hub get_token function
Pull Request - State: closed - Opened by Anthonyg5005 10 months ago - 1 comment
#5270 - No module named 'yaml'
Issue - State: closed - Opened by karsontsang23 10 months ago - 18 comments - Labels: bug, stale
#5185 - Add function calling ability to openai extension
Pull Request - State: closed - Opened by yhyu13 10 months ago - 20 comments
#5166 - Installation fails due to missing gradio, dateuil and probably others
Issue - State: closed - Opened by rowild 10 months ago - 3 comments - Labels: bug, stale
#5129 - Allow Path Recursive Search In Models Folder
Issue - State: closed - Opened by JiHa-Kim 11 months ago - 4 comments - Labels: stale
#5084 - Loading in Exllama results in enormous VRAM usage
Issue - State: closed - Opened by azulika 11 months ago - 3 comments - Labels: bug, stale
#5038 - Bug fixes for llava multimodal
Pull Request - State: open - Opened by szelok 11 months ago - 4 comments
#5036 - Unable to start with arguments --model liuhaotian_llava-v1.5-13b --multimodal-pipeline llava-v1.5-13b
Issue - State: closed - Opened by szelok 11 months ago - 14 comments - Labels: bug, stale
#4720 - Add supports for multiple system messages on OpenAI API extensions.
Issue - State: closed - Opened by justpain02 12 months ago - 5 comments - Labels: enhancement
#4707 - Support for Styletts2
Issue - State: closed - Opened by D3voz 12 months ago - 3 comments - Labels: enhancement, stale
#4672 - 404 Error: {"detail":"Not Found"} response to a post request made to the API running on port 5000 (/api/v1/generate)
Issue - State: closed - Opened by jaquielajoie 12 months ago - 3 comments - Labels: bug
#4561 - Adds multiple file input to superboogav2
Pull Request - State: open - Opened by carlulsoe about 1 year ago - 1 comment
#4371 - Error loading multiple LORAs on Transformers Adapter
Issue - State: closed - Opened by Xeba111 about 1 year ago - 13 comments - Labels: bug, stale
#4357 - RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback): DLL load failed while importing flash_attn_2_cuda: The specified module could not be found.
Issue - State: closed - Opened by ayush1268 about 1 year ago - 24 comments - Labels: enhancement, stale
#4195 - ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory
Issue - State: closed - Opened by mushinbush about 1 year ago - 13 comments - Labels: bug, stale