Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub / marella/ctransformers issues and pull requests
#213 - Link in readme broken
Issue -
State: open - Opened by KansaiUser about 2 months ago
#212 - model not loading on GPU
Issue -
State: open - Opened by kot197 2 months ago
#211 - Problem accessing libctransformers.so
Issue -
State: open - Opened by Ajayvenki 2 months ago
- 1 comment
#210 - Support for Llama3
Issue -
State: open - Opened by gultar 5 months ago
- 2 comments
#209 - OSError: .......cannot open shared object file: No such file or directory
Issue -
State: open - Opened by Saurabh11811 6 months ago
- 3 comments
#208 - Error when trying to run on kali linux
Issue -
State: open - Opened by Kuro0911 6 months ago
#207 - Add Support for Google/Gemma-2b-it
Issue -
State: closed - Opened by Arya920 7 months ago
#206 - GGUF MODEL INFERENCE
Issue -
State: open - Opened by 1234AP1234 7 months ago
#205 - Inputting embeddings directly
Issue -
State: open - Opened by liechtym 7 months ago
#204 - Does ctransformers support ollama models?
Issue -
State: open - Opened by PriyaranjanMarathe 7 months ago
- 1 comment
#203 - Add support for Google's Gemma models
Issue -
State: open - Opened by gultar 7 months ago
#202 - Does ctransformers boost the inference speed in llm inference?
Issue -
State: open - Opened by pradeepdev-1995 8 months ago
#201 - How to load the finetuned model in safetensors format(not in gguf)
Issue -
State: open - Opened by pradeepdev-1995 8 months ago
#200 - fix: context_length has no effect
Pull Request -
State: open - Opened by chosen-ox 8 months ago
#199 - Cannot generate text on GPU
Issue -
State: open - Opened by congson1293 9 months ago
- 3 comments
#198 - Not working with gpu_layers
Issue -
State: closed - Opened by MNekoRain 9 months ago
- 2 comments
#197 - Add support for Microsoft Phi-2
Issue -
State: open - Opened by niutech 10 months ago
#196 - Unsupported Model : Zephyr 'stablelm' GGUF
Issue -
State: open - Opened by Jonathanjordan21 10 months ago
#195 - Pulling models outside of hf?
Issue -
State: open - Opened by shell-skrimp 10 months ago
#194 - Multimodal models compatibility
Issue -
State: open - Opened by ParisNeo 10 months ago
#193 - precompiled rocm and metal wheels
Issue -
State: open - Opened by ParisNeo 10 months ago
#192 - Infinite token generation
Issue -
State: closed - Opened by yukiarimo 10 months ago
- 1 comment
#191 - GPTQ models are not respecting context_length or max_seq_len settings
Issue -
State: open - Opened by chrsbats 10 months ago
#190 - OSError: [WinError 1114] A dynamic link library (DLL) initialization routine failed
Issue -
State: open - Opened by saurabhbluebenz 10 months ago
#189 - Unclear error: GGML_ASSERT: D:\a\ctransformers\ctransformers\models\ggml/llama.cpp:453: data
Issue -
State: open - Opened by deveolper 10 months ago
- 3 comments
#188 - request feature: RAG of local docs
Issue -
State: open - Opened by nimzodisaster 10 months ago
#187 - Fine-tuning option?
Issue -
State: open - Opened by yukiarimo 10 months ago
#186 - OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found
Issue -
State: open - Opened by luohao123 10 months ago
- 1 comment
#184 - NotImplementedError when using CTransformers AutoTokenizer
Issue -
State: open - Opened by NasonZ 10 months ago
- 3 comments
#183 - Segfault with DeepSeek GGUF models
Issue -
State: closed - Opened by freckletonj 10 months ago
- 3 comments
#182 - Request: `stopping_criteria`
Issue -
State: open - Opened by freckletonj 10 months ago
#181 - Mistral Sliding Window Attention (SWA)
Issue -
State: open - Opened by stygmate 10 months ago
- 1 comment
#180 - Something wrong with a generator
Issue -
State: closed - Opened by yukiarimo 10 months ago
- 1 comment
#179 - [Question] Run CTransformer with oracle linux server hits error with libctransformers.so
Issue -
State: closed - Opened by guanw 11 months ago
- 1 comment
#178 - Adapt to Upcoming pip Behavior Change for --no-binary Option
Issue -
State: open - Opened by wenboown 11 months ago
#177 - Model not loading on GPU
Issue -
State: open - Opened by AndreaLombax 11 months ago
- 1 comment
#176 - core dumped / segmentation fault
Issue -
State: open - Opened by lysa324 11 months ago
- 1 comment
#175 - Everything OK? Abandoned?
Issue -
State: open - Opened by TheBloke 11 months ago
- 10 comments
#174 - Will rwkv be supported?
Issue -
State: open - Opened by calvinweb 11 months ago
#173 - Out of memory exits process
Issue -
State: open - Opened by kczimm 11 months ago
#172 - Slow + No config options
Issue -
State: closed - Opened by yukiarimo 11 months ago
- 2 comments
#171 - Support for cuda 11.8 and above
Pull Request -
State: open - Opened by sujeendran 11 months ago
#170 - Cuda 11.8
Issue -
State: open - Opened by JeanChristopheMorinPerso 11 months ago
- 1 comment
#169 - Mistral-7b Error: AttributeError: 'str' object has no attribute 'tolist'
Issue -
State: open - Opened by JannikSchneider12 11 months ago
- 1 comment
#168 - Requesting support for 'CausalLM' models
Issue -
State: open - Opened by SixftOne 11 months ago
#167 - AutoModelForCausalLM.from_pretrained(.., gpu_layers=..) gives Windows Error 0xc000001d
Issue -
State: open - Opened by JeremyBickel 11 months ago
- 1 comment
#166 - logprobs are greater than 0
Issue -
State: open - Opened by RevanthRameshkumar 12 months ago
#165 - CUDA error 222 at D:\a\ctransformers\ctransformers\models\ggml\ggml-cuda.cu:6045: the provided PTX was compiled with an unsupported toolchain.
Issue -
State: open - Opened by AnhNgDo 12 months ago
- 2 comments
#164 - How to handle the token limitation for a LLM response?
Issue -
State: open - Opened by phoenixthinker 12 months ago
- 2 comments
#163 - GPU is not used even after specifying gpu_layers
Issue -
State: open - Opened by YogeshTembe 12 months ago
- 3 comments
#162 - CUDA error - the provided PTX was compiled with an unsupported toolchain
Issue -
State: closed - Opened by melindmi 12 months ago
- 1 comment
#161 - Adding additional models
Issue -
State: open - Opened by harryjulian 12 months ago
#160 - Can't use AVX2 lib in Linux.
Issue -
State: open - Opened by khanjandharaiya 12 months ago
#159 - Calculate Token spends
Issue -
State: open - Opened by VpkPrasanna 12 months ago
#158 - How to know if my CPU supports BLAS?
Issue -
State: open - Opened by AayushSameerShah 12 months ago
- 1 comment
#157 - How to increase speed of inference speed for CPU?
Issue -
State: open - Opened by khanjandharaiya 12 months ago
- 2 comments
#156 - Occasional Segmentation Fault
Issue -
State: open - Opened by harryjulian 12 months ago
- 1 comment
#155 - recover from `transformers 4.34 refactored`
Pull Request -
State: open - Opened by victorlee0505 12 months ago
#154 - transformers 4.34 caused NotImplementedError when calling CTransformersTokenizer(PreTrainedTokenizer)
Issue -
State: open - Opened by victorlee0505 12 months ago
- 17 comments
#153 - Text is exceeding maximum context length (512)
Issue -
State: closed - Opened by CHesketh76 12 months ago
- 1 comment
#152 - Can I run ctransformers on linux? I guess it gives error for GLIBC version.
Issue -
State: closed - Opened by AayushSameerShah 12 months ago
- 1 comment
#151 - Is that the case that larger prompt takes longer time to just get started for the first token?
Issue -
State: open - Opened by AayushSameerShah 12 months ago
#150 - How to compute logits output in parallel for all the input sequence?
Issue -
State: open - Opened by djmMax about 1 year ago
- 2 comments
#149 - Support for Mistral
Issue -
State: open - Opened by Ananderz about 1 year ago
- 10 comments
#148 - Regarding the model type_update for Starcoder/BigCode
Issue -
State: open - Opened by ankit1063 about 1 year ago
#147 - Instructions for compiling from scratch
Issue -
State: open - Opened by RevanthRameshkumar about 1 year ago
#146 - 2nd Generation is really bad
Issue -
State: open - Opened by jojac47 about 1 year ago
#145 - While running the model, facing the error: `exception: access violation writing 0x000002B6F404B000`
Issue -
State: open - Opened by Saurav-Navdhare about 1 year ago
- 1 comment
#144 - How to specify Maximum Context Length for my llm
Issue -
State: open - Opened by Harri1703 about 1 year ago
- 2 comments
#143 - CTransformers doesn't store model on right location
Issue -
State: open - Opened by Yanni8 about 1 year ago
- 1 comment
#142 - I am getting a module not found error but I have ctransformers installed and the dll file is present.
Issue -
State: open - Opened by Harri1703 about 1 year ago
#141 - feature request
Issue -
State: open - Opened by thistleknot about 1 year ago
- 2 comments
#140 - Error during loading Codellama GGUF
Issue -
State: open - Opened by GooDRomka about 1 year ago
- 1 comment
#139 - CUDA error 35 at /home/runner/work/ctransformers/ctransformers/models/ggml/ggml-cuda.cu:5067: CUDA driver version is insufficient for CUDA runtime version
Issue -
State: open - Opened by thistleknot about 1 year ago
- 4 comments
#138 - Can I get a little clarification over my understanding for the terminologies and the GGUF models?
Issue -
State: open - Opened by AayushSameerShah about 1 year ago
#137 - [AMD] Fix compilation issue with ROCm
Pull Request -
State: open - Opened by bhargav about 1 year ago
- 6 comments
#136 - Remove GGML_USE_CUBLAS when CT_HIPBLAS is defined
Pull Request -
State: open - Opened by muaiyadh about 1 year ago
- 4 comments
#135 - CT_HIPBLAS=1 fails to build on Arch (Could not build wheels for ctransformers)
Issue -
State: closed - Opened by CrashTD about 1 year ago
- 2 comments
#134 - Unable to compile for ROCM on Ubuntu 22.04
Issue -
State: open - Opened by bugfixin about 1 year ago
- 1 comment
#133 - Feat: cache_dir
Pull Request -
State: closed - Opened by wheynelau about 1 year ago
- 3 comments
#132 - Unable to save to different folder
Issue -
State: open - Opened by wheynelau about 1 year ago
#131 - How do I make a model use mps?
Issue -
State: open - Opened by jmtayamada about 1 year ago
- 6 comments
#130 - I am not even seeing True or False Straightly it dropping out
Issue -
State: open - Opened by Bakulesh1Codes108 about 1 year ago
#129 - No matching distribution found for exllama==0.1.0; extra == "gptq" (from ctransformers[gptq])
Issue -
State: open - Opened by BajrangWappnet about 1 year ago
#128 - Repeated text for longer prompts.
Issue -
State: open - Opened by PawelFaron about 1 year ago
#127 - Is there a way to implement trust_remote_code like the regular transformers library has?
Issue -
State: open - Opened by ZeroUni about 1 year ago
#126 - n_ctx doesn't work for Yarn-Llama-2-13B-64K-GGUF?
Issue -
State: open - Opened by surflip about 1 year ago
- 1 comment
#125 - Langchain with GPU not working
Issue -
State: closed - Opened by drmwnrafi about 1 year ago
- 4 comments
#124 - Llama tokenizer can not stop at </s>
Issue -
State: open - Opened by lucasjinreal about 1 year ago
#123 - About streaming server in openai API like
Issue -
State: open - Opened by lucasjinreal about 1 year ago
#122 - Support for vision-language model
Issue -
State: open - Opened by dnth about 1 year ago
#121 - Code Llama 34B GGUF produces garbage after a certain point
Issue -
State: closed - Opened by viktor-ferenczi about 1 year ago
- 6 comments
#120 - CUDA library without AVX2, FMA, F16C support possible?
Issue -
State: closed - Opened by m-from-space about 1 year ago
- 2 comments
#119 - using ctransfromers for langchain agents pnadas data frame agent
Issue -
State: closed - Opened by deepthi97midasala about 1 year ago
- 2 comments
#118 - Streaming decode issue
Issue -
State: open - Opened by lucasjinreal about 1 year ago
- 3 comments
#117 - Can I use on Mac Os X Darwin 10.14?
Issue -
State: open - Opened by andreapagliacci about 1 year ago
- 2 comments
#116 - WizardCoder-Python-34b GGUF
Issue -
State: open - Opened by MichaelMartinez about 1 year ago