GitHub / marella/ctransformers issues and pull requests

#216 - `AutoModelForCausalLM` is stuck, no error

Issue - State: open - Opened by victoris93 5 months ago

#215 - I'm having trouble running gguf models

Issue - State: open - Opened by fatihsazan 6 months ago - 1 comment

#214 - IBM Granite models support?

Issue - State: open - Opened by 0wwafa 7 months ago

#213 - Link in readme broken

Issue - State: open - Opened by KansaiUser 12 months ago

#212 - model not loading on GPU

Issue - State: open - Opened by kot197 12 months ago

#211 - Problem accessing libctransformers.so

Issue - State: open - Opened by Ajayvenki about 1 year ago - 1 comment

#210 - Support for Llama3

Issue - State: open - Opened by gultar over 1 year ago - 2 comments

#208 - Error when trying to run on kali linux

Issue - State: open - Opened by Kuro0911 over 1 year ago

#207 - Add Support for Google/Gemma-2b-it

Issue - State: closed - Opened by Arya920 over 1 year ago

#206 - GGUF MODEL INFERENCE

Issue - State: open - Opened by 1234AP1234 over 1 year ago

#205 - Inputting embeddings directly

Issue - State: open - Opened by liechtym over 1 year ago

#204 - Does ctransformers support ollama models?

Issue - State: open - Opened by PriyaranjanMarathe over 1 year ago - 1 comment

#203 - Add support for Google's Gemma models

Issue - State: open - Opened by gultar over 1 year ago

#200 - fix: context_length has no effect

Pull Request - State: open - Opened by chosen-ox over 1 year ago

#199 - Cannot generate text on GPU

Issue - State: open - Opened by congson1293 over 1 year ago - 3 comments

#198 - Not working with gpu_layers

Issue - State: closed - Opened by MNekoRain over 1 year ago - 2 comments

#197 - Add support for Microsoft Phi-2

Issue - State: open - Opened by niutech over 1 year ago

#196 - Unsupported Model: Zephyr 'stablelm' GGUF

Issue - State: open - Opened by Jonathanjordan21 over 1 year ago

#195 - Pulling models outside of hf?

Issue - State: open - Opened by shell-skrimp over 1 year ago

#194 - Multimodal models compatibility

Issue - State: open - Opened by ParisNeo over 1 year ago

#193 - Precompiled ROCm and Metal wheels

Issue - State: open - Opened by ParisNeo over 1 year ago

#192 - Infinite token generation

Issue - State: closed - Opened by yukiarimo over 1 year ago - 1 comment

#188 - Feature request: RAG over local docs

Issue - State: open - Opened by nimzodisaster over 1 year ago

#187 - Fine-tuning option?

Issue - State: closed - Opened by yukiarimo over 1 year ago

#186 - OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found

Issue - State: open - Opened by luohao123 over 1 year ago - 1 comment

#184 - NotImplementedError when using CTransformers AutoTokenizer

Issue - State: open - Opened by NasonZ over 1 year ago - 3 comments

#183 - Segfault with DeepSeek GGUF models

Issue - State: closed - Opened by freckletonj over 1 year ago - 3 comments

#182 - Request: `stopping_criteria`

Issue - State: open - Opened by freckletonj over 1 year ago

#181 - Mistral Sliding Window Attention (SWA)

Issue - State: open - Opened by stygmate over 1 year ago - 1 comment

#180 - Something wrong with a generator

Issue - State: closed - Opened by yukiarimo over 1 year ago - 1 comment

#177 - Model not loading on GPU

Issue - State: open - Opened by AndreaLombax over 1 year ago - 1 comment

#176 - core dumped / segmentation fault

Issue - State: open - Opened by lysa324 over 1 year ago - 1 comment

#175 - Everything OK? Abandoned?

Issue - State: open - Opened by TheBloke over 1 year ago - 10 comments

#174 - Will rwkv be supported?

Issue - State: open - Opened by calvinweb over 1 year ago

#173 - Out of memory exits process

Issue - State: open - Opened by kczimm over 1 year ago

#172 - Slow + No config options

Issue - State: closed - Opened by yukiarimo over 1 year ago - 2 comments

#171 - Support for CUDA 11.8 and above

Pull Request - State: open - Opened by sujeendran over 1 year ago

#170 - CUDA 11.8

Issue - State: open - Opened by JeanChristopheMorinPerso almost 2 years ago - 1 comment

#168 - Requesting support for 'CausalLM' models

Issue - State: open - Opened by SixftOne almost 2 years ago

#166 - logprobs are greater than 0

Issue - State: open - Opened by RevanthRameshkumar almost 2 years ago

#164 - How to handle the token limitation for an LLM response?

Issue - State: open - Opened by phoenixthinker almost 2 years ago - 2 comments

#163 - GPU is not used even after specifying gpu_layers

Issue - State: open - Opened by YogeshTembe almost 2 years ago - 3 comments

#162 - CUDA error - the provided PTX was compiled with an unsupported toolchain

Issue - State: closed - Opened by melindmi almost 2 years ago - 1 comment

#161 - Adding additional models

Issue - State: open - Opened by harryjulian almost 2 years ago

#160 - Can't use AVX2 lib in Linux.

Issue - State: open - Opened by khanjandharaiya almost 2 years ago

#159 - Calculate Token spends

Issue - State: open - Opened by VpkPrasanna almost 2 years ago

#158 - How to know if my CPU supports BLAS?

Issue - State: open - Opened by AayushSameerShah almost 2 years ago - 1 comment

#157 - How to increase inference speed on CPU?

Issue - State: open - Opened by khanjandharaiya almost 2 years ago - 2 comments

#156 - Occasional Segmentation Fault

Issue - State: open - Opened by harryjulian almost 2 years ago - 1 comment

#155 - recover from `transformers 4.34 refactored`

Pull Request - State: open - Opened by victorlee0505 almost 2 years ago

#153 - Text is exceeding maximum context length (512)

Issue - State: closed - Opened by CHesketh76 almost 2 years ago - 1 comment

#150 - How to compute logits output in parallel for the whole input sequence?

Issue - State: open - Opened by djmMax almost 2 years ago - 2 comments

#149 - Support for Mistral

Issue - State: open - Opened by Ananderz almost 2 years ago - 10 comments

#148 - Regarding the model type_update for Starcoder/BigCode

Issue - State: open - Opened by ankit1063 almost 2 years ago

#147 - Instructions for compiling from scratch

Issue - State: open - Opened by RevanthRameshkumar almost 2 years ago

#146 - 2nd Generation is really bad

Issue - State: open - Opened by jojac47 almost 2 years ago

#144 - How to specify the maximum context length for my LLM

Issue - State: open - Opened by Harri1703 almost 2 years ago - 2 comments

#143 - CTransformers doesn't store the model in the right location

Issue - State: open - Opened by Yanni8 almost 2 years ago - 2 comments

#141 - feature request

Issue - State: open - Opened by thistleknot almost 2 years ago - 2 comments

#140 - Error during loading Codellama GGUF

Issue - State: open - Opened by GooDRomka almost 2 years ago - 1 comment

#137 - [AMD] Fix compilation issue with ROCm

Pull Request - State: open - Opened by bhargav almost 2 years ago - 6 comments

#136 - Remove GGML_USE_CUBLAS when CT_HIPBLAS is defined

Pull Request - State: open - Opened by muaiyadh almost 2 years ago - 4 comments

#135 - CT_HIPBLAS=1 fails to build on Arch (Could not build wheels for ctransformers)

Issue - State: closed - Opened by CrashTD almost 2 years ago - 2 comments

#134 - Unable to compile for ROCM on Ubuntu 22.04

Issue - State: open - Opened by bugfixin almost 2 years ago - 1 comment

#133 - Feat: cache_dir

Pull Request - State: closed - Opened by wheynelau almost 2 years ago - 3 comments

#132 - Unable to save to different folder

Issue - State: open - Opened by wheynelau almost 2 years ago

#131 - How do I make a model use MPS?

Issue - State: open - Opened by jmtayamada almost 2 years ago - 6 comments

#128 - Repeated text for longer prompts.

Issue - State: open - Opened by PawelFaron almost 2 years ago

#126 - n_ctx doesn't work for Yarn-Llama-2-13B-64K-GGUF?

Issue - State: open - Opened by surflip almost 2 years ago - 1 comment

#125 - Langchain with GPU not working

Issue - State: closed - Opened by drmwnrafi almost 2 years ago - 4 comments

#124 - Llama tokenizer cannot stop at </s>

Issue - State: open - Opened by lucasjinreal almost 2 years ago

#123 - About an OpenAI-API-like streaming server

Issue - State: open - Opened by lucasjinreal almost 2 years ago

#122 - Support for vision-language model

Issue - State: open - Opened by dnth almost 2 years ago

#121 - Code Llama 34B GGUF produces garbage after a certain point

Issue - State: closed - Opened by viktor-ferenczi almost 2 years ago - 6 comments

#120 - CUDA library without AVX2, FMA, F16C support possible?

Issue - State: closed - Opened by m-from-space almost 2 years ago - 2 comments

#119 - Using ctransformers with a LangChain pandas data frame agent

Issue - State: closed - Opened by deepthi97midasala almost 2 years ago - 2 comments

#118 - Streaming decode issue

Issue - State: open - Opened by lucasjinreal almost 2 years ago - 3 comments

#117 - Can I use it on Mac OS X Darwin 10.14?

Issue - State: open - Opened by andreapagliacci almost 2 years ago - 2 comments