Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.
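The listing below can also be retrieved programmatically. A minimal sketch of building a request URL for such a service, assuming (this endpoint path is an assumption, not confirmed by this page) that repository issues are exposed under `/api/v1/hosts/{host}/repositories/{repo}/issues`:

```python
import urllib.parse

def issues_url(host: str, repo: str, page: int = 1) -> str:
    # Build the (assumed) ecosyste.ms issues endpoint for a repository.
    # The "owner/name" repo string is percent-encoded because it contains a slash.
    encoded = urllib.parse.quote(repo, safe="")
    return (f"https://issues.ecosyste.ms/api/v1/hosts/{host}"
            f"/repositories/{encoded}/issues?page={page}")

print(issues_url("GitHub", "intel-analytics/BigDL"))
# https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel-analytics%2FBigDL/issues?page=1
```

The URL can then be fetched with any HTTP client; the response would contain per-issue metadata of the kind shown below (number, title, state, author, labels, comment count).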

GitHub / intel-analytics/BigDL issues and pull requests

#10341 - Error when executing "from bigdl.llm.langchain.llms import TransformersLLM"

Issue - State: closed - Opened by Zhuohua-HUANG 7 months ago - 4 comments
Labels: user issue

#10340 - Fix device_map bug by raise an error when using device_map=xpu

Pull Request - State: closed - Opened by Zhangky11 7 months ago - 1 comment

#10339 - [LLM Doc] Small fixes to oneAPI link

Pull Request - State: closed - Opened by Oscilloscope98 7 months ago

#10338 - Update llamaindex ut

Pull Request - State: closed - Opened by hxsz1997 7 months ago

#10337 - Examples: langchain RAG examples

Pull Request - State: closed - Opened by pengyb2001 7 months ago

#10336 - HuatuoGPT-7B will self Q & A with history by TextIteratorStreamer

Issue - State: closed - Opened by KiwiHana 7 months ago - 1 comment
Labels: user issue

#10335 - [LLM] Temp igpu perf test for newer driver

Pull Request - State: closed - Opened by Oscilloscope98 7 months ago

#10334 - Fix Baichuan2 prompt format

Pull Request - State: closed - Opened by NovTi 7 months ago - 1 comment

#10331 - Failed to run Llama 2 inference on Flex 140

Issue - State: open - Opened by HLneoh 7 months ago - 4 comments
Labels: user issue

#10330 - LLM: add quantize kv cache support for baichuan 7b and 13b.

Pull Request - State: open - Opened by lalalapotter 7 months ago
Labels: llm

#10329 - Optimize speculative decoding PVC memory usage

Pull Request - State: open - Opened by cyita 7 months ago

#10327 - fail to run model when load low bits instead of load original for qwen

Issue - State: closed - Opened by aoke79 7 months ago - 1 comment
Labels: user issue

#10326 - LLM: support quantized kv cache for Mistral in transformers >=4.36.0

Pull Request - State: closed - Opened by lalalapotter 7 months ago
Labels: llm

#10325 - Fix fschat DEP version error

Pull Request - State: closed - Opened by Romanticoseu 7 months ago - 4 comments

#10324 - optimize bge large performance

Pull Request - State: closed - Opened by MeouSker77 7 months ago

#10323 - [FastChat-integration] Add initial implementation for loader

Pull Request - State: open - Opened by gc-fu 7 months ago

#10322 - [LLM Doc] Restructure

Pull Request - State: closed - Opened by Oscilloscope98 7 months ago - 1 comment

#10321 - upload bigdl-llm wheel to sourceforge for backup

Pull Request - State: closed - Opened by liu-shaojun 7 months ago

#10320 - Failed to run Llama2-7B on Intel GPU

Issue - State: open - Opened by Mushtaq-BGA 7 months ago - 1 comment
Labels: user issue

#10319 - [WIP] LLM: Enable BigDL IPEX optimization for int4

Pull Request - State: open - Opened by xiangyuT 7 months ago

#10318 - First token lm_head optimization

Pull Request - State: open - Opened by cyita 7 months ago - 3 comments

#10317 - Empty cache for lm_head

Pull Request - State: closed - Opened by hkvision 7 months ago - 5 comments

#10316 - Update WebUI quickstart

Pull Request - State: closed - Opened by chtanch 7 months ago - 4 comments

#10315 - LLM: Compress some models to save space

Pull Request - State: closed - Opened by WeiguangHan 7 months ago

#10314 - Add llamaindex gpu example

Pull Request - State: closed - Opened by Ricky-Ting 7 months ago - 1 comment

#10313 - [LLM] Test `load_low_bit` in iGPU perf test on Windows

Pull Request - State: closed - Opened by Oscilloscope98 7 months ago

#10312 - LLM: compress some models to save space

Pull Request - State: closed - Opened by WeiguangHan 7 months ago - 1 comment