Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / intel-analytics/BigDL issues and pull requests
#10554 - Replace ipex with ipex-llm
Pull Request - State: closed - Opened by Romanticoseu 6 months ago
#10553 - core dump in ARC gpu while running phi2 and mistral
Issue - State: open - Opened by tsantra 6 months ago - 1 comment
Labels: user issue
#10552 - Update installation instructions to install oneapi 2024.0
Issue - State: open - Opened by yangw1234 6 months ago - 2 comments
#10551 - Install ipex-llm is resulting in timeout
Issue - State: open - Opened by shailesh837 6 months ago - 1 comment
Labels: user issue
#10550 - ImportError: /home/gta/miniconda3/envs/llm/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: iJIT_NotifyEvent
Issue - State: open - Opened by rmurtazi 6 months ago - 13 comments
Labels: user issue
#10549 - LLM: refactor logic of esimd sdp
Pull Request - State: closed - Opened by rnwang04 6 months ago
#10548 - [Serving] Fix fastchat breaks
Pull Request - State: closed - Opened by gc-fu 6 months ago - 1 comment
#10547 - change bigdl-llm-tutorial to ipex-llm-tutorial in README
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10546 - Add verified models in document index
Pull Request - State: closed - Opened by JinBridger 6 months ago
#10545 - replace bigdl-llm with ipex-llm
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10544 - Remove ipex-llm dependency in readme
Pull Request - State: closed - Opened by JinBridger 6 months ago - 2 comments
#10543 - LLM: add esimd sdp for pvc
Pull Request - State: closed - Opened by rnwang04 6 months ago
#10542 - LLM: split chatglm3's mlp and use mlp fusion
Pull Request - State: open - Opened by rnwang04 6 months ago
#10541 - AssertionError: daemonic processes are not allowed to have children
Issue - State: open - Opened by ywang30intel 6 months ago - 3 comments
Labels: user issue
#10540 - fix chatglm
Pull Request - State: closed - Opened by MeouSker77 6 months ago - 1 comment
#10539 - Enable Speculative Mixtral on CPU
Pull Request - State: open - Opened by Uxito-Ada 6 months ago
#10538 - 2 GPU settings for Llama2_7b is not working, per XPU-SMI device 0 @ 99% and device 1 @ 0% during execution --resolved
Issue - State: open - Opened by gbertulf 6 months ago - 1 comment
Labels: user issue
#10537 - init-llama-cpp.bat not getting created in windows to use ipex[cpp] as the backend for llama.cpp
Issue - State: open - Opened by rahulunair 6 months ago - 1 comment
Labels: API, user issue
#10536 - QWen 1.8B can't run 8k input after fused qkv
Issue - State: closed - Opened by hkvision 6 months ago - 1 comment
#10535 - [Doc] Index page small typo fix
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10534 - [Doc] Update IPEX-LLM Index Page
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10533 - add nightly_build workflow
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10532 - [Doc] IPEX-LLM Doc Layout Update
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10531 - enable fp4 fused mlp and qkv
Pull Request - State: closed - Opened by qiuxin2012 6 months ago
#10530 - update linux quickstart and formats of migration
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10529 - LLM: Add length check for IPEX-CPU speculative decoding
Pull Request - State: closed - Opened by xiangyuT 6 months ago
#10528 - Modify example from fp32 to fp16
Pull Request - State: open - Opened by Zhangky11 6 months ago
#10527 - LLM: fix mistral hidden_size setting for deepspeed autotp
Pull Request - State: closed - Opened by plusbang 6 months ago
#10526 - revise migration guide
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10525 - memory utilization for 1k input is larger than 3k input for baichuan2-7b with INT4 precision
Issue - State: open - Opened by Fred-cell 6 months ago - 1 comment
Labels: user issue
#10524 - Update README.md
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10523 - Update migration guide
Pull Request - State: closed - Opened by hzjane 6 months ago - 1 comment
#10522 - AttributeError: 'Linear' object has no attribute 'qtype'. Did you mean: 'type'?
Issue - State: closed - Opened by nazneenn 6 months ago - 7 comments
Labels: user issue
#10521 - move migration guide to quickstart
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10520 - update nightly test
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10519 - update readthedocs project name
Pull Request - State: closed - Opened by glorysdj 6 months ago
#10518 - Update README.md
Pull Request - State: closed - Opened by jason-dai 6 months ago
#10517 - ImportError: /lib64/libc.so.6: version `GLIBC_2.32' not found
Issue - State: open - Opened by devpramod 6 months ago - 4 comments
Labels: user issue
#10516 - pydantic.error_wrappers.ValidationError occurred when BigDL integrate with Kor
Issue - State: open - Opened by Zhuohua-HUANG 6 months ago - 4 comments
Labels: user issue
#10515 - Unable to run on dGPU
Issue - State: closed - Opened by dyedd 6 months ago - 16 comments
Labels: user issue
#10514 - Inference Qwen1.5-7B-Chat failed
Issue - State: closed - Opened by Fred-cell 6 months ago - 2 comments
Labels: user issue
#10513 - inference Llama-2-7b-chat-hf failed with 8k input and INT4 precision
Issue - State: open - Opened by Fred-cell 6 months ago - 3 comments
Labels: user issue
#10512 - Facing issue when install this library : "python3.9 -m pip install --ignore-installed PyYAML --pre --upgrade bigdl-chronos[pytorch]==2.4.0"
Issue - State: closed - Opened by SjeYinTeoIntel 6 months ago - 2 comments
Labels: user issue
#10511 - inference chatglm3-6b with int4 and 8k input prompt failed
Issue - State: open - Opened by Fred-cell 6 months ago - 1 comment
Labels: user issue
#10510 - Remove native_int4 in LangChain examples
Pull Request - State: closed - Opened by hxsz1997 6 months ago - 3 comments
#10509 - Test0322
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10508 - Remove native_int4 and move examples in transformers_int4
Pull Request - State: closed - Opened by hxsz1997 6 months ago
#10507 - Run neural-chat 7b inference with Deepspeed on Flex 140.
Issue - State: open - Opened by Vasud-ha 6 months ago - 5 comments
Labels: user issue
#10506 - Model inference issue
Issue - State: open - Opened by SJF-ECNU 6 months ago - 3 comments
Labels: user issue
#10505 - LLM: add windows related info in llama-cpp quickstart
Pull Request - State: closed - Opened by rnwang04 6 months ago - 3 comments
Labels: document
#10504 - LLM: fix baichuan7b quantize kv abnormal output.
Pull Request - State: closed - Opened by lalalapotter 6 months ago
Labels: llm
#10503 - fix a typo in yuan
Pull Request - State: closed - Opened by MeouSker77 6 months ago
#10502 - InvalidModule: Invalid SPIR-V module
Issue - State: open - Opened by zeminli 6 months ago - 4 comments
Labels: user issue
#10501 - pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default)
Issue - State: open - Opened by shailesh837 6 months ago - 2 comments
Labels: user issue
#10500 - Fix llava example to support transformerds 4.36
Pull Request - State: closed - Opened by jenniew 6 months ago
#10499 - Update Linux Quickstart
Pull Request - State: closed - Opened by hkvision 6 months ago
#10498 - LLM: don't export env variables when linux kernel is 6.5
Pull Request - State: closed - Opened by WeiguangHan 6 months ago - 2 comments
#10497 - Enable CPU Speculative Mixtral
Pull Request - State: closed - Opened by Uxito-Ada 6 months ago - 1 comment
#10496 - [LLM] Add nightly igpu perf test for INT4+FP16 1024-128
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10495 - performance regression for llama2-7b between 0320 and 0319 version based on Xeon platform + Arc A770
Issue - State: closed - Opened by Fred-cell 6 months ago - 4 comments
Labels: user issue
#10494 - Fix CPU finetuning docker
Pull Request - State: closed - Opened by Uxito-Ada 6 months ago
#10493 - Add FastChat bigdl_worker
Pull Request - State: closed - Opened by gc-fu 6 months ago - 4 comments
#10492 - remove unused python package from Dockerfile
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10491 - Run Qwen-7B-QAnything on MTL iGPU or Arc A770 get error
Issue - State: open - Opened by violet17 6 months ago - 1 comment
Labels: user issue
#10490 - lm_head empty_cache for more models
Pull Request - State: closed - Opened by hkvision 6 months ago - 1 comment
#10489 - Orca: fix failed orca action
Pull Request - State: closed - Opened by plusbang 6 months ago
#10488 - LLM: add empty cache in deepspeed autotp benchmark script
Pull Request - State: closed - Opened by plusbang 6 months ago
Labels: llm
#10487 - Update generate.py
Pull Request - State: open - Opened by CodingQinghao 6 months ago
#10486 - no output for Baichuan2-7b with 2k input prompt
Issue - State: closed - Opened by Fred-cell 7 months ago - 1 comment
Labels: user issue
#10485 - add sdp fp8 for qwen llama436 baichuan mistral baichuan2
Pull Request - State: closed - Opened by qiuxin2012 7 months ago - 1 comment
#10484 - LLM: fix deepspeed error of finetuning on xpu
Pull Request - State: closed - Opened by rnwang04 7 months ago - 7 comments
Labels: llm
#10483 - Fix `modules_not_to_convert` argument
Pull Request - State: closed - Opened by MeouSker77 7 months ago
#10482 - the performance of Qwen-7b with 1k input is slower than 2k input, either for memory utilization
Issue - State: open - Opened by Fred-cell 7 months ago - 6 comments
Labels: user issue
#10481 - Remove softmax upcast fp32 in llama
Pull Request - State: closed - Opened by hkvision 7 months ago - 1 comment
#10480 - LLM: Enable BigDL IPEX Int8
Pull Request - State: closed - Opened by xiangyuT 7 months ago
#10479 - the memory issue about Llama-2-7B inference increases too high with >4k input prompt
Issue - State: closed - Opened by Fred-cell 7 months ago - 3 comments
Labels: user issue
#10478 - `bigdl-llm-init` can't properly set environments variable
Issue - State: open - Opened by Cyberpunk1210 7 months ago - 1 comment
Labels: user issue
#10477 - LLM: change fp16 benchmark to model.half
Pull Request - State: closed - Opened by JinBridger 7 months ago
#10476 - LLM QLoRA script issue
Issue - State: open - Opened by JeNi0310 7 months ago - 2 comments
Labels: user issue
#10475 - Update serving doc
Pull Request - State: closed - Opened by Romanticoseu 7 months ago
#10474 - fix rwkv v5 fp16
Pull Request - State: closed - Opened by MeouSker77 7 months ago - 1 comment
#10473 - LLM: fix whiper model missing config.
Pull Request - State: closed - Opened by lalalapotter 7 months ago
Labels: llm
#10472 - Speed-up mixtral in pipeline parallel inference
Pull Request - State: closed - Opened by hzjane 7 months ago
#10471 - Add quick start of running LLM inference/finetuning on CPU/GPU
Pull Request - State: closed - Opened by songhappy 7 months ago
#10470 - miniCPM-V get error self and mat2 must have the same dtype, but got Half and Byte
Issue - State: closed - Opened by violet17 7 months ago - 2 comments
Labels: user issue
#10469 - Facing rust compiler issue with `pip install --pre --upgrade bigdl-llm[all] ` on windows
Issue - State: open - Opened by HimanshuJanbandhu 7 months ago - 3 comments
Labels: user issue
#10468 - ReAct in LangChain not working properly
Issue - State: closed - Opened by Zhuohua-HUANG 7 months ago - 2 comments
Labels: user issue
#10467 - How to load model in 4bit when useing BigDL-LLM FastChat Serving
Issue - State: closed - Opened by kunger97 7 months ago - 6 comments
Labels: user issue