Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / intel-analytics/BigDL issues and pull requests
#10656 - fix stablelm2 1.6b
Pull Request - State: closed - Opened by qiuxin2012 6 months ago
#10655 - update the video demo for coding copilot quickstart
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10654 - Update quickstart
Pull Request - State: closed - Opened by jason-dai 6 months ago
#10653 - revise ollama quickstart
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10652 - add langchain-chatchat quickstart
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10651 - update indexes, move some sections in coding quickstart to webui
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10650 - update coding quickstart and webui quickstart for warmup note
Pull Request - State: closed - Opened by shane-huang 6 months ago
#10649 - add ollama quickstart
Pull Request - State: closed - Opened by pengyb2001 6 months ago
#10648 - LLM: support int4 fp16 chatglm2-6b 8k input.
Pull Request - State: open - Opened by lalalapotter 6 months ago
Labels: llm
#10647 - LLM: upgrade deepspeed in AutoTP on GPU
Pull Request - State: open - Opened by plusbang 6 months ago
Labels: llm
#10646 - Change style for video rendering in WebUI quickstart
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10645 - fix wrong cpu core num seen by qlora
Pull Request - State: closed - Opened by Uxito-Ada 6 months ago
#10644 - update spr perf test runner
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10643 - Add GPU and CPU example for stablelm-zephyr-3b
Pull Request - State: closed - Opened by JinBridger 6 months ago
#10642 - optimize starcoder2-3b normal kv cache
Pull Request - State: closed - Opened by MeouSker77 6 months ago
#10641 - Support vllm tensor parallel
Pull Request - State: open - Opened by gc-fu 6 months ago
#10640 - Add Deepspeed TP Example of FLEX Mistral
Pull Request - State: closed - Opened by Uxito-Ada 6 months ago
#10639 - Bump ossf/scorecard-action to v2.3.1
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10638 - verify and refine ipex-llm-finetune-qlora-xpu docker document
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10637 - fix prompt format for llama-2 in langchain
Pull Request - State: closed - Opened by ivy-lv11 6 months ago
#10636 - fix stablelm logits diff
Pull Request - State: closed - Opened by qiuxin2012 6 months ago
#10635 - fix TUF invalid key bug in openssf check
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10634 - test scorecard
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10633 - Fix qwen-vl style
Pull Request - State: closed - Opened by jenniew 6 months ago - 1 comment
#10632 - Update Moss-moe example README to install transformers<4.34
Pull Request - State: closed - Opened by jenniew 6 months ago
#10631 - Update Replit example Readme to install transformers < 4.35
Pull Request - State: closed - Opened by jenniew 6 months ago - 1 comment
#10630 - Update WebUI Quickstart
Pull Request - State: closed - Opened by jason-dai 6 months ago
#10629 - Add seq len check for llama softmax upcast to fp32
Pull Request - State: closed - Opened by hkvision 6 months ago
#10628 - [Langchain-Chatchat]Add time consumption msg about first token and rest tokens
Issue - State: open - Opened by johnysh 6 months ago - 1 comment
Labels: user issue
#10627 - add test api transformer_int4_fp16_gpu
Pull Request - State: open - Opened by pengyb2001 6 months ago - 1 comment
#10626 - Update llamaindex README
Pull Request - State: open - Opened by hxsz1997 6 months ago
#10625 - optimize starcoder2-3b
Pull Request - State: closed - Opened by MeouSker77 6 months ago
#10624 - Port llamaindex json query engine example
Pull Request - State: open - Opened by hxsz1997 6 months ago
#10623 - LLM: support bigdl quantize kv cache env and add warning.
Pull Request - State: closed - Opened by lalalapotter 6 months ago
Labels: llm
#10622 - there is no output for inference of Qwen-7B-chat with FP8 weight-only based all-in-one
Issue - State: open - Opened by Fred-cell 6 months ago - 3 comments
Labels: user issue
#10621 - LLM: add `cpu_embedding` and peak memory record for deepspeed autotp script
Pull Request - State: closed - Opened by plusbang 6 months ago
Labels: llm
#10620 - add python style check
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10619 - Optimize StableLM
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago - 1 comment
#10618 - Tiny fix to win install guide
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10617 - Migrate portable zip to ipex-llm
Pull Request - State: open - Opened by JinBridger 6 months ago
#10616 - ModuleNotFoundError: No module named 'transformers_modules.Qwen-7B-Chat-Int4'
Issue - State: closed - Opened by ChenVkl 6 months ago - 6 comments
Labels: user issue
#10615 - refine and verify ipex-llm-serving-xpu docker document
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10614 - Fix llava example to support transformers 4.36
Pull Request - State: open - Opened by jenniew 6 months ago
#10613 - Converting mistralai/Mistral-7B-Instruct-v0.2 to lower 4 bit running into error
Issue - State: open - Opened by tsantra 6 months ago - 1 comment
Labels: user issue
#10612 - Fix starcoder first token perf
Pull Request - State: closed - Opened by hkvision 6 months ago
#10611 - LLM: fix llama2 FP16 & bs>1 & autotp on PVC and ARC
Pull Request - State: closed - Opened by plusbang 6 months ago
Labels: llm
#10610 - Add continue quickstart
Pull Request - State: closed - Opened by Mingyu-Wei 6 months ago
#10609 - Fix llama
Pull Request - State: open - Opened by ivy-lv11 6 months ago - 3 comments
#10608 - Modify the link in Langchain-upstream ut
Pull Request - State: closed - Opened by Zhangky11 6 months ago
#10607 - starcoder2-3B model for reset token latency
Issue - State: open - Opened by juan-OY 6 months ago
Labels: user issue
#10606 - LLM: remove ipex.optimize for gpt-j
Pull Request - State: closed - Opened by rnwang04 6 months ago
#10605 - LangChain-Chatchat shows RuntimeError: could not create a primitive
Issue - State: open - Opened by MYaoBQ 6 months ago - 5 comments
Labels: user issue
#10604 - Inference on GPU Error - [RuntimeError: Native API failed. Native API returns: -5 (PI_ERROR_OUT_OF_RESOURCES) -5 (PI_ERROR_OUT_OF_RESOURCES)]
Issue - State: open - Opened by Mushtaq-BGA 6 months ago - 3 comments
Labels: user issue
#10601 - gpt-j: NameError: name 'ipex' is not defined
Issue - State: open - Opened by ywang30intel 6 months ago - 1 comment
Labels: user issue
#10600 - RecursionError at get_peft_model when using lora/qlora
Issue - State: closed - Opened by Aloereed 6 months ago - 2 comments
Labels: user issue
#10599 - inference Llama-2-13b-chat with W4A16, the latency of next token with 1k input is slower than 2k input
Issue - State: closed - Opened by Fred-cell 6 months ago - 2 comments
Labels: user issue
#10598 - inference Llama-2-7b-chat with W4A16, the latency of next token with 1k input is slower than 2k input
Issue - State: closed - Opened by Fred-cell 6 months ago - 6 comments
Labels: user issue
#10597 - Getting PI_ERROR_OUT_OF_RESOURCES -- resolved
Issue - State: open - Opened by gbertulf 6 months ago - 6 comments
Labels: user issue
#10596 - LLM: support iq1s for llama2-70b-hf
Pull Request - State: closed - Opened by rnwang04 6 months ago
#10595 - Ubuntu22.04 can not update gpu driver. Seems Like intel repo has somethings wrong!
Issue - State: closed - Opened by zhangjizxc 6 months ago - 1 comment
Labels: user issue
#10594 - Port LlamaIndex ReAct Agent Example
Pull Request - State: open - Opened by hxsz1997 6 months ago
#10593 - refine and verify ipex-llm-xpu docker document
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10592 - LLM: add memory optimization for llama.
Pull Request - State: closed - Opened by lalalapotter 6 months ago
Labels: llm
#10591 - fix rwkv with pip installer
Pull Request - State: closed - Opened by MeouSker77 6 months ago
#10590 - Llamaindex: add tokenizer_id and support chat
Pull Request - State: open - Opened by ivy-lv11 6 months ago
#10589 - Fix typo in Baichuan2 example
Pull Request - State: closed - Opened by Zhangky11 6 months ago
#10588 - Add tokenizer_id in Langchain
Pull Request - State: closed - Opened by ivy-lv11 6 months ago
#10587 - chatglm2-6b performance not good on Arc770
Issue - State: open - Opened by qing-xu-intel 6 months ago - 5 comments
Labels: user issue
#10586 - inference on CPU using docker image
Pull Request - State: open - Opened by songhappy 6 months ago
#10585 - nightly build docker images
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10584 - Enable kv cache quantization by default for flex when 1 < batch <= 8
Pull Request - State: closed - Opened by qiyuangong 6 months ago
#10583 - finetune qlora on CPU using docker image
Pull Request - State: open - Opened by songhappy 6 months ago
#10582 - Fix Qwen-VL example problem
Pull Request - State: closed - Opened by jenniew 6 months ago
#10581 - Small style fix in Install Guide
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10580 - LLM: check user env
Pull Request - State: closed - Opened by WeiguangHan 6 months ago - 4 comments
#10579 - LLM: Disable esimd sdp for PVC GPU when batch size>1
Pull Request - State: closed - Opened by lalalapotter 6 months ago - 2 comments
Labels: llm
#10578 - MTL failed to run rwkv-4-world-7b: Failed to load libsycl-fallback-bfloat16.spv
Issue - State: open - Opened by WeiguangHan 6 months ago - 1 comment
Labels: user issue
#10577 - Win install change oneapi to pip installer
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago - 2 comments
#10576 - Modify install_linux_gpu.md
Pull Request - State: closed - Opened by Zhangky11 6 months ago
#10575 - BigDL-A750-Qwen7b-Allocation is out of device memory on current platform.
Issue - State: open - Opened by ChenVkl 6 months ago - 3 comments
Labels: user issue
#10574 - LangChain-Chatchat ERROR: Exception in ASGI application
Issue - State: open - Opened by ywang30intel 6 months ago - 2 comments
Labels: user issue
#10573 - Add linux 6.5 kernel installation
Pull Request - State: closed - Opened by NovTi 6 months ago
#10572 - Fix qwen's position_ids no enough
Pull Request - State: closed - Opened by qiuxin2012 6 months ago
#10571 - Add SYCL_CACHE_PERSISTENT in doc and explain warmup in benchmark quickstart
Pull Request - State: closed - Opened by hkvision 6 months ago
#10570 - Langchain-chatchat for ipex-llm cannot run on Linux
Issue - State: closed - Opened by Rayegoe 6 months ago - 6 comments
Labels: user issue
#10569 - Delete llm/readme.md
Pull Request - State: closed - Opened by jason-dai 6 months ago
#10568 - Hide pip installer for windows install
Pull Request - State: closed - Opened by Oscilloscope98 6 months ago
#10567 - IPEX-LLM vLLM support 4 socket machine?
Issue - State: closed - Opened by Storm0921 6 months ago - 2 comments
Labels: user guide
#10566 - LLM: set different envs based on different Linux kernels
Pull Request - State: closed - Opened by WeiguangHan 6 months ago
#10565 - refine and verify ipex-inference-cpu docker document
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10564 - LLM: support iq1_s
Pull Request - State: closed - Opened by rnwang04 6 months ago
#10563 - Port llamaindex TextToSQL example
Pull Request - State: open - Opened by hxsz1997 6 months ago
#10562 - LLM: Fix wrong import in speculative
Pull Request - State: closed - Opened by xiangyuT 6 months ago
#10561 - Specify oneAPI version 2024.0 in documentation
Pull Request - State: closed - Opened by chtanch 6 months ago
#10560 - fix -1 top_k
Pull Request - State: closed - Opened by gc-fu 6 months ago
#10559 - Create scorecard.yml
Pull Request - State: closed - Opened by liu-shaojun 6 months ago
#10558 - LLM: fix abnormal output of fp16 deepspeed autotp
Pull Request - State: closed - Opened by plusbang 6 months ago
Labels: llm
#10557 - Update pip install to use --extra-index-url for ipex package
Pull Request - State: closed - Opened by chtanch 6 months ago - 6 comments
#10556 - LLM: fix `torch_dtype` setting of apply fp16 optimization through `optimize_model`
Pull Request - State: closed - Opened by plusbang 6 months ago
Labels: llm
#10555 - support fp8 in xetla
Pull Request - State: open - Opened by yangw1234 6 months ago