Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / ray-project/ray-llm issues and pull requests
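The listing below can also be retrieved programmatically from the service. As a rough illustration only, the following Python sketch pages through this repository's issues; the endpoint path, query parameters, and response field names are assumptions about the Ecosyste.ms Issues API rather than confirmed details.

import requests  # third-party HTTP client

BASE = "https://issues.ecosyste.ms/api/v1"                # assumed API base URL
REPO = "ray-project/ray-llm"
url = f"{BASE}/hosts/GitHub/repositories/{REPO}/issues"   # assumed endpoint

page = 1
while True:
    resp = requests.get(url, params={"page": page, "per_page": 100}, timeout=30)
    resp.raise_for_status()
    items = resp.json()
    if not items:
        break
    for item in items:
        # 'number', 'title', 'state', 'user', 'comments_count', and
        # 'pull_request' are assumed field names in the JSON response.
        kind = "Pull Request" if item.get("pull_request") else "Issue"
        print(f"#{item['number']} - {item['title']} ({kind}, {item['state']}, "
              f"opened by {item['user']}, {item.get('comments_count', 0)} comments)")
    page += 1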
#152 - Update README.md for RayLLM archival
Pull Request - State: closed - Opened by akshay-anyscale 6 months ago
#151 - [DOC] Add instructions to install and run RayLLM backend locally
Pull Request - State: open - Opened by xwu99 6 months ago
#150 - Running ray-llm 0.5.0 on g4dn.12xlarge instance
Issue - State: open - Opened by golemsentience 7 months ago - 2 comments
#149 - Update to latest vLLM upstream and Support vLLM on CPU
Pull Request - State: open - Opened by xwu99 7 months ago - 9 comments
#148 - Is this project still actively being maintained?
Issue - State: open - Opened by nkwangleiGIT 7 months ago - 8 comments
#147 - templating serve-config and model config instead of copy and paste
Issue - State: open - Opened by nhaocalgary 7 months ago
#146 - follow up on removing explorer from readme
Pull Request - State: closed - Opened by ArturNiederfahrenhorst 8 months ago
#145 - Remove aviary.anyscale.com references from README
Pull Request - State: closed - Opened by ArturNiederfahrenhorst 8 months ago
#144 - Error when `serve run`
Issue - State: open - Opened by andakai 8 months ago - 1 comment
#143 - RAY-LLM stuck at replica step
Issue - State: open - Opened by NBTrong 8 months ago - 1 comment
#142 - Run rayllm frontend on head pod fails
Issue - State: open - Opened by viirya 8 months ago - 1 comment
#141 - `serve status` fails on the head pod after model is deployed
Issue - State: open - Opened by viirya 8 months ago
#140 - RayWorkerVllm Actor Dies After ~1h: The actor is dead because all references to the actor were removed.
Issue - State: open - Opened by cshyjak 8 months ago
#139 - Issue when running ray-llm with tensorrt-llm
Issue - State: open - Opened by leon-g-xu 8 months ago - 1 comment
#138 - Use vllm 0.3.3
Pull Request - State: closed - Opened by leon-g-xu 8 months ago
#137 - There should be a feature saying that all of the 3 options are wrong
Issue - State: open - Opened by ollie-iterators 8 months ago - 1 comment
#136 - Support vllm 0.3.x
Pull Request - State: closed - Opened by leon-g-xu 9 months ago - 1 comment
#135 - Update deploy-on-eks.md
Pull Request - State: open - Opened by viirya 9 months ago
#134 - How to adjust engine kwargs from defaults values for models in `./models/`
Issue - State: closed - Opened by SamComber 9 months ago
#133 - Autoscaling support in Ray-llm
Issue - State: open - Opened by Jeffwan 9 months ago
#132 - Support for the Mistral based Embeddings models
Issue - State: open - Opened by lynkz-matt-psaltis 9 months ago
#131 - In the 0.5.0 release, some files appear to be missing
Issue - State: open - Opened by lynkz-matt-psaltis 9 months ago
#130 - Serve a new model without restarting RayLLM
Issue - State: open - Opened by k6l3 10 months ago - 1 comment
#129 - A100 not correctly detected / No available node types can fulfill resource request
Issue - State: open - Opened by wizche 10 months ago
#128 - Add examples to the prompt format docs
Pull Request - State: closed - Opened by alanwguo 10 months ago
#127 - Podman Error on red hat 9?
Issue - State: open - Opened by jayteaftw 10 months ago
#126 - Add more details about prompt format in the docs
Pull Request - State: closed - Opened by alanwguo 10 months ago - 1 comment
#125 - [BUG] workers do not launch on g5.12xlarges for the latest image 0.5.0.
Issue - State: open - Opened by JGSweets 10 months ago - 6 comments
#124 - Error when trying to run tensorrt model on ray
Issue - State: closed - Opened by rifkybujana 10 months ago - 1 comment
#123 - Queue-Worker System
Issue - State: open - Opened by AIApprentice101 10 months ago - 2 comments
#122 - [BUG] Docker -- anyscale/ray-llm:latest is essentially 0.3.0
Issue - State: closed - Opened by JGSweets 10 months ago - 1 comment
#121 - [BUG] serve.run host / port are deprecated, Cannot access port 8000 from outside the resource.
Issue - State: closed - Opened by JGSweets 10 months ago - 8 comments
#120 - Recently updated to 0.5.0 and can no longer deploy models -- add suitable Node Error
Issue - State: closed - Opened by JGSweets 10 months ago - 2 comments
#119 - Update vllm version
Pull Request - State: closed - Opened by sihanwang41 10 months ago
#118 - Add a example of Azure GPU
Issue - State: open - Opened by chenweisomebody126 10 months ago - 1 comment
#117 - How to use partial GPU?
Issue - State: open - Opened by rifkybujana 10 months ago - 2 comments
#116 - Anyscale/ray-llm:latest does not contain the serve_configs folder and models/continuous_batching/quantization folders
Issue - State: closed - Opened by WinsonSou 10 months ago - 2 comments
#115 - Update doc build ci job
Pull Request - State: closed - Opened by sihanwang41 10 months ago
#114 - Ray-LLM Head with VLLM Head throws configuration error
Issue - State: open - Opened by lynkz-matt-psaltis 10 months ago - 2 comments
#113 - Loading 13B model across 6 GPUs (Distributed Inference)
Issue - State: closed - Opened by WinsonSou 10 months ago - 2 comments
#112 - Permission denied when downloading or serving any model.
Issue - State: open - Opened by Anindyadeep 10 months ago
#111 - Release 0.5.0
Pull Request - State: closed - Opened by sihanwang41 10 months ago - 5 comments
#110 - Release 0.5.0
Pull Request - State: closed - Opened by sihanwang41 10 months ago
#109 - Cannot specify models using yaml as a list of dicts or LLMApp objects
Issue - State: open - Opened by victorserbu2709 11 months ago
#108 - Error: Tokenizer class does not exist when load local model
Issue - State: open - Opened by YixuanCao 11 months ago - 1 comment
#107 - Create Google--GcpGPT
Pull Request - State: open - Opened by NeelamMandavia 11 months ago - 1 comment
#106 - How to submit a LLM training job?
Issue - State: open - Opened by qingqiuhe 11 months ago - 2 comments
#105 - [New Model] Add an example for Mixtral 8 * 7B Instruct model
Issue - State: open - Opened by jinnig 11 months ago
#104 - Closed
Issue - State: closed - Opened by nyagah 11 months ago - 1 comment
#103 - depoyment of quantized models fails
Issue - State: open - Opened by matthiasmfr 11 months ago - 1 comment
#102 - Record telemetry when RayLLM is launched using a Serve config
Pull Request - State: closed - Opened by shrekris-anyscale 11 months ago
#101 - No available node types can fulfill resource request defaultdict - error on local deployment
Issue - State: open - Opened by NikolayTV 11 months ago - 1 comment
#100 - Remote address refuse queries
Issue - State: open - Opened by rifkybujana 12 months ago - 1 comment
#99 - Is there a way to increase the scaling up speed?
Issue - State: open - Opened by rifkybujana 12 months ago - 2 comments
#98 - Support function calling models
Issue - State: open - Opened by richardliaw 12 months ago
#97 - Support for multi-modal models
Issue - State: open - Opened by richardliaw 12 months ago
#96 - Langchain integration
Issue - State: closed - Opened by XBeg9 12 months ago - 1 comment
#95 - Add AWQ and SqueezeLLM quantization configs
Pull Request - State: closed - Opened by uvikas 12 months ago
#94 - Multiple models second models always request GPU: 1
Issue - State: open - Opened by lynkz-matt-psaltis about 1 year ago - 2 comments
#93 - Update deploy-on-gke.md
Pull Request - State: open - Opened by ChaosEternal about 1 year ago
#92 - fix: no attribute 'set_url'
Pull Request - State: closed - Opened by marov about 1 year ago
#91 - Request for Comment: RayLLM <-> FastChat Integration
Issue - State: open - Opened by Extremys about 1 year ago - 2 comments
#90 - LLM Deployment Observability
Issue - State: open - Opened by roelschr about 1 year ago - 3 comments
#89 - rayllm's frontend can't work properly via rayllm:0.4.0 image
Issue - State: closed - Opened by k0286 about 1 year ago
#88 - VLLM Ray Workers are being killed by GCS
Issue - State: closed - Opened by rtwang1997 about 1 year ago - 6 comments
#87 - Rename ray-llm docker image name in doc
Pull Request - State: closed - Opened by sihanwang41 about 1 year ago
#86 - fix: Add explanation for copying files
Pull Request - State: open - Opened by enori about 1 year ago
#85 - Doc/Config update for rayllm
Pull Request - State: closed - Opened by sihanwang41 about 1 year ago
#84 - fix: docker image references
Pull Request - State: closed - Opened by roelschr about 1 year ago - 1 comment
#83 - Update README.md to use new docker repo name
Pull Request - State: closed - Opened by ethanyanjiali about 1 year ago - 1 comment
#82 - Add AWQ Quantized Llama 2 70B Model Config & Update README
Pull Request - State: closed - Opened by YQ-Wang about 1 year ago - 4 comments
#81 - No example for quantized model
Issue - State: closed - Opened by jinnig about 1 year ago - 2 comments
#80 - [doc] Cannot deploy an LLM model on EKS with KubeRay
Issue - State: open - Opened by enori about 1 year ago - 3 comments
#79 - 0.4.0 release
Pull Request - State: closed - Opened by avnishn about 1 year ago
#78 - Anyscale Image
Issue - State: open - Opened by harsh-goglocal about 1 year ago - 2 comments
#77 - Update README.md with link to kuberay instructions
Pull Request - State: closed - Opened by akshay-anyscale about 1 year ago
#76 - Issues serving other models from HF
Issue - State: closed - Opened by kenthua about 1 year ago - 5 comments
#75 - [Docs] "max_total_tokens" is missing in the doc
Issue - State: open - Opened by scottsun94 about 1 year ago
#74 - [docs] Improve docs around configuration
Issue - State: open - Opened by richardliaw about 1 year ago - 2 comments
Labels: documentation
#73 - Deploying RayLLM locally failed with exit code 0 even if deployment is ready
Issue - State: open - Opened by lamhoangtung about 1 year ago - 1 comment
#72 - Ray LLM on Nvidia RTX series?
Issue - State: open - Opened by shahrukhx01 about 1 year ago - 3 comments
#71 - Update method of accessing Serve controller
Pull Request - State: closed - Opened by shrekris-anyscale about 1 year ago
#70 - fixes cluster yaml to use the correct resource attribute naming
Pull Request - State: closed - Opened by JGSweets about 1 year ago - 1 comment
#69 - Set `route_prefixes` in Serve configs to `/`
Pull Request - State: closed - Opened by shrekris-anyscale about 1 year ago
#68 - Revert "Fix serve config parsing"
Pull Request - State: closed - Opened by Yard1 about 1 year ago
#67 - [docs] Update the application's config to be compatible with v0.3.0
Pull Request - State: closed - Opened by YQ-Wang about 1 year ago - 1 comment
#66 - Fix serve config parsing
Pull Request - State: closed - Opened by gvspraveen about 1 year ago
#65 - Add Serve config for LightGPT
Pull Request - State: closed - Opened by shrekris-anyscale about 1 year ago
#64 - Add back LightGPT
Pull Request - State: closed - Opened by Yard1 about 1 year ago
#63 - [docs] Update changes to RayLLM
Pull Request - State: closed - Opened by richardliaw about 1 year ago
#62 - RayLLM v0.3.0
Pull Request - State: closed - Opened by Yard1 about 1 year ago - 1 comment
#61 - issue with run locally
Issue - State: open - Opened by omlomloml about 1 year ago - 1 comment
#60 - ray-llm support for ML Accelerators (Google's TPU, AWS Inferential & etc)
Issue - State: closed - Opened by sudujr about 1 year ago - 1 comment
#59 - Minor Update for Model Warmup
Pull Request - State: closed - Opened by FerdinandZhong about 1 year ago - 1 comment
#58 - Follow the doc to deploy llama2 70b throws error
Issue - State: closed - Opened by YQ-Wang about 1 year ago - 1 comment
#57 - Embedding model support in ray-llm
Issue - State: closed - Opened by YQ-Wang about 1 year ago - 1 comment
Labels: enhancement
#56 - [docs] GKE / EKS updates
Pull Request - State: closed - Opened by richardliaw about 1 year ago
#55 - S3 bucket model download fails silently if the cluster doesn't have the right permissions
Issue - State: open - Opened by architkulkarni about 1 year ago - 1 comment
#54 - Create serve configs to deploy LLMs to production
Pull Request - State: closed - Opened by shrekris-anyscale about 1 year ago
#53 - Weight caching being based on model-id creates confusion
Issue - State: open - Opened by ArturNiederfahrenhorst about 1 year ago