GitHub stats for Tabrizian: issues and pull requests
Total issues: 23
Total pull requests: 227
Merged pull requests: 197
Average time to close issues: about 2 months
Average time to close pull requests: about 1 month
Average comments per issue: 2.78
Average comments per pull request: 1.33
Issues created
- tabrizian/learning-to-quantize: 3
- bakjs/bak: 2
- Bambil/Haho: 2
- microsoft/onnxruntime: 2
- pytorch/pytorch: 1
- 1995parham/krtp: 1
- onejgordon/flow-dashboard: 1
- reorx/httpstat: 1
- grpc/grpc: 1
- prefix-dev/pixi: 1
- Bambil/vue-mqttsocket: 1
- 1995parham/1995parham.github.io: 1
- mchav/with: 1
- fandogh/nuxt-helpers: 1
- apache/openwhisk-deploy-kube: 1
- triton-inference-server/onnxruntime_backend: 1
- 1995parham/dotfiles: 1
- mqttjs/async-mqtt: 1
Pull requests created
- triton-inference-server/python_backend: 74
- triton-inference-server/model_analyzer: 39
- NVIDIA/TensorRT-LLM: 22
- triton-inference-server/client: 14
- triton-inference-server/common: 12
- triton-inference-server/pytorch_backend: 8
- triton-inference-server/server: 7
- triton-inference-server/tensorrt_backend: 7
- triton-inference-server/core: 6
- triton-inference-server/backend: 5
- triton-inference-server/tutorials: 5
- triton-inference-server/onnxruntime_backend: 3
- triton-inference-server/tensorflow_backend: 3
- Bambil/Haho: 2
- triton-inference-server/vllm_backend: 2
- triton-inference-server/third_party: 2
- mqttjs/async-mqtt: 2
- vllm-project/vllm: 1
- dustin/beanstalk-tools: 1
- NVIDIA/GenerativeAIExamples: 1
- nuxt-community/starter-template: 1
- triton-inference-server/openvino_backend: 1
- pi0/subz-proxy: 1
- python/cpython: 1
- 1995parham/dotfiles: 1
- tabrizian/learning-to-quantize: 1
- ekalinin/dockerfile.vim: 1
- triton-inference-server/tensorrtllm_backend: 1
- pytorch/pytorch: 1
- triton-inference-server/square_backend: 1
- bakjs/bak: 1
Maintainer
- triton-inference-server/python_backend: 74
- triton-inference-server/model_analyzer: 39
- NVIDIA/TensorRT-LLM: 22
- triton-inference-server/common: 12
- triton-inference-server/pytorch_backend: 8
- triton-inference-server/tensorrt_backend: 7
- triton-inference-server/core: 6
- triton-inference-server/server: 5
- triton-inference-server/tutorials: 5
- triton-inference-server/backend: 5
- tabrizian/learning-to-quantize: 4
- triton-inference-server/onnxruntime_backend: 4
- triton-inference-server/tensorflow_backend: 3
- mqttjs/async-mqtt: 3
- triton-inference-server/vllm_backend: 2
Issue Author Associations
- None (11, 47.83%)
- Contributor (7, 30.43%)
- Owner (3, 13.04%)
- Member (1, 4.35%)
- Collaborator (1, 4.35%)
Pull Request Author Associations
- Member (199, 87.67%)
- Contributor (22, 9.69%)
- None (3, 1.32%)
- Collaborator (2, 0.88%)
- Owner (1, 0.44%)
Top Issue Labels
- question (2)
- help wanted (1)
- ep:CUDA (1)
- ep:TensorRT (1)
- documentation (1)
- model:transformer (1)
- kind/bug (1)
- lang/c++ (1)
- priority/P2 (1)
- disposition/requires reporter action (1)
- untriaged (1)
- enhancement (1)
- bug (1)
Top Pull Request Labels
- open source (1)
- module: dlpack (1)
- ciflow/trunk (1)
- topic: new feature (1)
- topic: improvements (1)
- release notes: export (1)
- docs (1)
- skip news (1)
- PR: perf (1)
- investigating (1)