An open API service providing issue and pull request metadata for open source projects.

Issue and pull request stats for GitHub / triton-inference-server / server

Last synced: 10 days ago

Total issues: 516
Total pull requests: 393
Average time to close issues: 2 months
Average time to close pull requests: 17 days
Total issue authors: 427
Total pull request authors: 54
Average comments per issue: 3.64
Average comments per pull request: 0.77
Merged pull requests: 278
Bot issues: 0
Bot pull requests: 0

Past year issues: 298
Past year pull requests: 323
Past year average time to close issues: 18 days
Past year average time to close pull requests: 6 days
Past year issue authors: 242
Past year pull request authors: 45
Past year average comments per issue: 1.83
Past year average comments per pull request: 0.65
Past year merged pull requests: 241
Past year bot issues: 0
Past year bot pull requests: 0

More repo stats: https://repos.ecosyste.ms/hosts/GitHub/repositories/triton-inference-server/server
JSON API: https://issues.ecosyste.ms/api/v1/hosts/GitHub/repositories/triton-inference-server%2Fserver
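
For programmatic access, the JSON API endpoint above can be queried directly. Below is a minimal sketch (Python, standard library only), assuming the endpoint returns a single JSON object of repository metadata; the exact field names in the response are not listed on this page and are treated as unknowns here:

```python
import json
import urllib.request

# Endpoint taken from the "JSON API" link above.
API_URL = (
    "https://issues.ecosyste.ms/api/v1/hosts/GitHub/"
    "repositories/triton-inference-server%2Fserver"
)

# Fetch and decode the repository's issue/PR metadata.
with urllib.request.urlopen(API_URL, timeout=30) as response:
    repo_stats = json.load(response)

# Print whatever top-level fields the API returns (field names are not
# guaranteed here; inspect the output to see the actual schema).
for key, value in sorted(repo_stats.items()):
    print(f"{key}: {value}")
```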

Issue Author Associations

  • None (511, 99.03%)
  • Contributor (5, 0.97%)

Pull Request Author Associations

  • Contributor (350, 89.06%)
  • None (20, 5.09%)
  • Member (12, 3.05%)
  • Collaborator (11, 2.80%)

Top Issue Labels

  • question (67)
  • enhancement (49)
  • bug (30)
  • investigating (26)
  • performance (17)
  • module: backends (16)
  • grpc (15)
  • crash (8)
  • module: platforms (8)
  • module: server (7)
  • build (5)
  • python (5)
  • TensorRT (4)
  • verify to close (3)
  • openai (3)
  • module: frontends (2)
  • pytorch (1)
  • kubernetes (1)
  • memory (1)
  • C API (1)
  • module: clients (1)
  • TensorRT-LLM (1)
  • good first issue (1)

Top Pull Request Labels

  • PR: test (26)
  • PR: ci (19)
  • PR: build (16)
  • cherry-pick (15)
  • PR: docs (13)
  • PR: fix (12)
  • PR: feat (11)
  • build (6)
  • bug (5)
  • PR: refactor (3)
  • PR: perf (2)
  • models: fix (2)
  • module: clients (1)
  • module: platforms (1)
  • investigating (1)
  • enhancement (1)
  • grpc (1)
  • kubernetes (1)
  • crash (1)