Reverse Dependencies of tritonclient
The following projects have a declared dependency on tritonclient:
- afilter — no summary
- afilter-vsp — no summary
- ariel-client-triton — Client utilities for the triton inference server
- BentoML — BentoML: Build Production-Grade AI Applications
- bionemo-controlled-generation — Guided molecule generation via the BioNemo cloud service
- body-organ-analysis — BOA is a tool for segmentation of CT scans developed by the SHIP-AI group at the Institute for Artificial Intelligence in Medicine (https://ship-ai.ikim.nrw/). Combining the TotalSegmentator and the Body Composition Analysis, this tool is capable of analyzing medical images and identifying the different structures within the human body, including bones, muscles, organs, and blood vessels.
- clarifai — Clarifai Python SDK
- confai — build ai with config
- datumaro — Dataset Management Framework (Datumaro)
- drlx — DRLX is a library for distributed training of diffusion models via RL
- example-pkg-krish9d — A small example package
- fastdeploy-llm — FastDeploy for Large Language Model
- fastnn — A python library and framework for fast neural network computations.
- fedml — A research and production integrated edge-cloud library for federated/distributed machine learning, anywhere and at any scale.
- infer-client — Abstraction for AI Inference Client
- kentoml — no summary
- koinapy — Python client to communicate with Koina.
- kozmoserver — KozmoServer
- kserve-mathking — KServe Python SDK
- langchain-nvidia-trt — An integration package connecting TritonTensorRT and LangChain
- llama-index-llms-nvidia-triton — llama-index llms nvidia triton integration
- lpr-pkg — no summary
- m4-utils — Library of commonly used functions for machine learning and data science projects.
- ml4gw-hermes — Inference-as-a-Service deployment made simple
- mlflow-tritonserver — Tritonserver Mlflow Deployment
- mlserver — MLServer
- msir-infer — Inference client for msir inference service
- oktoberfest — Public repo oktoberfest
- OpenELM — Evolution Through Large Models
- paddle-pipelines — Paddle-Pipelines: An End-to-End Natural Language Processing Development Kit Based on PaddleNLP
- remyxai — no summary
- robotoff — Real-time and batch prediction service for Open Food Facts.
- sagemaker — Open source library for training and deploying models on Amazon SageMaker.
- stochasticx — Stochastic client library
- streaming-infer — streaming_infer
- tah-example-pkg — no summary
- titan-iris — no summary
- triton-bert — easy to use bert with nvidia triton inference server
- triton-model-analyzer — Triton Model Analyzer is a tool to analyze the runtime performance of one or more models on the Triton Inference Server
- triton-model-navigator — Triton Model Navigator: An inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton.
- triton-requests — A high level package for Nvidia Triton requests
- tritonclient-futures — A tritonclient wrapper built on the Python standard library's concurrent module and the requests library
- tritonv2 — project descriptions here
- tritony — Tiny configuration for Triton Inference Server
- windmilltritonv2 — project descriptions here
- wtu-mlflow-triton-plugin — W-Train Utils for MLflow Triton Plugin
- zerohertzLib — Zerohertz's Library
- zxftools — Toolkit
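A "declared dependency" means the project lists tritonclient (possibly with extras or version pins) in its package metadata, e.g. in a `Requires-Dist` field. As a minimal sketch, assuming requirement strings in the common PEP 508 shape, the following hypothetical helper extracts the base distribution name from such a declaration:

```python
import re

def base_name(requirement: str) -> str:
    """Extract the distribution name from a PEP 508-style requirement string,
    ignoring any extras ([grpc]), version specifiers (>=2.31), or markers."""
    m = re.match(r"\s*([A-Za-z0-9][A-Za-z0-9._-]*)", requirement)
    if not m:
        raise ValueError(f"not a valid requirement: {requirement!r}")
    return m.group(1)

# Different declarations that all count as a dependency on tritonclient:
for req in ["tritonclient", "tritonclient[grpc]>=2.31", "tritonclient==2.41.0"]:
    print(base_name(req))  # prints "tritonclient" each time
```

All of the projects above declare tritonclient in one of these forms, which is how a reverse-dependency index like this one groups them under a single package.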