Reverse Dependencies of accelerate
The following projects have a declared dependency on accelerate:
- llm-explorer — A Lakehouse LLM Explorer. Wrapper for spark, databricks and langchain processes
- llm-foundry — LLM Foundry
- LLM-keyword-extractor — This is a python package to extract keywords from a given text using LLMs
- llm-lens — llm-lens is a Python package for CV as NLP: run highly descriptive image modules on images, then pass those descriptions to a Large Language Model (LLM) to reason about the images.
- llm-optimized-inference — no summary
- llm-rankers — Pointwise, Listwise, Pairwise and Setwise Document Ranking with Large Language Models.
- llm-rs — Unofficial python bindings for llm-rs. 🐍❤️🦀
- llm-rs-cuda — Unofficial python bindings for llm-rs.
- llm-rs-metal — Unofficial python bindings for llm-rs. 🐍❤️🦀
- llm-rs-opencl — Unofficial python bindings for llm-rs. 🐍❤️🦀
- llm-serve — An LLM inference solution for quickly deploying production LLM services
- llm-toolkit — LLM Finetuning resource hub + toolkit
- llm-vm — An Open-Source AGI Server for Open-Source LLMs
- llm2openai — Create a Python package.
- llmlingua — To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
- llmlite — A library that helps to chat with all kinds of LLMs consistently.
- llmopenai — Create a Python package.
- llmpool — Large Language Models' pool management library
- llmtools — A package with useful functions for working with Large Language Models
- llmtuner — Easy-to-use LLM fine-tuning framework
- llmx — LLMX: A library for LLM Text Generation
- lm-buddy — Ray-centric library for finetuning and evaluation of (large) language models.
- lm-checkpoints — Simple library for loading checkpoints of language models.
- lm-eval — A framework for evaluating language models
- lm-polygraph — Uncertainty Estimation Toolkit for Transformer Language Models
- lmms-eval — A framework for evaluating large multi-modality language models
- lmql — A query language for language models.
- lmquant — This package is used for evaluating quantization of large foundation models in deep learning.
- lmwrapper — Wrapper around language model APIs
- LocalCat — Fine-tune Large Language Models locally.
- localretriever — A simple Python package
- lodestonegpt — 🤖 Modular Auto-GPT Framework Built for Project Lodestone
- longchat — LongChat and LongEval
- LongNet — LongNet - Pytorch
- loopgpt — Modular Auto-GPT Framework
- luis-v-subtitler — A Python package to use AI to subtitle any video in any language
- magic-assistant — An AI agent framework.
- magvit2-pytorch — MagViT2 - Pytorch
- mangoes — Mangoes v3 is a toolbox for constructing and evaluating static or contextual token vector representations (aka word embeddings).
- manifest-ml — Manifest for Prompting Foundation Models.
- mase-tools — Machine-Learning Accelerator System Exploration Tools
- med-seg-diff-pytorch — MedSegDiff - SOTA medical image segmentation - Pytorch
- medcat — Concept annotation tool for Electronic Health Records
- medspellchecker — Fast and effective spellchecker for Russian medical texts
- medusa-llm — Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
- mergekit — Tools for merging pre-trained large language models
- mergoo — Implementation of the Leeroo LLM composer.
- meshgpt-pytorch — MeshGPT Pytorch
- metatreelib — PyTorch Implementation for MetaTree: Learning a Decision Tree Algorithm with Transformers
- mexca — Emotion expression capture from multiple modalities.
- micromind — MicroMind
- microvault — Microvault - RL for Navigation
- minerva-torch — Transformers at zeta scales
- minicons — A package of useful functions to analyze transformer-based language models.
- miniminiai — A mini version of fastai's miniai
- mistral-v0.2-jax — JAX implementation of the Mistral v0.2 base model.
- mlora — A tool for fine-tuning large language models (LLMs) using the LoRA or QLoRA methods more efficiently.
- modalities — Modalities, a python framework for distributed and reproducible foundation model training.
- modelscope — ModelScope: bring the notion of Model-as-a-Service to life.
- modelz-llm — LLM unified service
- MovieChat — Long video understanding
- mpdd-alignn — A version of the NIST-JARVIS ALIGNN optimized in terms of model performance and to some extent reliability, for large-scale deployments over the MPDD infrastructure by Phases Research Lab.
- ms-swift — Swift: Scalable lightWeight Infrastructure for Fine-Tuning
- MultiEL — Multilingual Entity Linking with the BELA model
- multimodal-transformers — Multimodal Extension Library for PyTorch HuggingFace Transformers
- muse-maskgit-pytorch — MUSE - Text-to-Image Generation via Masked Generative Transformers, in Pytorch
- musiclang-predict — A python package for music notation and generation
- musiclm-pytorch — MusicLM - AudioLM + Audio CLIP to text to music synthesis
- mw-adapter-transformers — A friendly fork of HuggingFace's Transformers, adding Adapters to PyTorch language models
- naifu — naifu is designed for training generative models with various configurations and features.
- nanorag — Testing doing nanorag with nbdev to try it out
- nataili — Nataili: Multimodal AI Python Library
- naturalspeech2-pytorch — Natural Speech 2 - Pytorch
- navigate-with-image-language-model — The package provides a pipeline that utilizes models like ClipSeg and StableDiffusion or ClipSeg and SegmentAnything to prompt an image for a path.
- nendo-plugin-textgen — A text generation plugin using local LLMs or other text generation methods. Builds on top of `transformers` by Hugging Face.
- nendo-plugin-transcribe-whisper — A nendo plugin for speech transcription, based on Whisper by OpenAI.
- nerfstudio — All-in-one repository for state-of-the-art NeRFs
- neuralnest — NeuralNest: An open-source personal AI assistant designed for seamless data integration from various sources, offering versatile connections to Language Learning Models (LLMs), enhanced security, and human-in-the-loop options for a personalized AI experience.
- neurocache — Neurocache: A library for augmenting language models with external memory.
- newAI — newAi
- nextai-star — An open platform for training, serving, and evaluating large language model-based chatbots by next ai
- nixietune — A semantic search embedding model fine-tuning tool
- nl2query — no summary
- nlpbaselines — Quickly establish strong baselines for NLP tasks
- nlpsig — Path signatures for Natural Language Processing.
- nmatheg — no summary
- nnsight — Package for interpreting and manipulating the internals of deep learning models.
- nrtk-explorer — Model Visualizer
- nvidia-modelopt — Nvidia TensorRT Model Optimizer: a unified model optimization and deployment toolkit.
- ochat — An efficient framework for training and serving top-tier, open-source conversational LLMs.
- olive-ai — Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation.
- onediff — an out-of-the-box acceleration library for diffusion models
- OneDiffusion — Onediffusion: REST API server for running any diffusion models - Stable Diffusion, Anything, ControlNet, Lora, Custom
- onediffx — onediff extensions for diffusers
- open-gpt-torch — An open-source cloud-native serving framework for large multi-modal models (LMMs).
- open-gpts — An open-source implementation of large-scale language models (LLMs).
- openbb-chat — Deep learning package to add chat capabilities to OpenBB
- OpenBMB — Create a Python package.
- opencompass — A comprehensive toolkit for large model evaluation
- openicl — An open source framework for in-context learning.