Reverse Dependencies of bitsandbytes
The following projects have a declared dependency on bitsandbytes:
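Each project below declares bitsandbytes in its install requirements. As a point of reference, a minimal declaration in a project's `pyproject.toml` (PEP 621 metadata; the project name and version bound here are illustrative, not taken from any listed package) looks like:

```toml
[project]
name = "my-llm-project"        # hypothetical project name
dependencies = [
    "bitsandbytes>=0.42",      # illustrative version bound
]
```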
- accelerate — Accelerate
- aegis-web3-cli — no summary
- africanwhisper — A package for fast fine-tuning and API endpoint deployment of the Whisper model, specifically developed to accelerate Automatic Speech Recognition (ASR) for African languages.
- ag-llama-api — ag_llama_api Test Package for Something
- ag-llama-api-s — ag_llama_api_s Test Package for Something
- ag-llama-hub — ag_llama_hub Test Package for Something
- ag-llama-hub-s — ag_llama_hub Test Package for Something
- ag-llama2-api-s — ag_llama2_api_s Test Package for Something
- ag-llama2-hub-s — ag_llama2_hub Test Package for Something
- Agentx — AgentX: Seamlessly integrate intelligent agents into your projects. Empower your applications with advanced AI capabilities.
- aiforthechurch — Package for training and deploying doctrinally correct LLMs.
- airoboros — Updated and improved implementation of the self-instruct system.
- alignment-handbook — The Alignment Handbook
- alpaca-eval — AlpacaEval : An Automatic Evaluator of Instruction-following Models
- Andromeda-llm — Andromeda, in PyTorch
- angle-emb — AnglE-optimize Text Embeddings
- applyllm — A Python package for applying open-source LLMs in a local CUDA environment
- Assistant — Your very own Assistant. Because you deserve it.
- atradebot — atradebot package
- auto-coder — AutoCoder
- autora-doc — Automatic documentation generator from AutoRA code
- autotrain-advanced — no summary
- autotransformers — A Python package for automatic training and benchmarking of Language Models.
- bitmat-tl — An efficient implementation of the paper: "The Era of 1-bit LLMs"
- bongovaad — no summary
- chatmof — chatmof
- cm3 — Description of the cm3 package
- cog-hf-template — Cog template for Hugging Face.
- collie-lm — CoLLiE: Collaborative Training of Large Language Models in an Efficient Way
- competitions — Hugging Face Competitions
- compllments — Send nice texts to your friends using LLMs
- conflare — Conformal retrieval-augmented generation with LLMs
- curated-transformers — A PyTorch library of transformer models and components
- cyphertune — A Trainer for Fine-tuning LLMs for Text-to-Cypher Datasets
- datadreamer — A library for dataset generation and knowledge extraction from foundation computer vision models.
- datadreamer.dev — Prompt. Generate Synthetic Data. Train & Align Models.
- DatasetRising — Toolchain for creating and training Stable Diffusion models with custom datasets
- dbgpt — DB-GPT is an experimental open-source project that uses locally deployed GPT models to interact with your data and environment, so your data stays private and never leaves your machine.
- dbgpt-hub — DB-GPT-Hub: Text-to-SQL parsing with LLMs
- demeterchain — A set of data tools in Python
- dendron — A library for working with LLMs and behavior trees.
- ditty — no summary
- dtpu — Utilities for text processing tasks with Deep NLP
- eleuther-elk — Keeping language models honest by directly eliciting knowledge encoded in their activations
- falkor — Deploys the Falkor Large Language Model
- fastxtend — Train fastai models faster (and other useful tools)
- fmrai — no summary
- frye — LLM toolkit
- fsdp-qlora — Package for training big freaking models on your small GPUs
- ft-suite — A fine-tuning suite based on Transformers and LoRA.
- galore-torch — GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
- genaudit — GenAudit is a tool for fact-checking text, especially AI-generated text, against reference documents.
- geniusrise-audio — Audio bolts for geniusrise
- geniusrise-text — Text bolts for geniusrise
- geniusrise-vision — Hugging Face bolts for geniusrise
- gptfast — Accelerate transformer inference by 6-8.5x. Native to Hugging Face and PyTorch.
- grag — A simple package for implementing RAG
- graph-of-thoughts — Python package for Graph of Thoughts that enables solving elaborate problems with Large Language Models
- graphpatch — graphpatch is a library for activation patching on PyTorch neural network models.
- grazier — A tool for calling (and calling out to) large language models.
- guardrail-ml — Monitor LLMs with custom metrics to scale with confidence
- h2ogpt — no summary
- hcpdiff — A universal Stable-Diffusion toolbox
- healthsageai — HealthSage AI LLMs, Healthcare genAI platform
- helmet — no summary
- img2tags — Tag images
- indomain — no summary
- inranker — no summary
- irisml-tasks-llava — Irisml adapter tasks for LLaVA models
- jac-nlp — no summary
- kimchima — The collections of tools for ML model development.
- kogitune — The Kogitune 🦊 LLM Project
- KosmosX — Transformers at zeta scales
- langanisa — A langchain, transformers, and attention_sinks wrapper for longform response generation.
- languru — The general purpose LLM app stacks.
- lavague-llms-huggingface — HuggingFaceLLM integration for lavague
- lighteval — A lightweight and configurable evaluation package
- Lightning — The Deep Learning framework to train, deploy, and ship AI products Lightning fast.
- lightning-fabric — no summary
- lisa-on-cuda — no summary
- litGPT — Hackable implementation of state-of-the-art open-source LLMs
- llama-index-extra-llm — A simple extension for LlamaIndex to better support LLMs such as DeepSeek.
- llama-index-packs-zephyr-query-engine — llama-index packs zephyr_query_engine integration
- llama-recipes — Llama-recipes is a companion project to the Llama 2 model. Its goal is to provide examples for quickly getting started with fine-tuning for domain adaptation and for running inference with the fine-tuned models.
- llama-trainer — Llama trainer utility
- llama2-terminal — Llama2 Terminal Tools Project
- llama2-wrapper — Use llama2-wrapper as your local llama2 backend for Generative Agents / Apps
- llamafactory — Easy-to-use LLM fine-tuning framework
- llamagym — Fine-tune LLM agents with online reinforcement learning
- llamatune — Haven's Tuning Library for LLM finetuning
- llava-torch — Towards GPT-4 like large language and visual assistant.
- llm-blender — LLM-Blender, an ensembling framework that attains consistently superior performance by leveraging the diverse strengths and weaknesses of multiple open-source large language models (LLMs). It cuts weaknesses through ranking and integrates strengths through generation fusion to enhance the capability of LLMs.
- llm-connect — LLM Connect API
- LLM-keyword-extractor — This is a python package to extract keywords from a given text using LLMs
- llm-lens — llm-lens is a Python package for CV as NLP: it runs descriptive image modules on images, then passes those descriptions to a Large Language Model (LLM) to reason about the images.
- llm-serve — An LLM inference solution for quickly deploying a production LLM service
- llm-toolkit — LLM Finetuning resource hub + toolkit
- llmpool — A pool-management library for Large Language Models
- llmtuner — Easy-to-use LLM fine-tuning framework
- lm-buddy — Ray-centric library for finetuning and evaluation of (large) language models.