Reverse Dependencies of einops
The following projects have a declared dependency on einops:
- tnn-pytorch — Toeplitz Neural Network for Sequence Modeling
- tnt-tensorflow — An Implementation of Transformer in Transformer for image classification, attention inside local patches
- todd-ai — Toolkit for Object Detection Distillation
- token-shift-gpt — Token Shift GPT - Pytorch
- toolformer — Implementation of Toolformer
- toolformer-pytorch — Toolformer - Pytorch
- torch-cubic-b-spline-grid — Cubic B-spline interpolation on multidimensional grids in PyTorch
- torch-cubic-spline-grids — Cubic spline interpolation on multidimensional grids in PyTorch
- torch-ecg — A Deep Learning Framework for ECG Processing Tasks Based on PyTorch
- torch-rs — PyTorch Library for Remote Sensing
- torch-skeleton — skeleton datasets and transforms for pytorch
- torch-spatiotemporal — A PyTorch library for spatiotemporal data processing
- torch-uncertainty — Uncertainty quantification library in PyTorch
- torcharc — Build PyTorch networks by specifying architectures.
- torchgeo — TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data
- torchgfn — A torch implementation of GFlowNets
- torchglyph — Data Processor Combinators for Natural Language Processing
- torchio — Tools for medical image processing with PyTorch
- torchmanager-diffusion — Torchmanager Implementation for Diffusion Model (v1.0.2)
- torchmix — Flexible components for transformers 🧩
- torchscale-gml — Transformers at any scale
- torchseg — TorchSeg: Semantic Segmentation models for PyTorch
- toyds — Toy datasets for training sequence models
- TPDNE-utils — TPDNE
- tracr-pypi — Compiler from RASP to transformer weights
- tranception-pytorch — Tranception - Pytorch
- tranception-pytorch-dohlee — Tranception - Pytorch
- trans-utils — no summary
- transformer-in-transformer — Transformer in Transformer - Pytorch
- transformer-in-transformer-flax — Transformer in Transformer - Flax
- transformer-lens — An implementation of transformers tailored for mechanistic interpretability.
- transformerocr — Transformer-based text detection
- transformerx — TransformerX is a Python library for building transformer-based models using ready-to-use layers.
- transganformer — TransGanFormer
- transpector — Visually inspect, analyse and debug transformer models. Aimed at reducing cycle times for interpretability research and lowering the barrier to entry.
- treex — no summary
- triangle-multiplicative-module — Triangle Multiplicative Module
- triton-transformer — Transformer in Triton
- tts — Deep learning for Text to Speech by Coqui.
- uformer-pytorch — Uformer - Pytorch
- UniCell — Universal cell segmentation
- unified-focal-loss-pytorch — An implementation of loss functions from "Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation"
- UnifiedML — Unified library for intelligence training.
- uniformer-pytorch — Uniformer - Pytorch
- unipercept — UniPercept: A unified framework for perception tasks focusing on research applications that require a high degree of flexibility and customization.
- uniteai — AI, Inside your Editor.
- Unseal — Unseal: A collection of infrastructure and tools for research in transformer interpretability.
- unsupervised-on-policy — Unsupervised pre-training with PPG
- uss — Universal source separation (USS) with weakly labelled data.
- v-pyiqa — PyTorch Toolbox for Image Quality Assessment
- vec2text — convert embedding vectors back to text
- vector-quantize-pytorch — Vector Quantization - Pytorch
- vformer — A modular PyTorch library for vision transformer models
- video-clip — AskVideos-VideoCLIP model
- video-dataloader-for-pytorch — A small example package
- video-diffusion-pytorch — Video Diffusion - Pytorch
- video-vit — Paper - Pytorch
- video2dataset — Easily create large video dataset from video urls
- vietocr — Transformer-based text detection
- vision-llama — Vision Llama - Pytorch
- vision-mamba — Vision Mamba - Pytorch
- vision-xformer — Vision Xformers
- visu3d — 3d geometry made easy.
- vit-prisma — A Vision Transformer library for mechanistic interpretability.
- vit-pytorch — Vision Transformer (ViT) - Pytorch
- vit-pytorch-implementation — Vision Transformer (ViT) - Pytorch
- vit-rgts — vit-registers - Pytorch
- vjepa-encoder — JEPA research code.
- vltk — The Vision-Language Toolkit (VLTK)
- VN-transformer — Vector Neuron Transformer (VN-Transformer)
- vocab-coverage — Analysis of Chinese vocabulary coverage in language models
- vocos — Fourier-based neural vocoder for high-quality audio synthesis
- voicebox-pytorch — Voicebox - Pytorch
- voltron-robotics — Voltron: Language-Driven Representation Learning for Robotics.
- vsscunet — SCUNet function for VapourSynth
- wafl — A hybrid chatbot.
- wafl-llm — A hybrid chatbot - LLM side.
- wavemix — WaveMix - Pytorch
- welford-torch — Online Pytorch implementation to get Standard Deviation, Covariance, Correlation and Whitening.
- wildtorch — WildTorch: Leveraging GPU Acceleration for High-Fidelity, Stochastic Wildfire Simulations with PyTorch
- x-clip — X-CLIP
- x-dgcnn — X-DGCNN - Pytorch
- x-maes — X-MAEs - Pytorch
- x-mlps — Configurable MLPs built on JAX and Haiku
- x-transformers — X-Transformers - Pytorch
- x-unet — X-Unet
- x2vlm-gml — Package for X2-VLM, a vision-language model from ByteDance
- xarray-einstats — Stats, linear algebra and einops for xarray
- xinference — Model Serving Made Easy
- xtuner — An efficient, flexible and full-featured toolkit for fine-tuning large models
- yaib — Yet Another ICU Benchmark is a holistic framework for the automation of the development of clinical prediction models on ICU data. Users can create custom datasets, cohorts, prediction tasks, endpoints, and models.
- yet-another-retnet — yet-another-retnet
- ysda — no summary
- zae-engine — Deep learning engine powered by zae-park.
- zetascale — Rapidly Build, Optimize, and Deploy SOTA AI Models
- zorro-pytorch — Zorro - Pytorch