Reverse Dependencies of librosa
The following projects have a declared dependency on librosa:
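A project appears on this list by declaring librosa in its packaging metadata. As a minimal sketch, a declaration in a project's `pyproject.toml` might look like the following (the project name and version bound are illustrative, not taken from any package above):

```toml
[project]
name = "my-audio-tool"
dependencies = [
    # Declaring librosa here is what registers the reverse dependency
    "librosa>=0.10",
]
```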
- tensionflow — A Tensorflow framework for working with audio data.
- tensorflow-datasets — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- TensorFlowASR — Almost State-of-the-art Automatic Speech Recognition using Tensorflow 2
- TensorflowTTS — TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for TensorFlow 2
- terra-ai-datasets-framework — Framework to create a dataset to train a neural network.
- testgailbot002 — GailBot API
- testgailbotapi — GailBot Test API
- testgailbotapi001 — GailBot Test API
- Tetra-Model-Zoo — Models optimized for export to run on device.
- tfds-nightly — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- tfds-nightly-gradient — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- tflibrosa — Re-implementation of some librosa functions for TensorFlow, modeled on torchlibrosa.
- tflite-model-maker — TFLite Model Maker: a model customization library for on-device applications.
- tflite-model-maker-nightly — TFLite Model Maker: a model customization library for on-device applications.
- tglcourse — work-in-progress course
- thefiarlib — thefiarlib
- tifresi — Time Frequency Spectrogram Inversion
- tinygrad — You like pytorch? You like micrograd? You love tinygrad! <3
- tinygrad-tools — You like pytorch? You like micrograd? You love tinygrad! <3
- tmh — TMH Speech package
- tonic — Neuromorphic datasets and transformations.
- torch-audiomentations — A Pytorch library for audio data augmentation. Inspired by audiomentations. Useful for deep learning.
- torch-ecg — A Deep Learning Framework for ECG Processing Tasks Based on PyTorch
- torch-mfcc — A Torch-based implementation of librosa's STFT/FBANK/MFCC
- torch-stft — An STFT/iSTFT for PyTorch
- torchaudio-augmentations — Audio augmentations library for PyTorch, for audio in the time-domain.
- torchlibrosa — PyTorch implementation of some librosa functions.
- torchopenl3 — Deep audio and image embeddings based on the Look, Listen, and Learn approach, in PyTorch
- torchsynth — A modular synthesizer in pytorch, GPU-optional and differentiable
- transcription-diff — Speech to transcription comparison
- transformers — State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
- tsaug — A package for time series augmentation
- tts — Deep learning for Text to Speech by Coqui.
- TTS2 — Deep learning for Text to Speech by Coqui.
- ttscube — Text-to-speech synthesis engine, based on end-to-end GAN training. Features: multilanguage, multispeaker, realtime CPU synthesis
- ttskit — text to speech toolkit
- ttsmms — Text-to-speech with The Massively Multilingual Speech (MMS) project
- tuning-fork — A clip/sample auto tuner
- twang — Machine learning tools for guitarists
- udls — Base class and presets for fast dataset creation inside IRCAM
- univoc — A PyTorch implementation of Towards Achieving Robust Universal Neural Vocoding.
- uss — Universal source separation (USS) with weakly labelled data.
- vectorhub — One liner to encode data into vectors with state-of-the-art models using tensorflow, pytorch and other open source libraries. Word2Vec, Image2Vec, BERT, etc
- vectorhub-nightly — One liner to encode data into vectors with state-of-the-art models using tensorflow, pytorch and other open source libraries. Word2Vec, Image2Vec, BERT, etc
- vibe-analyser — A vibration analysis and data acquisition suite for the rpi
- vid2aud — A python module to extract audio from a video
- vid2cleantxt — A command-line tool to easily transcribe speech-based video files into clean text. also in Colab.
- visbeat — Code for 'Visual Rhythm and Beat' SIGGRAPH 2018
- visbeat3 — Python3 Implementation for 'Visual Rhythm and Beat' SIGGRAPH 2018
- vital-sqi — Signal quality control pipeline for electrocardiogram and photoplethysmogram
- vocalpy — A core package for acoustic communication research in Python
- vocex — Voice Frame-Level and Utterance-Level Attribute Extraction
- Voice-Cloning — Introducing Voice_Cloning: A Python Package for Speech Synthesis and Voice Cloning!
- voice-toolbox — Convenient wrappers for audio signal processing in Python
- voice100-runtime — Voice100 Runtime is a TTS/ASR sample app that uses ONNX Runtime, WORLD, and Voice100 neural TTS/ASR models on Python. Voice100 inference is low cost because its models are tiny and rely only on CNNs, with no recurrence.
- voicefixer — This package is written for the restoration of degraded speech
- Voicelab — Fully Automated Reproducible Acoustical Analysis
- voices — A multi-functional toolkit for diffsinger
- wav-autoencoder — WavAutoencoder: A Self-Supervised Framework for Learning Audio Representations
- wav2clip — Wav2CLIP: Learning Robust Audio Representations From CLIP.
- Wav2Lipy — Wrapper Package for LipGan Project
- waveglow — Waveglow library
- waveglow-cli — Command-line interface (CLI) to train WaveGlow using .wav files.
- wavescapes — Python library to build wavescapes, plots used in musicology.
- waveser — Used to process audio data.
- waveuse — Used to process audio data.
- wavmark — AI-Based Audio Watermarking Tool
- week1-test — Provides simple interface for recording audio
- whisper-cpp-python — no summary
- whisperer-ml — Go from raw audio to a text-audio dataset with OpenAI's Whisper
- wilddrummer — Turn sound samples into drums.
- wipypedia — no summary
- worship — Sound processing toolkit with Python
- youtube-tts-data-generator — A python library that generates speech data with transcriptions by collecting data from YouTube.
- youtube-video-analyzer — A package for downloading YouTube videos, extracting audio and frames, and analyzing sound intervals
- youtube2text — Convert YouTube URLs to text with speech recognition
- yt-audio-collector — Create a Hindi-language dataset for speech recognition from YouTube
- ytdlbt — Download YouTube music by title
- yuntu — Acoustic Analysis tools for Conabio
- zangorth-ramsey — Helper Functions for Ramsey Project