Reverse Dependencies of avro
The following projects have a declared dependency on avro:
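A "declared dependency" here means the project lists avro in its package metadata. As a hedged illustration (the package names below are hypothetical), such a declaration typically looks like this in a project's pyproject.toml or requirements file:

```toml
# pyproject.toml (PEP 621 metadata) -- illustrative only
[project]
name = "example-avro-consumer"   # hypothetical package name
version = "0.1.0"
dependencies = [
    "avro>=1.11",                # the dependency this page indexes
]
```

Equivalently, a requirements.txt would contain a line such as `avro>=1.11`. Tools that build reverse-dependency indexes like this one scan published metadata for exactly these declarations.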
- acryl-datahub — A CLI to work with DataHub metadata
- airbyte-cdk — A framework for writing Airbyte Connectors.
- airbyte-cdk-PHLAIR — A framework for writing Airbyte Connectors.
- arcane-pubsub — Override pubsub client
- avdoc — CLI tool to generate HTML documentation for an Apache Avro schema
- avro-byte-counter — Count number of bytes per field
- avro_codec — An avro codec which exposes an API similar to the standard library's marshal, pickle and json modules
- avro-compat — A library-compatibility layer for avro parsers
- avro-gen-topkrabbensteam — Avro record class and specific record reader generator
- avro-gen3 — Avro record class and specific record reader generator
- avro-helper-devlibx — Python DSL for setting up Flask app CDC
- avro-helper-devlibx-v1 — Python DSL for setting up Flask app CDC
- avro-to-python-etp — Lightweight tool for compiling Avro schema files (.avsc) into Python classes, making it easy to work with Avro schemas
- avrofastapi — Automatic avro wire protocol support for FastAPI
- avroknife — Utility for browsing and simple manipulation of Avro-based files
- azure-schemaregistry-avroencoder — Microsoft Azure Schema Registry Avro Encoder Client Library for Python
- azure-schemaregistry-avroserializer — Microsoft Azure Schema Registry Avro Serializer Client Library for Python
- cavro — avro codec implemented with Cython
- cbpcommon — Common library for Crypto Bot Platform
- cdpdev-datahub — A CLI to work with DataHub metadata
- confluent-kafka — Confluent's Python client for Apache Kafka
- datacontract-cli — Test data contracts
- datasurface — Automate the governance, management and movement of data within your enterprise
- eventyst — no summary
- ewah — An ELT with airflow helper module: Ewah
- fastscore-cli — FastScore CLI
- Feast — Python SDK for Feast
- feastmo — Python SDK for Feast
- feathr — An Enterprise-Grade, High Performance Feature Store
- gdp-time-series — no summary
- idg-metadata-client — Ingestion Framework for OpenMetadata
- ingestor — Apache Beam pipeline that ingests data from a PostgreSQL database and writes it to GCS.
- kafka-avro-producer-topkrabbensteam — Kafka (produce messages) using Apache Avro schemas
- kfserving — KFServing Python SDK
- kserve-mathking — KServe Python SDK
- lamatic-airbyte-cdk — A framework for writing Airbyte Connectors.
- lintML — A security-first linter for machine learning training code.
- m4-utils — Library of commonly used functions for machine learning and data science projects.
- market-data-transcoder — Market Data Transcoder
- metadata-guardian — MetadataGuardian is used to protect data by searching the source metadata.
- metaphor-connectors — A collection of Python-based 'connectors' that extract metadata from various sources to ingest into the Metaphor app.
- mk-feature-store — Python SDK for Feast
- ml-core — Core Package for MissingLink.ai
- mlrun — Tracking and config of machine learning runs
- mockingbird — Generate mock documents in various formats (CSV, DOCX, PDF, TXT, and more) that embed seed data and can be used to test data classification software.
- moonlogger — moon-logger, a Python logging library
- neon-schemas — Schemas for Neon Law
- nypl-py-utils — A package containing Python utilities for use across NYPL
- openmetadata-ingestion — Ingestion Framework for OpenMetadata
- pingpong-datahub — A CLI to work with DataHub metadata
- pup-confluent-kafka — Patched version of Confluent's Python client for Apache Kafka
- py-adapter — Round-trip serialization/deserialization of any Python object to/from any serialization format including Avro and JSON.
- py-avro-schema — Generate Apache Avro schemas for Python types including standard library data-classes and Pydantic data models.
- pyCGA — A REST client for OpenCGA web services
- pyhwschema — Python API for Hortonworks Schema Registry
- pypz-kafka-io — Provides a Kafka implementation of the ChannelInput/OutputPort in pypz.
- qgate-sln-mlrun — The quality gate for testing MLRun/Iguazio solution.
- ripflow — Python package to insert analysis pipelines into data streams
- robotframework-avrolibrary — Avro library for Robot Framework
- robotframework-confluentkafkalibrary — Confluent Kafka library for Robot Framework
- rpm-confluent-schemaregistry — Confluent Schema Registry lib
- scikit-digital-health — Python general purpose human motion inertial data processing package.
- snapstream — Streamline your Kafka data processing: this tool aims to standardize streaming data from multiple Kafka clusters. With a pub-sub approach, multiple functions can easily subscribe to incoming messages, serialization can be specified per topic, and data is automatically processed by data sink functions.
- spce — ScalePlan's CloudEvents implementation
- streaminghub-datamux — A library to stream data into real-time analytics pipelines
- super-cereal — Serialize Python objects
- syngen — The tool uncovers patterns, trends, and correlations hidden within your production datasets.
- tfrecorder — TFRecorder creates TensorFlow Records easily.
- tokyo-lineage — Tokyo Lineage