Reverse Dependencies of pandas-gbq
The following projects have a declared dependency on pandas-gbq:
- alo7-airflow — Programmatically author, schedule and monitor data pipelines
- am4894bq — Helper library for BigQuery
- apache-airflow — Programmatically author, schedule and monitor data pipelines
- apache-airflow-backport-providers-google — Backport provider package apache-airflow-backport-providers-google for Apache Airflow
- apache-airflow-providers-google — Provider package apache-airflow-providers-google for Apache Airflow
- apache-airflow-zack — Programmatically author, schedule and monitor data pipelines
- api2db — Python Api data collection tool
- asyncdb — Library for asynchronous data source connections; a collection of asyncio drivers.
- atscale — The AI-Link package created by AtScale
- basedosdados — Organize and simplify access to Brazilian data through public tables in BigQuery.
- bigframes — BigQuery DataFrames -- scalable analytics and machine learning with BigQuery
- bqcon — Python package for basic operations on Google BigQuery
- bqsc — Define a schema object from a BigQuery schema definition JSON file
- bquest — Effortlessly validate and test your Google BigQuery queries with the power of pandas DataFrames in Python.
- brighter — A simple library for data-wrangling with GCP support
- cacheless-airflow — Programmatically author, schedule and monitor data pipelines
- calitp-data-analysis — Shared code, primarily for querying Cal-ITP data in notebooks.
- change_detection — package for detecting change in time-series data
- cloudops-google-bigquery — The cloudops-google-bigquery package
- cloudservice — Auto machine learning, deep learning library in Python.
- cmapBQ — Toolkit for interacting with Google BigQuery and CMAP datasets
- continual — Operational AI for the Modern Data Stack
- custom-utils — Utilities for database connectors, slack alerter, loggers etc
- custom-workflow-solutions — Programmatically author, schedule and monitor data pipelines
- dagster-gcp-pandas — Package for storing Pandas DataFrames in GCP.
- dagster-gcp-pyspark — Package for storing PySpark DataFrames in GCP
- data-science-common — UNDER CONSTRUCTION: A simple python library to facilitate analysis
- dataflowutil — no summary
- DBD — dbd is a data loading and transformation tool that enables data analysts and engineers to load and transform data in SQL databases.
- dbnd-gcp — Machine Learning Orchestration
- dbpal — A utility package for pushing around data
- dbt-table-diff — Compares models in dbt during an open PR
- deephub — no summary
- dimschema — dimschema
- droughty — droughty is an analytics engineering toolkit, helping keep your workflow dry.
- droughty-dev — no summary
- ebmdatalab — Package for ebmdatalab jupyter notebook stuff
- edu-airflow — Programmatically author, schedule and monitor data pipelines
- friktionless — Friktionless is a Python package providing simplified interfaces to Friktion data. It aims to be the fundamental building block for doing data engineering and data analysis in Python for Friktion.
- fugue-warehouses — Fugue data warehouses integration
- ga-attribution-scrape — Scrapes attribution data from GA's Model Comparison Tool through the JS network layer and sends it to BigQuery.
- ganymede-py — Ganymede Python libraries
- gcp-data-ingestion — Utility Functions for Data Ingestion in GCP
- gcp-scraper — Scraping utils with GCP functions
- gcpts — no summary
- gdp-time-series — no summary
- geniepy — Gene-Disease relationship trend detection
- GeoFuns — A package for geo tools used to map card score
- googlewrapper — Simple API wrapper for Google Products
- gps-building-blocks — Modules and tools useful for use with advanced data solutions on Google Ads, Google Marketing Platform and Google Cloud.
- grounder — Estimate the quality and factual correctness of natural language text to help ground language models such as BERT, GPT-J, GPT-Neo, ChatGPT, and Bard.
- honeycomb — Multi-source/engine querying tool
- honeydew — Collection of connectors for ETL
- humailib — HUMAI data science framework
- instackup — A package to ease interaction with cloud services, DB connections and commonly used functionalities in data analytics.
- kedro-datasets — Kedro-Datasets is where you can find all of Kedro's data connectors.
- mcl_google_cloud_bigquery — The mcl_google_cloud_bigquery package.
- mercury-fil — Python client for accessing Filecoin chain data.
- OniPKG-Api — Helper for consuming APIs
- openlineage-airflow — OpenLineage integration with Airflow
- pandas-pyarrow — A library for switching pandas backend to pyarrow
- pano-airflow — Programmatically author, schedule and monitor data pipelines
- pipeline-penguin — Pipeline Penguin is a versatile python library for data quality.
- polywhirl — Run pandas-profiling HTML reports for a given list of database tables.
- PREAGeoFuns — A package for geo tools used to map card score
- profiles-rudderstack — no summary
- pyFission — A tool to sync data across data sources
- pygbq — Easily integrate data in BigQuery
- pygyver — Data engineering & Data science Pipeline Framework
- santoku — Custom Python wrapper around many third party APIs, including AWS, BigQuery, Slack and Salesforce.
- shaku-database — Shaku Database util
- skt — SKT package
- terra-billing-alert — Terra Billing Alert script for Google Cloud Function
- tokyo-lineage — Tokyo Lineage
- trell-ai-utils — Trell Database connectors, slack alerter and loggers
- ultipro — Python Client for the UltiPro SOAP API
- viadot2 — A simple data ingestion library to guide data flows from some places to other places.
- vpxhw-db-job-locator — package for locating jobs in vpxhw database
- waj-bigquery — no summary
- warehouses — Python library to facilitate read/write of GIS and tabular data between Python and cloud warehouses
- wh-lookml-gen — generates lookml from a warehouse
- xedro — Kedro helps you build production-ready data and analytics pipelines
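A declared dependency means pandas-gbq appears in a project's packaging metadata. A minimal sketch of what such a declaration looks like in a PEP 621 `pyproject.toml` (the project name and version bound here are illustrative, not taken from any package above):

```toml
[project]
name = "example-bq-tool"        # hypothetical project, for illustration only
version = "0.1.0"
dependencies = [
    "pandas-gbq>=0.19",         # illustrative version constraint
]
```

Projects using `setup.py` instead declare the same constraint via the `install_requires` argument to `setuptools.setup()`.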