Reverse Dependencies of openpyxl
The following projects have a declared dependency on openpyxl:
- CTDFjorder-git — A package for processing and analyzing CTD data.
- curibio.sdk — CREATE A DESCRIPTION
- custom-maya — Convenience tools for Maya
- cxalio-studio-tools — Scripts for po studio made by cxalio
- cygnsslib — Toolset for working with CYGNSS data and downloading CYGNSS data from PODAAC
- cytopy — Data centric algorithm agnostic cytometry analysis framework
- czsc — Technical analysis tools based on 缠中说禅 (Chan theory)
- d22d — Migrate from database to database in two lines of code
- d6tstack — d6tstack: Quickly ingest CSV and XLS files. Export to pandas, SQL, parquet
- daimin-data — no summary
- DAJIN2 — One-step genotyping tools for targeted long-read sequencing
- dallasparser — TX Dallas Criminal Case Parser
- damnit — The Data And Metadata iNspection Interactive Thing
- dapodik-webservice — Python SDK for the Dapodik application web service
- dapodix — Helper tools for the Kemdikbud Dapodik application
- dara-core — Dara Framework Core
- darbiadev-hermes — Shipping tooling
- dargalpy — A simple command-line app to automate Dargal T&E reports
- dash-new-version-test-sdk — OpsRamp Analytics SDK
- data-aggregator — Simple data aggregator framework to simplify data processing
- data-check — simple data validation
- data-dictionary-cui-mapping — This package allows you to load a data dictionary and map CUIs to defined fields using either the UMLS API or the MetaMap API from NLM.
- data-ecosystem-dependencies — Data Ecosystem Dependencies - Python (PADE)
- data-ecosystem-flask — Program Agnostic Data Ecosystem (PADE) - Flask Web Service
- data-ecosystem-python — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-ecosystem-services — Program Agnostic Data Ecosystem (PADE) - Python Services
- data-eng-utils — de_utils is a package containing collections of useful functions grouped by theme (e.g. HDFS, Spark, Python) and a data engineering framework under the engineering_utils sub-package.
- data-importer — Simple library to easily import data with Django
- data-minion — no summary
- data-pyetl — Data Pyetl is a Python approach to extracting data from a source and loading it into a database
- data-science-toolbox — Various code to aid in data science projects for tasks involving data cleaning, ETL, EDA, NLP, viz, feature engineering, feature selection, model validation, etc.
- data-science-toolkit — Data Science Toolkit (DST) is a Python library that helps implement data science related project with ease.
- data-scout — This package provides the tools to quickly setup a scalable and readable data pipeline that can be run on different platforms.
- data2rdf — A generic pipeline that can be used to map raw data to RDF.
- Database-comparator — Bioinformatics tool for comparing large sequence files
- databricks-sql-connector — Databricks SQL Connector for Python
- datacollectors — Datacollectors to load energy market data from various sources
- datadict-toolbox — A package to build a data dictionary from .xmla MS SQL SERVER file and select SQL query.
- datadictionary — A package for profiling data and creating data dictionaries.
- datadir — A data directory helper Python package
- datafiletoolbox — a set of utilities to deal with results from reservoir simulators and other sources of tabulated data.
- dataFilterTool — DataFilterTool is a Python package designed to
- dataflows — A nifty data processing framework, based on data packages
- dataflows-tabulator — Consistent interface for stream reading and writing tabular data (csv/xls/json/etc)
- dataflowutil — no summary
- dataframeviewer — PyQt5 application to visualize pandas DataFrames
- dataknead — Fluent conversion between data formats like JSON, XML and CSV
- datamanipy — A Python package that provides tools to help you manipulate data.
- datamart-materialize — Materialization library for Auctus
- datamatrix — This file is part of datamatrix.
- datameta — DataMeta - submission server for data and associated metadata
- datanalyse — A data analysis package.
- DataNormalizer — Package to help with normalizing data needed for the platform!
- datapackage-convert — Convert your datapackages
- datapipe-core — `datapipe` is a realtime incremental ETL library for Python application
- dataplaybook — Playbooks for data. Open, process and save table based data.
- datapro-learning — Learning common data structures
- DataRecorder — A module for recording data.
- DataSae — Data quality framework provided by Jabar Digital Service
- DataSanitiser — Data Combination and Sanitisation Tool
- DataScrubber — A data cleaning package and visualisation tool for data science projects
- dataset-librarian — Dataset librarian is a tool to download supported datasets and apply the preprocessing they need
- datasetmaker — Fetch, transform, and package data.
- datasimple — Utility library and scripts for simpler data-processing tasks
- datasmoothie-tally-client — Python wrapper for the Tally API.
- DataSynthoSphere — DataSynthoSphere is an innovative and cutting-edge AI-powered Synthetic Data project that revolutionizes how organizations understand and interact with their customer data in the digital era. As businesses increasingly shift to online interactions and face stringent data privacy regulations, the need for privacy-preserving customer insights becomes more critical than ever.
- datatoolbox — The Python Data Toolbox
- datatracker — Methods to help track the scripts and datafiles in a project.
- datavisualization — no summary
- datedays — Python Date Tools
- datesy — Basic tools to make getting started with data handling in Python easier
- datupapi — Utility library to support Datup AI MLOps processes
- davt-dependencies-python — Data, Analytics and Visualization Templates (DAVT) - Python Dependencies
- davt-services-python — Data, Analytics and Visualization Templates (DAVT) - Python Services
- db-fillers — no summary
- DBD — dbd is a data loading and transformation tool that enables data analysts and engineers to load and transform data in SQL databases.
- dbdump — Dump the database or export data from MySQL to Excel, whether a table or a view
- dbgpt — DB-GPT is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.
- dbnd — Machine Learning Orchestration
- dbnd-run — Machine Learning Orchestration
- dbpedia-get — dbPedia Concept Linking and Redirect Analysis
- dbs-cli — This package provides a unified command line interface to DeepNatural Brain Services.
- dbt-ing — dbt Ingestion Framework
- dbtjumpstart — A package to jumpstart a dbt project
- dcapy — Oil and Gas DCA Workflows
- dccQuantities — Python classes for working with DCC calibration data
- dcf-process-assignments — Package to automate processing DCF ThankView Assignments
- dcicutils — Utility package for interacting with the 4DN Data Portal and other 4DN resources
- dcicwrangling — Scripts and Jupyter notebooks for 4DN wrangling
- dcl-stats-n-plots — coming soon
- dcma — Deep Codon Mutation Analyser
- dcmstudyclean — Clean DICOM files into folders separated by study ID name, then create JSON files with key information
- DDExcelAccesslib — This lib controls Excel files for ProbeCard project
- ddm-flow — A toolset for processing, organizing, and visualizing data for individual FoVs from DDM analysis.
- ddr-davis-data — Package to handle davis data files
- deatool — Data envelopment analysis efficiency calculator
- DEBM — A package for modeling behavior in decision from experience experiments
- decisao-diretoria-363 — The project aims to provide quality parameters in a format suitable for use in computational analyses
- decneo — Comberons from single cell transcriptomics in endothelial cells
- decreto-estadual-8468 — The project aims to provide quality parameters in a format suitable for use in computational analyses