Reverse Dependencies of py4j
The following projects have a declared dependency on py4j:
- ai2xl — ai2xl is a free Python library that allows using Excel for ML data preparation, zero dependency ML model deployment, model debugging, model explainability and collaboration.
- ai2xlcore — ai2xl is a free Python library that allows using Excel for ML data preparation, zero dependency ML model deployment, model debugging, model explainability and collaboration.
- aironsuit — A model wrapper for automatic model design and visualization purposes.
- alpyperl — An open source library for connecting AnyLogic models with Reinforcement Learning frameworks through OpenAI Gymnasium
- anovos — An Open Source tool for Feature Engineering in Machine Learning
- apache-dolphinscheduler — pydolphinscheduler is Apache DolphinScheduler Python API.
- apache-flink — Apache Flink Python API
- asphalt-py4j — Py4J integration component for the Asphalt framework
- auto-surprise — A python package that automates algorithm selection and hyperparameter tuning for the recommender system library Surprise
- beakerx-kernel-autotranslation — BeakerX: Beaker Extensions for Jupyter Notebook
- carl-bench — CARL - Contextually Adaptive Reinforcement Learning
- chicken-coop — An environment for reproducing dominance hierarchies in RL agents
- databricks-connect — Databricks Connect Client
- dataos-pyflare — Pyspark bridge to dataos
- datarobot-mlops — datarobot-mlops library to read and report MLOps statistics
- datarobot-predict — DataRobot Prediction Library
- doorpj-test — Save or recall preferences from Python.
- dqlauncher — DataQuality functions over pyspark.sql SparkSession and DataFrame
- dqvalidator — Quality functions over PySpark DataFrames
- dreem — DREEM solves RNA structure ensembles using chemical probing data
- EgyVoc — Vocalizer for Ancient Egyptian
- exelog — Enabling meticulous logging for Spark Applications
- fast-causal-inference — fast causal inference package
- feathr — An Enterprise-Grade, High Performance Feature Store
- fiware-pyspark-connector — Connects FIWARE Context Brokers with PySpark
- gatenlp — GATE NLP implementation in Python.
- geniusrise — An LLM framework
- ggml — GridGain ML Python API
- ginsim — Python interface to GINsim and BioLQM API
- gnuper — Open Source Package for Mobile Phone Metadata Preprocessing
- hyperopt — Distributed Asynchronous Hyperparameter Optimization
- javac-parser — Exposes the OpenJDK Java parser and scanner to Python
- jsonSpark — A wrapper package for PySpark that processes JSON files and pythonifies the JSON PySpark object.
- jupiter-negotiation — Simulator for automated negotiation
- kazu — Biomedical Named Entity Recognition and Entity Linking for Enterprise use cases
- keanu — A probabilistic approach from an Improbabilistic company
- kedro-popmon — Kedro Popmon makes integrating Popmon with Kedro easy!
- klay4py — KLAY4py is for Python users considering using KLAY.
- koalanlp — Python wrapper for KoalaNLP
- lakehouse-engine — A Spark framework serving as the engine for several lakehouse algorithms and data flows.
- liga — no summary
- llm-explorer — A Lakehouse LLM Explorer. Wrapper for spark, databricks and langchain processes
- lohrasb — This versatile tool streamlines hyperparameter optimization in machine learning workflows. It supports a wide range of search methods, from GridSearchCV and RandomizedSearchCV to advanced techniques like OptunaSearchCV, Ray Tune, and Scikit-Learn Tune. Designed to enhance model performance and efficiency, it's suitable for tasks of any scale.
- longalpha-utils — no summary
- luq89-pyspark-app-luq89 — Sample app in PyPI
- ml-comp — An engine for running component based ML pipelines
- ml-ops — A library to read and report MLApp statistics
- mlpiper — An engine for running component based ML pipelines
- mmtfPyspark — Methods for parallel and distributed analysis and mining of the Protein Data Bank using MMTF and Apache Spark
- mu-alpha-zero-library — Library for running and training MuZero and AlphaZero models.
- musket-core — The core of Musket ML
- negmas — NEGotiations Managed by Agent Simulations
- NL4Py — A NetLogo connector for Python.
- nutter — A databricks notebook testing library
- oakx-robot — no summary
- OLIVER — Convenience functions for interacting with the OLIVER workspace
- Orange3-spark — A series of Widgets for Orange3 to work with Spark ML
- osgiservicebridge — OSGi services implemented in Python
- palantir.spark.time — no summary
- py-hiverunner — Python API for unittest Hive applications
- py4phi — A library for encryption/decryption and analysis of sensitive data.
- PyBacmman — Utilities for analysis of data generated from bacmman software
- pycsp3 — Modeling Constrained Combinatorial Problems in Python
- pygw — GeoWave bindings for Python3
- pyhdfs-client — A py4j based hdfs client for python for native hdfs CLI performance.
- pyjxslt-user-defined-address — Python XSLT 2.0 Gateway
- pymlsql — MLSQL Python API
- pyotm — Python connector for OTM
- pyraphtory — Raphtory - Temporal Graph Analytics Platform. This is the Python version of the library.
- pyrasterframes — Access and process geospatial raster data in PySpark DataFrames
- pyspark-extension — A library that provides useful extensions to Apache Spark.
- pysparklib — An elaborate, well-developed set of PySpark libraries and resources.
- pytispark — TiSpark support for python
- qsmap — Package provides functionality for working with geographical data, routing, and mapping
- quartic-sdk — QuarticSDK is the SDK package which exposes the APIs to the user
- reina — A Causal Inference library for Big Data.
- robotframework-sikulixlibrary — Robot Framework SikuliX library powered by the SikuliX Java library and the JPype or Py4J Python modules.
- sagas — sagas ai stack
- salang — A langpack package
- salesforce-merlion — Merlion: A Machine Learning Framework for Time Series Intelligence
- sc-permut — Deep learning annotation of cell types with a permutation-enforced autoencoder
- sensor-dataset — Put a description
- sfcli — web-platform command tools
- smlb — Scientific Machine Learning Benchmark
- sparklanes — A lightweight framework to build and execute data processing pipelines in pyspark (Apache Spark's python API)
- steam-pysigma — This is python wrapper of STEAM SIGMA code
- streaming-jupyter-integrations — JupyterNotebook Flink magics
- systemds — SystemDS is a distributed and declarative machine learning platform.
- textworld-express — TextWorldExpress: a highly optimized reimplementation of three text game benchmarks focusing on instruction following, commonsense reasoning, and object identification.
- tikit — Kit for TI PLATFORM
- tikit-lite — Kit for TI PLATFORM
- tikit-test — Kit for TI PLATFORM
- Twister2 — Twister2 is a composable big data environment supporting streaming, data pipelines and analytics. Our vision is to build robust, simple to use data analytics solutions that can leverage both clouds and high performance computing infrastructure.
- valido — PySpark dataframes based workflow validator
- vsm — Vector Space Semantic Modeling Framework for the Indiana Philosophy Ontology Project
- warp10-jupyter — Jupyter extension that contains a cell magic to execute WarpScript code
- XalExtractor — Services for extractors
- yaetos — Write data & AI pipelines in (SQL, Spark, Pandas) and deploy them to the cloud, simplified
- yvestest — An open-source tool that simplifies ETL workflows with Python, based on Spark
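A "declared dependency" here means py4j appears in the project's package metadata. As a hedged sketch (the package name `my-spark-tool` and the version pin are illustrative, not taken from any project above), such a declaration in a modern `pyproject.toml` might look like:

```toml
[project]
name = "my-spark-tool"        # hypothetical package name
version = "0.1.0"
dependencies = [
    "py4j>=0.10.9",           # illustrative pin; projects choose their own constraint
]
```

Older projects express the same thing via `install_requires` in `setup.py` or a line in `requirements.txt`; any of these forms causes the project to be indexed as a reverse dependency of py4j.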