Spark batch functionality #82

Open
johnantonn opened this issue Nov 23, 2020 · 0 comments
Labels
feature New feature or request

Comments


johnantonn commented Nov 23, 2020

We need to introduce a new Kafka consumer component that uses Spark to calculate aggregate statistics. These can be minima/maxima, means, variances, and counts over the Cenote data stored in CockroachDB.
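The aggregates listed above can all be maintained in a single pass over the data. As a minimal sketch of that logic (plain Python, independent of Spark; all names here are illustrative and not part of Cenote), using Welford's online algorithm for the mean and variance:

```python
# Single-pass accumulator for count, min, max, mean, and variance.
# Welford's algorithm keeps the running mean and the sum of squared
# deviations (m2), avoiding a second pass over the data.
import math
from dataclasses import dataclass


@dataclass
class RunningStats:
    count: int = 0
    minimum: float = math.inf
    maximum: float = -math.inf
    mean: float = 0.0
    m2: float = 0.0  # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.count += 1
        self.minimum = min(self.minimum, x)
        self.maximum = max(self.maximum, x)
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        # Sample variance; defined only for two or more observations.
        return self.m2 / (self.count - 1) if self.count > 1 else 0.0


stats = RunningStats()
for value in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(value)
```

A per-key variant of the same accumulator (one `RunningStats` per device or metric) would map naturally onto Spark's grouped aggregations.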

This is a requirement stemming from the eeRIS application. Right now, these averages are calculated in the real-time pipeline by Lua scripts running on Redis, i.e., a form of caching. A more robust design would separate these calculations from the real-time event streaming and place them in the batch pipeline.

Essentially, we need a design with multiple Spark consumers, each tailored to the job at hand. Later on, these consumers might also run ML models on the data. For now, we need to determine how this infrastructure of Spark consumers/clusters should integrate with the existing Cenote architecture.
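One possible shape for such a consumer is a Spark Structured Streaming job that reads from Kafka, computes the aggregates, and writes them to CockroachDB over its PostgreSQL-compatible JDBC interface. The following is only a sketch under assumptions: topic name, broker address, JSON field, and table/connection details are placeholders, not existing Cenote configuration, and running it requires a Spark cluster with the Kafka connector and PostgreSQL JDBC driver on the classpath.

```python
# Hypothetical Spark consumer: Kafka -> aggregate statistics -> CockroachDB.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("cenote-batch-aggregates")
         .getOrCreate())

# Subscribe to the (assumed) Cenote event topic.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")  # assumed address
          .option("subscribe", "cenote-events")             # assumed topic
          .load())

# Assume each Kafka value is a JSON payload with a numeric "reading" field.
parsed = (events
          .selectExpr("CAST(value AS STRING) AS json")
          .select(F.get_json_object("json", "$.reading")
                   .cast("double").alias("reading")))

# The aggregate statistics described in the issue.
aggregates = parsed.agg(
    F.count("reading").alias("count"),
    F.min("reading").alias("min"),
    F.max("reading").alias("max"),
    F.avg("reading").alias("mean"),
    F.var_samp("reading").alias("variance"),
)


def write_batch(df, epoch_id):
    # CockroachDB speaks the PostgreSQL wire protocol, so the stock
    # JDBC driver works; URL and table name are placeholders.
    (df.write.format("jdbc")
       .option("url", "jdbc:postgresql://cockroach:26257/cenote")
       .option("dbtable", "aggregate_stats")
       .option("driver", "org.postgresql.Driver")
       .mode("append")
       .save())


query = (aggregates.writeStream
         .outputMode("complete")
         .foreachBatch(write_batch)
         .start())
query.awaitTermination()
```

Keeping the sink logic in a `foreachBatch` callback would let later consumers (e.g. the ML jobs mentioned above) reuse the same read/parse stages with a different write step.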

@johnantonn johnantonn added the feature New feature or request label Nov 23, 2020