
Showing posts from January 22, 2016

Apache Hadoop MapReduce, Apache Spark and Apache Flink adjusted for your business logic

Everyone is aware of Apache Hadoop MapReduce and Apache Spark as the obvious engines for all things big data, and of Apache Flink as a streaming-native engine. More often than not, however, these modern engines require rewriting pipelines to adopt engine-specific APIs, often with different implementations for streaming and batch scenarios.
Thanks to the Dataflow Java SDK by Google, data Artisans for Apache Flink, and Cloudera for Apache Spark, you can now pursue scalability, lower latency, flexibility, performance, and agility by moving your application or data pipeline to the appropriate engine, or to the appropriate environment (from on-prem to cloud), while keeping the business logic intact.
You can write one portable data pipeline that works for either batch or streaming and can be executed on a number of runtimes, including Apache Flink, Apache Spark, Google Cloud Dataflow, or the local direct pipeline.
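The pattern described above can be sketched in plain Java. Note that the interfaces below (`PipelineDefinition`, `Runner`, `DirectRunner`) are hypothetical stand-ins invented for illustration, not the actual Dataflow SDK API; the point is only that the business logic is defined once and the execution engine is chosen at launch time.

```java
import java.util.List;
import java.util.stream.Collectors;

public class PortablePipelineSketch {

    // The business logic: a transformation defined independently of any engine.
    // (Hypothetical interface, not part of the real Dataflow Java SDK.)
    interface PipelineDefinition {
        List<String> apply(List<String> input);
    }

    // A runner knows how to execute a pipeline definition on some engine.
    interface Runner {
        List<String> run(PipelineDefinition pipeline, List<String> input);
    }

    // A local, in-process runner: a stand-in for the "local direct pipeline"
    // mentioned above. A Flink- or Spark-backed runner would implement the
    // same interface and translate the pipeline to its engine's API.
    static class DirectRunner implements Runner {
        @Override
        public List<String> run(PipelineDefinition pipeline, List<String> input) {
            return pipeline.apply(input);
        }
    }

    public static void main(String[] args) {
        // Business logic written once: upper-case every record.
        PipelineDefinition toUpper = records ->
                records.stream().map(String::toUpperCase).collect(Collectors.toList());

        // Swapping DirectRunner for another Runner changes where the pipeline
        // executes without touching the pipeline definition above.
        Runner runner = new DirectRunner();
        List<String> out = runner.run(toUpper, List.of("batch", "stream"));
        System.out.println(out); // prints [BATCH, STREAM]
    }
}
```

In the real SDK the same separation holds: the pipeline is built against engine-neutral abstractions, and the runner (direct, Flink, Spark, or Cloud Dataflow) is selected through pipeline options at execution time.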

One application or data pipeline for multiple processing needs, running on a number of runtimes, on-premises, in the cloud, or locally

Big data pipelines are now at the core of our digital lives, where immature code is not tolerated and where pipelines need to scale better, have lower latency, run more cheaply, and complete faster. This means that rewriting pipelines to adopt engine-specific APIs, with different implementations for streaming and batch scenarios, is no longer acceptable.
Thanks to the Dataflow Java SDK by Google, data Artisans for Apache Flink, and Cloudera for Apache Spark, the business logic can stay intact while the application or data pipeline moves to the appropriate engine, or to the appropriate environment (e.g., from on-prem to cloud).
We can therefore talk about the ability to define one data pipeline for multiple processing needs, without tradeoffs, that can be run on a number of runtimes, on-premises, in the cloud, or locally.

Helping you adopt and operationalize containers also means:

Microservice-based applications and container technologies are now indispensable in our ever-connected era, where distributed computing is a game changer. Adopting and operationalizing containers therefore also means:
- Confidence that containerized applications are developed, deployed, and maintained on validated platforms with appropriate provenance and governance
- Giving developers and operators the ability to build and execute a broad array of microservice-based applications
- Frictionless resources and management to enable flexible deployment of containers as workloads change
- Resilient access to application data regardless of container deployment locality
- Consistent container deployment frameworks, resources, and platforms wherever development and deployment occur