Google Cloud Dataflow is a cloud-based data-processing service for both batch and real-time data-streaming applications. Our teams are using Dataflow to create processing pipelines for integrating, preparing and analyzing large data sets, with Apache Beam's unified programming model on top making the pipelines easier to manage. We first featured Dataflow in 2018, and its stability, performance and rich feature set give us the confidence to move it to Trial in this edition of the Radar.
Google Cloud Dataflow is useful in traditional ETL scenarios: it reads data from a source, transforms it and then writes it to a sink, while Dataflow manages the configuration and scaling. Dataflow supports Java, Python and Scala and provides wrappers for connecting to various types of data sources. However, the current version won't let you add additional libraries, which may make it unsuitable for certain data manipulations. You also can't change the Dataflow DAG dynamically; hence, if your ETL has conditional execution flows based on parameters, you may not be able to use Dataflow without workarounds.
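To illustrate the read-transform-write pattern described above, here is a minimal sketch of an Apache Beam pipeline in Python that could be submitted to the Dataflow runner. The project id, bucket paths and the parse_line transform are hypothetical placeholders, not drawn from any of our projects.

```python
# Minimal sketch of a Beam ETL pipeline targeting the Dataflow runner.
# Project, region and bucket names are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    # Hypothetical transformation: turn a "user_id,amount" CSV line into a dict.
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


options = PipelineOptions(
    runner="DataflowRunner",              # use "DirectRunner" to test locally
    project="my-gcp-project",             # hypothetical project id
    region="us-central1",
    temp_location="gs://my-bucket/tmp",   # hypothetical staging bucket
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read from source" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
        | "Transform" >> beam.Map(parse_line)
        | "Write to sink" >> beam.io.WriteToText("gs://my-bucket/output/results")
    )
```

Because the pipeline graph is fixed at submission time, any branching you need has to be expressed as separate pipelines or handled outside Dataflow, which is the limitation around conditional execution flows noted above.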