Dataflow programming


In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.[1] Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered by Jack Dennis and his graduate students at MIT in the 1960s.

Considerations

Traditionally, a program is modelled as a series of operations happening in a specific order; this may be referred to as sequential,[2]: p.3  procedural,[3] control flow[3] (indicating that the program chooses a specific path), or imperative programming. The program focuses on commands, in line with the von Neumann[2]: p.3  vision of sequential programming, where data is normally "at rest".[3]: p.7 

In contrast, dataflow programming emphasizes the movement of data and models programs as a series of connections. Explicitly defined inputs and outputs connect operations, which function like black boxes.[3]: p.2  An operation runs as soon as all of its inputs become valid.[4] Thus, dataflow languages are inherently parallel and can work well in large, decentralized systems.[2]: p.3 [5][6]

State

One of the key concepts in computer programming is the idea of state, essentially a snapshot of various conditions in the system. Most programming languages require a considerable amount of state information, which is generally hidden from the programmer. Often, the computer itself has no idea which piece of information encodes the enduring state. This is a serious problem, as the state information needs to be shared across multiple processors in parallel processing machines. Most languages force the programmer to add extra code to indicate which data and parts of the code are important to the state. This code tends to be both expensive in terms of performance, as well as difficult to read or debug. Explicit parallelism is one of the main reasons for the poor performance of Enterprise Java Beans when building data-intensive, non-OLTP applications.[citation needed]

Where a sequential program can be imagined as a single worker moving between tasks (operations), a dataflow program is more like a series of workers on an assembly line, each doing a specific task whenever materials are available. Since the operations are only concerned with the availability of data inputs, they have no hidden state to track, and are all "ready" at the same time.
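
The analogy can be made concrete with a small sketch, here in ordinary Python with threads and queues (an illustration of the model only, not any particular dataflow language): each stage fires whenever an item arrives on its input channel, and keeps no state between items.

    # Two "workers on an assembly line", connected only by explicit data
    # channels; illustrative Python, not a real dataflow runtime.
    import threading
    import queue

    def worker(fn, inbox, outbox):
        """Apply fn to each arriving item; None signals shutdown."""
        while (item := inbox.get()) is not None:
            outbox.put(fn(item))
        outbox.put(None)  # propagate shutdown downstream

    raw, doubled, final = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=worker, args=(lambda x: x * 2, raw, doubled)).start()
    threading.Thread(target=worker, args=(lambda x: x + 1, doubled, final)).start()

    for x in [1, 2, 3]:
        raw.put(x)
    raw.put(None)

    while (result := final.get()) is not None:
        print(result)  # prints 3, 5, 7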

Representation

Dataflow programs are represented in different ways. A traditional program is usually represented as a series of text instructions, which is reasonable for describing a serial system which pipes data between small, single-purpose tools that receive, process, and return. Dataflow programs start with an input, perhaps the command line parameters, and illustrate how that data is used and modified. The flow of data is explicit, often visually illustrated as a line or pipe.

In terms of encoding, a dataflow program might be implemented as a hash table, with uniquely identified inputs as the keys, used to look up pointers to the instructions. When any operation completes, the program scans down the list of operations until it finds the first operation where all inputs are currently valid, and runs it. When that operation finishes, it will typically output data, thereby making another operation become valid.
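
A minimal sketch of that encoding, in Python, might look as follows; the names and structure are illustrative rather than taken from any real runtime.

    # Operations keyed by the value they produce, scanned for readiness
    # after each completion; a sketch of the scheme described above.
    ops = {
        "c": {"inputs": ["a", "b"], "fn": lambda a, b: a + b},
        "d": {"inputs": ["c"],      "fn": lambda c: c * 10},
    }
    values = {"a": 2, "b": 3}  # the inputs that are currently valid

    def step():
        """Run the first operation whose inputs are all valid."""
        for out, op in ops.items():
            if out not in values and all(i in values for i in op["inputs"]):
                values[out] = op["fn"](*(values[i] for i in op["inputs"]))
                return out
        return None  # nothing ready: the program has finished

    while (fired := step()) is not None:
        print(fired, "->", values[fired])  # c -> 5, then d -> 50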

For parallel operation, only the list needs to be shared; it is the state of the entire program. Thus the task of maintaining state is removed from the programmer and given to the language's runtime. On machines with a single processor core where an implementation designed for parallel operation would simply introduce overhead, this overhead can be removed completely by using a different runtime.

Incremental updates

Some recent dataflow libraries such as Differential/Timely Dataflow have used incremental computing for much more efficient data processing.[1][7][8]
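
The idea can be sketched in a few lines of illustrative Python (in the spirit of differential dataflow, not the actual Differential/Timely Dataflow API): a change to one input flows downstream as a delta, so the aggregate is updated in O(1) rather than recomputed over the whole collection.

    class IncrementalSum:
        """Maintains a running sum and updates it from single-key deltas."""
        def __init__(self, values):
            self.values = dict(values)
            self.total = sum(self.values.values())

        def update(self, key, new_value):
            # apply only the delta, instead of re-summing everything
            self.total += new_value - self.values.get(key, 0)
            self.values[key] = new_value

    node = IncrementalSum({"a": 1, "b": 2, "c": 3})
    node.update("b", 10)  # only the delta (10 - 2) flows downstream
    print(node.total)     # 14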

History

A pioneer dataflow language was BLODI (BLOck DIagram), developed in 1961 by John Larry Kelly, Jr., Carl Lochbaum, and Victor A. Vyssotsky for specifying sampled data systems.[9] A BLODI specification of functional units (amplifiers, adders, delay lines, etc.) and their interconnections was compiled into a single loop that updated the entire system for one clock tick.
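
That compilation strategy can be sketched as follows (illustrative Python, not BLODI itself): a small block diagram, an amplifier feeding an adder whose output returns through a one-sample delay line, flattened into a single loop in which each iteration advances the whole system by one clock tick.

    def run(samples, gain=0.5):
        delayed = 0.0                     # state held in the delay line
        out = []
        for x in samples:                 # one iteration = one clock tick
            amplified = gain * x          # amplifier unit
            summed = amplified + delayed  # adder unit
            delayed = summed              # delay line stores for next tick
            out.append(summed)
        return out

    print(run([1.0, 1.0, 1.0]))  # [0.5, 1.0, 1.5]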

In a 1966 Ph.D. thesis, The On-line Graphical Specification of Computer Procedures,[10] Bert Sutherland created one of the first graphical dataflow programming frameworks in order to make parallel programming easier. Subsequent dataflow languages were often developed at the large supercomputer labs. POGOL, an otherwise conventional data-processing language developed at NSA, compiled large-scale applications composed of multiple file-to-file operations, e.g. merge, select, summarize, or transform, into efficient code that eliminated the creation of or writing to intermediate files to the greatest possible extent.[11] SISAL, a popular dataflow language developed at Lawrence Livermore National Laboratory, looks like most statement-driven languages, but variables should be assigned once. This allows the compiler to easily identify the inputs and outputs. A number of offshoots of SISAL have been developed, including SAC, Single Assignment C, which tries to remain as close to the popular C programming language as possible.
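
The single-assignment idea can be illustrated in plain Python (not SISAL or SAC syntax): because each name is bound exactly once, every use points back to a single definition, and the compiler can read the dataflow graph directly off the text.

    def norm(x, y):
        x2 = x * x    # depends only on input x
        y2 = y * y    # depends only on input y; independent of x2, so a
                      # dataflow scheduler could compute both in parallel
        s = x2 + y2   # joins the two branches
        return s ** 0.5

    print(norm(3.0, 4.0))  # 5.0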

The United States Navy funded development of ACOS and SPGN (signal processing graph notation) starting in the early 1980s. This is in use on a number of platforms in the field today.[12]

A more radical concept is Prograph, in which programs are constructed as graphs onscreen, and variables are replaced entirely with lines linking inputs to outputs. Incidentally, Prograph was originally written on the Macintosh, which remained single-processor until the introduction of the DayStar Genesis MP in 1996.[citation needed]

There are many hardware architectures oriented toward the efficient implementation of dataflow programming models.[vague] MIT's tagged token dataflow architecture was designed by Greg Papadopoulos.[undue weight?]

Data flow has been proposed[by whom?] as an abstraction for specifying the global behavior of distributed system components: in the live distributed objects programming model, distributed data flows are used to store and communicate state, and as such, they play the role analogous to variables, fields, and parameters in Java-like programming languages.

Languages

Dataflow programming languages include BLODI, POGOL, SISAL, SAC, and Prograph, discussed above, as well as Lucid, the G language of LabVIEW, the visual patching languages Max/MSP and Pure Data, and the hardware description languages VHDL and Verilog.

Libraries

  • Apache Beam: Java/Scala SDK that unifies streaming (and batch) processing, with several supported execution engines (Apache Spark, Apache Flink, Google Cloud Dataflow, etc.)
  • Apache Flink: Java/Scala library that allows streaming (and batch) computations to be run atop a distributed Hadoop (or other) cluster
  • Apache Spark
  • SystemC: Library for C++, mainly aimed at hardware design.
  • TensorFlow: A machine-learning library based on dataflow programming (a minimal sketch follows this list).
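
As an example of the last of these, TensorFlow's tf.function traces ordinary Python into a dataflow graph whose nodes are operations and whose edges are tensors; a minimal sketch, assuming TensorFlow 2.x is installed:

    import tensorflow as tf

    @tf.function  # traces the body into a dataflow graph
    def affine(x, w, b):
        return tf.matmul(x, w) + b  # two graph nodes: MatMul and Add

    x = tf.constant([[1.0, 2.0]])
    w = tf.constant([[3.0], [4.0]])
    b = tf.constant([[0.5]])
    print(affine(x, w, b))  # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)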


References

  1. ^ a b Schwarzkopf, Malte (7 March 2020). "The Remarkable Utility of Dataflow Computing". ACM SIGOPS. Retrieved 31 July 2022.
  2. ^ Johnston, Wesley M.; Hanna, J. R. Paul; Millar, Richard J. (March 2004). "Advances in Dataflow Programming Languages" (PDF). ACM Computing Surveys. 36 (1): 1–34. doi:10.1145/1013208.1013209. S2CID 5257722. Retrieved 15 August 2013.
  3. ^ Sousa, Tiago Boldt (2012). "Dataflow Programming: Concept, Languages and Applications". Doctoral Symposium on Informatics Engineering. Retrieved 15 August 2013.
  4. ^ a b "Dataflow Programming Basics". Getting Started with NI Products. National Instruments Corporation. Retrieved 15 August 2013.
  5. ^ Harter, Richard. "Data Flow languages and programming - Part I". Richard Harter's World. Archived from the original on 8 December 2015. Retrieved 15 August 2013.
  6. ^ "Why Dataflow Programming Languages are Ideal for Programming Parallel Hardware". Multicore Programming Fundamentals Whitepaper Series. National Instruments Corporation. Retrieved 15 August 2013.
  7. ^ McSherry, Frank; Murray, Derek; Isaacs, Rebecca; Isard, Michael (5 January 2013). "Differential dataflow". Microsoft. Retrieved 31 July 2022.
  8. ^ "Differential Dataflow". Timely Dataflow. 30 July 2022. Retrieved 31 July 2022.
  9. ^ Kelly, John L. Jr.; Lochbaum, Carol; Vyssotsky, Victor A. (1961). "A Block Diagram Compiler". Bell System Technical Journal. 40 (3): 669–676.
  10. ^ Sutherland, William Robert (January 1966). The On-line Graphical Specification of Computer Procedures (Ph.D. thesis). Massachusetts Institute of Technology. Retrieved 2022-08-25.
  11. ^ Gloria Lambert (1973). "Large scale file processing: POGOL". POPL '73: Proceedings of the 1st annual ACM SIGACT-SIGPLAN symposium on Principles of programming languages. ACM. pp. 226–234.
  12. ^ Chan, Y. T., ed. (1989). Underwater Acoustic Data Processing. Kluwer Academic Publishers.
