Stream processing

In computer science, stream processing (also known as event stream processing, data stream processing, or distributed stream processing) is a programming paradigm which views streams, or sequences of events in time, as the central input and output objects of computation. Stream processing encompasses dataflow programming, reactive programming, and distributed data processing.[1] Stream processing systems aim to expose parallel processing for data streams and rely on streaming algorithms for efficient implementation. The software stack for these systems includes components such as programming models and query languages, for expressing computation; stream management systems, for distribution and scheduling; and hardware components for acceleration including floating-point units, graphics processing units, and field-programmable gate arrays.[2]

The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (kernel functions) is applied to each element in the stream. Kernel functions are usually pipelined, and optimal local on-chip memory reuse is attempted, in order to minimize the loss in bandwidth associated with external memory interaction. Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to initiate a direct memory access (DMA) when dependencies become known. Eliminating manual DMA management reduces software complexity, and eliminating hardware-cached I/O reduces the die area that must be devoted to servicing specialized computational units such as arithmetic logic units.
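
As a minimal sketch of uniform streaming (the names and the trivial kernel below are illustrative, not taken from any particular system), one pure per-element function is applied to every element of an input stream:

#include <vector>

// A kernel function: a pure per-element computation with no side effects,
// so the runtime is free to pipeline it or run elements in parallel.
float kernel(float x) { return 2.0f * x + 1.0f; }

// Uniform streaming: the same kernel is applied to every element of the
// input stream to produce the output stream.
std::vector<float> apply_kernel(const std::vector<float>& stream) {
    std::vector<float> out;
    out.reserve(stream.size());
    for (float x : stream) out.push_back(kernel(x));
    return out;
}

Because the kernel touches only its own element, the data dependencies are fully exposed to the compiler and hardware, which is what enables the automated on-chip management described above.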

During the 1980s stream processing was explored within dataflow programming. An example is the language SISAL (Streams and Iteration in a Single Assignment Language).

Applications

Stream processing is essentially a compromise, driven by a data-centric model that works very well for traditional DSP- or GPU-type applications (such as image, video and digital signal processing) but less so for general-purpose processing with more randomized data access (such as databases). Sacrificing some flexibility in the model allows easier, faster, and more efficient execution. Depending on the context, processor design may be tuned for maximum efficiency or traded off for flexibility.

Stream processing is especially suitable for applications that exhibit three application characteristics:

  1. Compute intensity: the number of arithmetic operations per I/O or global memory reference. In many signal processing applications today it is well over 50:1 and increasing with algorithmic complexity.
  2. Data parallelism: the same function is applied to all records of an input stream, and a number of records can be processed simultaneously without waiting for results from previous records.
  3. Data locality: a specific type of temporal locality, common in signal and media processing applications, where data is produced once, read once or twice later in the application, and never read again. Intermediate streams passed between kernels, as well as intermediate data within kernel functions, can capture this locality directly using the stream processing programming model.

Examples of records within streams include:

  1. In computer graphics, each record might be the vertex, normal, and color information for a triangle.
  2. In image processing, each record might be a single pixel from an image.
  3. In a video encoder, each record may be 256 pixels forming a macroblock of data.
  4. In wireless signal processing, each record could be a sequence of samples received from an antenna.

For each record we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable.
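
A small C-style sketch of this constraint (the identifiers are illustrative): the kernel receives read-only input buffers and a write-only output buffer, and no buffer is both.

// Inputs are read-only and the output is write-only; no buffer is both
// readable and writable, which keeps the data dependencies explicit.
void add_streams(const float* in0, const float* in1, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in0[i] + in1[i];
}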

Code examples

By way of illustration, the following code fragments demonstrate detection of patterns within event streams. The first is an example of processing a data stream using a continuous SQL query (a query that executes forever processing arriving data based on timestamps and window duration). This code fragment illustrates a JOIN of two data streams, one for stock orders, and one for the resulting stock trades. The query outputs a stream of all Orders matched by a Trade within one second of the Order being placed. The output stream is sorted by timestamp, in this case, the timestamp from the Orders stream.

SELECT DataStream
    Orders.TimeStamp, Orders.orderId, Orders.ticker,
    Orders.amount, Trade.amount
FROM Orders
JOIN Trades OVER (RANGE INTERVAL '1' SECOND FOLLOWING)
ON Orders.orderId = Trades.orderId;

Another sample code fragment detects weddings among a flow of external "events" such as church bells ringing, the appearance of a man in a tuxedo or morning suit, a woman in a flowing white gown and rice flying through the air. A "complex" or "composite" event is what one infers from the individual simple events: a wedding is happening.

WHEN Person.Gender EQUALS "man" AND Person.Clothes EQUALS "tuxedo"
FOLLOWED-BY Person.Clothes EQUALS "gown" AND
    (Church_Bell OR Rice_Flying)
WITHIN 2 hours
ACTION Wedding

Comparison to prior parallel paradigms

Basic computers started from a sequential execution paradigm. Traditional CPUs are SISD based, which means they conceptually perform only one operation at a time. As the computing needs of the world evolved, the amount of data to be managed increased very quickly. It was obvious that the sequential programming model could not cope with the increased need for processing power. Much effort was spent on finding alternative ways to perform massive amounts of computation, but the only solution was to exploit some level of parallel execution. The result of those efforts was SIMD, a programming paradigm which allowed applying one instruction to multiple instances of (different) data. Most of the time, SIMD was used in a SWAR (SIMD within a register) environment. By using more complicated structures, one could also have MIMD parallelism.

Although those two paradigms were efficient, real-world implementations were plagued with limitations, from memory alignment problems to synchronization issues and limited parallelism. Only a few SIMD processors survived as stand-alone components; most were embedded in standard CPUs.

Consider a simple program adding up two arrays containing 100 4-component vectors (i.e. 400 numbers in total).

Conventional, sequential paradigm

for (int i = 0; i < 400; i++)
    result[i] = source0[i] + source1[i];

This is the sequential paradigm that is most familiar. Variations do exist (such as inner loops, structures and such), but they ultimately boil down to that construct.

Parallel SIMD paradigm, packed registers (SWAR)

for (int el = 0; el < 100; el++) // for each vector
    vector_sum(result[el], source0[el], source1[el]);

This is actually oversimplified. It assumes the instruction vector_sum works. Although this is what happens with instruction intrinsics, much information is actually not taken into account here, such as the number of vector components and their data format. This is done for clarity.

You can see, however, that this method reduces the number of decoded instructions from numElements * componentsPerElement to numElements. The number of jump instructions is also decreased, as the loop is run fewer times. These gains result from the parallel execution of the four mathematical operations.

However, the packed SIMD register holds a fixed amount of data, so it is not possible to obtain more parallelism. The speed-up is somewhat limited by the assumption we made of performing four parallel operations (note that this is common to both AltiVec and SSE).
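
For instance, with x86 SSE intrinsics the hypothetical vector_sum above might look roughly like the following sketch (assuming each element is four packed floats and using unaligned loads to keep the example simple):

#include <xmmintrin.h>

// Sums 100 four-component vectors, one SIMD addition per vector:
// each _mm_add_ps performs the four scalar additions at once.
void vector_sum_sse(float result[100][4], const float source0[100][4],
                    const float source1[100][4]) {
    for (int el = 0; el < 100; el++) {
        __m128 a = _mm_loadu_ps(source0[el]);        // load 4 floats
        __m128 b = _mm_loadu_ps(source1[el]);
        _mm_storeu_ps(result[el], _mm_add_ps(a, b)); // 4 adds in parallel
    }
}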

Parallel stream paradigm (SIMD/MIMD)

// This is a fictional language for demonstration purposes.
elements = array streamElement([number, number])[100]
kernel = instance streamKernel("@arg0[@iter]")
result = kernel.invoke(elements)

In this paradigm, the whole dataset is defined, rather than each component block being defined separately. The data set is described in the first two rows. After that, the result is inferred from the sources and the kernel. For simplicity, there is a 1:1 mapping between input and output data, but this need not be the case. Applied kernels can also be much more complex.

An implementation of this paradigm can "unroll" a loop internally. This allows throughput to scale with chip complexity, easily utilizing hundreds of ALUs.[3][4] The elimination of complex data patterns makes much of this extra power available.

While stream processing is a branch of SIMD/MIMD processing, the two must not be confused. Although SIMD implementations can often work in a "streaming" manner, their performance is not comparable: the stream model envisions a very different usage pattern which allows far greater performance by itself.

It has been noted that when applied to generic processors such as a standard CPU, only a 1.5x speedup can be reached.[5] By contrast, ad-hoc stream processors easily reach over 10x performance, mainly attributed to more efficient memory access and higher levels of parallel processing.[6]

Although there are various degrees of flexibility allowed by the model, stream processors usually impose some limitations on the kernel or stream size. For example, consumer hardware often cannot perform high-precision math, lacks complex indirection chains, or limits the number of instructions that can be executed.

Research

Stanford University stream processing projects included the Stanford Real-Time Programmable Shading Project started in 1999.[7] A prototype called Imagine was developed in 2002.[8] A project called Merrimac ran until about 2004.[9] AT&T also researched stream-enhanced processors as graphics processing units rapidly evolved in both speed and functionality. Since these early days, dozens of stream processing languages have been developed, as well as specialized hardware.

Programming model notes

The most immediate challenge in the realm of parallel processing does not lie so much in the type of hardware architecture used, but in how easy it will be to program the system in question in a real-world environment with acceptable performance. Machines like Imagine use a straightforward single-threaded model with automated dependencies, memory allocation and DMA scheduling. This in itself is a result of the research at MIT and Stanford in finding an optimal layering of tasks between programmer, tools and hardware. Programmers beat tools in mapping algorithms to parallel hardware, and tools beat programmers in figuring out the smartest memory allocation schemes, and so on. Of particular concern are MIMD designs such as Cell, for which the programmer needs to deal with application partitioning across multiple cores, process synchronization, and load balancing.

A drawback of SIMD programming was the issue of array-of-structures (AoS) versus structure-of-arrays (SoA). Programmers often create representations of entities in memory, for example the location of a particle in 3D space, the colour of the ball, and its size, as below:

// A particle in a three-dimensional space.
struct particle_t {
    float x, y, z;          // position
    unsigned char color[3]; // colour, 8 bits per channel (RGB)
    float size;
};

When multiple of these structures exist in memory they are placed end to end, creating an array of structures (AoS) topology. This means that, should some algorithm be applied to the location of each particle in turn, it must skip over the memory locations containing the other attributes. If these attributes are not needed, this results in wasteful usage of the CPU cache. Additionally, a SIMD instruction will typically expect the data it operates on to be contiguous in memory; the elements may also need to be aligned. By moving the memory location of the data out of the structure, the data can be better organised for efficient access in a stream and for SIMD instructions to operate on. A structure of arrays (SoA), as shown below, can allow this:

struct particle_t {
    float *x, *y, *z;       // parallel arrays of positions
    unsigned char *colorRed, *colorGreen, *colorBlue;
    float *size;
};

Instead of holding the data in the structure, it holds only pointers (memory locations) to the data. A shortcoming is that if multiple attributes of an object are to be operated on, they might now be distant in memory and so result in a cache miss. The alignment and any needed padding lead to increased memory usage. Overall, memory management may be more complicated if structures are added and removed, for example.
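
To make the SoA benefit concrete, here is a small sketch (the field and function names are hypothetical) of a SIMD pass over the contiguous x coordinates, which an AoS layout would have scattered between the other attributes:

#include <xmmintrin.h>

// With SoA, four consecutive x coordinates are contiguous in memory,
// so one SIMD load fetches the positions of four particles at once.
void shift_x(float* x, int count, float dx) {
    __m128 d = _mm_set1_ps(dx);
    int i = 0;
    for (; i + 4 <= count; i += 4)
        _mm_storeu_ps(&x[i], _mm_add_ps(_mm_loadu_ps(&x[i]), d));
    for (; i < count; ++i)  // scalar tail for leftover elements
        x[i] += dx;
}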

For stream processors, the usage of structures is encouraged. From an application point of view, all the attributes can be defined with some flexibility. Taking GPUs as reference, there is a set of attributes (at least 16) available. For each attribute, the application can state the number of components and the format of the components (but only primitive data types are supported for now). The various attributes are then attached to a memory block, possibly defining a stride between 'consecutive' elements of the same attribute, effectively allowing interleaved data. When the GPU begins the stream processing, it gathers all the various attributes into a single set of parameters (usually this looks like a structure or a "magic global variable"), performs the operations, and scatters the results to some memory area for later processing (or retrieval).
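
As an illustration of how such attribute bindings might be described (a hypothetical descriptor, loosely modeled on vertex-attribute setup in graphics APIs such as OpenGL's glVertexAttribPointer, not an actual GPU interface):

#include <cstddef>

// Describes one attribute within an interleaved memory block.
struct AttributeDesc {
    int         components;  // e.g. 3 for an xyz position
    std::size_t offset;      // byte offset of the attribute in one element
    std::size_t stride;      // bytes between two consecutive elements
};

// Example: position (3 floats) and size (1 float) interleaved in memory,
// so each element occupies 4 floats and both attributes share the stride.
const AttributeDesc position  = { 3, 0,                 4 * sizeof(float) };
const AttributeDesc size_attr = { 1, 3 * sizeof(float), 4 * sizeof(float) };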

More modern stream processing frameworks provide a FIFO-like interface to structure data as a literal stream. This abstraction provides a means to specify data dependencies implicitly while enabling the runtime/hardware to take full advantage of that knowledge for efficient computation. One of the simplest and most efficient stream processing modalities to date for C++ is RaftLib, which enables linking independent compute kernels together as a data flow graph using C++ stream operators. As an example:

#include <raft>
#include <raftio>
#include <cstdlib>
#include <string>

class hi : public raft::kernel
{
public:
    hi() : raft::kernel()
    {
        output.addPort< std::string >( "0" );
    }
    virtual raft::kstatus run()
    {
        output[ "0" ].push( std::string( "Hello World\n" ) );
        return( raft::stop );
    }
};

int main( int argc, char **argv )
{
    raft::print< std::string > p;  // print kernel (consumer)
    hi hello;                      // hello world kernel (producer)
    raft::map m;
    m += hello >> p;               // link kernels into a data flow graph
    m.exe();                       // execute; kernels run concurrently
    return( EXIT_SUCCESS );
}

Models of computation for stream processing

Apart from specifying streaming applications in high-level languages, models of computation (MoCs), such as dataflow models and process-based models, have also been widely used.

Generic processor architecture

Historically, CPUs began implementing various tiers of memory access optimizations because of ever-increasing CPU performance compared to the relatively slowly growing external memory bandwidth. As this gap widened, large amounts of die area were dedicated to hiding memory latencies. Since fetching information and opcodes to those few ALUs is expensive, very little die area is dedicated to actual mathematical machinery (as a rough estimate, consider it to be less than 10%).

A similar architecture exists on stream processors, but thanks to the new programming model, the number of transistors dedicated to management is actually very small.

From a whole-system point of view, stream processors usually exist in a controlled environment: GPUs exist on an add-in board (this also applies to Imagine), while CPUs continue to do the job of managing system resources, running applications, and so on.

The stream processor is usually equipped with a fast, efficient, proprietary memory bus (crossbar switches are now common; multi-buses were employed in the past). The exact number of memory lanes depends on the market range. As this is written, there are still 64-bit-wide interconnections around (entry-level). Most mid-range models use a fast 128-bit crossbar switch matrix (4 or 2 segments), while high-end models deploy huge amounts of memory (actually up to 512 MB) with a slightly slower crossbar that is 256 bits wide. By contrast, standard processors from the Intel Pentium to some Athlon 64 models have only a single 64-bit-wide data bus.

Memory access patterns are much more predictable. While arrays do exist, their dimension is fixed at kernel invocation. The closest thing to multiple pointer indirection is an indirection chain, which is, however, guaranteed to finally read or write to a specific memory area (inside a stream).

Because of the SIMD nature of the stream processor's execution units (ALU clusters), read/write operations are expected to happen in bulk, so memories are optimized for high bandwidth rather than low latency (this is a difference from Rambus and DDR SDRAM, for example). This also allows for efficient memory bus negotiations.

Most (90%) of a stream processor's work is done on-chip, requiring only 1% of the global data to be stored to memory. This is where knowing the kernel temporaries and dependencies pays off.

Internally, a stream processor features some clever communication and management circuits, but what is interesting is the Stream Register File (SRF). This is conceptually a large cache in which stream data is stored to be transferred to external memory in bulk. As a cache-like, software-controlled structure for the various ALUs, the SRF is shared among all the ALU clusters. The key concept and innovation, demonstrated with Stanford's Imagine chip, is that the compiler is able to automate and allocate memory in an optimal way, fully transparently to the programmer. The dependencies between kernel functions and data are known through the programming model, which enables the compiler to perform flow analysis and optimally pack the SRFs. Commonly, this cache and DMA management can take up the majority of a project's schedule, something the stream processor (or at least Imagine) totally automates. Tests done at Stanford showed that the compiler scheduled memory as well as or better than hand-tuned implementations produced with much effort.

There is proof that there can be a lot of clusters, because inter-cluster communication is assumed to be rare. Internally, however, each cluster can efficiently exploit only a much smaller number of ALUs, because intra-cluster communication is common and thus needs to be highly efficient.

To keep those ALUs fed with data, each ALU is equipped with local register files (LRFs), which are basically its usable registers.

This three-tiered data access pattern makes it easy to keep temporary data away from slow memories, making the silicon implementation highly efficient and power-saving.

Hardware-in-the-loop issues

Although an order-of-magnitude speedup can reasonably be expected (even from mainstream GPUs when computing in a streaming manner), not all applications benefit from it. Communication latencies are actually the biggest problem. Although PCI Express improved this with full-duplex communications, getting a GPU (and possibly a generic stream processor) to work can still take a long time, which means it is usually counter-productive to use them for small datasets. Because changing the kernel is a rather expensive operation, the stream architecture also incurs penalties for small streams, a behaviour referred to as the short stream effect.
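
A back-of-envelope sketch of this trade-off (all the numbers below are assumptions chosen for illustration, not measurements): offloading only wins once the stream is long enough to amortize the fixed transfer latency.

#include <cstdio>

int main() {
    const double latency = 10e-6;        // assumed fixed offload cost, seconds
    const double bus_bw  = 8e9;          // assumed bus bandwidth, bytes/second
    const double cpu_tp  = 1e9;          // assumed CPU throughput, elements/s
    const double gpu_tp  = 20e9;         // assumed stream throughput, elements/s
    const double bytes_per_element = 8;  // input plus output traffic

    // Compare in-place CPU time against transfer + stream-processor time.
    for (double n = 1e3; n <= 1e9; n *= 10) {
        double t_cpu = n / cpu_tp;
        double t_off = latency + n * bytes_per_element / bus_bw + n / gpu_tp;
        std::printf("n=%9.0f  cpu=%.3e s  offload=%.3e s  %s\n",
                    n, t_cpu, t_off,
                    t_off < t_cpu ? "offload wins" : "CPU wins");
    }
    return 0;
}

Under these illustrative numbers the crossover sits around tens of thousands of elements; below that, the fixed latency dominates and the CPU wins, which is the short stream effect in miniature.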

Pipelining is a very widespread and heavily used practice on stream processors, with GPUs featuring pipelines exceeding 200 stages. The cost of switching settings depends on the setting being modified, but it is now considered always expensive. To avoid these problems at various levels of the pipeline, many techniques have been deployed, such as "über shaders" and "texture atlases". These techniques are game-oriented because of the nature of GPUs, but the concepts are interesting for generic stream processing as well.

Examples

Stream programming libraries and languages

Most programming languages for stream processors start with Java, C or C++ and add extensions which provide specific instructions to allow application developers to tag kernels and/or streams. This also applies to most shading languages, which can be considered stream programming languages to a certain degree.

Non-commercial examples of stream programming languages include:

Commercial implementations are either general purpose or tied to specific hardware by a vendor. Examples of general purpose languages include:

Vendor-specific languages include:

Event-Based Processing

Batch file-based processing (emulates some actual stream processing, but with much lower performance in general)

Continuous operator stream processing

Stream processing services:

Notes and References

  1. Jonathan Beard. "A Short Intro to Stream Processing". https://www.jonathanbeard.io/blog/2015/09/19/streaming-and-dataflow.html
  2. "FCUDA: Enabling Efficient Compilation of CUDA Kernels onto FPGAs". http://impact.crhc.illinois.edu/shared/papers/fcuda2009.pdf
  3. "A Programmable 512 GOPS Stream Processor for Signal, Image, and Video Processing". IEEE Journal of Solid-State Circuits. Stanford University and Stream Processors, Inc.
  4. Khailany, Dally, Rixner, Kapasi, Owens and Towles. "Exploring VLSI Scalability of Stream Processors". Stanford and Rice University.
  5. Gummaraju and Rosenblum. "Stream Processing in General-Purpose Processors". Stanford University.
  6. Kapasi, Dally, Rixner, Khailany, Owens, Ahn and Mattson. "Programmable Stream Processors". Universities of Stanford, Rice, California (Davis) and Reservoir Labs.
  7. Eric Chan. "Stanford Real-Time Programmable Shading Project". Research group web site. Retrieved March 9, 2017.
  8. "The Imagine - Image and Signal Processor". Group web site. Retrieved March 9, 2017.
  9. "Merrimac - Stanford Streaming Supercomputer Project". Group web site. Archived from the original on December 18, 2013 (https://web.archive.org/web/20131218194055/http://merrimac.stanford.edu/). Retrieved March 9, 2017.
  10. "Imagine". http://cva.stanford.edu/projects/imagine/
  11. "Merrimac". http://merrimac.stanford.edu/
  12. Memeti, Suejb; Pllana, Sabri (October 2018). "HSTREAM: A Directive-Based Language Extension for Heterogeneous Stream Computing". 2018 IEEE International Conference on Computational Science and Engineering (CSE). IEEE. pp. 138-145. doi:10.1109/CSE.2018.00026. arXiv:1809.09387. ISBN 978-1-5386-7649-3.
  13. "PeakStream unveils multicore and CPU/GPU programming solution". Ars Technica. https://arstechnica.com/news.ars/post/20060918-7763.html
  14. "TStreams: A Model of Parallel Computation".
  15. "TStreams: How to Write a Parallel Program".
  16. "GitHub - walmartlabs/Mupd8: Muppet". Web site.