The following sections give an overview of the Lima2 design.
Lima2 is composed of three main layers that are further documented in their own sections:

- the Camera layer, an abstraction layer that implements the detector-specific code;
- the Processing layer, a task system and a library of image processing algorithms used to build the processing pipeline;
- the Control layer, built on top of the previous two, in charge of preparing and running the acquisition and of building and executing the processing pipeline.
High performance detectors generate more data than a single acquisition backend can handle. The solution consists in running multiple coordinated acquisition backends, each handling only part of the data flow.
Multiple topologies and frame dispatching strategies are to be considered for different use cases.
Use case: Legacy cameras.
Use case: PCO Edge / Dectris Eiger2.
Distributed Acquisition with partial frame dispatching
Use case: PSI Eiger / Jungfrau.
Distributed Acquisition with full frame dispatching
Use case: ESRF RASHPA / Smartpix.
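The two dispatching families can be sketched in a few lines; this is a minimal illustration, not the Lima2 API, and the function names `round_robin` and `partial_split` are hypothetical. Full frame dispatching assigns whole frames to receivers in turn, while partial frame dispatching gives each receiver a fixed slice of every frame (e.g. one detector module per backend).

```python
# Illustrative sketch of two frame-dispatching strategies (not Lima2 code).

def round_robin(num_frames, num_receivers):
    """Full frame dispatching: each receiver gets whole frames in turn."""
    assign = {r: [] for r in range(num_receivers)}
    for f in range(num_frames):
        assign[f % num_receivers].append(f)
    return assign

def partial_split(frame_size, num_receivers):
    """Partial frame dispatching: each receiver gets a contiguous
    (start, stop) slice of every frame."""
    base, rem = divmod(frame_size, num_receivers)
    slices, start = [], 0
    for r in range(num_receivers):
        stop = start + base + (1 if r < rem else 0)
        slices.append((start, stop))
        start = stop
    return slices

print(round_robin(6, 2))    # → {0: [0, 2, 4], 1: [1, 3, 5]}
print(partial_split(10, 3)) # → [(0, 4), (4, 7), (7, 10)]
```

The choice between the two depends on whether the detector interleaves whole frames across links (full dispatching) or splits each frame across modules (partial dispatching).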
Message Passing Interface (MPI) is a standardized message-passing library interface specification.
There are many tutorials on MPI, such as this excellent Introduction to distributed computing with MPI; only the basic concepts are recapped in the following paragraphs.
The message passing programming model
An MPI application is a group of autonomous processes, each executing its own code written in a classic language (C++, Python). All program variables are private and reside in the local memory of each process. Each process may execute different parts of the program. Variables are exchanged between two or more processes via explicit calls to MPI subroutines.
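The model can be illustrated without an MPI runtime. The sketch below uses Python's multiprocessing as a stand-in: each process owns its variables, and data crosses process boundaries only through an explicit send/receive call, analogous to MPI_Send/MPI_Recv.

```python
# Illustrative analogy, not MPI itself: multiprocessing mimics the
# message-passing model. Each process owns private variables; data moves
# between processes only through an explicit channel.
from multiprocessing import Pipe, Process

def worker(conn):
    local = 21             # private to this process
    conn.send(local * 2)   # explicit message, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    print(parent.recv())   # explicit receive, like MPI_Recv; prints 42
    p.join()
```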
Single Program Multiple Data (SPMD)
In this configuration, every process in the cluster runs the same program. The program can have multiple facets or roles, which are assigned according to the process rank.
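A minimal SPMD sketch, assuming a control/receiver role split; the `run` helper is hypothetical, and with real MPI the rank would come from the communicator (e.g. mpi4py's `MPI.COMM_WORLD.Get_rank()`):

```python
# SPMD sketch: every process executes this same script and branches on its
# rank. Under MPI the job would be launched with e.g.:
#   mpirun -n 4 python this_script.py

def run(rank, size):
    if rank == 0:
        # role: control process
        return f"control: coordinating {size - 1} receivers"
    # role: acquisition backend
    return f"receiver {rank}: handling 1/{size - 1} of the dataflow"

for rank in range(4):   # simulate the 4 ranks of a small cluster
    print(run(rank, 4))
```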
Multiple Program Multiple Data (MPMD)
MPI can also run tasks that are different programs, each executing a different part of the workflow. For example, one task can be a C++ program and another a Python program.
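With Open MPI or MPICH, an MPMD job is typically expressed on the mpirun command line using the `:` separator, which launches several executables into the same MPI_COMM_WORLD; the program names below are hypothetical:

```shell
# MPMD launch: one control process (C++) and three receivers (Python)
# share a single MPI_COMM_WORLD (program names are illustrative).
mpirun -n 1 ./lima2_control : -n 3 python lima2_receiver.py
```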
When a program is run with MPI, all the processes are grouped in what is called a communicator. Every communication is tied to a communicator, which defines the set of processes it can reach.
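The grouping behind a communicator can be sketched in plain Python. The `split` function below mimics the regrouping performed by `MPI_Comm_split`, where ranks sharing a "color" end up in the same sub-communicator; both the function and the role assignment are illustrative.

```python
# Sketch of the communicator concept: a communicator is a group of ranks
# sharing a private communication context. `split` regroups the ranks of
# the world communicator by a "color" key, like MPI_Comm_split.

def split(world, color):
    groups = {}
    for rank in world:
        groups.setdefault(color[rank], []).append(rank)
    return groups

world = list(range(4))
# e.g. rank 0 alone in a control communicator, ranks 1-3 in a receiver one
print(split(world, {0: 0, 1: 1, 2: 1, 3: 1}))  # → {0: [0], 1: [1, 2, 3]}
```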
Point to Point
MPI defines four communication modes (standard, buffered, synchronous, and ready), each of which exists in both blocking and nonblocking variants.
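The blocking/nonblocking distinction can be sketched with threads and a queue standing in for the MPI transport; in mpi4py the real equivalents are `comm.send`/`comm.recv` (blocking) and `comm.isend`/`comm.irecv`, whose request objects are completed with `wait()`. Everything below is an analogy, not MPI itself.

```python
# Analogy for blocking vs. nonblocking point-to-point exchange: a queue
# plays the role of the MPI transport between two "ranks".
import queue
import threading

channel = queue.Queue()

def sender():
    channel.put({"frame": 0})        # like a blocking MPI_Send

def nonblocking_recv():
    """Return a request-like object whose wait() completes the receive."""
    result = {}
    def _work():
        result["msg"] = channel.get()  # blocks until a message arrives
    t = threading.Thread(target=_work)
    t.start()
    class Request:
        def wait(self):
            t.join()
            return result["msg"]
    return Request()

req = nonblocking_recv()             # returns immediately, like MPI_Irecv
threading.Thread(target=sender).start()
print(req.wait()["frame"])           # completes the receive; prints 0
```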
MPIStream is an extension of MPI that supports data streams: sequences of data flowing between source and destination processes. Streaming is widely used in signal, image and video processing for its efficiency in pipelining and its effectiveness in reducing memory demand.
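Plain Python generators, not MPIStream itself, can illustrate the streaming idea: frames flow one at a time through the stages, so memory demand stays bounded regardless of stream length. The stage names below are illustrative.

```python
# Streaming analogy with generators: each stage pulls one frame at a time
# from the previous stage, forming a pipeline with bounded memory use.

def source(n):
    """Produce n synthetic frames."""
    for i in range(n):
        yield {"frame": i, "data": [i] * 4}

def dark_correct(frames, dark=1):
    """Subtract a constant dark level from each frame as it flows by."""
    for f in frames:
        f["data"] = [v - dark for v in f["data"]]
        yield f

def reduce_sum(frames):
    """Terminal stage: one scalar per frame."""
    return [sum(f["data"]) for f in frames]

print(reduce_sum(dark_correct(source(3))))  # → [-4, 0, 4]
```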
Talks that influenced the design of Lima2.