Case-Modelling was in beta during the Blueriq 15 major version. As of the latest minor release, all Blueriq 16.0 changes have been merged back into Blueriq 15.13. From this point on, Case-Modelling is out of beta and can be used.

Make sure to upgrade to at least Blueriq 15.13 when using Case-Modelling, especially in a production environment. Earlier minor versions are not supported.

This page describes known model constructions and patterns to consider when preparing your DCM solution for the event-driven setup.

Case profile using aggregates

In the future, we want to work towards the concept of a case dossier and an easy-to-maintain case state. In the current situation, the state of the case is often stored in different modules, because there is a need for a dossier (stored in multiple aggregates) and a process state (stored using a process engine). These datasets need to stay in sync with each other for the whole state of the case to be consistent. In the DCM 2.0 setup, the assumption is made that the case state consists of three parts: “metadata” (aggregate), “dossier” (aggregate) and “process” (process-sql-store). The case engine helps you keep this dataset consistent by storing the trinity for you. There are several characteristics to keep in mind when you want to adopt the new setup.
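
Blueriq cases are defined in the modeling environment rather than in code, but the trinity can be pictured with the following conceptual sketch (Java is used only as an analogy; all names are hypothetical, not Blueriq API):

record CaseState(
        String metadataAggregateId, // "metadata": shared between case types
        String dossierAggregateId,  // "dossier": specific to the case type
        String processId) {         // process state in the process-sql-store
}

The case engine persists all three references together, which is what keeps the dataset consistent.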

Metadata Aggregate

Metadata is a data definition that is shared between different case types. It contains characteristics that are relevant for multiple case types (like the people involved in handling a case), making it possible to search regardless of case type and to create (filtered) lists.

In the solution, this aggregate is also responsible for maintaining links to any relevant data sources, such as other aggregates, the process ID and any other external data sources. In the DCM setup, all three data points are created at case creation. The identifiers of the dossier and the process are added to the metadata aggregate during creation.

In order to keep the aggregate definition as general as possible, a number of restrictions apply to the metadata aggregate (a conceptual sketch follows the list below):

  • There is always one singleton, containing the Case_Metadata details
  • There is always a multiton entity where the dossier aggregate ID is stored, which contains a relation to the Case_Metadata entity (the instance and the relation are set implicitly by the DCM_CaseCreate service)
  • There are no other singletons in the Metadata Aggregate definition
  • All other multitons are stored in the aggregate implicitly by their relation to the singleton
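
Aggregates themselves are defined in the Blueriq modeling environment, so the following Java sketch is only an analogy of the shape these restrictions enforce (all names except Case_Metadata and DCM_CaseCreate are hypothetical):

class Case_Metadata {          // the one singleton in the aggregate
    String caseName;           // characteristics shared across case types
    String assignee;
}

class DossierReference {       // multiton holding the dossier aggregate ID
    String dossierAggregateId;
    Case_Metadata metadata;    // relation set implicitly by DCM_CaseCreate
}

class InvolvedPerson {         // any other multiton is stored implicitly
    String name;               // through its relation to the singleton
    Case_Metadata metadata;
}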

Dossier Aggregate

The dossier aggregate contains the case-specific data and will be defined for each case type. The aggregate type should be unique for each case type.

Process profile

The process is always maintained in the case engine. In the new setup, the process is always updated using data from the metadata and dossier aggregates, so keep in mind that any data in your task implementation that is not stored in the dossier aggregate will not be used to update the process state.

A good practice is to keep the process state completely derivable from rules and attributes in the case dossier. Since the process state can be hard to debug and restore, changing rules is easier when there is a single source (the dossier) from which the process state follows.
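
As an illustration (hypothetical attributes, sketched in Java; in a Blueriq model this would be an expression over dossier attributes), a derivable process state could look like this:

class DerivationSketch {
    record Dossier(boolean documentsComplete, boolean paymentReceived) {}

    // The status follows from the dossier alone: replaying the same dossier
    // always yields the same process state, which simplifies debugging.
    static String deriveStatus(Dossier dossier) {
        if (!dossier.documentsComplete()) {
            return "AWAITING_DOCUMENTS";
        }
        return dossier.paymentReceived() ? "READY_FOR_REVIEW" : "AWAITING_PAYMENT";
    }
}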

Loading case data using aggregates

When a task is started, the case engine will gather all relevant information based on the task and other parameters set in the DCM_ExecuteTask service. The complete dataset will be loaded at the start of a task.


When the case is loaded in different contexts (for example, to show the details of the case), the metadata aggregate should always be loaded first (using the aggregate ID, or found by searching on some characteristic known in the metadata). Once the metadata is loaded, the profile contains all information on which external sources are available to load next.
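
Sketched in Java (the store interfaces below are hypothetical, not part of the Blueriq API), the loading order is:

class CaseLoadingSketch {
    record Metadata(String dossierAggregateId, String processId) {}
    record Dossier() {}

    interface MetadataStore { Metadata findByAggregateId(String id); }
    interface DossierStore { Dossier findByAggregateId(String id); }

    static Dossier loadCase(MetadataStore metadataStore, DossierStore dossierStore,
                            String metadataAggregateId) {
        // 1. Always load the metadata aggregate first (by ID, or via search).
        Metadata metadata = metadataStore.findByAggregateId(metadataAggregateId);
        // 2. The metadata now provides the references to the other sources.
        return dossierStore.findByAggregateId(metadata.dossierAggregateId());
    }
}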


Loading/Updating case data when executing tasks

The process and aggregates will be updated in the case engine, so there is no need to save them in the task implementation anymore. The system uses the aggregate definition as the contract between a task and the case engine. This means that all data which is needed to map to the process profile should be available in the aggregate definition and in the module containing all logic elements.

The mapping performed when loading a case will also be performed in the case engine. The aggregates are loaded, the mapping is performed, and the aggregate data is then sent back to the Runtime and loaded in the task implementation. All process data which is needed in the task implementation should be available in the aggregate definition (otherwise it will not be sent to the task implementation).
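
Conceptually the contract acts as a filter, as in this hypothetical sketch (not the actual engine code): only attributes that are part of the aggregate definition survive the round trip back to the task implementation.

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

class ContractSketch {
    // Data that is not covered by the aggregate definition is dropped here.
    static Map<String, Object> filterByContract(Map<String, Object> processData,
                                                Set<String> aggregateDefinition) {
        return processData.entrySet().stream()
                .filter(entry -> aggregateDefinition.contains(entry.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}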

Task execution

In the new setup, task execution behaves asynchronously by default. Starting a (manual) task is a synchronous action (the user wants to start their task immediately), but completing a task is asynchronous. Performing automatic tasks is always asynchronous. Some effects of this behaviour are explained below.

Order of relevant actions cannot be predetermined

All actions that become relevant (for example automatic tasks, expired timers, expired tasks) are queued at that moment. They are executed once they reach the front of the queue, which makes it impossible to predetermine their order. When constructing a model, the business engineer should consider that certain actions can become relevant at the same time; when this happens, the model should never depend on any particular order.
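
As an illustration (a hypothetical Java sketch): handlers that derive their result from the current case state commute, so the outcome is the same whichever event happens to be processed first.

class OrderSketch {
    private boolean timerExpired;
    private boolean taskCompleted;
    private String status = "OPEN";

    void onTimerExpired()  { timerExpired = true;  recompute(); }
    void onTaskCompleted() { taskCompleted = true; recompute(); }

    // Derived purely from state, never from the order in which the two
    // events above were handled.
    private void recompute() {
        status = taskCompleted ? "DONE" : (timerExpired ? "ESCALATED" : "OPEN");
    }
}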

Automatic tasks

Automatic tasks are not performed inline but pushed to a queue to be executed by some runtime. This can be the same runtime that end-users are working on, or a dedicated runtime for performing automatic tasks (for example in another zone of the infrastructure). Automatic tasks are always performed by the user “automatic-user”. This user does not have any roles or authentication for external use. When system-to-system authentication is necessary during task execution, configure it at the specific service calls, for example by setting up system-to-system authentication on the individual web service calls.

When modelling an automatic task, it is necessary to have the implementation flow end in an exit with an OK exit event. This way, both the event and the task are deemed processed if the task is executed correctly in its entirety. If such an implementation flow ends in a CANCEL event instead, the event itself will be deemed processed while the task in the process is not, leading to an inconsistent state between the event queue and the process (without an easy way to fix it). Conceptually, an automatic task should always be started with the intent to finish it correctly; errors and other unexpected behaviour will lead to unprocessed events with accompanying methods to resolve them (through the maintenance app). Error handling should be done through event processing and the maintenance app, not through patterns within the model for automatic tasks.

Case locks

All actions (performing a task, executing an automatic task) will load the case state, change something, and update the whole case state. When multiple tasks are performed simultaneously, there is a possibility that they interfere with one another, trying to modify the same data. To avoid any possible data loss in this event-driven landscape, case locking is introduced on all actions and is always enabled. When multiple events are queued to be processed on the same case, they are always processed serially: each event waits for the previous one to finish.
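
Conceptually, this serialization resembles chaining the events per case, as in the following hypothetical sketch (not the engine's actual implementation):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class CaseLockSketch {
    // One chain per case: events on the same case run one after another,
    // while events on different cases proceed independently.
    private final Map<String, CompletableFuture<Void>> chains = new ConcurrentHashMap<>();

    void submit(String caseId, Runnable event) {
        chains.compute(caseId, (id, previous) ->
                previous == null
                        ? CompletableFuture.runAsync(event)
                        : previous.thenRunAsync(event));
    }
}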

The action of performing a task starts with a synchronous call that starts the task and sets a lock; the case engine will then only process a task completion event corresponding to that same task. All other events (for example an expired timer) will wait until the user has completed or cancelled the task.

Finishing a case

Whenever the process for a case finishes, the case will be closed. This means that the following persistent parts of a case will be removed:

  • The process and its tasks and data in the process-sql-store
  • The case and its manual tasks in the DCM Lists Service's MongoDB database
  • The case document in MongoDB
  • Any remaining entries in the scheduler database
  • The case aggregate and the case data aggregate

We are aware that it may not be desirable to remove the aggregates, for example if your process doesn't have steps for archival. Therefore, we introduced a property to control this behavior:

application-case-engine.properties
blueriq.case.engine.data.remove-aggregates-on-case-close = false

The default value for this property is true.