The Case-Modelling stack is based on the Process-Modelling Blueriq models. Both strategies use a DCM Foundation as a starting package, which customers can use to build their own solutions. This section highlights some of the differences between the Process-Modelling Foundation and the Case-Modelling Foundation.
Services and containers
In the Case-Modelling setup, all DCM related services and containers have been changed. Most services work much like the Process-Modelling ones, but behave differently in the background. Whereas all Process-Modelling services are processed by the in-line process engine present in the same model and Runtime, the Case-Modelling services are sent from the Runtime to the Case-Engine, which returns its results.
Another important difference between the services is that the Process-Modelling services use a process-id as input and/or output (confusingly called process-id or case-id), which is of type Number, whereas the Case-Modelling services and containers use a case-id as input and/or output, which is of type Text.
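This type change is usually the most visible difference when adjusting mappings. Purely as an illustration (the names and values below are assumptions, not Blueriq identifiers), the difference looks like this:

```typescript
// Illustrative only: these type aliases are not Blueriq identifiers.
type ProcessModellingCaseId = number; // Process-Modelling process-id/case-id: Blueriq type Number
type CaseModellingCaseId = string;    // Case-Modelling case-id: Blueriq type Text

// A mapping that passed a numeric id in the old setup must pass a textual id in the new one.
const oldCaseId: ProcessModellingCaseId = 4711;                 // example value
const newCaseId: CaseModellingCaseId = "2c9d8a1e-example-case"; // example value, format assumed

console.log(typeof oldCaseId, typeof newCaseId); // "number" "string"
```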
The Process-Modelling and Case-Modelling services can be found in the BlueriqLibraries package; you only need one or the other (both implementations cannot exist in one model or Runtime). The new services are explained in more detail in the sections below. The Case-Modelling variants have a DCM_ prefix. Some of the services are new, and others are no longer available in the Case-Modelling setup.
Since the containers and services differ in their details, we cannot migrate them from one type to the other, so you have to alter them manually in the models.
Process-Modelling library - Services
Case-Modelling library - Services
Process-Modelling library - Containers
Case-Modelling library - Containers
DCM_CaseCreate service migration
DCM_CaseCreate is a new service in Case-Modelling. A case always consists of two aggregates (a case aggregate and a dossier aggregate) and a process. During case creation, these elements have to be created and initialised. In the Process-Modelling Foundation, this is done by creating the elements and then updating them with cross references to each other. In Case-Modelling, all elements (with their references) are stored in a case document, so the Case-Engine can start a case for you in one go. The DCM_CaseCreate service sends a message to the Case-Engine to start a case. When this asynchronous process has finished, the user will find the newly added case on their overview screen. Below is the model comparison between Process-Modelling and Case-Modelling in the Foundation.
Process-Modelling Foundation
A case is created by calling a message event to start a process, then creating the aggregates, and after that sending another message event telling the process to really start (and passing some IDs). Intake flow:
The Create Case flow (create the aggregates):
Corresponding Process flow (two message events and an automatic task are needed to kickstart the process):
Case-Modelling Foundation
In the Case-Modelling Foundation, only the DCM_CaseCreate service is needed to start a case; it requires just one process start node, the definitions of the Case and Case-Dossier aggregates, and a mapping. It performs all actions, and only when these actions have been successful is the case released to the users. When anything fails, all actions are rolled back, so that the start-case can be repeated. Intake flow:
Create a case:
DCM_CaseCreate service:
Corresponding process (only the start event is relevant for the case create):
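As a rough summary of the behaviour described above, here is a minimal, hypothetical sketch of the all-or-nothing character of DCM_CaseCreate; the helper names and values are assumptions, not Blueriq code:

```typescript
// Conceptual sketch only, not Blueriq code: the all-or-nothing behaviour of DCM_CaseCreate.
interface CaseDocument {
  caseId: string;
  caseAggregate: Record<string, unknown>;
  dossierAggregate: Record<string, unknown>;
}

const releasedCases: CaseDocument[] = []; // stands in for the user's overview screen

function startProcess(caseId: string): void {
  if (!caseId) {
    throw new Error("no case-id supplied");
  }
  console.log(`process started for ${caseId}`);
}

function createCase(caseId: string): void {
  try {
    // All elements, including their cross references, end up in a single case document.
    const doc: CaseDocument = {
      caseId,
      caseAggregate: { createdBy: "intake" },
      dossierAggregate: { caseId },
    };
    startProcess(caseId);    // may fail
    releasedCases.push(doc); // only a fully created case becomes visible to users
  } catch (error) {
    // Nothing was released, so the start-case can simply be repeated.
    console.log(`Case creation for ${caseId} rolled back:`, error);
  }
}

createCase("case-1"); // succeeds and is released
createCase("");       // fails, nothing is released
```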
DCM_ReadCase service migration
In the Process-Modelling Foundation, cases are read by reading aggregates. The case aggregate has to be read to get to the references of the dossier aggregate and the process. With the DCM_ReadCase service, all of this can be done using a case-id. This service returns one or two aggregates and loads them into the profile. A process-id is also no longer needed, since all process-related services use a case-id in the Case-Modelling context.
Process-Modelling Foundation
Multiple aggregates are read using case-aggregate-id:
In the aggregates, references to the other aggregates and to the process-id are needed.
Case-Modelling Foundation
Case is read using case-id:
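To contrast the two approaches, the following hypothetical sketch (not the Blueriq API; the store and field names are assumed) shows how a single textual case-id resolves both aggregates at once:

```typescript
// Sketch only: a single textual case-id now resolves both aggregates in one call,
// instead of following cross references between aggregate ids and a process-id.
interface CaseData {
  caseAggregate: Record<string, unknown>;
  dossierAggregate: Record<string, unknown>;
}

// Hypothetical in-memory store standing in for the Case-Engine.
const caseStore = new Map<string, CaseData>([
  ["case-1", { caseAggregate: { status: "Open" }, dossierAggregate: { applicant: "Jansen" } }],
]);

function readCase(caseId: string): CaseData {
  const data = caseStore.get(caseId);
  if (data === undefined) {
    throw new Error(`Unknown case ${caseId}`);
  }
  return data; // both aggregates are returned and can be loaded into the profile
}

console.log(readCase("case-1"));
```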
DCM_CaseList and DCM_WorkList container migrations
In the Process-Modelling Foundation, AQ_AggregateList containers are used as caselists. After that, the case aggregate has to be loaded to get to the process-id, which can be used to filter worklists. In the Case-Modelling Foundation, only case-ids are used as inputs/outputs of the containers. This means all caselists should be migrated to DCM_CaseList containers, and the AQ_WorkList containers should be migrated to DCM_WorkList containers. Both containers have been enhanced with extra information that was not there before. The case-metadata and dossier-metadata (both modelled as the metadata on the aggregates) are stored so that they can be queried in the list containers. This means all labels (case-metadata, which is global, and dossier-metadata, which is case-type specific) can be used for showing and filtering data in the lists. These can be found as new field types in the columns. The case aggregate should be in scope of the model where the list is used (so it makes sense to keep the case-data aggregate type in a shared library), and the dossier aggregate can be used by typing the desired field (when a case type does not have this characteristic, it is treated as unknown).
Another advantage of the lists is that they are served by the DCM-Lists-Service. The lists are therefore stored in the database in such a way that they can be queried more efficiently.
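As an illustration of how the two kinds of metadata behave in list filtering, here is a small hypothetical sketch; the field names and the in-memory list are assumptions, not the DCM-Lists-Service API:

```typescript
// Sketch only, not the DCM-Lists-Service API: case-metadata is shared by every case type,
// dossier-metadata is case-type specific and may be absent, in which case it is unknown.
interface CaseListEntry {
  caseId: string;
  caseMetadata: { status: string; owner: string };      // global definition, shared library
  dossierMetadata: Record<string, string | undefined>;  // case-type specific fields
}

const entries: CaseListEntry[] = [
  { caseId: "a", caseMetadata: { status: "Open", owner: "alice" }, dossierMetadata: { region: "North" } },
  { caseId: "b", caseMetadata: { status: "Open", owner: "bob" }, dossierMetadata: {} }, // other case type
];

// Filtering on a dossier-metadata field excludes cases for which the field is unknown.
const filtered = entries.filter(
  (entry) => entry.caseMetadata.status === "Open" && entry.dossierMetadata["region"] === "North",
);
console.log(filtered.map((entry) => entry.caseId)); // ["a"]
```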
Process-Modelling Foundation
Aggregate lists are used for caselists:
Case-Modelling Foundation
DCM_CaseList containers are used for caselists (with their new field types):
Task execution migration
In the Process-Modelling Foundation, you had to make sure that the aggregates were read and updated at the beginning and end of a task. In the Case-Modelling solution, the DCM_ExecuteTask service does that for you, making sure that the case data is consistent before any other action on the case can be started. This means that in all tasks the Read and Update subflows can be deleted, making it easier to see what the task is actually doing.
This also means that some maintenance flows (where aggregates are updated, but the process somehow is not) are no longer needed, and the models can be simplified.
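The read/update bracket can be pictured as follows; this is a conceptual sketch with assumed helper names, not the actual DCM_ExecuteTask implementation:

```typescript
// Conceptual sketch, not Blueriq code: DCM_ExecuteTask brackets the modelled task
// with an automatic read before it and an automatic update after it.
type Profile = Record<string, unknown>;

function readAggregates(caseId: string): Profile {
  return { caseId, status: "Open" }; // stand-in for loading the case and dossier aggregates
}

function updateAggregates(caseId: string, profile: Profile): void {
  console.log(`persisting case ${caseId}`, profile); // stand-in for writing them back
}

function executeTask(caseId: string, task: (profile: Profile) => void): void {
  const profile = readAggregates(caseId); // done for you, no Read subflow in the task
  task(profile);                          // the task flow only contains its own logic
  updateAggregates(caseId, profile);      // done for you, no Update subflow in the task
}

executeTask("case-1", (profile) => {
  profile["status"] = "Assessed";
});
```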
Process-Modelling Foundation
Check the Read and Update services at each task.
Case-Modelling Foundation
There is no need to Read and Update the aggregates, since the Case-Engine does this for you.
Aggregate definitions
The previous DCM Foundation models contained two different aggregate types: Case-Metadata and Case-Dossier. In Case-Modelling, we have adopted these two aggregates, but made sure that the Case-Engine updates the data instead of the Runtime. The idea is that the Case-Metadata is kept consistent across all case types, so that lists in your dashboard can be queried using this global definition. The Case-Dossier contains the case-type-specific data (please make sure the aggregate definition is always bound to a model, so create a new type for each case type, making sure you know which model belongs to which aggregate type).
Since the Case-Engine, and no longer the Runtime, is in charge of these aggregates, it should no longer be possible to update or delete them using anything other than the Case-Engine.
When migrating to DCM Case-Modelling, take into account that the aggregate definitions are used from the model. During the DCM_CaseCreate, the business engineer determines which aggregate definition serves as the Case-Metadata type and which serves as the Case-Dossier aggregate type. This information is stored in the case document during creation and cannot be altered later. Some aggregate settings are unavailable:
- Instances cannot be stored using the input field "Store Instances". Each aggregate can contain the singletons from the definition, plus all related instances modelled in the definition. So all instances to store must be reachable through the relations given in the aggregate definition (see the sketch after this list).
- Aggregate versioning is unsupported
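The reachability rule from the first point can be illustrated with a small hypothetical sketch (the instance structure and names are assumptions, not Blueriq code):

```typescript
// Sketch of the reachability rule: an aggregate stores the singletons of its definition
// plus every instance that can be reached from them through the modelled relations.
interface Instance {
  name: string;
  relations: Instance[];
}

function collectStoredInstances(singletons: Instance[]): string[] {
  const stored = new Set<string>();
  const stack = [...singletons];
  while (stack.length > 0) {
    const current = stack.pop();
    if (current === undefined || stored.has(current.name)) {
      continue;
    }
    stored.add(current.name);
    stack.push(...current.relations);
  }
  return [...stored];
}

const address: Instance = { name: "Address-1", relations: [] };
const person: Instance = { name: "Person-1", relations: [address] };
// An instance without a relation path from the singletons is not stored.
console.log(collectStoredInstances([person])); // ["Person-1", "Address-1"]
```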
Case migrations
Cases that were constructed using the Process-Modelling models cannot be migrated to the new situation automatically. Since the current models can be very diverse, it is not possible to provide a generic migration script. However, there are some possibilities to ease the migration.
In an ideal situation, the cases that were started pre-migration are finished in the old setup. You could, for example, start new cases in Case-Modelling and finish the old cases in Process-Modelling. Keep in mind that one Runtime can be either a Process-Modelling interpreter or a Case-Modelling one, not both, so the new setup has to run alongside the old one for as long as needed, and the old situation can be switched off when the last case has been processed.
When open cases have to be migrated, keep in mind the patterns that have changed (for example the ones stated above, or on the getting started page). Following the DCM Foundation patterns (case-metadata in a case aggregate type, the dossier in a dossier aggregate) can ease the migration. When the process can be derived from the dossier data, a migration path could be to model an intake that retrieves the dossier and creates a new case based on this data. The process then continues in a similar state, since the data and rules have not changed.
Documents
When documents are generated or uploaded, options are present to store the CaseId with the documents as metadata. Since the old services use the same parameter CaseId (for the process-id) as an integer, we could not change this parameter without changing the behaviour for all our Process-Modelling clients. Instead, we did not change the parameter type, but you can omit the parameter in the container/service. The Runtime will automatically add the CaseId parameter when it is present in the system.CaseId parameter (make sure the actions are performed during task execution). For more information, check Service call type: AQ_Document_Generate and Container type: AQ_File_Upload.
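The fallback behaviour for the CaseId parameter can be pictured as follows; the function and parameter names are illustrative assumptions, not the actual Runtime API:

```typescript
// Sketch of the described fallback (names are illustrative, not the actual parameter API):
// when the CaseId parameter is omitted, the Runtime adds the value of system.CaseId,
// which is only available while a task is being executed.
function resolveDocumentCaseId(
  modelledParameter: string | undefined,
  systemCaseId: string | undefined,
): string | undefined {
  return modelledParameter ?? systemCaseId; // assumption: an explicitly modelled value wins
}

console.log(resolveDocumentCaseId(undefined, "case-1"));  // "case-1" ends up as document metadata
console.log(resolveDocumentCaseId(undefined, undefined)); // outside task execution: no metadata
```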