When an aggregate is updated, it is marked as changed for the search functionality and is indexed at the next synchronization moment (see How to configure full-text index synchronization). There are two scenarios in which you may want to start indexing all existing aggregates.

  1. You are upgrading to version 9.7 or higher and want to benefit from the full-text search. The aggregates that you already have stored in the database must be indexed.
  2. All your aggregates are already indexed and a new version of the model with new business logic is published to production, so you want to re-index all aggregates so that derived values that changed with the new logic can be found during a search.

This article describes a strategy that allows you to index aggregates automatically.

Because system-set attributes must also be indexed, this cannot be a plain database operation; a filled runtime profile is needed. To fill the runtime profile, a migration flow is needed that loads the required information so that the rule engine can derive all values. Once that is done, a service is needed to indicate that the current profile should be indexed and that it belongs to an existing aggregate. The Service call type: AQ_Aggregate_Update service with CreateNewVersion set to FALSE is sufficient for this purpose in most scenarios. These scenarios are:

  • Versioning on aggregates is not used, and there is always only one version for each aggregate Id.
  • Versioning is used, and you want to update the index for only the latest version.

The scenario in which this service does not suffice is when you want to (re-)index older versions of an aggregate. This can be needed because a specific old version of an aggregate can be referenced from another aggregate. (Re-)indexing an old aggregate version has to be a conscious choice, and you might wish to keep the old index as it is for older versions. An aggregate that was created under an old model version was indexed using the logic valid at that moment in time, so these may belong together. This can be a reason for (re-)indexing only the most recent version of aggregates.

If you want to re-index older versions of existing aggregates, we have created a special service that allows you to index aggregates without updating them in the database. The first step is to create the service definition in Studio, and then call it in an appropriate flow. The migration service marks the aggregate as changed without actually creating a new version.

The next section describes the steps that are needed to use the migration service. You can skip these steps if you are not interested in the scenario in which this service is useful, and use the Service call type: AQ_Aggregate_Update instead. The last section of this article describes how to model the flow needed for indexing aggregates.

Step-by-step guide to add the migration service

This service is not a standard service and is not part of the AquimaLibrary. You have to request the code at support@blueriq.com and add the definition to the globals manually.

  1. In Studio, add a custom service named AQ_Aggregate_Index_Migrate.
  2. The service definition looks like this:

     (screenshot of the AQ_Aggregate_Index_Migrate service definition in Studio)

  3. The AggregateId is always required. The AggregateVersion is optional; if it is not specified, the latest version of the aggregate is indexed. The service has the following parameters:

    Parameter        | Direction | Value                                         | Data Type | Description                                                                                     | Fill
    AggregateType    | Input     | Module Element                                | Aggregate | Select the aggregate that you want to update.                                                   | Required
    AggregateId      | Input     | Expression                                    | Integer   | The attribute containing the Id of the aggregate entry to index.                                | Required
    AggregateVersion | Input     | Expression                                    | Integer   | The specific version of the aggregate to index; if not provided, the latest version is indexed. | Optional
    StoreInstances   | Input     | Expression - Multivalued, Multivalued results | Entity    | The starting point instance(s) used for storage.                                                | Optional

     

    The service has the following exit event:

    Event Name   | Event Type | Description
    AppException | Cancel     | The service takes this exit if indexing failed for some reason.
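
To make these parameters concrete, the example below shows how they might be filled for a hypothetical Person aggregate whose Id is stored in an attribute called PersonMigration.AggregateId. Both names, as well as the version number, are assumptions and depend entirely on your own domain model.

    AggregateType    : Person                        (an aggregate type from your model)
    AggregateId      : PersonMigration.AggregateId   (attribute holding the Id of the stored aggregate)
    AggregateVersion : 2                              (leave empty to index the latest version)
    StoreInstances   : (optional) the instance(s) used as starting point for storage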


Step-by-step guide to model the migration flow

  • A project that wants to index the aggregates has to create an exposed migration flow.
  • This flow should be called for every aggregate Id in the database. The Id can be passed as a parameter in the URL and retrieved from there with the BB_SetRequestParameters service (see the sketch at the end of this section).
  • In this flow, call the Service call type: AQ_Aggregate_Read to load the existing aggregate into the profile. It might also be necessary to load referenced aggregates in order to obtain a complete profile.
  • The next step is to use the AQ_Aggregate_Index_Migrate or Service call type: AQ_Aggregate_Update service to update the aggregate index.

The code to query the database, select the applicable Ids, and pass them one by one to the exposed flow is left to the project team.
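
As an illustration of what that project-specific code could look like, here is a minimal sketch in Python. It assumes a relational database reachable through pyodbc, a hypothetical AGGREGATE table with an ID column, a hypothetical start URL for the exposed migration flow, and a hypothetical aggregateId request parameter; all of these must be replaced with the values that apply to your environment.

    # Sketch of a driver script that starts the exposed migration flow once for
    # every aggregate Id. All names below (connection string, table, column,
    # flow URL, request parameter) are assumptions for illustration only.
    import pyodbc
    import requests

    CONNECTION_STRING = "DSN=AggregateDb"             # hypothetical ODBC data source
    FLOW_URL = "https://runtime.example.com/migrate"  # hypothetical start URL of the exposed flow

    def read_aggregate_ids():
        """Query the database for all aggregate Ids that must be (re-)indexed."""
        connection = pyodbc.connect(CONNECTION_STRING)
        try:
            cursor = connection.cursor()
            cursor.execute("SELECT ID FROM AGGREGATE")   # hypothetical table and column
            return [row[0] for row in cursor.fetchall()]
        finally:
            connection.close()

    def index_aggregate(aggregate_id):
        """Start the exposed migration flow for one aggregate Id, passed as a URL
        parameter so that the flow can read it (for example with the
        BB_SetRequestParameters service)."""
        response = requests.get(FLOW_URL, params={"aggregateId": aggregate_id})
        response.raise_for_status()

    if __name__ == "__main__":
        for aggregate_id in read_aggregate_ids():
            index_aggregate(aggregate_id)
            print(f"Indexed aggregate {aggregate_id}")

Any other scripting language, or a scheduled database job, would work equally well; the essential point is that the exposed migration flow is started once for every aggregate Id.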

 