You are viewing the documentation for Blueriq 17. Documentation for other versions is available in our documentation directory.

Scaling/availability

Blueriq services

  • For the development versions, we support a single instance of each component described in the Blueriq DCM architecture overview.
  • In the first production versions, we will support two scalability aspects:
    • Multiple runtimes for user interaction, with sticky sessions
    • Execute automatic tasks in parallel on a single runtime AND use multiple runtimes for executing automatic tasks
  • In the further future, we intend to support:
    • Multiple runtimes for user interaction with external sessions (Redis)
    • Multiple case engines

Dashboarding

Using the DCM dashboard architecture, scaling/availability and failover are supported for all services and components. This approach is only supported when services are configured to use external sessions (Redis).
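As a minimal sketch, external sessions are typically enabled through Redis-backed session storage via standard Spring Session properties. The property keys and host below are assumptions; verify the exact keys supported by your Blueriq version in its configuration documentation.

```properties
# Assumption: standard Spring Session / Spring Data Redis properties;
# check your Blueriq version's documentation for the exact supported keys.
spring.session.store-type=redis
spring.redis.host=redis.internal.example
spring.redis.port=6379
```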

Third party services

Scaling/availability for third party services (RabbitMQ, MongoDB, Oracle/MSSQL) needs to be configured in those applications; each of them provides its own documentation on how to scale.

Backup strategy

Blueriq stores operational data in several databases. We strongly advise backing up this data; otherwise, cases might become corrupt after e.g. a server crash.

SQL (MSSQL/Oracle)

Use the backup capabilities of these platforms that conform to your needs. You need to back up:

  • process database
  • scheduler database
  • timeline database
  • trace database
  • comments database
  • aggregate database
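As a sketch for SQL Server (Oracle users would typically use RMAN instead), a full backup per database could look like the following; the database names and backup paths are assumptions, so substitute your actual ones.

```sql
-- Assumption: database names and paths are illustrative only.
BACKUP DATABASE [ProcessDB]   TO DISK = N'D:\backup\ProcessDB.bak'   WITH COMPRESSION, CHECKSUM;
BACKUP DATABASE [SchedulerDB] TO DISK = N'D:\backup\SchedulerDB.bak' WITH COMPRESSION, CHECKSUM;
```

Schedule such backups with the platform's own tooling (e.g. SQL Server Agent) and test the restore procedure regularly.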

MongoDB

Use the backup capability of this platform. You need to back up:

  • maintenance app database
  • case database
  • locks database
  • DCM lists database *

* It is possible to restore the DCM lists database from the process database, but this may take some time, depending on the number of processes in the process database.
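A sketch of a backup using mongodump is shown below; the host and database names are assumptions, so substitute the names used in your deployment.

```shell
# Assumption: host and database names are illustrative only.
mongodump --host mongo.internal.example --db maintenance-app --out /backup/mongo
mongodump --host mongo.internal.example --db case            --out /backup/mongo
mongodump --host mongo.internal.example --db locks           --out /backup/mongo
mongodump --host mongo.internal.example --db dcm-lists       --out /backup/mongo
```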

RabbitMQ

See https://www.rabbitmq.com/backup.html for information. As the data in RabbitMQ changes rapidly, there is no reliable way to take a consistent backup at an arbitrary moment: messages that were in RabbitMQ when the server crashed may be lost or corrupted.

The best way to deal with this kind of data loss is to use replication, so that if one RabbitMQ server crashes, another can continue the work.

We advise combining all exchanges/queues for the DCM setup in a single RabbitMQ virtual host.
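Creating such a virtual host can be sketched with rabbitmqctl as below; the vhost name and application user are assumptions, so adapt them to your setup.

```shell
# Create a dedicated virtual host for the DCM exchanges/queues
rabbitmqctl add_vhost dcm
# Grant the (assumed) application user full permissions on it
rabbitmqctl set_permissions -p dcm blueriq ".*" ".*" ".*"
```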

Concurrent consumers

For the trace-sql-store, timeline-sql-store, and case-engine-client components, as well as the Case Engine, it is possible to set the number of concurrent consumers. Before doing so, please read the RabbitMQ documentation on consumers: Consumers — RabbitMQ. If you increase the number of concurrent consumers, the application will start multiple threads to consume messages. This will most likely increase processing speed, but it will also increase memory consumption and CPU usage. Take this into consideration when changing the number of concurrent consumers.

The default setting of 1 concurrent consumer should be enough for most applications. If you notice that your application's message throughput is not high enough and there are enough resources, the number of concurrent consumers can be increased. We advise testing this thoroughly.
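As a sketch, assuming the component exposes the standard Spring AMQP listener properties (the exact keys supported by your Blueriq component should be verified in its documentation), the concurrency could be configured like this:

```properties
# Assumption: standard Spring AMQP listener properties; verify the exact
# keys against the documentation of your Blueriq component.
spring.rabbitmq.listener.simple.concurrency=2
spring.rabbitmq.listener.simple.max-concurrency=4
```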

Visit Configuring RabbitMQ for more information on the functionality. 

Security

For the runtime(s) that have user interaction, we advise protecting them as you would any Blueriq Runtime. See Security for more information.

The DCM Maintenance App is intended for intranet use; it should not be exposed on a public server.

All other components are internal, for system-to-system communication, and they should not be exposed to the public internet and/or intranet. For the third party components, you might want to have a management console, query tool or administration page available for debugging; these need to be configured as restrictively as possible.

We advise you to set up your network protection accordingly.

Logging

All of Blueriq's DCM services provide the possibility to configure logging options. Out of the box, we use the Logback logging framework integrated via Spring; extensive documentation on how to configure it can be found here.

Printing log statements

With Logback, you must configure Log Appenders in order to let services output log statements. Some of Blueriq's services contain more than the default Log Appenders that Logback provides out of the box; however, across the whole DCM architecture we only support the Log Appenders that Logback delivers out of the box. A list of Log Appenders that can be used can be found here.

Log aggregation

As Blueriq's DCM architecture consists of multiple services, we highly recommend thinking carefully about how log statements are safely stored and aggregated. Whether the DCM infrastructure is a fail-over setup or not, log statements should be stored in a centralized manner, as they can provide essential information on how an application or service ended up in a specific state.

Log storage

There are several strategies for aggregating and storing your log statements safely; which one fits best depends on how the infrastructure is deployed. There are three default log configurations that all Blueriq services provide: console, file system, and socket transfer logging. All of these log configurations have their own benefits and drawbacks.

Console output

The log statements are printed to the standard output and standard error (the console) of the service. When using this configuration, your statements are typically not stored anywhere other than the console of the service. This kind of log configuration can be used when running in a containerized infrastructure, as these infrastructures usually provide log aggregation for statements that are printed directly to the console.

Benefit

  • Simple configuration and usually managed by the infrastructure

Drawback

  •  Log statements are only available in the console of the service and are not stored out of the box
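A minimal sketch of a standard Logback console configuration is shown below; the log pattern is only an example and can be adapted as needed.

```xml
<!-- Minimal sketch: standard Logback ConsoleAppender -->
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```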

File system

The log statements are printed to a file at the configured location. When using this configuration, your statements are stored in one or more files on a preconfigured path. The configured path can be a path on the local machine, a network share or a volume mount.

This kind of configuration can be used when running the service on a traditional infrastructure such as virtual machines or in a containerized infrastructure.

Benefit

  • The log statements are directly stored in a file which can be scraped and uploaded to the desired log aggregator

Drawback

  • Running out of disk space is a risk when storing log statements on a file system
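The disk-space risk can be mitigated with a rolling policy. A sketch of a standard Logback rolling file appender is shown below; the file paths and limits are assumptions and should be tuned to your environment.

```xml
<!-- Sketch: standard Logback RollingFileAppender; maxHistory and
     totalSizeCap limit disk usage (paths and limits are assumptions) -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>/var/log/blueriq/service.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>/var/log/blueriq/service.%d{yyyy-MM-dd}.log</fileNamePattern>
    <maxHistory>14</maxHistory>
    <totalSizeCap>1GB</totalSizeCap>
  </rollingPolicy>
  <encoder>
    <pattern>%d{ISO8601} %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```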

Socket transfer

The log statements are sent over TCP to the configured socket connection. When using this configuration, the statements are not stored directly, but can be consumed by an application that listens on a specific port. More information on how to set up a receiving application can be found here.

Benefit

  • The log statements can be aggregated by a centralized application which is only responsible for aggregating logs

Drawback

  • Another service to manage within your infrastructure
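A sketch of a standard Logback socket appender is shown below; the remote host and port are assumptions and must match the receiving application.

```xml
<!-- Sketch: standard Logback SocketAppender sending log events over TCP
     (remote host and port are assumptions) -->
<appender name="SOCKET" class="ch.qos.logback.classic.net.SocketAppender">
  <remoteHost>logs.internal.example</remoteHost>
  <port>6000</port>
</appender>
```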