This article is a guide to updating project versions in a clustered environment without users losing their active sessions.
Since Blueriq can be configured to run in a cluster in many different ways, this article describes the deployment process for the clustered environment described in Step 1. Infrastructure runtime cluster, but the process can be applied to other cluster configurations as well.
You can use two strategies to undeploy old versions.
Using the example below, version 0.0-v1 can be undeployed when upgrading from 0.0-v2 to 0.0-v3. With this strategy there are always two versions deployed. To ensure users always use the latest version, make sure that the shortcut points to the latest version and that projects can only be started through shortcuts.
After a successful upgrade has been performed and enough time has passed that there are no more sessions using the old version, the old version can be unpublished. The cluster does not have to be restarted.
After a successful upgrade has been performed and enough time has passed that there are no more sessions using the old version, the old version can be deleted. The same process described in steps 1-3 can be used:
If the project exports are deployed on a shared folder, the old version only needs to be deleted once. The runtimes still have to be restarted in order to clear the application cache.
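For illustration, the delete-and-restart step might look like the sketch below. The folder layout, export file name, and service name are assumptions for this example, not part of the product:

```shell
#!/bin/sh
# Sketch only: remove an old project export from a shared export folder.
# EXPORTS and OLD_EXPORT are hypothetical defaults -- adjust to your setup.
EXPORTS="${EXPORTS:-/shared/blueriq/exports}"
OLD_EXPORT="${OLD_EXPORT:-Weather-0.0-v1.zip}"

# Because the folder is shared between nodes, the export is deleted only once.
rm -f "$EXPORTS/$OLD_EXPORT"

# Every runtime still has to be restarted to clear the application cache,
# for example (the service name is hypothetical):
#   ssh node1 'systemctl restart blueriq-runtime'
#   ssh node2 'systemctl restart blueriq-runtime'
```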
Assumptions:
The starting nginx configuration is:
```nginx
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;

    upstream backend {
        server node1:8080;
        server node2:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
In order to deploy a new version of a project, some of the running nodes need to be taken out of the cluster and configured to run with the new project while also keeping the old project. The old project needs to be kept so users who still have sessions on the old version of a project can continue navigating seamlessly without encountering errors.
Removing a node from an upstream requires updating the nginx configuration, as in the following example:
```nginx
upstream backend {
    server node1:8080;
    server node2:8080 down;
}
```
node2 was chosen for no particular reason; any node can be pulled out of the cluster. If there are more than two nodes in the cluster, more nodes can be pulled out at once, as long as the remaining nodes provide enough capacity for users to keep using the currently deployed applications.
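For example, in a hypothetical four-node cluster (node3 and node4 are made up for illustration), two nodes could be drained at once, provided node1 and node4 can carry the load on their own:

```nginx
upstream backend {
    server node1:8080;
    server node2:8080 down;
    server node3:8080 down;
    server node4:8080;
}
```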
The nginx configuration must be reloaded. Consult the official documentation of nginx for more information.
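One common way to do this is to validate the configuration first and then signal a reload; depending on how nginx was installed, a service manager may be used instead:

```shell
# Check the new configuration for syntax errors, then reload without downtime.
nginx -t && nginx -s reload

# On systemd-based systems the equivalent is typically:
#   systemctl reload nginx
```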
All the nodes that were removed from the cluster can now be updated. The update can be as simple as copying the new exported project to the runtime's export folder, or it can be done with the publisher (see Publish and unpublish projects for details).
The shortcut used to start the "Weather" project must be updated so that it points to the new version (0.0-v2 in our example).
Once all this is done successfully, the runtime can be restarted.
This step is similar to Step 1, except that we now add back the nodes removed in Step 1 and remove the currently running nodes.
The nginx configuration should look like this:
```nginx
upstream backend {
    server node1:8080 down;
    server node2:8080;
}
```
The nginx configuration must be reloaded again; consult the official nginx documentation for how to do that.
The removed nodes can now be updated with the new project, as in step 2, and then be included back in the cluster.
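Once both nodes run the new version, the upstream returns to its original form (no node marked down) and the configuration is reloaded one last time:

```nginx
upstream backend {
    server node1:8080;
    server node2:8080;
}
```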
Users with open sessions on "Weather" project version 1 should see no difference, while new users accessing the project will be routed to "Weather" project version 2.