Operations Console

The Operations Console is a web application for creating, monitoring, and reviewing your workflows in a browser. You can view workflows as well as pause, resume, remove, interrupt, expedite, and even edit them.

The Operations Console is known to perform significantly slower in Internet Explorer than in other browsers. We recommend using one of the other supported browsers (listed in the Technical Specifications) where possible.

Minimum Requirements

The Operations Console has the same minimum requirements as the Flux engine.

Installing the Operations Console

There are two ways to install and run the Operations Console: using the built-in Jetty 9 container, or using a third-party container (Tomcat, WebSphere, GlassFish, etc.).

You can launch the Operations Console using the built-in Jetty 9 container by running the start-opsconsole script from your Flux installation directory. This starts the Jetty container on port 7186, where you can access the Operations Console. Once you have launched the Operations Console, you can access it by visiting this URL in your browser:

http://<hostname>:7186

Where <hostname> is the host name of the machine where the Operations Console was started.
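The launch step can be sketched in the shell. The installation path in the comment is an assumption; substitute your actual Flux installation directory:

```shell
# Launching the built-in Jetty container (path is an assumption):
#   cd /opt/flux && ./start-opsconsole
# Jetty then listens on port 7186; print the console URL for this host:
echo "http://$(hostname):7186"
```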

To install the Operations Console in a separate container, you must first create the flux.war file. You can do this by running the make-war script in your Flux installation directory (this script is also run automatically by the configure script, so if you have already run configure, there is no need to run make-war).

Once the flux.war file has been created, you will find it in the webapp directory of your Flux installation. To install the Operations Console, deploy this file using the preferred technique for your container (see your container’s documentation for details on deploying WAR files).
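Deployment is essentially a copy of the WAR into the container’s deployment directory. The sketch below simulates that copy in a temporary sandbox so it can run anywhere; in practice, replace the sandbox paths with your real Flux installation directory and, for example, Tomcat’s CATALINA_HOME (both names here are stand-ins):

```shell
# Temporary stand-ins for a real Flux installation and a Tomcat container.
FLUX_HOME=$(mktemp -d)
CATALINA_HOME=$(mktemp -d)
mkdir -p "$FLUX_HOME/webapp" "$CATALINA_HOME/webapps"
: > "$FLUX_HOME/webapp/flux.war"   # stand-in for the generated flux.war

# The deployment step itself: copy the WAR into the container.
cp "$FLUX_HOME/webapp/flux.war" "$CATALINA_HOME/webapps/"
ls "$CATALINA_HOME/webapps"        # → flux.war
```

Most containers auto-deploy the WAR on startup and serve the application under a context path derived from the file name (typically /flux).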

Connecting to Engines

When you connect to an engine in the Operations Console, Flux will store the information about that engine’s location for future use. This information is stored in a file called opsconsole.properties, which is located under the .flux directory beneath the home directory of the user who started Flux (on Unix-based systems, this will be something like /Users/myuser/.flux/opsconsole.properties, and on Windows, C:\Documents and Settings\myuser\.flux\opsconsole.properties).

You can view your running engines on the Operations Console “System” page. You can also edit the opsconsole.properties file directly; this is most useful when you want to copy the engine information to a different machine with a new installation of the Operations Console.

The Operations Console and Engine Clusters

The Operations Console may only connect to one engine cluster at a time. If you add a new engine in the Console, be sure that it is part of the same cluster as all of the engines already connected to the Console, or you will likely experience problems viewing workflows and performing operations.

A separate Operations Console instance must be run for each engine cluster that you want to connect to.

The .flux Folder and opsconsole.properties

The first time you use the Operations Console, it will create a new hidden folder called .flux. This folder will be stored in the home directory of the user who starts the Operations Console.

Within the folder, you’ll find a file called opsconsole8.properties. This file stores all of the engines that the Operations Console has connected to. As you work with the Console, the file becomes populated with entries; a typical opsconsole8.properties might look like:

#Mon Dec 24 16:45:47 MST 2012

As you’ll notice, the file contains a few lines for each engine that the Console has connected to. The settings for each engine are denoted by the keyword “engine” followed by an incrementing index and the ‘.’ character (so all entries for the first engine are marked “engine1”, entries for the second “engine2”, and so on). The property name for one of the engine’s settings follows, then the ‘=’ character and, finally, the value of the property.

These are the properties that the Operations Console uses to look up the engine. The properties are used only for these lookups and do not affect any execution or configuration for the engine itself.

The purpose of each line is as follows:

  1. Each engine begins with a blank property setting, such as “engine1=”. This blank property indicates to the Console that a new engine’s properties are starting.
  2. host – the machine name or IP address of the machine where the Flux engine is running.
  3. port – the port number that the Flux engine is bound to.
  4. preference level – when multiple engines are listed, you can use this property to specify the order in which the Console will attempt to contact engines. A lower number means the Console will try that engine first. This allows you to ensure that if some engines might have network connectivity issues, the Console can always contact the most reliable engine first.
  5. ssl – indicates if communication with this engine is secured using SSL.

The properties are then repeated for each connected engine.
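Put together, a populated opsconsole8.properties might resemble the fragment below. The host names, ports, and the exact key used for the preference level are illustrative assumptions, not values copied from a real installation:

```properties
#Mon Dec 24 16:45:47 MST 2012
engine1=
engine1.host=engine-host-1
engine1.port=7520
engine1.preference=1
engine1.ssl=false
engine2=
engine2.host=engine-host-2
engine2.port=7520
engine2.preference=2
engine2.ssl=true
```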

Running Multiple Operations Console Instances on a Single Server

Each Operations Console instance is tied to the operating system account of the user who starts the Operations Console process. Because of this, Flux supports only one Operations Console instance at a time per OS user.

In order to start multiple Console instances on the same machine, Flux requires that each Operations Console process is started using a different user account on the OS.

Custom JARs and Classes

If your workflows contain user-defined persistent variables, custom triggers, or custom actions, you do not need to deploy the corresponding class files to the Flux Operations Console. Any custom JARs, dependencies, or classes required by workflows must be available on the class path of all engines in the cluster, but they do not need to be available on the class path of the Flux Operations Console.

Running Behind a Proxy

By default, Flux will launch the web application using the included Jetty 9 container, which supports running behind a proxy. You can also deploy the Operations Console using a different container (Tomcat, WebSphere, etc.).

Operations Console and Performance

To conserve system resources and ensure the best possible performance, we recommend periodically closing and reopening the browser window when monitoring an engine with the Operations Console.

As long as the Operations Console process itself continues running, closing the browser window has no effect on the connection to the engine or on any data held by the engine or the Operations Console. Restarting the browser periodically frees up resources and helps maintain optimal running conditions.

It is also not necessary to keep the browser window open to receive updates on the status or execution of your workflows; when the browser is restarted, it automatically obtains the latest information from the Operations Console server.

Operations Console and Time Zones

Since the Operations Console picks up time zone information from the browser, you must ensure that the systems running the Flux engine and the Operations Console have the same time zone settings as the computers used to access the Operations Console.

Having different time zones might lead to unexpected behavior in the Operations Console, including the forecast of scheduled workflows not displaying properly, logs not appearing in the Logs tab, and so on.

Configuring the Jetty Container for SSL

Flux’s web server was upgraded in 8.0.11; the instructions for securing it with HTTPS are as follows:

  1. Modify start.ini in the Flux installation directory to enable HTTPS.

  2. Create a directory (e.g., Flux Installation/etc) and place the keystore file into it.

  3. Place jetty-https.xml and jetty-ssl.xml into the Flux Installation/etc directory.

  4. Start the Operations Console and browse to https://localhost:7185; you should see a warning about an untrusted (self-signed) certificate, after which you can access the Operations Console.

  5. Finally, build a valid certificate and update jetty-ssl.xml (see the lines below) to point at its location and password.
    <Set name="KeyStorePath"><Property name="jetty.base" default="." />/<Property name="jetty.keystore" default="etc/keystore"/></Set>
    <Set name="KeyStorePassword"><Property name="jetty.keystore.password" default="OBF:1vny1zlo1x8e1vnw1vn61x8g1zlu1vn4" /></Set>
    <Set name="KeyManagerPassword"><Property name="jetty.keymanager.password" default="OBF:1u2u1wml1z7s1z7a1wnl1u2g" /></Set>
    <Set name="TrustStorePath"><Property name="jetty.base" default="." />/<Property name="jetty.truststore" default="etc/keystore" /></Set>
    <Set name="TrustStorePassword"><Property name="jetty.truststore.password" default="OBF:1vny1zlo1x8e1vnw1vn61x8g1zlu1vn4"/></Set>
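Step 5 requires a certificate. As a sketch, a self-signed keystore can be generated with the JDK’s keytool; the alias, passwords, and distinguished name below are placeholder assumptions, and for production you would use a CA-signed certificate instead:

```shell
mkdir -p etc
# Generate a self-signed RSA key pair into etc/keystore
# (placeholder passwords and DN; replace for real use).
keytool -genkeypair -alias jetty -keyalg RSA -keysize 2048 -validity 365 \
  -keystore etc/keystore -storepass changeit -keypass changeit \
  -dname "CN=opsconsole.example.com"
```

Values prefixed with OBF: in jetty-ssl.xml are Jetty-obfuscated passwords; plaintext passwords also work there, or obfuscated values can be generated with Jetty’s org.eclipse.jetty.util.security.Password utility.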