Running Workflows

To run a workflow, you schedule it for execution by adding it to the Flux engine.

When is a Workflow Considered “Running”?

Any workflow that is scheduled, actively waiting on a condition, or executing an action is considered to be “running”. Even if the workflow is only waiting for a time condition before it starts executing some other action, the workflow itself is considered “running” because it has been scheduled and is active on the engine.

Put more simply, a workflow is considered “running” once it has been exported to the Flux Engine – either using the Designer, using the “Submit” button from the Operations Console, or using the APIs.

A workflow is not considered running when it is simply saved to the file system or to the Repository. Even if a workflow is saved to the Repository with a schedule, the workflow will not actively run on that schedule until a user has started the workflow.

When you export a workflow to the engine, Flux follows these steps:

  1. Locate the first action or actions in the workflow (that is, any actions marked as a Start Action).
  2. If the first item is a trigger that must wait for a specific time, schedule the first firing of the trigger. If it is an action that can run immediately, begin running the action.
  3. Continue executing the workflow as normal.

In short, to schedule a workflow, you’ll just need to start it (also called exporting). Flux will then handle all scheduling and execution of the workflow internally from that point.
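
If you are adding workflows from code rather than the Designer or the Operations Console, a minimal Java sketch of this export step might look like the following. The factory entry point and the put method name shown here are assumptions for illustration; consult the Flux API documentation for the exact calls in your version.

    // A minimal sketch of adding ("exporting") a workflow to the engine from Java.
    // Factory.makeInstance, makeEngine, makeFlowChart, and put are assumed names
    // for illustration; check the Flux API reference for the exact signatures.
    import flux.Engine;
    import flux.Factory;
    import flux.FlowChart;

    public class ExportWorkflow {
        public static void main(String[] args) throws Exception {
            Factory factory = Factory.makeInstance();   // assumed factory entry point
            Engine engine = factory.makeEngine();       // new engine instance (stopped by default)

            FlowChart workflow = factory.makeFlowChart("/examples/hello");  // unique workflow name
            // ... add triggers and actions here, marking at least one as a Start Action ...

            engine.put(workflow);   // assumed call that adds the workflow to the engine
            engine.start();         // the engine must be started for the workflow to execute

            // From this point on, Flux handles scheduling and execution internally.
        }
    }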

Adding Workflows to the Engine

Once a workflow has been created as a FlowChart object, it can be added to the engine to be executed in the background. To add a workflow to the engine, you can click “Export to Engine” from the Designer, or choose “Start” from the Repository with the workflow selected.

While the engine adds the workflow, it also verifies the workflow for correctness. For example, a Timer Trigger that does not contain either a time expression or a scheduled trigger date will fail verification. Consequently, the entire workflow will fail verification, an exception will be thrown, and the workflow will not be added to the engine.

Once the workflow is successfully added, the unique workflow name can be used to retrieve the workflow from the database. Retrieved workflows represent a snapshot of the workflow in execution: while the workflow on the engine will continue to execute, the retrieved workflow will not. You can see a list of all workflows, including their unique names, on the home page of the Operations Console.
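
In code, adding a workflow and then retrieving a snapshot of it might look like the sketch below. The put and get calls and the getName accessor are assumptions for illustration, as is the use of a generic exception type for verification failures.

    import flux.Engine;
    import flux.FlowChart;

    public class AddAndRetrieve {
        // put, get, and getName are assumed method names for illustration;
        // consult the Flux API reference for the exact signatures.
        static void addAndFetch(Engine engine, FlowChart workflow) {
            try {
                engine.put(workflow);  // the engine verifies the workflow as it is added
            } catch (Exception e) {    // e.g., a Timer Trigger with no time expression or date
                System.err.println("Verification failed; workflow was not added: " + e);
                return;
            }
            // Retrieve a snapshot by the unique workflow name. The copy on the engine
            // keeps executing; this retrieved snapshot does not.
            try {
                FlowChart snapshot = engine.get(workflow.getName());
                System.out.println("Retrieved a snapshot of " + snapshot.getName());
            } catch (Exception e) {
                System.err.println("Could not retrieve the workflow: " + e);
            }
        }
    }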

Once started, a workflow runs until it is finished. In general, a workflow finishes when the engine executes a trigger or action and there is no appropriate flow to follow afterward. There are some exceptions to this behavior: for example, a trigger or action may throw an exception, in which case the error flow is followed. If there is no error flow defined, there is no appropriate flow to follow, and the workflow finishes.

By default, the engine automatically deletes workflows when they finish executing. In this way, Flux prevents the database from growing unbounded as new workflows are added.

Adding Workflows at a High Rate of Speed in Code

Your application may call for adding many hundreds or thousands of new workflows continuously. If this is the case, you can increase the throughput at which workflows are added to the engine.

A single engine instance will only permit a certain number of new workflows to be added per second. If you need to increase that throughput rate, you can instantiate a second engine instance and add additional workflows through that second instance.

Both engine instances will add new workflows to the same database. Consequently, these newly added workflows will be eligible for firing by any engine instance in the cluster.

In fact, you can create a number of engine instances and use them in parallel to add new workflows to the engine. This “array of engine instances” can be used to add new workflows at a high rate of speed.

To configure the array of engine instances, follow these steps.

  1. Create an appropriate number of engine instances for the array. The right size depends on the capabilities of your computer system; as a rule of thumb, create between 2 and 8 engine instances and adjust from there. An engine array that is too large will slow down your system.
  2. Configure each engine instance to point to the same set of database tables.
  3. Do not start any of the engine instances. A started engine will not add new workflows as fast as a stopped engine. By default, a newly created engine instance is stopped, and is only started by calling Engine.start().
  4. (Optional) Disable cluster networking on each engine in the array to increase throughput by a small amount.

Use the array of engine instances to add new workflows. You may need one or more additional engine instances to actually fire workflows. These other engine instances must be started in order to execute workflows.
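
A sketch of such an array appears below. The makeEngine and put calls are assumed names for illustration; the important points are that every instance points at the same database tables, none of them is started, and new workflows are spread across the instances.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    import flux.Engine;
    import flux.Factory;
    import flux.FlowChart;

    // An "array of engine instances" used only to add workflows at a high rate.
    // makeEngine and put are assumed method names for illustration.
    public class EngineArray {
        private final List<Engine> engines = new ArrayList<>();
        private final AtomicInteger next = new AtomicInteger();

        EngineArray(Factory factory, int size) throws Exception {
            for (int i = 0; i < size; i++) {
                // Each instance is configured against the same database tables.
                Engine engine = factory.makeEngine();
                // Do NOT call engine.start(): a stopped engine adds workflows faster,
                // and newly created engines are stopped by default.
                engines.add(engine);
            }
        }

        // Round-robin new workflows across the array. Separate, started engines in
        // the cluster pick the workflows up from the shared database and fire them.
        void add(FlowChart workflow) throws Exception {
            Engine engine = engines.get(Math.floorMod(next.getAndIncrement(), engines.size()));
            engine.put(workflow);   // assumed method name
        }
    }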

Workflow Order of Execution

When multiple workflows are waiting to be executed on an engine, the engine must determine which workflow should be executed next. To make this decision, the engine considers three primary factors: Eligible Execution Time, Priority, and First-In-First-Out Scheduling.

  1. Eligible Execution Time: the time at which a workflow declares that it is ready to execute. Most actions are ready to execute immediately once workflow control reaches them. Most triggers, on the other hand, wait until a certain time before they are ready to execute, at which time they check to see if a condition particular to that trigger is satisfied.
    Note that a workflow may not actually fire at its Eligible Execution Time due to engine availability or scheduling conflicts. Moreover, if a workflow is paused, it will not execute, regardless of its Eligible Execution Time.
  2. Priority: If two workflows are both ready to execute – that is, the wall clock (the current time) is at or beyond their Eligible Execution Times – the workflow with the higher priority will always run first. Workflow priorities can be set directly on a workflow or through the runtime configuration (see the sketch following this list). The lower the number, the higher the priority; a priority of 10 takes precedence over a priority of 100. For more information, see Runtime Configuration.
  3. First-In-First-Out (FIFO) Scheduling: If enabled, FIFO Scheduling controls whether workflows are scheduled for execution in a first-in-first-out manner. When a workflow is originally submitted to an engine, the current timestamp is recorded as the workflow creation timestamp. Next, when a workflow’s Eligible Execution Time permits it to execute, all such eligible workflows begin execution in the order of their creation timestamps. Priorities take precedence over FIFO scheduling; if an older and a newer workflow are eligible to execute and the newer workflow has a higher priority, the newer workflow begins executing first.
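
As noted in item 2 above, a priority can be set directly on the workflow itself. A minimal sketch, assuming a setPriority method on FlowChart (an illustrative name; check the API reference for the exact call):

    import flux.Factory;
    import flux.FlowChart;

    public class PrioritizedWorkflows {
        // makeFlowChart and setPriority are assumed method names for illustration.
        static void makeExamples(Factory factory) throws Exception {
            FlowChart urgent = factory.makeFlowChart("/examples/urgent");
            urgent.setPriority(10);    // lower number = higher priority

            FlowChart routine = factory.makeFlowChart("/examples/routine");
            routine.setPriority(100);  // yields to the priority-10 workflow when both are eligible
        }
    }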

Other factors may also affect the order of execution. For instance, the fairness time window, if enabled, can temporarily raise the priority of starving low-priority workflows.

Concurrency throttling can also affect the order of execution for your workflows. Concurrency throttling takes precedence over Eligible Execution Time and Priority. A concurrency throttle might only allow a certain number of high-priority workflows to execute at once, which could cause some low-priority workflows to be executed instead (an example of this is demonstrated under “Concurrency Throttling” below). For more information on concurrency and the fairness time window, refer to Runtime Configuration.

Order of Execution Examples

Eligible Execution Time and Priority

Scenario:

Group A contains 15 workflows that each take 1 minute to execute. All of the workflows in Group A are submitted at 8:00. The workflows in Group A have a priority of 300.

Group B contains 10 workflows that each take 2 minutes to execute. All of the workflows in Group B are submitted at 8:05 and have a priority of 100.

The workflows in both groups are ready and scheduled for immediate execution. The concurrency throttle is set to 1, meaning only 1 workflow may execute at a given time.

Result:

At 8:00, Group A would have preference and begin executing. At 8:05, 5 workflows from Group A have already been executed when Group B is submitted to the engine.

Now, both groups are scheduled for immediate execution, since the wall clock (the current time) has advanced beyond the Eligible Execution Time for both groups. Consequently, either group could execute next, but because Group B’s priority is higher than Group A’s, Group B’s workflows begin executing.

Once all 10 of Group B’s workflows have run to completion, the remaining 10 workflows from Group A will execute.
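
Working out the arithmetic (fifteen 1-minute workflows in Group A, ten 2-minute workflows in Group B, concurrency throttle of 1), the timeline is:

    8:00 – 8:05   Group A runs; 5 of its 15 workflows complete
    8:05 – 8:25   Group B, with the higher priority, runs all 10 of its 2-minute workflows
    8:25 – 8:35   the remaining 10 Group A workflows run to completion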

Workflow creation timestamps

Scenario:

Group A and Group B both contain 10 workflows that take 1 minute to execute. The workflows in both groups have a priority of 300, and the concurrency throttle is set to 1.

Group A’s workflows are submitted to the engine at 8:00 and therefore have an Eligible Execution Time of 8:00. Group B’s workflows are submitted at 8:05 and therefore have an Eligible Execution Time of 8:05.

Result:

At 8:00, Group A would have preference and begin executing. At 8:05, Group B’s workflows are submitted to the engine.

This time, precedence goes to the workflows with the earlier Eligible Execution Time. In this case, that is Group A, which was submitted at 8:00. The remaining 5 workflows from Group A must finish executing before any workflows in Group B will run.
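
The arithmetic here (ten 1-minute workflows per group, concurrency throttle of 1) gives:

    8:00 – 8:10   Group A runs its 10 workflows; 5 are still pending when Group B is submitted at 8:05
    8:10 – 8:20   Group B runs its 10 workflows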

Concurrency Throttling

Scenario:

Group A contains two workflows, each of which takes 10 minutes to complete. These workflows have a priority of 100, with a concurrency throttle of 1 on this branch in the workflow namespace. This means that only 1 workflow from Group A may execute at a given time.

Group B contains three workflows that each take 1 minute to complete. These workflows have a priority of 300, with a concurrency throttle of 1 on this branch of the workflow namespace.

Both groups of workflows are submitted at 8:00. The concurrency throttle on the root node of this workflow namespace is set to 2, meaning that only 2 workflows can execute at the same time.

Result:

At 8:00, the first workflow from Group A and the first workflow from Group B will begin executing simultaneously. This is the effect of concurrency throttling: while 2 workflows are allowed to run simultaneously, each group is throttled so that only 1 workflow can execute from that group.

Because concurrency throttling will not allow the second workflow from Group A to execute while the first is still running, all three workflows from Group B will finish before the second workflow from Group A runs, even though Group A’s workflows would normally have preference due to their priority.
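
Laid out as a timeline (root throttle of 2, branch throttles of 1):

    8:00 – 8:10   the first Group A workflow runs; the branch throttle blocks the second
    8:00 – 8:01   the first Group B workflow runs alongside it under the root throttle of 2
    8:01 – 8:02   the second Group B workflow runs
    8:02 – 8:03   the third Group B workflow runs
    8:10 – 8:20   the second Group A workflow runs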

FIFO Scheduling

Scenario:

Group A and Group B both contain two workflows that execute for a minute, wait at a trigger for two minutes, and then finish executing for one more minute. The workflows in both groups have the same priority, and the concurrency throttle is set to 1.

Group A’s workflows are submitted to the engine at 8:00 and therefore have a workflow creation timestamp of 8:00. Group B’s workflows are submitted at 8:01 and therefore have a workflow creation timestamp of 8:01.

Result:

At 8:00, Group A would have preference and begin executing. At 8:01, Group B’s workflows are submitted to the engine.

This time, precedence goes to the workflows with the earlier workflow creation timestamps, and a workflow in Group A begins executing. At 8:01, the second workflow in Group A executes. At 8:02, the first workflow in Group B can execute, since both workflows in Group A are waiting at their triggers. At 8:03, a workflow in Group A and a workflow in Group B are both eligible to run; because the Group A workflows have older workflow creation timestamps, a Group A workflow runs next.
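
Minute by minute, that walkthrough looks like this:

    8:00 – 8:01   first Group A workflow executes, then waits at its trigger until 8:03
    8:01 – 8:02   second Group A workflow executes, then waits at its trigger until 8:04
    8:02 – 8:03   first Group B workflow executes, then waits at its trigger until 8:05
    8:03 – 8:04   first Group A workflow, with the older creation timestamp, finishes its final minute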

This FIFO Scheduling scenario shows that among those workflows whose Eligible Execution Times are at or before the wall clock time (the current time), execution begins in the order of their workflow creation timestamps. If the concurrency throttle is 1, they execute in a true queue-like manner. If the concurrency throttle is more than 1, they begin executing in FIFO order, but the first workflow may not finish before a successive workflow starts. Finally, a workflow that begins executing and reaches a trigger must wait until the wall clock once again reaches the workflow’s Eligible Execution Time, at which point that workflow is subject to FIFO Scheduling once again.

Workflow Execution in a Clustered Environment

Architecture

Flux uses a Master/Worker architecture to assign workflows in a clustered Flux environment. The “Master” engine of the cluster is assigned based on the creation sequence of the engines: the first engine created in the cluster is the master. If that engine is disposed or shuts down unexpectedly, the next engine created takes on the master engine’s responsibilities of assigning workflows to run. Flux uses REST to network the engines together, sharing relevant information. If REST is disabled on the network, or the engines cannot communicate with each other for some reason, each engine essentially becomes a master engine.

Workflow Distribution

When the master engine deems a workflow ready for execution, the worker engines are analyzed to determine the best engine on which to execute the workflow. Three possible algorithms are used, depending on your environment and configuration:

  • Greedy: If the engines are unable to communicate with each other for whatever reason, each engine becomes a master engine and a Greedy algorithm is used. Whichever engine first retrieves, claims, and is ultimately allowed to run the workflow executes it.
  • Load Balancing: If the engines are able to communicate with each other, but the System Resource Monitor is not configured on all the engines in the cluster, a load balancing algorithm is used. This algorithm looks at how many workflows each engine is currently executing and deems the engine with the lowest count the primary engine for execution of the selected workflow. If, for some reason, that engine is unable to execute the workflow, the next best engine attempts to execute it.
  • CPU Load Balancing: If the engines are able to communicate with each other, and the System Resource Monitor is configured on each engine in the cluster, a CPU load balancing algorithm is used. This algorithm analyzes each engine’s CPU usage and deems the engine with the lowest CPU usage the primary engine for execution of the workflow. If that engine is unable to execute the given workflow, the next engine with the lowest CPU usage attempts to execute it.
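
Conceptually (this is not Flux’s actual implementation, and the WorkerStats type below is hypothetical), the master’s choice between the two load-balancing algorithms can be pictured like this:

    import java.util.Comparator;
    import java.util.List;

    public class WorkerSelection {
        // Hypothetical per-engine statistics gathered over the cluster network.
        static class WorkerStats {
            String engineName;
            int runningWorkflows;               // used by plain load balancing
            double cpuUsage;                    // used by CPU load balancing
            boolean resourceMonitorConfigured;  // System Resource Monitor present?
        }

        // If every engine has the System Resource Monitor configured, pick the engine
        // with the lowest CPU usage; otherwise pick the engine running the fewest
        // workflows. If the chosen engine cannot execute the workflow, the next best
        // engine would be tried.
        static WorkerStats choosePrimary(List<WorkerStats> workers) {
            boolean cpuBalancing = workers.stream().allMatch(w -> w.resourceMonitorConfigured);
            Comparator<WorkerStats> order;
            if (cpuBalancing) {
                order = Comparator.comparingDouble(w -> w.cpuUsage);
            } else {
                order = Comparator.comparingInt(w -> w.runningWorkflows);
            }
            return workers.stream().min(order)
                    .orElseThrow(() -> new IllegalStateException("no worker engines available"));
        }
    }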