End-to-End MPC Implementation with ARTData: From Process Model to Operator Interface Using Gekko and TCLab
- Ivan Nemov

- Sep 29
- 20 min read
Updated: Sep 30
Model Predictive Control (MPC) can feel abstract until it is seen in action. This post introduces an MPC demo built around the TCLab case study, where two heaters and two temperature sensors provide a simple but realistic Multiple Inputs Multiple Outputs (MIMO) process control problem. The MPC application is powered by the open-source Gekko optimisation library developed by Dr. John Hedengren (Brigham Young University).
To make the case practical and engaging, the demo is configured on a web-hosted instance of ARTData where engineers and students can observe, experiment, and learn how complex control logic works in a real-time environment. ARTData provides the execution layer, communication interface, and visualisation tools, making it possible to run the Gekko-based MPC in a framework that resembles industrial deployment.
For readability, this post is split into two parts. The first part, MPC at a Glance, gives straightforward answers: what MPC is, the process it controls in the demo, where it runs, how it works, and what the results look like. The second part, Under the Hood, dives into the technical details for readers who want to understand the modelling, data handling, algorithms, and control logic behind the demo.
To get the most out of this post, it’s best to follow along using the live ARTData demo environment and Quick Reference Guide accessible via the links below:
1. MPC at a Glance
1.1 What MPC is
MPC is an advanced method of process control that has been used since the 1980s across various process industries such as chemical manufacturing, oil refining, minerals processing and power generation [3]. The basic idea behind MPC is to find an optimal trajectory for the Manipulated Variables (MVs) that brings the Controlled Variables (CVs) to their set points without violating MV and CV constraints. One way to think of it is through a car-driving analogy (Fig. 1.1). MPC would find a trajectory that requires minimal steering wheel movement and still brings the car to the final point while keeping it within the lane. It does so by solving an optimisation problem over the control horizon. For example, if the control horizon is 30 seconds and the step size is 1 second, then the MPC optimisation algorithm finds all 30 MV moves along with the predicted car position at each step, using the initial conditions and a process model (hence, model predictive control). Only the first MV move from the horizon is then applied to the controlled process, and the control algorithm is repeated for the next step.

Despite its computational intensity and algorithmic complexity, MPC is widely used in the process industries. The main reasons for this popularity are:
Simpler controller design and superior performance for large MIMO systems with mutual interactions, compared to PID-based control;
Native handling of constraints on MVs and CVs;
No limitation on controlled process dynamics; for example, it can control systems with large time delays;
Often includes both a predictor and an observer, allowing it to capture unmeasured disturbances and providing smart filtering to reject measurement noise.
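To make the receding-horizon idea from the driving analogy concrete, here is a toy sketch (not the demo code): a first-order process, a 30-step horizon, and, for simplicity, a single constant candidate move evaluated over the whole horizon. Only the first move is applied each cycle, exactly as described above.
import numpy as np
# Toy receding-horizon loop (illustrative only, not the TCLab demo code)
a, b, setpoint, horizon = 0.95, 0.5, 10.0, 30   # simple process x[k+1] = a*x[k] + b*u[k]
x = 0.0
for step in range(60):
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(0.0, 5.0, 101):        # candidate moves within MV limits 0..5
        xp, cost = x, 0.0
        for _ in range(horizon):                # predict the CV over the control horizon
            xp = a * xp + b * u
            cost += (setpoint - xp) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    x = a * x + b * best_u                      # apply only the first move, then repeat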
1.2 The process under control
As a case study, the Temperature Control Lab (TCLab) setup from APMonitor was used [1]. The TCLab device (Fig. 1.2) includes two transistor heaters, EH-100 and EH-200, acting as actuators, and two temperature sensors, TT-100 and TT-200. The transistor inputs can be adjusted within a 0-100% range, resulting in modulated heat generation. The heat is dissipated by heat sinks surrounding the transistors. The TT-100 and TT-200 sensors are located close to the heaters and receive energy through heat conduction, convection and radiation.
The relative positions of the sensors with respect to the heaters are such that, in addition to the main process relations EH-100 → TT-100 and EH-200 → TT-200, there is also a smaller cross-influence: EH-100 → TT-200 and EH-200 → TT-100. Maintaining the two temperatures in this system presents a MIMO control problem with loop interaction, making it well suited for the MPC demo.
![Fig. 1.2 - Temperature Control Lab schematics [1].](https://static.wixstatic.com/media/b2efc2_7df243bb7cb345299937f89877713d6e~mv2.png/v1/fill/w_980,h_511,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/b2efc2_7df243bb7cb345299937f89877713d6e~mv2.png)
1.3 Where it is implemented
For the MPC demo, the TCLab dual-heater process model [2] was used instead of the TCLab hardware. The process simulation runs on a standalone simulation node connected to the ARTData server over a network (Fig. 1.3). Communication is configured between the OPC UA server on the simulation node and the OPC UA interface of the ARTData server. Sensor measurements from the TCLab process model are continuously sent to the ARTData database, where they are used to calculate control outputs for the heaters. The outputs are sent the opposite way, from the ARTData database back to the process model, where they are used to calculate the next temperature values.
The data space on the ARTData server is split across three databases:
dpjt_010_tclab_pcs – hosts basic control layer data and logic, such as instrument indications, PID loops, communication health monitoring. In an industrial deployment this would correspond to a Process Control System (PCS).
dpjt_010_tclab_mpc – hosts MPC data and logic. This is where the MPC controller is implemented and executed. It reads measurements from the basic control layer and sends MV moves back.
dpjt_010_tclab_aux – auxiliary database to store multidimensional data generated by the MPC, specifically to visualise MV and CV predictions over control horizon.
Data from all three databases is presented in dashboard-style visualisation powered by a Grafana server, implemented as part of the ARTData platform. Files and logs generated by the Gekko MPC are stored in the ARTData file system and can be accessed via the File Transfer tool.

1.4 How it is supposed to work
The MPC demo follows a layered control structure similar to what is used in a real plant. In line with the control hierarchy normally adopted in the process industries [4], basic regulatory control and MPC should be placed in different layers, with MPC implemented on top of the base layer. Following this industry best practice, PID-based temperature control is implemented first. It includes OPC UA communication health checks and corresponding failure alarms, measurement processing, and the PID control loops TC-100 and TC-200 (Fig. 1.4).
When the PID control loops are in automatic mode, they adjust the EH-100 and EH-200 heater inputs to maintain TI-100 and TI-200 temperatures at their corresponding set point values. The MPC controller UC-100 is implemented one layer above and receives processed temperature measurements TI-100 and TI-200, as well as JI-100 and JI-200 readback duty values from the heaters.
MPC outputs, or MV moves, are first passed to the watchdog application WDOG-100, which decides whether the moves should be applied for control. The watchdog performs multiple checks, such as comparing MPC actual mode with the required mode, verifying the MPC solver health status, and confirming the communication state. If the checks are passed, the watchdog application sets TC-100 and TC-200 into a special mode and writes the MPC moves into the PID controller outputs. Alternatively, if the checks are not passed, then the watchdog sets TC-100 and TC-200 to their fallback modes.

1.5 Seeing MPC in action
With the Gekko MPC logic configured and connected to the TCLab process model, the controller response can now be observed under different operating scenarios and compared with the PID base-layer performance. Starting from a steady-state condition, a TI-100 set point change is introduced while the TI-200 set point is kept constant. This creates a situation where cross-coupling effects become significant, providing a clear demonstration of MPC capability.
Fig. 1.5 displays the measured temperatures along with their set points and the heater duty readbacks JI-100 and JI-200. As the TI-100 set point change is applied, MPC anticipates the influence of EH-100 not only on TI-100 but also on TI-200 and compensates for it by adjusting the EH-200 duty before TI-200 starts to deviate from its target. Both heater inputs follow optimal trajectories with minimal unnecessary movement.

In contrast to the MPC response, repeating the same test with the two active PID loops TC-100 and TC-200 shows noticeable MV overshoot for a similar level of control tightness.

By experimenting with different setpoint changes and disturbances in this demo environment, it becomes possible to observe directly how MPC achieves predictive, constraint-aware control of interacting systems. This not only illustrates the principles of advanced control in a transparent way but also highlights the value of ARTData as a platform for building and visualising complex real-time control logic.
2. Under the Hood
2.1 Building the process model
To run the TCLab dual heater process model, a standalone simulation node is used. It is a Windows machine connected to the same LAN as ARTData.
The TCLab process model is available from [2] and is run using the Gekko Python library [6] in Simultaneous Dynamic Simulation mode (IMODE = 4) [9]. The model is based on theoretical equations describing convective and radiative heat exchange and uses actual hardware parameters (heater mass, area, heat capacity, etc.). One of the model variables is the ambient temperature, which is assumed to stay constant at 23 °C. When both heaters are at 0% duty (Q1 and Q2), the measured temperatures (T1 and T2) gradually stabilise at this value.
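As a rough illustration of what such an energy balance looks like, below is a single-heater sketch in Gekko (parameter values are representative of the published TCLab model [2]; the actual demo runs the full dual-heater version with conduction and radiation between the heaters):
import numpy as np
from gekko import GEKKO
# Single-heater energy balance sketch (illustrative values, not the demo code)
sim = GEKKO(remote=False)
sim.time = np.linspace(0, 300, 61)     # 5-minute simulation, 5 s steps
Ta = 23.0 + 273.15                     # ambient temperature [K]
mass, Cp, A = 4.0e-3, 500.0, 1.0e-3    # heater mass [kg], heat capacity [J/kg-K], area [m2]
U, eps, sigma, alpha = 10.0, 0.9, 5.67e-8, 0.01   # convection, emissivity, Stefan-Boltzmann, W per % duty
Q1 = sim.Param(value=50.0)             # heater duty [%]
T1 = sim.Var(value=Ta)                 # heater temperature [K]
sim.Equation(mass*Cp*T1.dt() == U*A*(Ta - T1) + eps*sigma*A*(Ta**4 - T1**4) + alpha*Q1)
sim.options.IMODE = 4                  # simultaneous dynamic simulation
sim.solve(disp=False)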
The model is called as an asynchronous function inside the OPC UA server event loop. At each step it takes the initial temperature values and heater duties and calculates future temperatures for each second of a 5-second horizon. The time the model takes to finish the calculation is recorded and used in the next cycle to interpolate the current temperatures from the 5-second horizon. This way, a real-time effect is achieved when running the model.
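A self-contained sketch of this timing trick is shown below, with a trivial first-order expression standing in for the Gekko model call (all names and values here are illustrative, not the demo code):
import asyncio, time
import numpy as np
# Illustrative real-time simulation cycle (stand-in model, not the demo code)
async def simulation_loop(cycles=5):
    T1, Q1, Ta, tau, K = 23.0, 50.0, 23.0, 120.0, 0.6   # stand-in first-order model parameters
    last_calc_time = 0.0
    for _ in range(cycles):
        t0 = time.monotonic()
        # "Model call": predict T1 for each second of a 5-second horizon
        t_grid = np.arange(0.0, 6.0)
        T1_pred = Ta + K * Q1 + (T1 - Ta - K * Q1) * np.exp(-t_grid / tau)
        # Interpolate the current temperature at the calculation time recorded last cycle
        T1 = float(np.interp(last_calc_time, t_grid, T1_pred))
        last_calc_time = time.monotonic() - t0
        await asyncio.sleep(1.0)   # the demo runs this inside the OPC UA server event loop
asyncio.run(simulation_loop())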
2.2 Communication interface
To communicate with ARTData, the model variables are exposed through an OPC UA server powered by the asyncua Python library [5]. The server runs on the simulation node and is accessible over the network at the endpoint opc.tcp://0.0.0.0:4860/sim/tclab-simulation/ using username and password authentication.
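For readers unfamiliar with asyncua, a minimal server sketch along the lines of the endpoint above might look as follows (the namespace, node names and update loop are illustrative; the demo's username/password authentication is omitted):
import asyncio
from asyncua import Server
# Minimal asyncua server sketch (illustrative, not the demo's actual server code)
async def main():
    server = Server()
    await server.init()
    server.set_endpoint("opc.tcp://0.0.0.0:4860/sim/tclab-simulation/")
    idx = await server.register_namespace("tclab-simulation")
    tclab = await server.nodes.objects.add_object(idx, "TCLab")
    tt100 = await tclab.add_variable(idx, "TT-100.PV", 23.0)   # temperature published to ARTData
    eh100 = await tclab.add_variable(idx, "EH-100.IN", 0.0)    # heater duty written by ARTData
    await eh100.set_writable()
    async with server:
        while True:
            await asyncio.sleep(1.0)
            duty = await eh100.read_value()               # duty requested by the control layer
            await tt100.write_value(23.0 + 0.1 * duty)    # placeholder for the process model
asyncio.run(main())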
On the ARTData side, the OPC UA communication interface is represented by read and write clients, which are configured as shown in Fig. 2.1 and Fig. 2.2. To access the configuration:
→ navigate to OPC UA Interface from the tools palette
→ select the dpjt_010_tclab_pcs database
→ switch to Settings view
→ select TCLab Simulation Read client
→ press Edit
The OPC UA read and write clients are configured to access the ARTData database at a 1-second interval. The read subscription period is set at 0.5 seconds to increase the chances of receiving the most recent values from the OPC UA server.


The list of read and write data points is provided in Table 2.1. It is accessible from the Read and Write views of the ARTData OPC UA data access configuration, as shown in Fig. 2.3 and Fig. 2.4. All OPC UA read items are configured to be recorded in both the real-time and time-series data tables of ARTData. This way, they can be used in control logic configuration and in historical data analysis. OPC UA write items are sourced from the real-time data table to obtain only the current values and send them to the OPC UA server.
Table 2.1 – TCLab OPC UA communication list.
| Process Model Variable | OPC UA Server Variable | ARTData tag_id.metric | Direction | Description |
| --- | --- | --- | --- | --- |
| Q1[0] | EH-100.PV | JT-100.RAW | Read | Heater Q1 Duty Feedback |
| Q2[0] | EH-200.PV | JT-200.RAW | Read | Heater Q2 Duty Feedback |
| Q1[1:] | EH-100.IN | TC-100.MV | Write | Heater T1 Temperature Controller MV |
| Q2[1:] | EH-200.IN | TC-200.MV | Write | Heater T2 Temperature Controller MV |
| T1 | TT-100.PV | TT-100.RAW | Read | Heater T1 Temperature |
| T2 | TT-200.PV | TT-200.RAW | Read | Heater T2 Temperature |
| heartbeat | UI-010.PV | UI-010.PV | Read | OPC UA communication rolling counter |


When a read or write item is selected and opened for editing, additional configuration details become visible and available for modification (Fig. 2.5, Fig. 2.6). This is where the link between an OPC UA item and the ARTData tag_id and metric is fully established.


2.3 Measurements processing
How do we know that a measurement can be trusted at any given moment? How do we record and use this kind of information? ARTData databases include a mandatory status quality field for each data record, used in both real-time and time-series database tables. A major part of measurement processing is assigning a quality value to this field.
There are several key scenarios when data is not trustworthy:
Values are not refreshed regularly from a data source, for example due to communication failure;
Initialisation conditions when there is simply no data in the ARTData database;
A value is outside its normal range.
Some optional quality checks can also be performed, such as:
Freeze check – if a value is not changing, it is often an indication of instrument or communication fault;
Erratic changes – if a value suddenly drops or increases, exceeding the normal rate of change, it is often an indication of instrument failure.
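As an illustration, minimal versions of these two optional checks could look like the following (window length and rate limit are arbitrary examples):
# Illustrative freeze and erratic-change checks (thresholds are examples only)
def freeze_check(recent_values, min_span=1e-3):
    # Frozen signal: the last N samples barely change
    return (max(recent_values) - min(recent_values)) < min_span
def erratic_check(prev_value, new_value, dt_s, max_rate_per_s):
    # Erratic signal: the rate of change exceeds the allowed limit
    return abs(new_value - prev_value) / dt_s > max_rate_per_s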
In the ARTData MPC demo, a basic incrementing counter is implemented as a heartbeat to monitor OPC UA communication with the process model and detect communication loss. The process model includes simple logic to increment the counter by 1 each cycle and reset it to 0 when reaching 9999. It is accessible from the OPC UA server under the UI-010.PV variable (Table 2.1).
On the ARTData side, this counter is read via the OPC UA interface and recorded as UI-010.PV. The logic that detects communication faults is located inside the XA-010 data processing item (Fig. 2.7). To access it:
→ navigate to Data Processing from the tools palette
→ select the dpjt_010_tclab_pcs database
→ select XA-010 DPI
→ click Edit.
The XA-010 PV is set to True (healthy) if UI-010.PV value and timestamp have changed in the last 15 seconds; otherwise, it is set to False (communication fault).
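A simplified version of this check is sketched below (the record structure is an assumption; the real logic lives in the XA-010 DPI configuration):
from datetime import timedelta
# Illustrative XA-010 style heartbeat check (assumed record structure)
def comm_healthy(curr, prev, now, timeout=timedelta(seconds=15)):
    # curr/prev are (value, timestamp) samples of UI-010.PV from consecutive checks
    if curr is None or prev is None:
        return False                          # initialisation: no data yet
    value_changed = curr[0] != prev[0]
    timestamp_fresh = (now - curr[1]) <= timeout
    return value_changed and timestamp_fresh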

Measurement DPIs (JI-100, JI-200, TI-100, TI-200) use values received from the corresponding OPC UA raw input signals. These DPIs verify that the raw values are present in the ARTData database and have Good status, that the XA-010 fault is not active, and that the raw values are within their normal range (Fig. 2.8). If the checks are passed, Good status is generated and the raw measurement value is used as the measurement processing output PV. Alternatively, if the checks fail, Bad status is assigned and the previous value is retained for the PV.
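In pseudocode form, the measurement processing described above reduces to something like the following (argument names and the normal range are assumptions, not the actual DPI configuration):
# Illustrative measurement processing step (not the actual DPI configuration)
def process_measurement(raw_value, raw_status, comm_ok, prev_pv, low=0.0, high=100.0):
    checks_ok = (raw_value is not None
                 and raw_status == "Good"
                 and comm_ok                      # XA-010 communication fault not active
                 and low <= raw_value <= high)    # value within its normal range
    if checks_ok:
        return raw_value, "Good"
    return prev_pv, "Bad"                         # hold the previous PV and flag Bad status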

2.4 Base-layer control
The base-layer control is implemented with two simple PID loops, TC-100 and TC-200. Each loop reads the corresponding temperature measurement (TI-100 and TI-200) and writes its output to the heater input via the OPC UA interface (Fig 1.4). Except for tuning parameters, the PID controllers share an identical configuration.
To ensure a standard implementation and reduce repeated code, the PID logic is programmed in a class accessible from ARTData plugins. To access it:
→ navigate to Data Processing from the tools palette;
→ select the dpjt_010_tclab_pcs database;
→ select General Settings view;
→ press Plugins Editor button;
→ from the menu of the opened window select the baselayer_control plugin.
PID tuning and configuration parameters are accessible from DPIs TC-100 and TC-200, since each requires individual configuration. Some parameters, grouped under the tune variable (such as proportional gain, integral and derivative time, and MV output clamps), can be adjusted during runtime without causing the controller to reset its execution. PID tuning was performed using the Whitehouse Consulting tool, which implements ITAE rules. This tool can be accessed from [7]. The FOPDT system parameters (delay time, time constant, and process gain) were taken from the TCLab MPC lab [8].
Other configuration parameters, grouped under the variable NEW_CHASH, will cause the controller to reset upon change. ARTData does not store any temporary variables in memory and relies entirely on database records. To detect configuration changes, the hash of the previous configuration is compared with the hash of the current configuration (Fig. 2.9). If a difference is detected, the PID is reset while applying the new configuration. Both PID loops are configured with external reset feedback from the JI-100 and JI-200 heater duty readback values, which helps prevent wind-up.
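One simple way to implement such a check is to hash the serialised configuration, as sketched below (the configuration keys are hypothetical and the plugin's exact scheme may differ):
import hashlib, json
# Illustrative configuration-change detection via hashing (hypothetical keys)
def config_hash(cfg: dict) -> str:
    return hashlib.sha256(json.dumps(cfg, sort_keys=True).encode()).hexdigest()
prev_hash = config_hash({"action": "reverse", "fallback_mode": "MAN"})
new_cfg = {"action": "direct", "fallback_mode": "MAN"}
reset_required = config_hash(new_cfg) != prev_hash   # True: re-initialise the PID state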
Integration of MPC with the base-layer control is covered in detail in 2.6 Watching the watchdog. For now, it is sufficient to note that the MPC writes directly to the controller outputs when they are placed in the special mode ROUT. When MPC is not in active control, the PID setpoints track the PV.

2.5 MPC algorithm
2.5.1 Access and navigation
Unlike the PID algorithm, which is implemented as a reusable class, the MPC controller is programmed as a function in the MPC plugin specifically for the TCLab use case. The process model and controller parameters (prediction horizon, MV and CV constraints, CV priorities, weights, etc.) are configured directly in this plugin code.
To access the plugin:
→ navigate to Data Processing from the tools palette;
→ select the dpjt_010_tclab_mpc database;
→ select General Settings view;
→ press Plugins Editor button;
→ from the menu of the opened window select the MPC plugin.
The function solve_mpc in the MPC plugin is referenced in the UC-100 DPI, which provides the interface between the controller and the database by querying input variables and operational handles, and ingesting back moves and statuses. It can be opened through the following steps:
→ navigate to Data Processing from the tools palette;
→ select the dpjt_010_tclab_mpc database;
→ select UC-100 DPI;
→ click Edit.
2.5.2 Controller interfaces
The interface signals are defined in UC-100 DPI and summarised in Table 2.2. This section highlights some of the key ones.
The controller required mode can be set either from the MPC layer via UC-100.CTRL_REQ_MODE or from the PCS layer (i.e., PLC/SCADA/DCS). The PCS-layer required mode (WDOG-100.CTRL_REQ_PCS_MODE) is generated by the watchdog application based on operator input and subjected to fallback logic (see section 2.6 Watching the watchdog). The lowest required mode value from the MPC and PCS layers is used to activate or deactivate the controller. If any of the measurements have Bad status, the controller is automatically placed in SIM mode.
The MPC mode can be one of the four options below:
0=OFF – MPC completely disabled;
1=SIM – simulation mode where MVs remain constant over the prediction horizon;
2=STDBY – performs all calculations including MV moves but does not apply them;
3=ACTIVE – performs all calculations and applies the MV moves.
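Putting together the mode list and the rules in the previous paragraph (the lowest required mode wins; Bad measurements force simulation), a minimal sketch of the mode resolution could be:
# Illustrative mode resolution (0=OFF, 1=SIM, 2=STDBY, 3=ACTIVE); function name is assumed
def effective_mode(req_mode_mpc, req_mode_pcs, all_measurements_good):
    mode = min(req_mode_mpc, req_mode_pcs)   # the lowest required mode wins
    if not all_measurements_good:
        mode = min(mode, 1)                  # any Bad measurement forces SIM
    return mode
effective_mode(3, 3, True)    # -> 3 ACTIVE
effective_mode(3, 2, True)    # -> 2 STDBY
effective_mode(3, 3, False)   # -> 1 SIM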
Based on diagnostic flags and parameters returned from the MPC controller (APPSTATUS, APPINFO, SOLVESTATUS, SOLVETIME, ITERATIONS) any abnormal condition with MPC can be detected. For troubleshooting, UC-100.CTRL_DBG_LEVEL can be set to active, which enables detailed logs to be generated and saved in the ARTData file system. These logs can be accessed through the File Transfer tool from the palette (Fig. 2.10).
Normally, the MPC controller uses the previous cycle’s solution as a starting point for the next iteration, which is known as a warm start. Alternatively, a cold start can be requested, in which case the previous solution is discarded and the MPC starts calculations from scratch. A cold start can be activated by the operator via UC-100.CTRL_COLD_START. If the APPSTATUS flag indicates application failure, the MPC working directory is wiped automatically, which is equivalent to a cold start.
Table 2.2 – TCLab MPC interface variables.
| UC-100 variable name | Direction | Description |
| --- | --- | --- |
| TI_100 | Input | TI-100.PV measurement (Heater T1 Temperature) |
| TI_200 | Input | TI-200.PV measurement (Heater T2 Temperature) |
| JI_100 | Input | JI-100.PV measurement (Heater Q1 Duty Feedback) |
| JI_200 | Input | JI-200.PV measurement (Heater Q2 Duty Feedback) |
| MV_REQ_Q1 | Input | MV 1 required status for Heater Q1 Duty |
| MV_REQ_Q2 | Input | MV 2 required status for Heater Q2 Duty |
| CV_REQ_T1 | Input | CV 1 required status for Heater T1 Temperature |
| CV_REQ_T2 | Input | CV 2 required status for Heater T2 Temperature |
| CV_SP_T1 | Input | CV 1 set point for Heater T1 Temperature |
| CV_SP_T2 | Input | CV 2 set point for Heater T2 Temperature |
| CTRL_REQ_MODE | Input | MPC controller required mode switch at MPC layer |
| CTRL_REQ_PCS_MODE | Input | MPC controller required mode switch at PCS layer |
| CTRL_DBG_LEVEL | Input | Controller debug activation switch |
| CTRL_COLD_START | Input | Controller cold start switch |
| MV_STATUS_Q1 | Output | MV 1 actual status for Heater Q1 Duty |
| MV_STATUS_Q2 | Output | MV 2 actual status for Heater Q2 Duty |
| CV_STATUS_T1 | Output | CV 1 actual status for Heater T1 Temperature |
| CV_STATUS_T2 | Output | CV 2 actual status for Heater T2 Temperature |
| MV_MOVE_Q1 | Output | MV 1 move for Heater Q1 Duty |
| MV_MOVE_Q2 | Output | MV 2 move for Heater Q2 Duty |
| CTRL_ACT_MODE | Output | MPC controller actual mode |
| APPSTATUS | Output | Application status: 1=good, 0=bad [9] |
| APPINFO | Output | Application information: 0=good, error otherwise [9] |
| SOLVESTATUS | Output | Solution solve status: 1=good, 0=bad [9] |
| SOLVETIME | Output | Solution time (seconds) [9] |
| ITERATIONS | Output | Number of iterations for solution [9] |
| HEALTH_STATUS | Output | Combined APPSTATUS and SOLVESTATUS |
| CNT | Output | Incremental counter (controller heartbeat) |
| m_time_array | Output | Time dimension of prediction horizon (array) |

2.5.3 Running cycle
UC-100 DPI, which hosts the MPC algorithm, is scheduled to run every 15 seconds (Fig. 2.11). This setting is mirrored in the Gekko MPC configuration via m.options.CTRL_TIME. Each cycle follows the standard MPC workflow: gathering inputs, setting up the solver problem, solving, processing outputs, and writing results to the database.
If the actual DPI execution time slips slightly, the next cycle starts immediately after the previous one. However, if the execution time exceeds 110% of the nominal period (15 seconds) plus the liveness probe interval (ARTData is configured to check every 5 seconds), the entire data processing worker is restarted. In other words, if the worker does not complete all tasks within 22 seconds, it is considered stalled and restarted automatically.

Of the 15-second execution period, about 5 seconds are consumed by overhead calculations and database queries (Fig. 2.12). To ensure the solver completes within the remaining 10 seconds, the solver timeout is enforced via m.options.MAX_TIME. If the solver fails to finish within this limit, the task is aborted, solver failure is reported via UC-100.SOLVESTATUS, and the next cycle is started in cold start mode. To help keep solving time within the timeout, numerical tolerances are configured using m.options.OTOL and m.options.RTOL.
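In Gekko terms, the cycle timing and tolerance settings mentioned above map to application options like the following (the numeric values here are examples rather than the demo's exact configuration):
from gekko import GEKKO
m = GEKKO(remote=False)
m.options.IMODE = 6          # dynamic control (MPC)
m.options.CTRL_TIME = 15     # controller cycle time [s], mirrors the 15-second DPI schedule
m.options.MAX_TIME = 10      # hard limit on solver time [s]
m.options.OTOL = 1.0e-3      # objective function tolerance (example value)
m.options.RTOL = 1.0e-3      # equation residual tolerance (example value)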

2.5.4 MPC model
An empirical process model from the TCLab MPC lab [8] was used to configure the controller, as opposed to the first-principles model discussed in section 2.1 Building the process model. Empirical models are normally used in industry because they can be built from plant step-test data without full knowledge of the underlying physics. In Gekko, dynamic models are primarily described using algebraic and differential equations, as shown below. The corresponding model block diagram is presented in Fig. 2.13.
# CONSTANTS
Ta = m.FV(value=23.0)
# Parameters from Estimation
K1 = m.FV(value=0.607)
K2 = m.FV(value=0.293)
K3 = m.FV(value=0.24)
tau12 = m.FV(value=192)
tau3 = m.FV(value=15)
# PROCESS MODEL
# Heat transfer between two heaters
DT = m.Intermediate(TH2-TH1)
# Empirical correlations
m.Equation(tau12 * TH1.dt() + (TH1-Ta) == K1*Q1 + K3*DT)
m.Equation(tau12 * TH2.dt() + (TH2-Ta) == K2*Q2 - K3*DT)
m.Equation(tau3 * T1.dt() + T1 == TH1)
m.Equation(tau3 * T2.dt() + T2 == TH2)

2.5.5 Constraints and tuning
The tuning parameters are configured inside the MPC plugin, as shown below. MV limits are set according to the heater input ranges (0–100%) via the UPPER and LOWER parameters. A maximum 10% limit for MV moves is applied to both MVs through the DMAX parameter.
CV priorities are set equal using the TIER parameter, meaning that all CVs are included in the same optimisation problem and the solver must trade off deviations of each CV from its respective set point. If some CVs require higher priority than others, they can be split into different sub-optimisation problems by assigning a higher TIER value to the lower-priority CVs. In this case, the solution is first found for the higher-priority CVs, leaving fewer degrees of freedom for the lower-priority ones.
To penalise MV movements in the objective function, the DCOST parameter is used. Higher DCOST values result in slower MV moves. Its equivalent on the CV side is the WSP parameter, which applies a weight to the squared error between a CV and its set point. A higher WSP results in tighter CV control and balances DCOST on the MV side. Refer to [10] for a detailed overview of all terms in the dynamic control objective function. For the TCLab MPC example, CV weights were left at their default value of 20. MV movement weights were adjusted to ensure stable control, with the DCOST for Q2 (heater EH-200) set smaller to reflect the lower process gain K2 between the EH-200 heater input and the heater temperature variable TH2.
In practice, MPC tuning is about finding the right balance between controller stability and responsiveness, making sure CVs stay close to their set points or limits without applying excessive moves to the MVs.
# MANIPULATED VARIABLES
Q1 = m.MV(name='Q1')
Q1.FSTATUS = 1 # Use measured value
Q1.DMAX = 10 # Delta MV maximum step per horizon interval
Q1.DCOST = 100 # Delta cost penalty for MV movement
Q1.UPPER = 100.0 # Upper bound
Q1.LOWER = 0.0 # Lower bound
#
Q2 = m.MV(name='Q2')
Q2.FSTATUS = 1 # Use measured value
Q2.DMAX = 10 # Delta MV maximum step per horizon interval
Q2.DCOST = 50 # Delta cost penalty for MV movement
Q2.UPPER = 100.0 # Upper bound
Q2.LOWER = 0.0 # Lower bound
# CONTROLLED VARIABLES
CV_Priority = {"T1": 1, "T2": 1}
#
T1 = m.CV(name='T1')
T1.FSTATUS = 1 # Use measured value
T1.SP = CV_SetPoint['T1']['SP'] # set point
T1.TIER = CV_Priority['T1'] # CV priority
T1.WSP = 20 # CV Set Point error weight
#
T2 = m.CV(name='T2')
T2.FSTATUS = 1 # Use measured value
T2.SP = CV_SetPoint['T2']['SP'] # set point
T2.TIER = CV_Priority['T2'] # CV priority
T2.WSP = 20 # CV Set Point error weight
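Continuing from the MV and CV definitions above, the remaining steps of a cycle are to set the horizon, activate the MVs and CVs, and solve. An illustrative completion is shown below (option values are examples, not the demo's exact tuning):
# Illustrative completion of the controller setup and one solve (example values)
import numpy as np
m.time = np.linspace(0, 120, 9)     # prediction horizon: 8 steps of 15 s
m.options.IMODE = 6                 # MPC mode
m.options.CV_TYPE = 2               # squared-error objective, so the WSP weights apply
Q1.STATUS = 1; Q2.STATUS = 1        # allow the solver to move the MVs
T1.STATUS = 1; T2.STATUS = 1        # include the CVs in the objective
m.solve(disp=False)
mv_move_q1, mv_move_q2 = Q1.NEWVAL, Q2.NEWVAL   # first MV moves, returned to the UC-100 outputs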
2.6 Watching the watchdog
The MPC watchdog application serves two primary functions:
It provides the interface between the base-layer control and the MPC layer.
It independently monitors the MPC application and activates safety fallbacks when required.
For the TCLab demo, the MPC watchdog is implemented in WDOG-100 DPI within the dpjt_010_tclab_pcs database. It performs three key timeout checks:
Communication timeout: triggered after 60 seconds if the UC-100.CNT counter stops incrementing, indicating a failure of the UC-100 DPI.
Health status timeout: triggered if the UC-100.HEALTH_STATUS indicates MPC solver or application failure state for more than 60 seconds.
Mode timeout: triggered after 60 seconds if the MPC does not follow the required mode from PCS.
If any of these timeouts occur, the watchdog sets the controller required mode (CTRL_REQ_PCS_MODE) to simulation (1), and all MV required statuses (MV_REQ_PCS_Q1, MV_REQ_PCS_Q2) to inactive (0). These states are referred to as shedding conditions and shedding modes.
If no timeouts are active and both the previous required mode from PCS (CTRL_REQ_PCS_MODE_PREV) and the actual MPC controller mode (UC-100.CTRL_ACT_MODE) are active, then the watchdog sets the IN_CTRL_CND flag. This flag indicates that the MPC is authorised and in position to control.
When an MV is required to be active (e.g. WDOG-100.MV_REQ_PCS_Q1 = 1), and its actual MPC status (MV_STATUS_Q1) is active while IN_CTRL_CND is also active, the MV_IN_CTRL_Q1_CND flag is raised. This indicates that the MV should receive moves from MPC. In this state, the watchdog changes the corresponding base-layer controller (e.g. TC-100) mode to ROUT, allowing it to accept external moves. Once the mode change is confirmed, the MV_MOVE_Q1_CND flag is set, and the watchdog starts to write MPC moves to the RMV datapoint of TC-100 dedicated for external output manipulation.
If shedding conditions are active, the watchdog switches the modes of TC-100 and TC-200 controllers to AUT, ensuring that temperature control continues under base-layer PID control.
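A simplified sketch of this shedding decision is given below (signal names mirror the WDOG-100 variables; the timer bookkeeping behind the three timeout flags is assumed to happen elsewhere in the DPI):
# Simplified shedding decision (illustrative, not the actual WDOG-100 logic)
def shedding_outputs(comm_timeout, health_timeout, mode_timeout):
    if comm_timeout or health_timeout or mode_timeout:
        return {"CTRL_REQ_PCS_MODE": 1,   # drop the required mode to 1=SIM
                "MV_REQ_PCS_Q1": 0,       # release both MVs from MPC
                "MV_REQ_PCS_Q2": 0,
                "TC_100_MODE": "AUT",     # hand control back to the base-layer PIDs
                "TC_200_MODE": "AUT"}
    return None                            # no shedding; normal handover logic applies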
2.7 Operator graphics
Once the MPC algorithm and control interfaces are in place, the operator graphics interface becomes a critical component for day-to-day operation, monitoring, and maintenance. A rich, context-aware and user-friendly HMI can have a substantial impact on the effectiveness of MPC implementation. It enables engineers to tune, monitor, and troubleshoot the controller more efficiently, and allows operators to interact with the application with greater confidence and clarity.
To make the MPC HMI dashboard clear and easy to navigate, it is divided into three functional sections:
MPC Application Handles and Monitoring (Fig. 2.14).
This section provides controls to change the controller mode from the MPC layer, activate a cold start, and enable log collection. It also gives an overview of the application health, the controller’s actual mode, and solver performance.
Process Control Overview (Fig. 2.15).
This section allows the user to adjust the heater temperature set points and observe the dynamic response of the two temperature measurements to changes in heater duties. MV and CV trajectories over the MPC prediction horizon are also displayed, enabling users to assess how the controller converges to its targets.
MV and CV Details (Fig. 2.16).
This section contains switches for changing the modes of individual MVs and CVs and monitoring their actual statuses. It is useful for checking which MVs and CVs are currently active, or for disabling specific ones from the MPC layer when required.



Separately from the MPC graphics, the PCS-level graphics are primarily designed for TCLab heater operation (Fig. 2.17). They include less diagnostic information and focus more on process data, enabling users to switch between PID-based and MPC-based control modes.

Abbreviations
| Abbreviation | Full description |
| --- | --- |
| ARTData | Advanced Real-Time Data Processing Platform |
| AUT | Automatic (PID controller mode) |
| CV | Controlled Variable |
| DCS | Distributed Control System |
| DPI | Data Processing Item |
| EH | Electric Heater |
| FOPDT | First Order Plus Dead Time system |
| HMI | Human Machine Interface |
| ITAE | Integral of Time multiplied by Absolute Error |
| LAN | Local Area Network |
| MIMO | Multiple Inputs Multiple Outputs system |
| MPC | Model Predictive Control |
| MV | Manipulated Variable |
| OPC UA | Open Platform Communications Unified Architecture |
| PID | Proportional Integral Derivative control algorithm |
| PCS | Process Control System |
| PLC | Programmable Logic Controller |
| PV | Process Value |
| RMV | Remote Manipulated Variable (controller external input) |
| ROUT | Remote Manual mode of PID controller |
| SCADA | Supervisory Control and Data Acquisition |
| TCLab | Temperature Control Lab |
References
[1] APMonitor. Temperature Control Lab
[2] APMonitor. Dual Heater Model
[3] Schwenzer, M., Ay, M., Bergs, T. et al. Review on model predictive control: an engineering perspective. Int J Adv Manuf Technol 117, 1327–1349 (2021).
[4] Juergen Hahn, Thomas F. Edgar. Kirk-Othmer Encyclopedia of Chemical Technology, 2014.
[5] FreeOpcUa / opcua-asyncio
[6] BYU-PRISM / GEKKO
[7] Whitehouse Consulting. Tune PID controller.
[8] APMonitor. TCLab 2nd order model for GEKKO MPC
[9] APMonitor. Gekko MPC documentation
[10] APMonitor. Dynamic Control Objectives


