Data validation and reconciliation


Industrial process data validation and reconciliation, or more briefly, process data reconciliation (PDR), is a technology that uses process information and mathematical methods in order to automatically ensure data validation and reconciliation by correcting measurements in industrial processes. The use of PDR allows accurate and reliable information about the state of industrial processes to be extracted from raw measurement data, and produces a single consistent set of data representing the most likely process operation.

Models, data and measurement errors

Industrial processes, for example chemical or thermodynamic processes in chemical plants, refineries, oil or gas production sites, or power plants, are often represented by two fundamental means:
  1. Models that express the general structure of the processes,
  2. Data that reflects the state of the processes at a given point in time.
Models can have different levels of detail: for example, they can incorporate simple mass or compound conservation balances, or more advanced thermodynamic models including energy conservation laws. Mathematically the model can be expressed by a nonlinear system of equations $F(x) = 0$ in the vector of variables $x$, which incorporates all the above-mentioned system constraints. A variable could be, for instance, the temperature or the pressure at a certain place in the plant.
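As a minimal illustration (the mixer, flow values, and constant heat capacity below are invented for this sketch and are not part of the source), such a model can be written as a residual function whose root is a consistent process state:

```python
# Hypothetical mixer: two inlet streams combine into one outlet stream.
# The model F(x) = 0 consists of a mass balance and a simple enthalpy balance.
import numpy as np

def model_residuals(x):
    # x = [m1, m2, m3, T1, T2, T3]  (mass flows in kg/s, temperatures in K)
    m1, m2, m3, T1, T2, T3 = x
    cp = 4.2  # kJ/(kg K), assumed constant heat capacity
    return np.array([
        m1 + m2 - m3,                        # mass conservation
        cp * (m1 * T1 + m2 * T2 - m3 * T3),  # energy conservation (constant cp)
    ])

# A consistent process state makes all residuals zero:
x = np.array([2.0, 3.0, 5.0, 300.0, 350.0, 330.0])
print(model_residuals(x))  # -> [0., 0.]
```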

Error types

Data typically originates from measurements taken at different places throughout the industrial site, for example temperature, pressure, or volumetric flow rate measurements. To understand the basic principles of PDR, it is important to first recognize that plant measurements are never 100% correct, i.e. the raw measurements are never an exact solution of the nonlinear system $F(x) = 0$. When measurements are used without correction to generate plant balances, inconsistencies are common. Measurement errors can be categorized into two basic types:
  1. random errors due to intrinsic sensor accuracy and
  2. systematic errors due to sensor calibration or faulty data transmission.
A random error means that the measurement $y$ is a random variable with mean $y^*$, where $y^*$ is the true value, which is typically not known. A systematic error, on the other hand, is characterized by a measurement $y$ which is a random variable whose mean is not equal to the true value $y^*$. For ease in deriving and implementing an optimal estimation solution, and based on the argument that errors are the sum of many factors, data reconciliation assumes that these errors are normally distributed.
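A small numerical sketch (the true value, bias, and noise level below are invented for illustration) makes the distinction concrete: the sample mean of purely random errors stays near the true value, while a systematic bias shifts it:

```python
# Illustration of random vs. systematic measurement errors.
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0

random_only = true_value + rng.normal(0.0, 2.0, 10_000)        # random error only
with_bias   = true_value + 3.0 + rng.normal(0.0, 2.0, 10_000)  # +3 bias plus random error

print("mean with random errors only :", random_only.mean())  # ~100.0 (equals true value)
print("mean with systematic bias    :", with_bias.mean())    # ~103.0 (shifted away)
```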
Other sources of errors when calculating plant balances include process faults such as leaks, unmodeled heat losses, incorrect physical properties or other physical parameters used in equations, and incorrect structure such as unmodeled bypass lines. Other errors include unmodeled plant dynamics such as holdup changes, and other instabilities in plant operations that violate steady state models. Additional dynamic errors arise when measurements and samples are not taken at the same time, especially lab analyses.
The normal practice of using time averages for the data input partly reduces the dynamic problems. However, that does not completely resolve timing inconsistencies for infrequently-sampled data like lab analyses.
This use of average values, like a moving average, acts as a low-pass filter, so high frequency noise is mostly eliminated. The result is that, in practice, data reconciliation is mainly making adjustments to correct systematic errors like biases.
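As a rough sketch of this filtering effect (the signal, noise level, and window length below are arbitrary choices, not values from the source), a centered moving average strongly attenuates high-frequency noise while leaving the slow process trend nearly untouched:

```python
# Moving average as a low-pass filter on a noisy measurement signal.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
slow_trend = 50.0 + 2.0 * np.sin(0.2 * np.pi * t)   # slow process variation
noise = rng.normal(0.0, 1.0, t.size)                # high-frequency measurement noise
measured = slow_trend + noise

window = 50                            # averaging window (number of samples)
kernel = np.ones(window) / window
filtered = np.convolve(measured, kernel, mode="same")

print("noise std before averaging:", np.std(measured - slow_trend))
print("noise std after averaging :", np.std((filtered - slow_trend)[window:-window]))
```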

Necessity of removing measurement errors

ISA-95 is the international standard for the integration of enterprise and control systems. It asserts that:
Data reconciliation is a serious issue for enterprise-control integration. The data have to be valid to be useful for the enterprise system. The data must often be determined from physical measurements that have associated error factors. This must usually be converted into exact values for the enterprise system. This conversion may require manual, or intelligent reconciliation of the converted values.
Systems must be set up to ensure that accurate data are sent to production and from production. Inadvertent operator or clerical errors may result in too much production, too little production, the wrong production, incorrect inventory, or missing inventory.

History

PDR has become more and more important as industrial processes have become more and more complex. PDR started in the early 1960s with applications aiming at closing material balances in production processes where raw measurements were available for all variables. At the same time, the problem of gross error identification and elimination was presented. In the late 1960s and 1970s unmeasured variables were taken into account in the data reconciliation process, and PDR also became more mature through the consideration of general nonlinear equation systems coming from thermodynamic models.
Quasi-steady-state dynamics for filtering and simultaneous parameter estimation over time were introduced in 1977 by Stanley and Mah. Dynamic PDR was formulated as a nonlinear optimization problem by Liebman et al. in 1992.

Data reconciliation

Data reconciliation is a technique that aims at correcting measurement errors that are due to measurement noise, i.e. random errors. From a statistical point of view, the main assumption is that no systematic errors exist in the set of measurements, since they may bias the reconciliation results and reduce the robustness of the reconciliation.
Given $n$ measurements $y_i$, data reconciliation can mathematically be expressed as an optimization problem of the following form:

$$\min_{y_1^*,\ldots,y_n^*,\;x_1,\ldots,x_m} \;\sum_{i=1}^{n} \left(\frac{y_i^* - y_i}{\sigma_i}\right)^2$$

subject to

$$F(x, y^*) = 0, \qquad y_{\min} \le y^* \le y_{\max}, \qquad x_{\min} \le x \le x_{\max},$$

where
$y_i^*$ is the reconciled value of the $i$-th measurement ($i = 1,\ldots,n$), $y_i$ is the measured value of the $i$-th measurement, $x_j$ is the $j$-th unmeasured variable ($j = 1,\ldots,m$), and $\sigma_i$ is the standard deviation of the $i$-th measurement,
$F(x, y^*) = 0$ are the process equality constraints and
the inequalities are the bounds on the measured and unmeasured variables.
The term $\left(\frac{y_i^* - y_i}{\sigma_i}\right)^2$ is called the penalty of measurement $i$. The objective function is the sum of the penalties, which will be denoted in the following by $f(y^*) = \sum_{i=1}^{n} \left(\frac{y_i^* - y_i}{\sigma_i}\right)^2$.
In other words, one wants to minimize the overall correction that is needed in order to satisfy the system constraints. Additionally, each least squares term is weighted by the inverse of the standard deviation of the corresponding measurement, i.e. the residual $y_i^* - y_i$ is divided by $\sigma_i$. The standard deviation is related to the accuracy of the measurement: for example, if the accuracy is stated at a 95% confidence level, the standard deviation is about half that accuracy.
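The following is a minimal sketch of this formulation for a single mass balance $a + b = c$ with all three flows measured; the numbers, standard deviations, and the use of SciPy's SLSQP solver are illustrative assumptions, not part of the source:

```python
# Data reconciliation for one mass balance a + b = c (all flows measured).
import numpy as np
from scipy.optimize import minimize

y_meas = np.array([101.9, 68.1, 173.0])   # measured flows for a, b, c
sigma  = np.array([1.0, 1.0, 2.0])        # measurement standard deviations

def objective(y_hat):
    # Sum of penalties ((y_hat_i - y_i) / sigma_i)^2
    return np.sum(((y_hat - y_meas) / sigma) ** 2)

constraints = [{"type": "eq", "fun": lambda y: y[0] + y[1] - y[2]}]  # a + b = c
bounds = [(0.0, None)] * 3                                           # flows are non-negative

result = minimize(objective, y_meas, method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("reconciled flows:", result.x)   # satisfy a + b = c within solver tolerance
print("objective value :", result.fun)
```

With these illustrative numbers the raw measurements violate the balance by 3 units, and the solver distributes the correction according to the measurement variances: the less accurate flow $c$ absorbs most of the adjustment (the reconciled flows come out near 102.4, 68.6 and 171.0).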

Redundancy

Data reconciliation relies strongly on the concept of redundancy to correct the measurements as little as possible in order to satisfy the process constraints. Here, redundancy is defined differently from redundancy in information theory. Instead, redundancy arises from combining sensor data with the model, sometimes more specifically called "spatial redundancy", "analytical redundancy", or "topological redundancy".
Redundancy can be due to sensor redundancy, where sensors are duplicated in order to have more than one measurement of the same quantity. Redundancy also arises when a single variable can be estimated in several independent ways from separate sets of measurements at a given time or time averaging period, using the algebraic constraints.
Redundancy is linked to the concept of observability. A variable is observable if the models and sensor measurements can be used to uniquely determine its value. A sensor is redundant if its removal causes no loss of observability. Rigorous definitions of observability, calculability, and redundancy, along with criteria for determining them, were established by Stanley and Mah for these cases with set constraints such as algebraic equations and inequalities. Next, we illustrate some special cases:
Topological redundancy is intimately linked with the degrees of freedom of a mathematical system, i.e. the minimum number of pieces of information (i.e. measurements) that are required in order to calculate all of the system variables. For instance, for a single node where two inlet flows $a$ and $b$ combine into an outlet flow $c$, flow conservation requires that $a + b = c$. One needs to know the value of two of the three variables in order to calculate the third one, so the degrees of freedom of the model in that case are equal to 2. At least two measurements are needed to estimate all the variables, and three would be needed for redundancy.
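A rough way to check calculability in the linear case (this rank test is an illustrative assumption, not the formal criteria cited above) is to look at the coefficient matrix of the unmeasured variables:

```python
# For linear constraints A_x x + A_y y = 0 with unmeasured x and measured y,
# the unmeasured variables are uniquely calculable when A_x has full column rank.
import numpy as np

def unmeasured_calculable(A_x: np.ndarray) -> bool:
    return np.linalg.matrix_rank(A_x) == A_x.shape[1]

# Balance a + b - c = 0. Case 1: only c is unmeasured (a, b measured).
print(unmeasured_calculable(np.array([[-1.0]])))       # True  -> c = a + b
# Case 2: b and c are both unmeasured (only a measured).
print(unmeasured_calculable(np.array([[1.0, -1.0]])))  # False -> underdetermined
```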
When speaking about topological redundancy we have to distinguish between measured and unmeasured variables. In the following let us denote by $x$ the unmeasured variables and by $y$ the measured variables. Then the system of the process constraints becomes $F(x, y) = 0$, which is a nonlinear system in $x$ and $y$.
If the system $F(x, y) = 0$ is calculable with the $n$ measurements given, then the level of topological redundancy is defined as

$$\mathrm{red} = n - \mathrm{dof},$$

i.e. the number of additional measurements that are at hand on top of those measurements which are required in order to just calculate the system. Another way of viewing the level of redundancy is to use the definition of $\mathrm{dof}$, which is the difference between the number of variables (measured and unmeasured) and the number of equations. Denoting the number of unmeasured variables by $m$ and the number of equations by $p$, one gets

$$\mathrm{red} = n - \mathrm{dof} = n - (n + m - p) = p - m,$$

i.e. the redundancy is the difference between the number of equations and the number of unmeasured variables. The level of total redundancy is the sum of sensor redundancy and topological redundancy. We speak of positive redundancy if the system is calculable and the total redundancy is positive. One can see that the level of topological redundancy merely depends on the number of equations and the number of unmeasured variables, and not on the number of measured variables.
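As an illustrative check (the numbers refer to the single-balance example $a + b = c$ above, not to anything additional in the source), the counts work out as follows:

```latex
% Single balance a + b = c with all three flows measured:
% p = 1 equation, m = 0 unmeasured variables, n = 3 measurements.
\[
  \mathrm{dof} = (n + m) - p = (3 + 0) - 1 = 2,
  \qquad
  \mathrm{red} = n - \mathrm{dof} = 3 - 2 = 1 = p - m .
\]
```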
Simple counts of variables, equations, and measurements are inadequate for many systems, and break down for several reasons: portions of a system might have redundancy while others do not; some portions might not even be possible to calculate; and nonlinearities can lead to different conclusions at different operating points. As an example, consider the following system with 4 streams and 2 units.