*[[Media:SE.pdf|State Estimation]]
*[[Media:PF.pdf|Power Flow]]
*[[Media:Ybus.pdf|YBUS Admittance Matrix Formulation]]

This document is a description of how to formulate the weighted least squares (WLS) state estimation problem. Most of the formulation is based on the book by Abur and Exposito.<ref>Ali Abur, Antonio Gomez Exposito, "Power System State Estimation: Theory and Implementation", CRC Press</ref>

Power system state estimation is a central component of power system Energy Management Systems. A state estimator receives field measurement data from remote terminal units through data transmission systems such as a Supervisory Control and Data Acquisition (SCADA) system. Based on a set of non-linear equations relating the measurements to the power system states (i.e. bus voltage magnitudes and phase angles), a state estimator fine-tunes the power system state variables by minimizing the sum of the squared residuals. This is the well-known WLS method.

The mathematical formulation of the WLS state estimation algorithm for an <math>n</math>-bus power system with <math>m</math> measurements is given below.

== Basic Equations ==

The starting equation for the WLS state estimation algorithm is

<math>
z=\begin{bmatrix} z_1\\ z_2\\ \vdots\\ z_m \end{bmatrix} =
\begin{bmatrix} h_1(x_1,x_2,...,x_n)\\ h_2(x_1,x_2,...,x_n)\\ \vdots\\ h_m(x_1,x_2,...,x_n) \end{bmatrix}+\begin{bmatrix} e_1\\ e_2\\ \vdots\\ e_m \end{bmatrix}
=h(x)+e
</math>

The vector <math>z</math> of <math>m</math> measured values is <math>z^{T} = \begin{bmatrix} z_1 & z_2 & ... & z_{m}\end{bmatrix}</math>. The vector <math>h^{T} = \begin{bmatrix} h_1(x) & h_2(x) & ... & h_{m}(x) \end{bmatrix}</math> contains the non-linear functions <math>h_{i}(x)</math> that relate the predicted value of measurement <math>i</math> to the state vector <math>x</math> of <math>n</math> variables, <math>x^{T} = \begin{bmatrix} x_{1} & x_{2} & ... & x_{n} \end{bmatrix}</math>, and <math>e</math> is the vector of measurement errors, <math>e^{T} = \begin{bmatrix} e_{1} & e_{2} & ... & e_{m} \end{bmatrix}</math>.
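
As a concrete toy illustration of this measurement model, the sketch below builds <math>z = h(x) + e</math> in Python/NumPy for a hypothetical two-state, three-measurement case; the particular functions, state values, and standard deviations are placeholders chosen only for illustration and are not part of the formulation above.

<syntaxhighlight lang="python">
import numpy as np

def h(x):
    """Hypothetical non-linear measurement functions h_i(x) for a 2-state example."""
    v, theta = x                          # placeholder state: one voltage magnitude, one angle
    return np.array([v * np.cos(theta),   # illustrative h_1(x)
                     v * np.sin(theta),   # illustrative h_2(x)
                     v ** 2])             # illustrative h_3(x)

x_true = np.array([1.02, 0.05])           # assumed "true" state (per unit, radians)
sigma = np.array([0.01, 0.01, 0.02])      # assumed measurement standard deviations

rng = np.random.default_rng(seed=0)
e = rng.normal(0.0, sigma)                # zero-mean, independent errors e_i
z = h(x_true) + e                         # measured values: z = h(x) + e
</syntaxhighlight>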

The measurement errors <math>e_i</math> are assumed to satisfy the following statistical properties. First, the errors have zero mean,

<math>E(e_i) = 0, \quad i = 1,...,m</math>

Second, the errors are assumed to be independent (<math>E[e_i e_j]=0</math> for <math>i\ne j</math>), so that the covariance matrix is diagonal,

<math>Cov(e)=E(e\cdot e^T) = R = diag\{\sigma_1^2,\sigma_2^2,...,\sigma_m^2\}</math>
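
Because <math>R</math> is diagonal, it can be formed directly from the measurement standard deviations; a minimal sketch, continuing the hypothetical example above:

<syntaxhighlight lang="python">
# Diagonal covariance R = diag(sigma_1^2, ..., sigma_m^2) and its inverse.
R = np.diag(sigma ** 2)
R_inv = np.diag(1.0 / sigma ** 2)   # the inverse of a diagonal matrix is taken elementwise
</syntaxhighlight>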

The objective function is then given by

<math>J(x)=\sum_{i=1}^{m}(z_i-h_i(x))^2/R_{ii} = [z-h(x)]^T R^{-1}[z-h(x)]</math>
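
A direct evaluation of this objective, reusing the hypothetical <code>h</code>, <code>z</code>, and <code>R_inv</code> defined above:

<syntaxhighlight lang="python">
def J(x, z, R_inv):
    """Weighted sum of squared residuals: [z - h(x)]^T R^{-1} [z - h(x)]."""
    r = z - h(x)          # residual vector
    return r @ R_inv @ r
</syntaxhighlight>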

The minimization condition is

<math>g(x)=\frac{\partial J(x)}{\partial x} = -H^T(x)R^{-1}[z-h(x)] = 0</math>

where <math>H(x)=\partial h(x)/\partial x</math> is the measurement Jacobian.
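
For the hypothetical measurement functions above, the Jacobian <math>H(x)</math> can be written analytically and <math>g(x)</math> then follows directly; the sketch below is tied to that toy example, not to a general network model.

<syntaxhighlight lang="python">
def H(x):
    """Jacobian H(x) = dh/dx of the hypothetical measurement functions."""
    v, theta = x
    return np.array([[np.cos(theta), -v * np.sin(theta)],
                     [np.sin(theta),  v * np.cos(theta)],
                     [2.0 * v,        0.0]])

def g(x, z, R_inv):
    """Gradient g(x) = -H^T(x) R^{-1} [z - h(x)]."""
    return -H(x).T @ R_inv @ (z - h(x))
</syntaxhighlight>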

Expanding <math>g(x)</math> in a Taylor series around the state vector <math>x^k</math> leads to the expression

<math>g(x)= g(x^k) + G(x^k)(x-x^k) + ... = 0</math>
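
Neglecting the higher-order terms and approximating <math>G(x^k)=\partial g(x^k)/\partial x \approx H^T(x^k)R^{-1}H(x^k)</math> (commonly called the gain matrix) leads to the standard Gauss-Newton iterative update

<math>G(x^k)\,\Delta x^{k+1} = H^T(x^k)R^{-1}\left[z-h(x^k)\right], \qquad x^{k+1} = x^k + \Delta x^{k+1}</math>

A minimal sketch of this iteration for the toy example above, assuming <code>h</code>, <code>H</code>, <code>z</code>, and <code>R_inv</code> as defined earlier:

<syntaxhighlight lang="python">
x = np.array([1.0, 0.0])                  # flat start for the toy state
for k in range(20):
    Hk = H(x)
    Gk = Hk.T @ R_inv @ Hk                # gain matrix G(x^k)
    rhs = Hk.T @ R_inv @ (z - h(x))       # right-hand side of the normal equations
    dx = np.linalg.solve(Gk, rhs)         # state update Delta x^{k+1}
    x = x + dx
    if np.max(np.abs(dx)) < 1e-6:         # convergence check on the state update
        break
</syntaxhighlight>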

<references />