Pole Placement

Closed-loop pole locations have a direct impact on time response characteristics such as rise time, settling time, and transient oscillations. Root locus uses compensator gains to move closed-loop poles to achieve design specifications for SISO systems. You can, however, use state-space techniques to assign closed-loop poles directly. This design technique is known as pole placement, which differs from root locus in the following ways:

  • Using pole placement techniques, you can design dynamic compensators.
  • Pole placement techniques are applicable to MIMO systems.

Pole placement requires a state-space model of the system (use ss to convert other model formats to state space). In continuous time, such models are of the form

ẋ = Ax + Bu
y = Cx + Du

where u is the vector of control inputs, x is the state vector, and y is the vector of measurements.

State-Feedback Gain Selection

Under state feedback u = −Kx, the closed-loop dynamics are given by

ẋ = (A − BK)x

and the closed-loop poles are the eigenvalues of A - BK . Using the place function, you can compute a gain matrix K that assigns these poles to any desired locations in the complex plane (provided that ( A , B ) is controllable).

For example, for state matrices A and B, and vector p that contains the desired locations of the closed-loop poles,

K = place(A,B,p);

computes an appropriate gain matrix K.
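As a concrete sketch (the double-integrator plant and pole locations here are illustrative, not from the original page):

```matlab
% Place the poles of a double integrator at -2 and -3 (illustrative values).
A = [0 1; 0 0];
B = [0; 1];
p = [-2 -3];            % desired closed-loop pole locations
K = place(A,B,p);       % K = [6 5]
eig(A - B*K)            % eigenvalues at -3 and -2, as requested
```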

State Estimator Design

You cannot implement the state-feedback law u = − K x unless the full state x is measured. However, you can construct a state estimate ξ such that the law u = − K ξ retains similar pole assignment and closed-loop properties. You can achieve this by designing a state estimator (or observer) of the form

ξ̇ = Aξ + Bu + L(y − Cξ − Du)

The estimator poles are the eigenvalues of A - LC , which can be arbitrarily assigned by proper selection of the estimator gain matrix L , provided that ( C, A ) is observable. Generally, the estimator dynamics should be faster than the controller dynamics (eigenvalues of A - BK ).

Use the place function to calculate the L matrix:

L = place(A',C',q).'

where A and C are the state and output matrices, and q is the vector containing the desired closed-loop poles for the observer. By duality, the estimator gain is obtained by applying place to the transposed pair (A', C').
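Continuing the illustrative double-integrator sketch from above (C here is an assumed output matrix measuring the first state):

```matlab
% Estimator design for an illustrative double-integrator plant.
A = [0 1; 0 0];
C = [1 0];                   % assumed: only the first state is measured
q = [-8 -9];                 % estimator poles faster than the controller's
L = place(A',C',q).';        % duality: place on the transposed pair
eig(A - L*C)                 % eigenvalues at -8 and -9
```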

Replacing x by its estimate ξ in u = − K x yields the dynamic output-feedback compensator

ξ̇ = [A − LC − (B − LD)K]ξ + Ly
u = −Kξ

Note that the resulting closed-loop dynamics are

$$\begin{bmatrix} \dot{x} \\ \dot{e} \end{bmatrix} = \begin{bmatrix} A-BK & BK \\ 0 & A-LC \end{bmatrix} \begin{bmatrix} x \\ e \end{bmatrix}, \qquad e = x - \xi$$

Hence, you actually assign all closed-loop poles by independently placing the eigenvalues of A - BK and A - LC .

Given a continuous-time state-space model with seven outputs and four inputs (created, for example, with ss), suppose you have designed:

  • A state-feedback controller gain K using inputs 1, 2, and 4 of the plant as control inputs
  • A state estimator with gain L using outputs 4, 7, and 1 of the plant as sensors
  • Input 3 of the plant as an additional known input

You can then connect the controller and estimator and form the dynamic compensator using this code:
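The connection code is missing from this copy of the page; a minimal sketch, assuming the plant is stored as a state-space model sys and the gains K and L have already been computed:

```matlab
controls = [1 2 4];        % inputs 1, 2, and 4 are the control inputs
sensors  = [4 7 1];        % outputs 4, 7, and 1 are the measured outputs
known    = 3;              % input 3 is the additional known input
regulator = reg(sys, K, L, sensors, known, controls);
```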

Pole Placement Tools

You can use functions to:

  • Compute gain matrices K and L that achieve the desired closed-loop pole locations.
  • Form the state estimator and dynamic compensator using these gains.

The key functions for pole placement are place (gain selection), estim (estimator formation), and reg (compensator formation).

Pole placement can be badly conditioned if you choose unrealistic pole locations. In particular, you should avoid:

  • Placing multiple poles at the same location.
  • Moving poles that are weakly controllable or observable. This typically requires high gain, which in turn makes the entire closed-loop eigenstructure very sensitive to perturbation.

See Also: estim | place | reg


Introduction: State-Space Methods for Controller Design

In this section, we will show how to design controllers and observers using state-space (or time-domain) methods.

Key MATLAB commands used in this tutorial are: eig, ss, lsim, place, acker


Controllability and Observability

This section covers control design using pole placement, introducing the reference input, and observer design.

There are several different ways to describe a system of linear differential equations. The state-space representation was introduced in the Introduction: System Modeling section. For a SISO LTI system, the state-space form is given below:

$$
\frac{d\mathbf{x}}{dt} = A\mathbf{x} + Bu
$$

$$
y = C\mathbf{x} + Du
$$

To introduce the state-space control design method, we will use the magnetically suspended ball as an example. The current through the coils induces a magnetic force which can balance the force of gravity and cause the ball (which is made of a magnetic material) to be suspended in mid-air. The modeling of this system has been established in many control textbooks (including Automatic Control Systems by B. C. Kuo, seventh edition).


The equations for the system are given by:

$$
m\frac{d^2h}{dt^2} = mg - \frac{Ki^2}{h}
$$

From inspection, it can be seen that one of the poles is in the right-half plane (i.e. has positive real part), which means that the open-loop system is unstable.
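The linearized state matrices did not survive extraction here; with values assumed from the original tutorial's linearization, the check is one line:

```matlab
% Linearized maglev model (matrix values assumed, not shown in this copy)
A = [ 0    1    0;
      980  0   -2.8;
      0    0   -100];
eig(A)    % one eigenvalue is ~ +31.3, confirming an unstable open loop
```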

To observe what happens to this unstable system when there is a non-zero initial condition, add the following lines to your m-file and run it again:
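The code block itself did not survive extraction; a sketch of the intended simulation, with model matrices assumed from the original tutorial's linearization and an illustrative initial condition:

```matlab
A = [0 1 0; 980 0 -2.8; 0 0 -100];   % assumed linearized maglev model
B = [0; 0; 100];
C = [1 0 0];
t  = 0:0.01:2;
u  = zeros(size(t));                 % zero input
x0 = [0.005 0 0];                    % non-zero initial ball position (illustrative)
[y,t,x] = lsim(ss(A,B,C,0), u, t, x0);
plot(t, y)
```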


It looks like the distance between the ball and the electromagnet will go to infinity, but probably the ball hits the table or the floor first (and also probably goes out of the range where our linearization is valid).


Let's build a controller for this system using a pole placement approach. The schematic of a full-state feedback system is shown below. By full-state, we mean that all state variables are known to the controller at all times. For this system, we would need a sensor measuring the ball's position, another measuring the ball's velocity, and a third measuring the current in the electromagnet.


The state-space equations for the closed-loop feedback system are, therefore,

$$
\dot{\mathbf{x}} = A\mathbf{x} + B(-K\mathbf{x}) = (A-BK)\mathbf{x}
$$

From inspection, we can see the overshoot is too large (there are also zeros in the transfer function which can increase the overshoot; you do not explicitly see the zeros in the state-space formulation). Try placing the poles further to the left to see if the transient response improves (this should also make the response faster).
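The pole-selection code is missing here; a sketch with the assumed linearized matrices and illustrative pole locations further in the left-half plane:

```matlab
A = [0 1 0; 980 0 -2.8; 0 0 -100];   % assumed linearized maglev model
B = [0; 0; 100];
C = [1 0 0];
p = [-20 -21 -22];                   % illustrative faster poles
K = place(A, B, p);
t = 0:0.01:2;  x0 = [0.005 0 0];     % illustrative initial condition
lsim(ss(A-B*K, B, C, 0), zeros(size(t)), t, x0)
```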


This time the overshoot is smaller. Consult your textbook for further suggestions on choosing the desired closed-loop poles.

Note: If you want to place two or more poles at the same position, place will not work. You can use a function called acker which achieves the same goal (but can be less numerically well-conditioned):

K = acker(A,B,[p1 p2 p3])

Now, we will take the control system as defined above and apply a step input (we choose a small value for the step so that we remain in the region where our linearization is valid). Replace t, u, and lsim in your m-file with the following:
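The replacement lines are missing from this copy; a sketch, assuming the A, B, C matrices and the gain K from the pole placement step:

```matlab
t = 0:0.01:2;
u = 0.001*ones(size(t));      % small step, to stay in the linear region
sys_cl = ss(A-B*K, B, C, 0);  % closed-loop system with state feedback
[y,t,x] = lsim(sys_cl, u, t);
plot(t, y)
```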


The system does not track the step well at all; not only is the magnitude not one, but it is negative instead of positive!

After scaling the reference input to compensate for the fact that the feedback is $K\mathbf{x}$ rather than the output, a step can be tracked reasonably well. Note, our calculation of the scaling factor requires good knowledge of the system. If our model is in error, then we will scale the input an incorrect amount. An alternative, similar to what was introduced with PID control, is to add a state variable for the integral of the output error. This has the effect of adding an integral term to our controller, which is known to reduce steady-state error.

With a state estimator driven by the measured output $y = C\mathbf{x}$, we can see that the observer estimates converge to the actual state variables quickly and track the state variables well in steady state.

Published with MATLAB® 9.2


Intro to Control Theory Part 6: Pole Placement

In Part 4, I covered how to make a state-space model of a system to make running simulations easy. In this post, I'll talk about how to use that model to make a controller for our system.

For this post, I'm going to use an example system that I haven't talked about before - A mass on a spring:

A simple mass on a spring. Image Credit: University of Southern Queensland

If we call \(p\) the position of the cart (we use \(p\) instead of \(x\), since \(x\) is the entire state once we're using a state space representation), then we find that the following equation describes how the cart will move:

\[ \ddot{p} = -\frac{k}{m}p \]

Where \(p\) is position, \(k\) is the spring constant of the spring (how strong it is), and \(m\) is the mass of the cart.

You can derive this from Hooke's Law if you're interested, but the intuitive explanation is that the spring pulls back against the cart proportionally to how far it is away from the equilibrium state of the spring, but gets slowed down the heavier the cart is.

This describes an ideal spring, but one thing that you'll notice if you run a simulation of this is that it will keep on oscillating forever! We haven't taken friction into account. Doing so gets us the following equation:

\[ \ddot{p} = -\frac{k}{m}p - \frac{c}{m}\dot{p} \]

Where \(c\) is the "damping coefficient" - essentially the amount of friction acting on the cart.

Now that we have this equation, let's convert it into state space form!

This system has two states - position, and velocity:

\[ x = \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

Since \(x\) is a vector of length 2, \(A\) will be a 2x2 matrix. Remember, a state space representation always takes this form:

\[ \dot{x} = Ax + Bu \]

We'll find \(A\) first:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

The way that I like to think about this is that each number in the matrix is asking a question - how does X affect Y? So, for example, the upper left number in the A matrix is asking "How does position affect velocity?". Position has no effect on velocity, so the upper left number is zero. Next, we can look at the upper right number. This is asking "How does velocity affect velocity?" Well, velocity is velocity, so we put a 1 there (since you need to multiply velocity by 1 to get velocity). If we keep doing this process, we get the following equation:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

For the sake of this post, I'll pick some arbitrary values for \(m\), \(k\), and \(c\): \(m = 1\ \text{kg}\), \(k = 0.4\ \text{N/m}\), \(c = 0.3\ \text{N·s/m}\). Running a simulation of this system, starting at a position of 1 meter, we get the following response:

Open loop response of mass on spring system

Notice that this plot shows two things happening - the position is oscillating, but also decreasing. There's actually a way to quantify how much the system will oscillate and how quickly it will converge to zero (if it does at all!). In order to see how a system will act, we look at the "poles" of the system. In order to understand what the poles of a system mean, we need to take a quick detour into linear algebra.

Our matrix \(A\) is actually a linear transformation . That means that if we multiply a vector by \(A\), we will get out a new, transformed vector. Multiplication and addition are preserved, such that \( A(x \times 5) = (Ax) \times 5 \) and \( A(x_1 + x_2) = Ax_1 + Ax_2 \). When you look at \(A\) as a linear transformation, you'll see that some vectors don't change direction when you apply the transform to them:

The vectors that don't change direction when transformed are called "eigenvectors". For this transform, the eigenvectors are the blue and pink arrows. Each eigenvector has an "eigenvalue", which is how much it stretches the vector by. In this example, the eigenvalue of the blue vectors is 3 and the eigenvalue of the pink vectors is 1.

So how does this all relate to state space systems? Well, the eigenvalues of the system (also called the poles of a system) have a direct effect on the response of the system. Let's look at our eigenvalues for our system above. Plugging the matrix into octave/matlab gives us:
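The snippet and its output did not survive extraction; reconstructing the computation for \(m = 1\), \(k = 0.4\), \(c = 0.3\):

```matlab
A = [0 1; -0.4 -0.3];
eig(A)
% ans =
%   -0.1500 + 0.6144i
%   -0.1500 - 0.6144i
```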

So we can see that we have two eigenvalues, both of which are complex numbers. What does this mean? Well, the real component of the number tells you how fast the system will converge to zero. The more negative it is, the faster it will converge to zero. If it is above zero, the system is unstable, and will trend towards infinity (or negative infinity). If it is exactly zero, the system is "marginally stable" - it won't get larger or smaller. The imaginary part of the number tells you how much the system will oscillate. For every positive imaginary part, there is a negative one of the same magnitude with the same real part, so it's just the magnitude of the imaginary part that determines how much the system will oscillate - the higher the magnitude of the imaginary part, the more the system will oscillate.

Why is this the case? Well, as it turns out, in the eigenvector basis the derivative of each component of the state is the current value of that component times its eigenvalue. So, a negative eigenvalue will result in a derivative that drives the state to zero, whereas a positive eigenvalue will cause the state to increase in magnitude forever. An eigenvalue of zero will cause the derivative to be zero, which obviously results in no change to the state.

That explains real eigenvalues, but what about imaginary eigenvalues? Let's imagine a system that has two poles, at \(0+0.1i\) and \(0-0.1i\). Since this system has a real component of zero, it will be marginally stable, but since it has an imaginary component, it will oscillate. Here's a way of visualizing this system:

The blue vector is the position of the system. The red vectors are the different components of that position (the sum of the red vectors will equal the blue vector). The green vectors are the time derivatives of the red vectors. As you can see, the imaginary eigenvalues cause each component of the position to be complex, but since imaginary poles always come in conjugate pairs of the same magnitude and opposite sign, the actual position will always be real.

So, how is this useful? Well, it lets us look at a system and see what its response will look like. But we don't just want to be able to see how the system will respond, we want to be able to change how the system will respond. Let's return to our mass on a spring:

Now let's say that we can apply an arbitrary force \(u\) to the system. For this, we use our \(B\) matrix:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u \]

Now, let's design a controller that will stop there from being any oscillation and drive the system to zero much more quickly. Remember, all "designing a controller" means in this case is finding a matrix \(K\), where setting \( u = -Kx \) will cause the system to respond in the way that you want it to. How do we do this? Well, it turns out that it's actually fairly easy to place the poles of a system wherever you want. Since we want to have no oscillation, we'll make the imaginary part of the poles zero, and since we want a fast response time, we'll make the real part of the poles -2.5 (this is pretty arbitrary). We can use matlab/octave to find what our K matrix must be for the closed-loop system to have both its poles at -2.5:
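The snippet itself is missing from this copy. Since both desired poles sit at the same location, acker is the applicable command here (place rejects repeated poles); a sketch:

```matlab
A = [0 1; -0.4 -0.3];          % k = 0.4, c = 0.3, m = 1
B = [0; 1];                    % B = [0; 1/m]
K = acker(A, B, [-2.5 -2.5])   % returns K = [5.8500  4.7000]
```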

Which gives us the gain matrix:

\[ K = \begin{bmatrix} 5.85 & 4.7 \end{bmatrix} \]

And running a simulation of the system with that K matrix gives us:

Closed loop response of mass on spring system

Much better! It converges in under five seconds with no oscillation, compared with >30 seconds and lots of oscillation for the open-loop response. But wait, if we can place the poles anywhere we want, and the more negative they are the faster the response, why not just place them as far negative as possible? Why not place them at -100 or -1000 or -100000? For that matter, why would we ever want our system to oscillate, if we can just make the imaginary part of the poles zero? Well, the answer is that you can make the system converge as fast as you want, so long as you can actually apply enough energy to the system. In real life, motors and actuators are limited in the amount of force that they can apply. We ignore this in the state-space model, since it makes the system non-linear, but it's something that you need to keep in mind when designing a controller. This is also the reason that you might want some oscillation - oscillation will make you reach your target faster than you would otherwise. Sometimes, getting to the target fast is more important than not oscillating much.

So, that's how you design a state space controller with pole placement! There are also a ton of other ways to design controllers (most notably LQR) which I'll go into later, but understanding how poles determine the response of a system is important for any kind of controller.

If you're in NYC and want to meet up over lunch/coffee to chat about the future of technology, get in touch with me.


Engineering LibreTexts

10.2: Controllers for Discrete State Variable Models


  • Kamran Iqbal
  • University of Arkansas at Little Rock

Emulating an Analog Controller

The pole placement controller designed for a continuous-time state variable model can be used with the derived sampled-data system model. Successful controller emulation requires a sampling rate at least ten times the frequency of the dominant closed-loop poles of the system.

In the following, we illustrate emulation of the pole placement controller designed for the DC motor model (Example 8.3.4) to control the discrete-time model of the DC motor. The DC motor model is discretized at two different sampling rates for comparison, assuming a ZOH at the plant input.

Example \(\PageIndex{1}\)

The state and output equations for a DC motor model are given as:

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]=\left[\begin{array}{cc} {-100} & {-5} \\ {5} & {-10} \end{array}\right]\left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]+\left[\begin{array}{c} {100} \\ {0} \end{array}\right]V_a , \;\;\omega =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]. \nonumber \]

The motor model is discretized at two different sampling rates in MATLAB. The results are:
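The discretization itself can be sketched as follows (assuming a zero-order hold, which is the c2d default):

```matlab
% DC motor model, discretized at the two sampling times compared in the text.
A = [-100 -5; 5 -10];  B = [100; 0];  C = [0 1];  D = 0;
sys   = ss(A,B,C,D);
sysd1 = c2d(sys, 0.01);    % T = 0.01 s
sysd2 = c2d(sys, 0.02);    % T = 0.02 s
```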

\[T=0.01s: A_{\rm d} =\left[\begin{array}{cc} {0.367} & {-0.030} \\ {0.030} & {0.904} \end{array}\right],\; \; B_{\rm d} =\left[\begin{array}{c} {0.632} \\ {0.018} \end{array}\right],\; \; C_{\rm d} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]. \nonumber \]

\[T=0.02s: A_{\rm d} =\left[\begin{array}{cc} {0.134} & {-0.038} \\ {0.038} & {0.816} \end{array}\right],\; \; B_{\rm d} =\left[\begin{array}{c} {0.863} \\ {0.053} \end{array}\right],\; \; C_{\rm d} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]. \nonumber \]

For a desired characteristic polynomial \(\Delta _{\rm des} (s)=s^{2} +150\,s+5000\), a state feedback controller for the continuous-time state variable model was obtained as (Example 9.1.1): \(k^{T} =\left[\begin{array}{cc} {0.4} & {7.15} \end{array}\right]\).

We can use the same controller to control the corresponding sampled-data system models.

The unit-step response of the closed-loop system is simulated in Figure 10.2.1, where both state variables, \(i_a\left(t\right)\) and \(\omega \left(t\right)\), are plotted.


We observe from the figure that the armature current has a higher overshoot at the lower sampling rate, though both models display a similar settling time of about 100 ms.

Pole Placement Design of Digital Controller

Given a discrete state variable model \(\left\{A_{\rm d},\ B_{\rm d}\right\}\), and a desired pulse characteristic polynomial \(\Delta _{\rm des} (z)\), a state feedback controller for the system can be designed using pole placement similar to that of the continuous-time system (Sec. 9.1.1).

Let the discrete-time model of a SISO system be given as:

\[{\bf x}_{k+1} ={\bf A}_{\rm d} {\bf x}_{k} +{\bf b}_{\rm d} u_{k} , \;\; y_{k} ={\bf c}^T {\bf x}_{k} \nonumber \]

A state feedback controller for the discrete state variable model is defined as:

\[u_k=-{\bf k}^T{\bf x}_k+r_k \nonumber \]

where \({\bf k}^{T}\) represents a row vector of constant feedback gains and \(r_k\) is a reference input sequence. The controller gains can be obtained by equating the coefficients of the characteristic polynomial with those of a desired polynomial:

\[\Delta (z)=\left|z{\bf I}-{\bf A}_{\rm d} +{\bf b}_{\rm d}{\bf k}^T \right|=\Delta _{\rm des} (z) \nonumber \]

The \(\Delta _{\rm des} (z)\) above is a stable (Schur) polynomial in \(z\), with roots inside the unit circle that meet given performance (damping ratio and/or settling time) requirements. Assuming that desired \(s\)-plane root locations are known, the corresponding \(z\)-plane root locations can be obtained from the equivalence \(z=e^{Ts}\).

Closed-loop System

The closed-loop system model is given as:

\[{\bf x}_{k+1} ={\bf A}_{\rm cl} {\bf x}_{k} +{\bf b}_{\rm d} r_{k} , \;\; y_{k} ={\bf c}^T {\bf x}_{k} \nonumber \]

where \({\bf A}_{\rm cl} =({\bf A}_{\rm d}-{\bf b}_{\rm d}{\bf k}^T)\).

Assuming closed-loop stability, for a constant input \(r_k=r_{\rm ss}\), the steady-state response, \({\bf x}_{\rm ss}\), of the system obeys: 

\[{\bf x}_{ss} ={\bf A}_{\rm cl} {\bf x}_{ss} +{\bf b}_{\rm d} r_{ss} ,\;\; y_{\rm ss} ={\bf c}^T {\bf x}_{ss} \nonumber \]

Hence, \(y_{\rm ss}={\bf c}^T\,({\bf I}-{\bf A}_{\rm cl})^{-1}\,{\bf b}_{\rm d}\,r_{\rm ss}\).

Example \(\PageIndex{2}\)

The discrete state variable model of a DC motor (\(T=0.02\)s) is given as: \[\left[\begin{array}{c} {i_{k+1} } \\ {\omega _{k+1} } \end{array}\right]=\left[\begin{array}{cc} {0.134} & {-0.038} \\ {0.038} & {0.816} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right]+\left[\begin{array}{c} {0.863} \\ {0.053} \end{array}\right]V_{k} , \;\; y_{k} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right] \nonumber \]

The desired \(s\)-plane root locations for the model are given as: \(s=-50,\; -100.\)

The corresponding \(z\)-plane roots (\(T=0.02s\)) are obtained as: \(z=e^{-1} ,\; e^{-2}\).

The desired characteristic polynomial is given as: \(\Delta _{\rm des} (z)=z^{2} -0.503z+0.05.\)

The feedback gains \(k^T =[k_{1} ,\; k_{2} ]\), computed using the MATLAB ‘place’ command, are given as: \(k_{1} =0.247,\; k_{2} =4.435.\)
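A sketch of this computation:

```matlab
Ad = [0.134 -0.038; 0.038 0.816];
bd = [0.863; 0.053];
zp = exp([-1 -2]);          % desired z-plane poles z = e^{-1}, e^{-2}
k  = place(Ad, bd, zp)      % k ≈ [0.247  4.435]
```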

The closed-loop system matrix is given as: \({\bf A}_{\rm cl}={\bf A}_{\rm d}-{\bf b}_{\rm d}{\bf k}^T= \left[\begin{array}{cc} {-0.080} & {-3.867} \\ {0.025} & {0.583} \end{array}\right]\).

An update rule for implementation of the controller on a computer is obtained as: \(u_{k} =-0.247\, i_{k} -4.435\, \omega _{k} .\)

The closed-loop response has steady-state value of \(\omega _{\rm ss}=0.143 \;\rm rad/s\).

The step response of the closed-loop system is plotted in Figure 10.2.2, where the discrete system response was scaled to match the analog system response. The step response of the continuous-time system and that for the emulated controller gains are plotted alongside.


Deadbeat Controller Design

A discrete-time system is called deadbeat if all closed-loop poles are placed at the origin \((z=0)\).

A deadbeat system has the remarkable property that its response reaches steady-state in \(n\)-steps, where \(n\) represents the model dimension.

The desired closed-loop pulse characteristic polynomial is selected as \(\Delta _{\rm des} (z)=z^{n}\).

To design a deadbeat controller, let the closed-loop pulse transfer function be defined as: \[T(z)=\frac{K(z)G(z)}{1+K(z)G(z)} \nonumber \]

The above equation is solved for \(K(z)\) to obtain: \[K(z)=\frac{1}{G(z)} \frac{T(z)}{1-T(z)} \nonumber \]

Let the desired \(T(z)=z^{-n}\); then, the deadbeat controller is given as: \[K(z)=\frac{1}{G(z)(z^{n} -1)} \nonumber \]

Example \(\PageIndex{3}\)

Let \(G(s)=\frac{1}{s+1} ;\) then \(G(z)=\frac{1-e^{-T} }{z-e^{-T} }\).

A deadbeat controller for the model is obtained as: \(K(z)=\frac{z-e^{-T} }{(1-e^{-T} )(z-1)}\).

Example \(\PageIndex{4}\)

The discrete state variable model of a DC motor for \(T=0.02\; \rm s\) is given as: \[\left[\begin{array}{c} {i_{k+1} } \\ {\omega _{k+1} } \end{array}\right]=\left[\begin{array}{cc} {0.134} & {-0.038} \\ {0.038} & {0.816} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right]+\left[\begin{array}{c} {0.863} \\ {0.053} \end{array}\right]V_{k} , \;\;y_{k} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right] \nonumber \]

The state feedback controller is given as: \(u_{k} =-\left[k_{1} ,\, \, k_{2} \right]x_{k}\).

The closed-loop characteristic polynomial is obtained as: \[\Delta (z)=z^{2} +(0.863k_{1} +0.053k_{2} -0.95)z-0.707k_{1} +0.026k_{2} +0.111 \nonumber \]

For pole placement design, let \(\Delta _{\rm des} (z)=z^{2}\). By equating the polynomial coefficients, the deadbeat controller gains are obtained as: \(k_{1} =0.501,\; k_{2} =9.702\).

The update rule for controller implementation is given as: \[u_{k} =0.501\, \, i_{k} +9.702\, \, \omega _{k} \nonumber \]

The step response of the deadbeat controller (Figure 10.2.3) settles in two time periods. The response was scaled to match that of the continuous-time system.

An approximate deadbeat design can be performed by choosing distinct closed-loop eigenvalues close to the origin, e.g., \(z=\pm {10}^{-5}\), and using the 'place' command from the MATLAB Control Systems Toolbox.

The feedback gains for the approximate design are obtained as: \(k_{1} =0.509,\; k_{2} =9.702\). The resulting closed-loop system response is still deadbeat.
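A sketch of this approximate design, using the model from Example \(\PageIndex{4}\):

```matlab
Ad = [0.134 -0.038; 0.038 0.816];
bd = [0.863; 0.053];
% Distinct eigenvalues close to the origin stand in for the repeated z = 0:
k  = place(Ad, bd, [1e-5 -1e-5])   % k ≈ [0.5  9.7], essentially the deadbeat gains
```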


Feedforward Tracking System Design

A tracking system was previously designed by using feedforward cancelation of the error signal (Section 9.2.1). A similar design can be performed in the case of discrete systems.

Towards this end, let the discrete state variable model be given as: \[{\bf x}_{k+1} ={\bf A}_{\rm d} {\bf x}_{k} +{\bf b}_{\rm d} u_{k} , \;\;y_{k} ={\bf c}^T {\bf x}_{k} \nonumber \]

A tracking controller for the model is defined as: \[u_k=-{\bf k}^T{\bf x}_k+k_rr_k \nonumber \] where \({\bf k}^{T}\) represents a row vector of feedback gains, \(k_r\) is a feedforward gain, and \(r_k\) is a reference input sequence.

Assuming that a pole placement controller for the discrete system has been designed, the closed-loop system is given as: \[{\bf x}_{k+1}=\left({\bf A}_{\rm d}-{\bf b}_{\rm d}{\bf k}^T\right){\bf x}_k+{\bf b}_{\rm d}k_rr_k \nonumber \]

The closed-loop pulse transfer function is obtained as: \[T\left(z\right)={\bf c}^T_{\rm d}{\left(z{\bf I-A}_{\rm d}+{\bf b}_{\rm d}{\bf k}^T\right)}^{-1}{\bf b}_{\rm d}k_r \nonumber \] where \({\bf I}\) denotes an identity matrix. The condition for asymptotic tracking is given as: \[T\left(1\right)={\bf c}^T_{\rm d}{\left({\bf I-A}_{\rm d}+{\bf b}_{\rm d}{\bf k}^T\right)}^{-1}{\bf b}_{\rm d}k_r=1 \nonumber \]

The feedforward gain for error cancelation is obtained as: \(k_r=\frac{1}{{\bf c}^T\left({\bf I-A}_{\rm d}+{\bf b}_{\rm d}{\bf k}^T\right)^{-1}{\bf b}_{\rm d}}\).

Example \(\PageIndex{5}\)

The discrete state variable model of a DC motor (\(T=0.02\)s) is given as: \[\left[\begin{array}{c} {i_{k+1} } \\ {\omega _{k+1} } \end{array}\right]=\left[\begin{array}{cc} {0.134} & {-0.038} \\ {0.038} & {0.816} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right]+\left[\begin{array}{c} {0.863} \\ {0.053} \end{array}\right]V_{k} , \;\;y_{k} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right] \nonumber \]

A state feedback controller for the motor model was previously designed as: \(k^T =[k_{1} ,\; k_{2} ]\), where \(k_{1} =0.247,\; k_{2} =4.435.\)

The closed-loop system is defined as: \[T\left(z\right)=\frac{0.053z+0.026}{z^2-0.503z+0.05}k_r \nonumber \]

From the asymptotic condition, the feedforward gain is solved as: \(k_r=6.98\).
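The gain can be checked with a few lines (using the rounded matrices above, so the result differs slightly from the text's 6.98):

```matlab
Ad  = [0.134 -0.038; 0.038 0.816];
bd  = [0.863; 0.053];
c   = [0 1];
k   = [0.247 4.435];              % state feedback gains from above
Acl = Ad - bd*k;
kr  = 1/(c*((eye(2)-Acl)\bd))     % kr ≈ 6.9 with these rounded entries
```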

The step response of the closed-loop system is shown in Figure 10.2.4.


Example \(\PageIndex{6}\)

The discrete state variable model of a DC motor (\(T=0.02\)s) is given as:

\[\left[\begin{array}{c} {i_{k+1} } \\ {\omega _{k+1} } \end{array}\right]=\left[\begin{array}{cc} {0.134} & {-0.038} \\ {0.038} & {0.816} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right]+\left[\begin{array}{c} {0.863} \\ {0.053} \end{array}\right]V_{k} , \;\;y_{k} =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_{k} } \\ {\omega _{k} } \end{array}\right] \nonumber \]

A dead-beat controller for the motor model was designed as: \(k^T =[k_{1} ,\; k_{2} ]\), where \(k_{1} =0.501,\; k_{2} =9.702\).

The closed-loop system is defined as: \[T\left(z\right)=\frac{0.053z+0.026}{z^2}k_r \nonumber \]

From the asymptotic condition, the feedforward gain is solved as: \(k_r=12.77\).

The step response of the closed-loop system is shown in Figure 10.2.5.


Tracking PI Controller Design

A tracking PI controller for the discrete state variable model is designed similarly to that for the continuous-time system (Figure 9.3.1). The tracking PI controller places an integrator in the feedback loop, thus ensuring that the tracking error goes to zero in the steady state.

In the case of continuous-time system, the tracking PI controller was defined as: \(u=-{\bf k}^{T} {\bf x}+k_{i} \int (r-y)\rm dt\).

Using the forward difference approximation to the integrator, given as: \(v_k=v_{k-1}+Te_k\), an augmented discrete-time system model including the integrator state variable is formed as:

\[\left[\begin{array}{c} {{\bf x}(k+1)} \\ {v(k+1)} \end{array}\right]=\left[\begin{array}{cc} {{\bf A}_{\rm d} } & {\bf 0} \\ {-{\bf c}^T T} & {1} \end{array}\right] \left[\begin{array}{c} {{\bf x}(k)} \\ {v(k)} \end{array}\right]+\left[\begin{array}{c} {{\bf b}_{\rm d} } \\ {0} \end{array}\right]u+\left[\begin{array}{c} {\bf 0} \\ {T} \end{array}\right]r \nonumber \]
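Forming the augmented matrices is mechanical. A minimal NumPy sketch, using the motor model from the examples above as the assumed plant:

```python
import numpy as np

# Discrete motor model (from the examples above), T = 0.02 s
T = 0.02
Ad = np.array([[0.134, -0.038],
               [0.038,  0.816]])
bd = np.array([[0.863],
               [0.053]])
c  = np.array([[0.0, 1.0]])

n = Ad.shape[0]
# Augmented state [x; v]: integrator v(k+1) = v(k) + T*(r(k) - y(k))
A_aug = np.block([[Ad,     np.zeros((n, 1))],
                  [-T * c, np.ones((1, 1))]])
b_aug = np.vstack([bd, [[0.0]]])
print(A_aug)
```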

The state feedback controller for the augmented system is defined as:

\[u(k)=\left[\begin{array}{cc} {-{\bf k}^T } & {k_i } \end{array}\right]\, \left[\begin{array}{c} {{\bf x}(k)} \\ {v(k)} \end{array}\right] \nonumber \]

where \(k_i\) represents the integral gain. With the addition of the above controller, the closed-loop system is described as:

\[\left[\begin{array}{c} {{\bf x}(k+1)} \\ {v(k+1)} \end{array}\right]=\left[\begin{array}{cc} {{\bf A}_{\rm d} -{\bf b}_{\rm d} k^{T} } & {{\bf b}_{\rm d} k_{i} } \\ {-{\bf c}^T T} & {1} \end{array}\right] \left[\begin{array}{c} {{\bf x}(k)} \\ {v(k)} \end{array}\right]+\left[\begin{array}{c} {\bf 0} \\ {T} \end{array}\right]r(k) \nonumber \]

The closed-loop characteristic polynomial of the augmented system is formed as:

\[{\mathit{\Delta}}_a\left(z\right)=\left| \begin{array}{cc} z{\bf I-A}_{\rm d}+{\bf b}_{\rm d}{\bf k}^T & -{\bf b}_{\rm d}k_i \\ {\bf c}^T T & z-1 \end{array} \right| \nonumber \]

where \({\bf I}\) denotes an identity matrix of order \(n\).

Next, we choose a desired characteristic polynomial of order \((n+1)\) and perform pole placement design for the augmented system. The location of the integrator pole in the \(z\)-plane may be selected in view of the desired performance criteria for the closed-loop system.

The discrete state variable model of a DC motor (\(T=0.02\)s) is given as:

\[\left[ \begin{array}{c} i_{k+1} \\ {\omega }_{k+1} \end{array} \right]=\left[ \begin{array}{cc} 0.134 & -0.038 \\ 0.038 & 0.816 \end{array} \right]\left[ \begin{array}{c} i_k \\ {\omega }_k \end{array} \right]+\left[ \begin{array}{c} 0.863 \\ 0.053 \end{array} \right]V_k,\ \ y_k=\left[ \begin{array}{cc} 0 & 1 \end{array} \right]\left[ \begin{array}{c} i_k \\ {\omega }_k \end{array} \right] \nonumber \]

The control law for the tracking PI controller is defined as:

\[u_k=-k_1i_k-k_2{\omega }_k+k_iv_k \nonumber \]

where \(v_{k} =v_{k-1} +T(r_{k} -\omega _{k} )\) describes the output of the integrator. The augmented system model for the pole placement design using integral control is given as:

\[\left[ \begin{array}{c} i_{k+1} \\ {\omega }_{k+1} \\ v_{k+1} \end{array} \right]=\left[ \begin{array}{ccc} 0.134 & -0.038 & 0 \\ 0.038 & 0.816 & 0 \\ 0 & -0.02 & 1 \end{array} \right]\left[ \begin{array}{c} i_k \\ {\omega }_k \\ v_k \end{array} \right]+\left[ \begin{array}{c} 0.863 \\ 0.053 \\ 0 \end{array} \right]V_k+\left[ \begin{array}{c} 0 \\ 0 \\ 0.02 \end{array} \right]r_k \nonumber \]

The desired \(z\)-plane pole locations for \(\zeta =0.7\) are selected as: \(z=e^{-1} ,\; e^{-1\pm j1}\).

The controller gains, obtained using the MATLAB ‘place’ command, are given as: \(k_{1} =0.43,k_{2} =15.44,\; k_{i} =-297.79.\)
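The same design can be reproduced outside MATLAB; the sketch below uses SciPy's `place_poles` (an assumed stand-in for MATLAB's `place`; both return a gain \(K\) for the law \(u=-Kx\)):

```python
import numpy as np
from scipy.signal import place_poles

# Augmented motor model from the example
A_aug = np.array([[0.134, -0.038, 0.0],
                  [0.038,  0.816, 0.0],
                  [0.0,   -0.02,  1.0]])
b_aug = np.array([[0.863],
                  [0.053],
                  [0.0]])

# Desired poles: z = e^-1 and e^(-1 +/- j1)
poles = np.array([np.exp(-1), np.exp(-1 + 1j), np.exp(-1 - 1j)])

K = place_poles(A_aug, b_aug, poles).gain_matrix
print(K)

# Closed-loop eigenvalues should match the requested locations
eigs = np.linalg.eigvals(A_aug - b_aug @ K)
print(np.sort_complex(eigs))
```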

An update rule for implementing the controller on a computer is given as:

\[u_k=-0.43i_k-15.44{\omega }_k+297.8v_k \nonumber \]

\[v_k=v_{k-1}+0.02\left(r_k-{\omega }_k\right) \nonumber \]
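The update rule can be exercised in a short simulation. The sketch below (an illustration, not the original code) recomputes the gains with SciPy's `place_poles` rather than copying the rounded values above, then runs the controller against the motor model for a unit-step reference; \(u\) is computed from the previous integrator state, matching the augmented design model:

```python
import numpy as np
from scipy.signal import place_poles

T = 0.02
A = np.array([[0.134, -0.038],
              [0.038,  0.816]])
b = np.array([[0.863],
              [0.053]])

# Augmented model [x; v] with integrator v(k+1) = v(k) + T*(r - w)
A_aug = np.block([[A, np.zeros((2, 1))],
                  [np.array([[0.0, -T]]), np.ones((1, 1))]])
b_aug = np.vstack([b, [[0.0]]])

# Gains for poles z = e^-1, e^(-1 +/- j1); place assumes u = -K x
K = place_poles(A_aug, b_aug,
                [np.exp(-1), np.exp(-1 + 1j), np.exp(-1 - 1j)]).gain_matrix[0]
k1, k2, ki = K

# Simulate the closed loop for a unit-step reference
x = np.zeros(2)   # [current i, speed w]
v = 0.0           # integrator state
r = 1.0
for _ in range(100):                      # 100 samples = 2 s
    u = -k1 * x[0] - k2 * x[1] - ki * v   # u = -K [x; v]
    v = v + T * (r - x[1])                # integrator update
    x = A @ x + b.flatten() * u           # plant update

print(x[1])  # speed settles at the reference
```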

The step response of the closed-loop system is plotted in Figure 10.2.6, alongside the step response of the continuous-time system (Example 9.1.1). The output in both cases attains a steady-state value of unity in about 0.12 sec.

Figure 10.2.6: Step response of the discrete closed-loop system with tracking PI controller, compared with the continuous-time design (Example 9.1.1).
