/r/ControlTheory
Welcome to r/ControlTheory
Link to Subreddit wiki for useful resources
Official Discord : https://discord.gg/CEF3n5g
Does having multiple poles in the LHP and fewer complex conjugate pairs of poles have any significant meaning for a system? My thinking is that a system with fewer complex conjugate pole pairs should give a less oscillatory response than one where all the poles are complex conjugate pairs. Am I wrong in my thinking?
For example:
Sys1 Poles = [-2, -4, -5, -1±i]
Sys2 Poles = [-2±i, -4±i, ±i, -1±i]
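For what it's worth, one way to see what the pole pattern buys you is to build two hypothetical transfer functions from those pole sets (no zeros, gains picked arbitrarily) and compare step responses in MATLAB. Sys1, with mostly real poles, responds with only mild overshoot, while Sys2 rings; also note the ±i pair in Sys2 sits on the imaginary axis, so that mode never decays and Sys2 is only marginally stable:

```
% Hypothetical plants: no zeros, gain chosen so the stable one has unit DC gain
p1 = [-2 -4 -5 -1+1i -1-1i];
p2 = [-2+1i -2-1i -4+1i -4-1i 1i -1i -1+1i -1-1i];
sys1 = zpk([], p1, prod(abs(p1)));
sys2 = zpk([], p2, prod(abs(p2)));
step(sys1, sys2, 10), legend('Sys1: mostly real poles', 'Sys2: all complex pairs')
```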
Hello everyone, and sorry in advance. For a college project, I first need to develop a MIMO system based on the union of 5 separate processes, each with its own inputs and outputs. Given the 5 transfer functions, one for each plant, I need to merge them into one big MIMO system and then design a controller for it. I've been searching online, but all the information I could gather is either blunt or simply vague in its results. That said, I have to do it by hand, as a pure algebraic construction, although Matlab is permitted for direct calculations.
Essentially, what steps must I follow to achieve this? The videos I've watched mostly talk about superposition of the individual systems, but even if that's the path to follow, what comes next after having all the possible combinations? And if that's not the path, what should it be?
Please, I would really appreciate the help.
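In case it helps as a starting point, here is a minimal MATLAB sketch with made-up transfer functions standing in for your five. If the five processes are truly independent, append stacks them into one block-diagonal MIMO model; if they interact, you instead fill the 5x5 transfer matrix entry by entry so the off-diagonal entries carry the coupling. From there, ss() gives a state-space realization that MATLAB's MIMO design tools can work on.

```
% Hypothetical SISO plants standing in for the five real ones
G1 = tf(1, [1 2]);      G2 = tf([1 1], [1 3 2]);
G3 = tf(2, [1 4]);      G4 = tf(1, [1 0.5]);      G5 = tf(3, [1 6 9]);

G = append(G1, G2, G3, G4, G5);   % 5x5 block-diagonal MIMO system: input i drives output i only
size(G)                           % confirms 5 outputs, 5 inputs

% If the processes are coupled, build the transfer matrix explicitly instead, e.g.
% G = [G1, G12; G21, G2];         % off-diagonal entries (hypothetical) model the cross-coupling
```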
Hi, I am trying to design a full state feedback controller using pole placement. My system is a 4th order system with two inputs. For the life of me I cannot calculate K, I've tried various methods, even breaking the system into two single inputs. I am trying a method which uses a desired characteristic equation alongside the actual equation to find K values, but there are only 2 fourth order polynomials for 8 values of the K matrix, which I am struggling with.
Any tips would be much appreciated, thanks!
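In case it helps: with two inputs, K is not unique, which is exactly why matching coefficients by hand gives more unknowns (the 2x4 entries of K) than equations. MATLAB's place resolves that extra freedom for you and handles multi-input systems directly. A minimal sketch with made-up A and B matrices (swap in your own):

```
% Hypothetical 4-state, 2-input system; replace A and B with your own model
A = [0 1 0 0; 0 0 1 0; 0 0 0 1; -1 -2 -3 -4];
B = [0 0; 1 0; 0 0; 0 1];
p = [-2 -3 -4+2i -4-2i];        % desired closed-loop poles
K = place(A, B, p);             % 2x4 gain matrix
eig(A - B*K)                    % sanity check: should return p (up to ordering)
```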
I'm trying to implement the AEKF described in this paper.
I'm using the simple model from page 3 and trying to reproduce the results in Tables 1 and 2. While testing, I noticed that R_k converges pretty close to R_true from any initial value, but Q_k seems to converge to zero rather than to Q_true. No matter what initial Q I provide, it always drifts lower. That seems logical, since a zero Q matrix means there is no process noise and the predictions are perfect, which is true in my case. But obviously there's a problem either in the test model or in the implementation itself, and I just can't figure out where. Here's my implementation
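Without seeing the code it's hard to say, but one thing worth checking (a guess, not a diagnosis): if the truth model used to generate the test data never actually injects process noise into the state, then Q converging to zero is the correct answer and the adaptation is working as designed. Below is a minimal sketch of a scalar truth simulation in which Q_true really enters the state, so the adapted Q has something to converge to; a and c are hypothetical model parameters, not the ones from the paper:

```
% Hypothetical scalar system: x(k+1) = a*x(k) + w,  z(k) = c*x(k) + v
a = 0.95;  c = 1;
Q_true = 0.01;  R_true = 0.1;
N = 2000;  x = 0;  z = zeros(N,1);
for k = 1:N
    x    = a*x + sqrt(Q_true)*randn;   % process noise must actually perturb the state
    z(k) = c*x + sqrt(R_true)*randn;   % measurement noise
end
% Feed z into the AEKF: Q_k now has a nonzero target to converge toward.
```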
Hi Folks,
I had some time and decided to learn the Manim CE library by doing. The result is an educational video aiming to explain Active Disturbance Rejection Control in a simple and intuitive manner. You can find it on YouTube: https://youtu.be/DS5VEFD-r_A I would really appreciate honest feedback, as well as ideas for future videos, so let me know what you think.
Hi there, I'll be working on a project to control a robotic manipulator arm using sliding mode control, with its parameters tuned by reinforcement learning. For now, all I have is the robotic arm model and the sliding surface function. I'd like advice on how to approach this project.
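Not a full answer, but one way to decompose the project that might help you plan: write the sliding mode control law for a single joint, expose its gains as the parameters the RL agent tunes, and let the agent's reward be tracking error (plus a chattering penalty) over a simulated episode. The sketch below is for one joint only with a hypothetical model; every symbol (J, b, lambda, K, phi, the reference trajectory) is a placeholder, and a real manipulator needs the full coupled dynamics:

```
% Hypothetical 1-DOF arm:  J*qdd + b*qd = u   (gravity/Coriolis omitted for brevity)
J = 0.5;  b = 0.1;
lambda = 5;  K = 20;  phi = 0.05;     % <- the gains an RL agent would tune
dt = 1e-3;  q = 0;  qd = 0;
for k = 1:5000
    t    = k*dt;
    qr   = sin(t);  qrd = cos(t);  qrdd = -sin(t);  % reference trajectory
    e    = q - qr;  ed = qd - qrd;
    s    = ed + lambda*e;                           % sliding surface
    ueq  = J*(qrdd - lambda*ed) + b*qd;             % equivalent control
    u    = ueq - K*max(min(s/phi, 1), -1);          % boundary-layer sat() to limit chattering
    qdd  = (u - b*qd)/J;                            % plant integration (Euler)
    qd   = qd + qdd*dt;   q = q + qd*dt;
end
```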
I was reviewing some papers written by Liberzon, where he gives a description for how systems under arbitrary switching behavior may be stable.
Specifically, given a switched system with dynamics A1, A2, the system is stable under arbitrary switching provided the (Hurwitz) matrices commute, A1A2 = A2A1. A similar result is shown for the nonlinear case in terms of the Lie brackets of the two vector fields.
If I have a system for which I have shown that, under autonomous conditions, A1A2 = A2A1 does not hold, can I design a controller that makes the above equation true?
My motivation is to design a continuous controller that makes the switched system stable under arbitrary switching, and then have my discrete controller switch from system 1 to 2 once the condition is met.
My initial approach was to set a control Lyapunov function for system 1 equal to a Lyapunov function for system 2 and solve for u.
I haven’t seen any papers/research detailing such a problem however.
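For what it's worth, the commuting-Hurwitz result is easy to sanity-check numerically, and the standard construction (due to Narendra and Balakrishnan, also presented in Liberzon's switching-systems book) shows what the commutation buys you: a common quadratic Lyapunov function obtained by chaining Lyapunov equations. A small sketch with made-up matrices:

```
% Hypothetical commuting Hurwitz pair (any two polynomials of a common matrix commute)
A1 = [-1 1; 0 -2];
A2 = 2*A1 - eye(2);
norm(A1*A2 - A2*A1)                 % ~0  => they commute

% Common quadratic Lyapunov function via chained Lyapunov equations
P1 = lyap(A1', eye(2));             % solves A1'*P1 + P1*A1 = -I
P2 = lyap(A2', P1);                 % solves A2'*P2 + P2*A2 = -P1
eig(A1'*P2 + P2*A1)                 % both should be negative definite, so
eig(A2'*P2 + P2*A2)                 % V(x) = x'*P2*x decays under either mode
```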
Hi there, I am designing a system which has to dispense water from a tank into a container with an accuracy of ±10ml.
Currently the weight of the water is measured using load cells and a set quantity, say 0.5L is dispensed from the initial measured weight, say 2L.
The flow control is done with the help of a servo valve, the opening is from 0% to 100%.
Currently I am using a proportional controller to open the valve based on the remaining weight to dispense, which means the valve opens quickly up to its maximum limit and then closes gradually as the target weight is approached.
So,
Process Variable = Weight of the Water in grams
Set Point = Initial Weight - Weight to dispense
Control Output = Valve Opening in percentage 0% to 100%
Is a PI or PID controller well suited for this application or is any other control method recommended?
Thank you.
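For what it's worth, a PI controller is a common fit for this kind of dispensing loop, but the details that usually matter more are output saturation handling and the lag between valve motion and the scale reading. Below is a rough discrete-time PI sketch with output clamping and conditional-integration anti-windup; the gains, sample time, and the setValve/readLoadCell calls are all placeholders for your own hardware interface:

```
% Hypothetical setup: dispense 500 g from a 2000 g initial reading
initial_weight  = 2000;              % g, from the load cells
dispense_weight = 500;               % g to dispense
target = initial_weight - dispense_weight;

Kp = 0.05;  Ki = 0.01;  Ts = 0.1;    % hypothetical gains and sample time [s]
I = 0;
weight = readLoadCell();             % placeholder for your load-cell read
while weight > target + 1            % stop within ~1 g, well inside the +/-10 ml spec
    e = weight - target;             % grams left to dispense
    u = Kp*e + I;
    if u > 0 && u < 100              % conditional integration: freeze I while saturated
        I = I + Ki*Ts*e;
    end
    u = min(max(u, 0), 100);         % valve opening limited to 0..100 %
    setValve(u);                     % placeholder for your valve command
    pause(Ts);
    weight = readLoadCell();
end
setValve(0);                         % close the valve
```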
Hi!
I'm designing a controller for a drone in Simulink... right now I'm trying to build the "plant" block in the Laplace domain, but I have doubts about the transform of some mappings.
By "mapping" I mean using a linear function to go from one variable to another, for example mapping the duty cycle of a PWM signal to the angular velocity of a motor via a linear function like y = mx + b.
The problem is that I can't just write Y(s) = mX(s) + b because of the constant b. On the other hand, writing Y(s) = m/s^2 + b/s adds 2 poles to my system, and since I have several mappings of this linear form, the number of poles in my system grows a lot. So I'm trying to make sure there really is no alternative to the Laplace transform "Y(s) = m/s^2 + b/s".
Thanks!
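In case it's useful, a hedged observation: a static affine map contributes no dynamics at all, so it should not add poles to the plant. The Laplace transform of the constant offset b, viewed as a signal switched on at t = 0, is b/s, while the gain m stays a pure gain:

Y(s) = m*X(s) + b/s

The m/s^2 term would only be correct if x(t) itself were the ramp t, which is not what the mapping says. In Simulink the usual picture is a Gain block m with a Constant block b summed at its output; the b/s term is then an exogenous constant input rather than a pole of the transfer function from X to Y, and if you work in deviation variables around an operating point the offset drops out entirely, leaving just the gain m.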
Hi!
Probably very stupid question from beginner here...
I have to design a PID controller for a system in Simulink. We have to come up with the PID by placing the controller zeros to compensate the dominant poles of the system, and make sure the phase margin of the system is at least 60 degrees.
I need to get the values of the gain, Ti and Td (the integral and derivative time constants) for the model in Simulink, but that's where I struggle. How do I calculate these values? Are the time-constant values related to the values of the zeros of the controller?
Thanks for any advice!
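In case it helps: for the ideal (non-interacting) PID form the link between the zeros and Ti, Td is direct. Assuming C(s) = K*(1 + 1/(Ti*s) + Td*s), the controller can be rewritten as

C(s) = K*(Ti*Td*s^2 + Ti*s + 1) / (Ti*s)

so the controller zeros are the roots of Ti*Td*s^2 + Ti*s + 1 = 0. If you want the zeros at s = -z1 and s = -z2 (for example on top of the dominant plant poles), matching coefficients gives

Td = 1/(z1 + z2),    Ti = 1/z1 + 1/z2

and K is then chosen last, as the gain that gives your required crossover frequency and the 60 degree phase margin. One caveat: Simulink's PID block defaults to the parallel form with Ki = K/Ti and Kd = K*Td, so double-check which convention your block expects before typing the numbers in.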
I have a question: if I want to design a controller using H_inf and the Riccati equation, how can I determine the D, B, and C matrices? What is the most effective approach?
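One hedged pointer: in the standard H_inf setup, the B, C, D matrices fed to the Riccati equations are not free design choices; they come from the generalized plant, i.e. your nominal model augmented with the weighting filters that encode the performance and control-effort specs. If MATLAB's Robust Control Toolbox is available, augw and hinfsyn build and solve that for you (the plant and weights below are made up):

```
G  = tf(1, [1 1 1]);                  % hypothetical nominal plant
W1 = makeweight(100, 1, 0.1);         % performance weight on the error channel
W3 = tf(0.1);                         % weight on the control effort
P  = augw(G, W1, [], W3);             % generalized plant: its B, C, D blocks come from G and the weights
[K, CL, gam] = hinfsyn(P, 1, 1);      % 1 measurement, 1 control input
gam                                   % achieved closed-loop H-infinity norm
```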
Do I realistically have a chance of getting in somewhere 'entry level' with only Low voltage experience?
I've been in the Low volt field for almost 2 years being a lead doing pretty much everything under the sun when it comes to low volt.
I've only dabbled verrrry little in controls (Getting gates to open, close, stop) but it's a field I'm interested in. I'm willing to work long hours and travel 100% and consider myself an exceptional team player.
Are there any specific roles I should be looking for or certs that would help me enter the field? I would love to do something in industrial controls.
I want to find some research topics in control theory. First, I'd like some topics related to basic control, for example what the recent research focus in linear control is. Second, I'd like to know which topics are the focus across areas like adaptive, robust, and optimal control, for example current trends in adaptive control and where it is headed. I tried to find this online, but specific topics were hard to pin down. For example, I found that control barrier functions are getting some traction in robotics. Thanks
Hello. I have this open loop transfer function.
As you can see, the 's^2' means a double pole at the origin, so the open-loop system is unstable. I want to know if I can ignore the 's^2' to turn the fourth-order system into a second-order one.
I don't know what happens when the magnitude plot passes through 0 dB twice: which crossing do I use for the phase margin? I already know that the gain margin is infinite, since the phase plot never passes through -180 degrees, but I can't find examples of the magnitude plot crossing 0 dB twice in the teacher's material.
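On the double-crossing question: when the magnitude plot crosses 0 dB more than once there is a phase margin at each crossing, and the usual convention is to quote the smallest one as the binding figure, although strictly the Nyquist plot is the arbiter in ambiguous cases. MATLAB's allmargin lists every crossing, which makes it easy to check a hand analysis (the transfer function below is just a made-up example with a double pole at the origin):

```
L = tf(10*[1 1], [1 5 0 0]);     % hypothetical open loop: 10(s+1) / (s^2 (s+5))
S = allmargin(L);                % S.PMFrequency / S.PhaseMargin list every 0 dB crossing
S.PhaseMargin
margin(L)                        % Bode plot annotated with the margins MATLAB selects
```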
Hi,
I have a bit of experience working with nonlinear robust MPC but so far I have only implemented robust tube MPC. I am currently interested in closed loop min-max robust MPC but implementation of a solver looks very challenging and, to be honest, I am not sure even where to start.
There are many research papers but they do not share code and assume it is possible to solve the optimization problem. I am looking for a real world implementation (i.e. library, repository, etc.). Does anyone have any idea where I could find anything?
Can someone provide me with a PID controller design to control actuators and sensors in a building?
Hello,
I am analyzing the settling time of a PI controller for different amplitudes of disturbances. In Simulink, the settling time remains the same regardless of the amplitude of the disturbance (e.g., step or square signal).
However, when I tested this experimentally on my device, I observed that the settling time varies with the amplitude of the disturbance signal. My plant/actuator is a PZT (piezoelectric actuator made from lead zirconate titanate), which is controlled by a PI controller.
A linear time-invariant system is defined as marginally stable if and only if the two conditions below are met:
1. Every root of the characteristic polynomial has non-positive real part.
2. Every root with zero real part is a simple root of the minimal polynomial.
I'm fine with condition 1, but I'm trying to understand why minimal polynomials appear in condition 2. All the books I've read so far just throw this theorem without explaining it. I know this is a definition so there's nothing to prove, but there must be some underlying logic!
Does anyone have an explanation for why the characteristic polynomial of a marginally stable system can have roots with zero real part and multiplicity greater than 1, but the minimal polynomial can't?
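A small example that captures the underlying logic: the growth of e^(At) along an eigenvalue lambda is governed by the size of the largest Jordan block for lambda (terms of the form t^(k-1) * e^(lambda*t)), and the minimal polynomial records exactly that largest block, whereas the characteristic polynomial only counts total multiplicity. For an eigenvalue on the imaginary axis, any t^(k-1) factor destroys boundedness, which is why simplicity in the minimal polynomial is the right condition:

```
% Both matrices have characteristic polynomial s^2 (double eigenvalue at 0),
% but their minimal polynomials differ: s for A1, s^2 for A2.
A1 = [0 0; 0 0];   expm(A1*100)    % identity for all t -> bounded -> marginally stable
A2 = [0 1; 0 0];   expm(A2*100)    % (1,2) entry grows like t -> unbounded -> not marginally stable
```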
Hi,
I am currently trying to set up and solve an optimal control problem with GPOPS-II, using direct (orthogonal) collocation for transcribing my problem into an NLP, which is then solved with ipopt.
My problem involves describing the attitude with unit quaternions. The system dynamics should guarantee that the quaternion norm does not deviate from unity. However, I am finding that this is exactly what happens for some problems, especially over longer time intervals. Adding the unit-norm condition as a path constraint in GPOPS-II does not seem to help.
I am unsure how to proceed, and especially which resources to consult to solve this problem. I would be very grateful for any advice. I kept the problem description short, so please feel free to ask for more details!
Kind regards
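Two things people commonly try for this (hedged suggestions, not GPOPS-II-specific fixes): renormalize the quaternion wherever the dynamics and path constraints are evaluated, so interpolation error doesn't feed back, and/or add a norm-stabilization term to the quaternion kinematics so that collocation drift in the norm decays instead of accumulating. A MATLAB sketch of the latter, where k_q is a hypothetical tuning gain and q = [q0; q1; q2; q3] is scalar-first:

```
function dq = quat_kinematics(q, w, k_q)
% Quaternion kinematics with a norm-stabilization term.
% q: 4x1 quaternion (scalar first), w: 3x1 body angular rate, k_q: stabilization gain.
Omega = [ 0    -w(1) -w(2) -w(3);
          w(1)  0     w(3) -w(2);
          w(2) -w(3)  0     w(1);
          w(3)  w(2) -w(1)  0   ];
dq = 0.5*Omega*q + k_q*(1 - q.'*q)*q;   % extra term drives ||q|| back toward 1
end
```

Since Omega is skew-symmetric, d/dt(q'q) = 2*k_q*(1 - q'q)*(q'q), so for k_q > 0 any norm error introduced between collocation points decays back toward 1.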
I find nowadays a lot of young people (my peers) want to do reinforcement learning with robots.
However, it seems that reinforcement learning will not work purely on hardware, at least on an intuitive level, because it relies on trial and error, and there isn't much room for trialing when it comes to hardware: if it breaks, it will not work anymore.
Of course I've seen people put safety barriers around their hardware, or develop a model in software before applying it to hardware. But the question of risk still lingers.
A better idea is to incorporate knowledge about the world and physics into the reinforcement learning algorithm. We can use fancy jargon such as "sensor-based, model-aware reinforcement learning". But hey, isn't that just control theory?
I feel that since control theory was developed before reinforcement learning, people treat control theory as reinforcement learning version 1.0 and the rest as version 2.0, and invest a lot of effort in making 2.0 work. But version 1.0 actually works a lot better than 2.0.
Is this a correct take on the relationship between control theory and reinforcement learning?
Hey everyone,
I am a little confused as to what job titles in the field of control systems in the USA mean. I understand that automation engineers use control system software and integrate it with their plant. But I also see a lot of job posts which are titled "control system engineer" but still talk about experience with PLCs.
I graduated with a master's in chemical engineering with a focus on model predictive control for energy systems (specifically Building HVAC). As part of my education I used a lot of deep learning to model my systems and learnt and used control theory. I am seeking out advice on how to search for jobs which would better suit my education. I don't have experience in PLCs, but most job postings ask for some experience. Am I searching for the wrong jobs? Or should I use different key words? I am grateful for any advice! Thank you in advance!!
Note : My experience is mainly using machine learning to model systems, state estimation, kalman filters, and system identification. I also have a decent amount of software engineering experience.
I'm currently taking a linear systems analysis grad course (electrical engineering program). State space equations, linear algebra, stability/controllability/observability of both LTI and LTV systems, that sort of thing. The textbook the professor uses is Linear System Theory and Design by C-T Chen.
It is the worst textbook I have ever had the displeasure of using: a whole linear-systems treatment crammed into under 350 pages. Everything is presented as "theorem, proof, theorem, proof" (and even the proofs are extraordinarily brief and often skipped), with no room for practical examples. The examples are very brief, and either so comically trivial as to be useless and inapplicable, or so complex as to be impossible to follow. The one good thing the book has is the problems; it has a great set of problems, which I'm sure is why the professor is using it, but it's terrible to actually learn from.
I'm finding it difficult to find alternate books that cover the same material. There's plenty of general controls books that have a lot of classical control theory (this book is fully state-space based), or much more specific books on topics like Lyapunov stability and state estimators, or have either LTI or LTV systems but not both. Any recommendations?
Hey guys, I have a question for you.
This might not be fully in line with the contents of this subreddit, but I thought I might get some helpful answers. I am a student in engineering, and I have quite some experience with Matlab. I cannot get a part-time job, nor a full-time one, but I do need some pocket money, so I was considering taking on some projects as a Matlab freelancer.
How does this work? What are the platforms for this? Should I expect people to hire me? Have any of you done this?
Thank you
Edit: I am a master student
Hey Reddit! 👋
Check out this curated Optimal Control Software Repository featuring the best open-source tools for optimization and control, including:
Perfect for robotics, embedded systems, and research projects. 🚀 Let me know what you think! 😊