/r/ControlTheory
I have a 3D pendulum DAE model (2 differential equations, 2 states) and know the eigenvalues of the system about a particular equilibrium point.
To do some data-driven dynamics modelling, I used a deep-learning-based Koopman operator method to learn a Koopman matrix from a batch of state-trajectory data (about that equilibrium point). From this, however, I don't know how to recover the eigenvalues I got earlier (previously there were four; now I have Koopman eigenvalues, but there are more of them and they are different). I need this for benchmarking these kinds of linearization methods.
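For what it's worth, the usual way to compare is to map the discrete Koopman eigenvalues mu back to continuous time via lambda = log(mu)/dt; the linearization eigenvalues should then appear among the Koopman eigenvalues (the extra ones produced by nonlinear observables are typically integer combinations of them). A minimal numpy/scipy sketch with a made-up 2-state linear system standing in for the learned pipeline:

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch (not the deep-learning pipeline): for a linear system the
# Koopman matrix on identity observables is just the one-step transition
# matrix, so its discrete eigenvalues mu map back to the continuous-time
# eigenvalues of the linearization via lambda = log(mu) / dt.
dt = 0.01
A = np.array([[0.0, 1.0],
              [-2.0, -0.3]])           # made-up linearized pendulum-like system
Ad = expm(A * dt)                      # exact one-step transition matrix

# snapshot pairs (X, Y) and a least-squares Koopman fit (EDMD, linear dictionary)
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 500))
Y = Ad @ X
K = Y @ np.linalg.pinv(X)

mu = np.linalg.eigvals(K)              # discrete-time Koopman eigenvalues
lam = np.log(mu) / dt                  # mapped back to continuous time
print(np.sort_complex(lam))            # ~ eigenvalues of A
```

With a learned, larger dictionary you would apply the same log(mu)/dt mapping; the linearization eigenvalues should show up among the Koopman eigenvalues closest to the imaginary axis.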
I am trying to stabilize a random unstable system to test the Nyquist plot. This is the system:
z=[ -2.1-2i -2.1+2i -3.3-3i -3.3+3i -1.3-3i -1.3+3i];
p=[ -.10-.5i -.10+.5i -2-1.5i -2+1.5i -3-.24i -3+.24i -1.5 -5.0 .8 ];
sys=zpk(z,p,1);
dc=dcgain(sys);
sys=zpk(z,p,1/dc)
It has an unstable pole at 0.8.
With gain K = 500, the Nyquist plot clearly encircles the -1 point once, counterclockwise.
But when I use the step() command, the response comes out unstable.
When I play around with low-order systems the Nyquist diagram works, but now that I'm trying higher-order stuff it fails. If someone can explain and help, I would be grateful.
I only want to stabilize it; I don't care about performance.
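In case it helps others debug the same symptom, here is a plain-numpy sketch (assuming unity negative feedback with gain K) that computes the closed-loop poles directly as the roots of den(s) + K*num(s). A common pitfall is calling step() on K*sys instead of on the closed loop feedback(K*sys, 1):

```python
import numpy as np

# Closed-loop check for unity negative feedback with gain K (an assumption:
# in MATLAB, step() must be called on feedback(K*sys, 1), not on K*sys).
z = [-2.1 - 2j, -2.1 + 2j, -3.3 - 3j, -3.3 + 3j, -1.3 - 3j, -1.3 + 3j]
p = [-0.10 - 0.5j, -0.10 + 0.5j, -2 - 1.5j, -2 + 1.5j,
     -3 - 0.24j, -3 + 0.24j, -1.5, -5.0, 0.8]

num = np.real(np.poly(z))        # zpk -> polynomial coefficients
den = np.real(np.poly(p))
dc = num[-1] / den[-1]           # dcgain(sys) for the unit-gain zpk
num = num / dc                   # sys = zpk(z, p, 1/dc)

K = 500.0
# characteristic polynomial of the closed loop: den(s) + K*num(s) = 0
char = den + K * np.pad(num, (len(den) - len(num), 0))
cl_poles = np.roots(char)
print("closed-loop stable:", np.all(cl_poles.real < 0))
```

If this prints False, the step response and the Nyquist criterion actually agree and the encirclement count was misread; if it prints True, the instability seen in MATLAB likely came from calling step() on something other than the closed loop.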
Polito - Mechatronic Engineering (Controls): Final Project (30), Total 120
Tor Vergata - Mechatronics Engineering: Total 120
Genova - Robotics Engineering: Total 120
L'Aquila - Control Systems and Automation Engineering: Total 120
Polimi - Automation and Control Engineering: Total 120
Catania - Automation Engineering and Control of Complex Systems: Total 120
NB: 120 credits are needed for graduation.
I am referring to the CTMS tutorial to brush up my control systems skills: https://ctms.engin.umich.edu/CTMS/index.php?example=Introduction&section=ControlRootLocus
There, under the section "Choosing a Value of K from the Root Locus", it says:
In our problem, we need an overshoot less than 5% (which means a damping ratio $\zeta$ of greater than 0.7) and a rise time of 1 second (which means a natural frequency $\omega_n$ greater than 1.8).
For the above, I was able to get $\zeta$ = 0.69 using %Mp = 5%, but for tr = 1 sec my $\omega_n$ comes out to 3.22, way off from their 1.8. My calculations and the reference formulae are given below -
Am I doing it wrong? Or is the tutorial wrong?
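For what it's worth, both numbers can be reproduced, and the discrepancy appears to come from two different rise-time definitions (assuming the standard second-order formulas): CTMS uses the rule of thumb tr ≈ 1.8/wn, while the exact 0-100% rise time of an underdamped second-order system, tr = (pi - beta)/wd with beta = arccos(zeta), gives wn ≈ 3.22 for zeta = 0.69. A quick check:

```python
import numpy as np

# Damping ratio from 5% overshoot (exact second-order relation)
Mp = 0.05
zeta = -np.log(Mp) / np.sqrt(np.pi**2 + np.log(Mp)**2)    # ~0.690

# Natural frequency for tr = 1 s, two different definitions:
tr = 1.0
wn_rule = 1.8 / tr                                         # CTMS rule of thumb
beta = np.arccos(zeta)
wn_exact = (np.pi - beta) / (tr * np.sqrt(1 - zeta**2))    # exact 0-100% rise time
print(zeta, wn_rule, wn_exact)   # ~0.690, 1.8, ~3.22
```

So neither is "wrong": the tutorial's 1.8 is the 10-90% rule of thumb, and 3.22 is what the exact 0-100% formula gives for the same specs.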
I just got accepted into UCSD for a Master's in ECE with a focus on intelligent systems, robotics, and controls. While I'm passionate about robotics, I lack formal experience in the field. I've tinkered with Arduino and dabbled in projects involving V-REP for SLAM and motion planning during my undergrad (in electrical engineering). For the past 2 years I've been employed at a major aerospace company, working on system modeling for flight simulators using MATLAB/Simulink, ANSYS SCADE, and C. I'm seeking guidance on how to make this transition smoother.
Here are my burning questions:
How can I effectively prepare for this Master's program, given my background?
What are the current job opportunities like for robotics and controls graduates in the US?
How can I leverage my current work experience when applying for robotics/controls roles?
What specific skills are highly valued in the robotics/controls field, making candidates more marketable?
Looking forward to your insights and advice!
Hello,
I am looking for websites to find a PhD position in automatic control (dynamical systems, observer design, time-delay systems, hybrid systems, nonlinear & adaptive control) in Europe, especially Switzerland, Germany, the Netherlands, Denmark, Finland, Sweden, or Norway. I have searched many websites like state-space.forum or jobbnorge, but rarely found projects that match my interests. Is there another way to find a PhD position quickly? I have also tried contacting professors directly, but I don't get a response back. Any advice would be highly appreciated.
Thank you !
They don't settle, so they seem unstable. Can an unstable system be converted to a triple integrator?
By "triple integrator" I mean something like this:
s=tf('s')
sys=tf(1,[1 2])
sys=sys/s/s/s
thanks
I was referring to the CTMS tutorials for motor control here. They have mentioned the specs as:
J = 0.01;
b = 0.1;
K = 0.01;
R = 1;
L = 0.5;
s = tf('s');
P_motor = K/((J*s+b)*(L*s+R)+K^2);
I think the specs used here are made up rather than taken from a real motor (or close to one), because the electrical time constant L/R is 0.5 s while the mechanical time constant J/b is 0.1 s. That seems impossible, because as per my understanding the electrical time constant is supposed to be much, much faster (much shorter) than the mechanical one.
Suppose a motor has a J/b time constant of 2 seconds. If a step input of 1 Nm of torque is given, it will reach 63% of the steady-state speed in 2 seconds. Now if I give a 2 Nm step input, it will still take 2 seconds to reach 63% of twice the steady-state speed, right? The time response doesn't get faster just by increasing the step amplitude, right?
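That is correct for any linear system: scaling the input scales the output, but leaves the time constants unchanged. A quick numerical check with the motor model above (a sketch using scipy.signal, with the tutorial's parameter values):

```python
import numpy as np
from scipy.signal import lsim, lti

# Motor transfer function from the tutorial: K/((J*s+b)*(L*s+R)+K^2)
J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5
num = [K]
den = [J * L, J * R + b * L, b * R + K**2]
motor = lti(num, den)

t = np.linspace(0, 5, 5001)

def time_to_63pct(amplitude):
    """First time the response reaches 63.2% of its own steady-state value."""
    _, y, _ = lsim(motor, U=amplitude * np.ones_like(t), T=t)
    return t[np.argmax(y >= 0.632 * y[-1])]

print(time_to_63pct(1.0), time_to_63pct(2.0))  # identical: linearity
```

The two times come out identical, since doubling the input simply doubles the whole response curve.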
I am using the RL algorithm DDPG to control a system. I am not that good at coding, which is why I am stuck: I am trying to execute multiple-action sampling with an objective-function calculation over a certain horizon. Can anyone guide me to reference code or suggestions that might make the task easier?
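For the "sample many action sequences and score each over a horizon" part, the usual pattern (independent of DDPG) is random shooting. A minimal numpy sketch, where dynamics_step and stage_cost are placeholders you would replace with your own model and objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_step(x, u):
    """Placeholder model: replace with your system / learned model."""
    return 0.9 * x + 0.1 * u

def stage_cost(x, u):
    """Placeholder objective: replace with your cost."""
    return x**2 + 0.01 * u**2

def random_shooting(x0, horizon=10, n_samples=256, u_low=-1.0, u_high=1.0):
    # sample candidate action sequences: one row per candidate
    U = rng.uniform(u_low, u_high, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    x = np.full(n_samples, x0)        # roll all candidates out in parallel
    for k in range(horizon):
        costs += stage_cost(x, U[:, k])
        x = dynamics_step(x, U[:, k])
    best = np.argmin(costs)
    return U[best], costs[best]       # best sequence and its total cost

u_seq, cost = random_shooting(x0=1.0)
print(u_seq.shape, cost)
```

Vectorizing over candidates, as above, keeps it fast; refinements like CEM simply resample around the best sequences.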
Hi all!
In my hobby projects I design control systems on a microcontroller. By making mistakes, experimenting, and learning, I have come up with a framework that may help reduce the gap between control systems and software engineering, and I decided to share it. You can find it here.
It is still a work in progress, but I would like some feedback on it: whether it is going in the right direction, whether I forgot something, whether there is something to improve, or simply whether it is rubbish. :)
Please tell me if this doesn't fit the sub; I'll remove it.
I am looking for books to understand motor control, especially for PMSM motors using FOC. Even better if they cover sensorless control methods. Currently I'm referring to a book by R. Krishnan, but I want something better.
Hi, I am relatively new to the domain of MPC and I feel kind of stuck on a problem.
I have a system which can be written as
dx/dt = Ax + Bu
where A is 4x4, B is 4x1, and u is a time-series input to the system.
I am changing some system variables (say G1, G2, G3) such that the matrices A and B change over time. I have simulated several cases with predefined values of G1, G2, and G3, and obtained the system response in each of those cases. Now I want to design a model predictive controller which, given the reference, controls the system behaviour by manipulating G1, G2, and G3.
The problem is that A and B are complex functions of G1, G2, and G3, and separating them out as independent input parameters is not possible. I know that I'd have to use a linear time-varying (LTV) MPC approach to solve this, but I don't understand how to proceed. I have asked my professor as well, but he hasn't responded yet.
Any help or reference would be appreciated. It might be a very trivial problem, but it looks very big to me at the moment.
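One common LTV pattern, once you freeze A_k = A(G(k)) and B_k = B(G(k)) along a nominal trajectory, is a finite-horizon time-varying LQR backward recursion (the unconstrained core of LTV MPC). A minimal numpy sketch with made-up A_k, B_k sequences; the G -> (A, B) map is the part only you can supply:

```python
import numpy as np

def ltv_lqr(A_seq, B_seq, Q, R, Qf):
    """Backward Riccati recursion for x_{k+1} = A_k x_k + B_k u_k."""
    N = len(A_seq)
    P = Qf
    gains = [None] * N
    for k in reversed(range(N)):
        A, B = A_seq[k], B_seq[k]
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains[k] = K
    return gains   # feedback law: u_k = -gains[k] @ x_k

# made-up time-varying sequences standing in for A(G(k)), B(G(k))
N = 20
A_seq = [np.eye(4) + 0.01 * np.diag([1.0, -1.0, 0.5, -0.5]) * (1 + 0.1 * k)
         for k in range(N)]
B_seq = [0.01 * np.array([[1.0], [0.5], [0.2], [0.1]]) for _ in range(N)]
Q, R, Qf = np.eye(4), np.eye(1), 10 * np.eye(4)

gains = ltv_lqr(A_seq, B_seq, Q, R, Qf)
print(gains[0].shape)   # (1, 4): one gain per step
```

A full LTV MPC would re-solve this at every step (with constraints, as a QP) around the latest nominal trajectory, and then map the resulting (A, B) request back to G1-G3, e.g. via a local inversion or lookup.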
I am learning about the Bayes filter, with lectures from Cyrill Stachniss.
At 15:00 we assume that the last action u_t does not help us estimate the previous state x_{t-1}. I understand that the action u_t cannot affect the previous state x_{t-1}, but from the viewpoint of feedback control u = Kx, given some information about the action u_t, we can infer something about the state. I don't think feedback control is a rare case, so what is the justification for omitting the last action u_t? I found the same question here, but there is no answer yet.
As the title says. I have a Bachelor's in Mechatronics Engineering in Australia, and most enjoyed the robotics/control courses that I took. I'm looking for a graduate job that involves control theory, but I'm struggling to find one.
Do any Australians here know what companies are hiring grads for control positions?
Background: I am currently in grad school, completing my master's in mechanical engineering with a focus in autonomous cars and robotics. The coursework from my department focuses on control theory (SISO, MIMO, data-driven), all of which I find interesting, although the classwork takes the majority of my time. The issue is that I am aiming for application-engineer jobs after graduation, which emphasize hardware implementation and programming (Python, ROS2, sometimes Rust) in the job requirements.
I am wondering if anyone has advice on how I can better prepare myself for the workforce while in academia?
… the students' evaluation were based only on their proficiency in solving control-theory-related problems, their communication skills, the way problems were approached, and their ability to connect theory with practice, possibly by implementing something?
Out of curiosity. The question is for those involved in education.
The question is inspired by this article, as I believe such a concept could be extended to educational practices.
I am aware that this question should be asked at the university level and should include many other fields, not only control, but that is the field I am in :)
In my case it would be way, way less than 50%, but I would interpret that as a failure on my part as a teacher; I wouldn't blame only the students.
Why do we need an observer when we can just simulate the system and get the states?
From my understanding, if the system is unstable, the simulated states will explode if they are not "corrected" by an observer. But in all other cases, why use an observer?
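One concrete reason even for stable systems: a pure simulation never recovers from an unknown initial condition (or any disturbance or model error) faster than the plant's own dynamics, while an observer uses the measurements to correct itself at a rate you choose. A minimal numpy sketch with a made-up stable 2-state system and a Luenberger gain placing the observer poles at -5 and -6:

```python
import numpy as np

# Stable plant (eigenvalues -1, -2); we measure only the first state.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [4.0]])      # places eig(A - L @ C) at -5, -6

dt, T = 0.001, 2.0
x = np.array([1.0, 0.0])          # true initial state (unknown to us)
x_sim = np.zeros(2)               # open-loop simulation, started from a wrong x0
x_hat = np.zeros(2)               # Luenberger observer, same wrong x0

for k in range(int(T / dt)):
    u = np.sin(2 * k * dt)                             # some known input
    y = C @ x                                          # measurement from the plant
    x = x + dt * (A @ x + B.ravel() * u)
    x_sim = x_sim + dt * (A @ x_sim + B.ravel() * u)   # simulation: no correction
    x_hat = x_hat + dt * (A @ x_hat + B.ravel() * u
                          + L @ (y - C @ x_hat))       # corrected by the output

print(np.linalg.norm(x - x_sim), np.linalg.norm(x - x_hat))
```

The simulation's error decays only at the plant's own rate (here e^-t), while the observer's error decays at the designed rate (e^-5t); with disturbances or model mismatch the gap is even larger, and for an unstable plant the simulation error diverges outright.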
Former IFAC industry chair Tariq Samad, together with the new chair Alisa Rupenyan, organized a workshop at CCTA 2023 on how to bring control theory from academia into industry. One stream was on living labs, a concept first coined at MIT and broadened by the European Network of Living Labs (ENoLL), where academics (ideators), industry (maintainers), politicians (regulators), and citizens (users) come together, often in a design-thinking-inspired work mode.
We have already identified several universities and private companies that operate such facilities (under different names), which help ideas transition from theory to practice, or, in industry terms, from theory to products/services.
Do you have experience with such labs? Could we use this thread to collect a bunch of addresses?
If you are curious to join like-minded people interested in such labs, there will also be a workshop at ECC in Stockholm this year: https://sites.google.com/view/24ecc-workshop-living-labs/home
Has anyone here used do_mpc? I am trying to set up a control horizon, which I can't seem to find in the documentation. Any leads will be appreciated.
I want an inverse representation of my system P = K/(s*(T*s+1)).
Of course, doing so would lead to more zeros than poles. What is a good way to filter the inverse to make it causal?
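A common trick (a sketch, not the only option) is to append enough fast low-pass poles 1/(tau*s+1)^r, with r the relative degree (here 2) and 1/tau well above the bandwidth of interest, so the filtered inverse is proper and matches P^-1 at low frequency. With made-up K, T, tau:

```python
import numpy as np
from scipy.signal import TransferFunction, freqresp

K, T = 2.0, 0.1          # made-up plant parameters: P = K / (s (T s + 1))
tau = 1e-3               # filter time constant, well inside the bandwidth

# P^-1 = s (T s + 1) / K has two more zeros than poles; filter with 1/(tau s + 1)^2:
# P_inv_f = (T s^2 + s) / (K (tau^2 s^2 + 2 tau s + 1))
P_inv_f = TransferFunction([T, 1.0, 0.0],
                           [K * tau**2, 2 * K * tau, K])

P = TransferFunction([K], [T, 1.0, 0.0])

# check: P(jw) * P_inv_f(jw) ~ 1 at frequencies well below 1/tau
w = np.logspace(-1, 1, 50)
_, Hp = freqresp(P, w)
_, Hi = freqresp(P_inv_f, w)
print(np.max(np.abs(Hp * Hi - 1)))   # small for w << 1/tau
```

The product P * P_inv_f is exactly 1/(tau*s+1)^2, so the approximation error at a given frequency is set entirely by how small tau is; the price is large high-frequency gain, which is the usual trade-off with inversion-based (e.g. feedforward) designs.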
I am developing a robot for a maze-solving competition. When it comes to turning the bot, is it better to use the encoder or the gyro as the u_t target in the PID loop? Is it also viable to use the gyro to make sure the robot stays straight?
Hello everyone,
I'm currently an engineering student taking a control engineering class, and for one of my assignments I have been tasked with manually tuning a PID controller in Simulink. For context, the PID is part of the lateral position system of a fighter jet landing on an aircraft carrier, so essentially it keeps the aircraft along the centreline of the carrier.
So far I have used the Ziegler-Nichols method, and I've tuned the controller to a point where I am happy with the settling time and the steady-state error. However, I have 60% overshoot above the set point.
I wanted to get the opinion of people more experienced with controllers than me: would 60% overshoot be deemed unacceptable, considering I have a very short settling time and zero steady-state error?
Thank you very much in advance for any responses :)
Usually, when you control a motor, what they teach you in the undergrad basic control lab is to control its speed. Even for a robot, you PID its speed.
However, what if I want to control its displacement instead? Say I want it to rotate exactly 5 revolutions, going as fast as possible and slowing down near the 5-revolution mark. Here I don't care about its speed: go as fast as you can until you're close.
But I don't know how to even program this, since in code you usually command a velocity to the motors. I don't know what equation to use to calculate the velocity from the distance travelled so far.
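The standard structure (a sketch; the gain and limit values are made up) is an outer position loop whose output is the velocity command, clamped to a maximum: far from the target it saturates at full speed, and near the target the proportional term ramps the speed down:

```python
def position_to_velocity_cmd(target_pos, current_pos,
                             kp=4.0, v_max=10.0):
    """P-control on the remaining distance; the output is a velocity command.

    Far away: |error| * kp exceeds v_max, so the command saturates at full
    speed. Close in: the command shrinks proportionally to the distance
    left, which is what slows the motor down smoothly.
    """
    error = target_pos - current_pos
    v = kp * error
    return max(-v_max, min(v_max, v))

# e.g. target 5 revolutions, measuring position from the encoder:
print(position_to_velocity_cmd(5.0, 0.0))   # 10.0 (saturated, full speed)
print(position_to_velocity_cmd(5.0, 4.5))   # 2.0 (ramping down near the target)
```

Refinements include a trapezoidal velocity profile (an explicit deceleration limit) and a small deadband to avoid hunting at the end; many motor drivers offer exactly this as "position mode".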
I just came across a paper: https://pubs.aip.org/aip/apl/article/119/18/181602/40534/Architecture-of-zero-latency-ultrafast-amplitude
It mentions a circuit which converts sine waves to a DC signal (using mathematical logic); in atomic force microscopy an RMS circuit is used for this purpose.
I am curious to test this on my device by replacing the RMS circuit. I plan to build it in Simulink and use it in Simulink Real-Time. I have made it in Simulink, but for this question I haven't set any parameters yet. Will it work the same in real time?
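For reference, the RMS block being replaced amounts to a moving-window root-mean-square; over an integer number of periods of a sine of amplitude A it converges to A/sqrt(2). A quick numpy check (a sketch of the computation, not of the paper's zero-latency circuit; all parameter values are made up):

```python
import numpy as np

fs, f0, A = 100_000.0, 1_000.0, 2.0        # sample rate, sine freq, amplitude
t = np.arange(0, 0.01, 1 / fs)
x = A * np.sin(2 * np.pi * f0 * t)

window = int(fs / f0)                      # one full period per window
# moving RMS: square root of the moving average of x^2
ms = np.convolve(x**2, np.ones(window) / window, mode="valid")
rms = np.sqrt(ms)

print(rms[-1], A / np.sqrt(2))             # both ~1.414
```

The catch for real time is the window's group delay (here one period, 1 ms), which appears to be the latency the paper's architecture is designed to avoid; Simulink Real-Time will reproduce the same arithmetic, but only at whatever sample rate and delay your model actually achieves on the target.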
I am developing a robot for a maze-solving competition. When it comes to tuning the PID constants (Kp, Ki, Kd), do they change depending on the surface the motors run on? If so, is there a way to create a self-tuning algorithm so that it can tune itself on different surfaces? I'm new to using PID loops and have just been wondering.
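One classic self-tuning approach is the relay (Astrom-Hagglund) experiment: briefly replace the PID with a relay, measure the resulting limit cycle, estimate the ultimate gain and period, and plug them into Ziegler-Nichols-style rules. A minimal numpy sketch with a made-up plant (three first-order lags) standing in for the motor-plus-surface dynamics:

```python
import numpy as np

def relay_autotune(plant_step, d=1.0, dt=0.001, t_end=20.0):
    """Relay experiment: drive the plant with a relay of amplitude d,
    measure the limit cycle, return ultimate gain Ku and period Tu."""
    n = int(t_end / dt)
    y_hist, u, state = np.zeros(n), d, None
    for k in range(n):
        y, state = plant_step(u, state, dt)
        y_hist[k] = y
        u = -d if y > 0 else d                           # relay around setpoint 0
    tail = y_hist[n // 2:]                               # settled part of the record
    a = 0.5 * (tail.max() - tail.min())                  # oscillation amplitude
    crossings = np.where(np.diff(np.sign(tail)) > 0)[0]  # upward zero crossings
    Tu = np.mean(np.diff(crossings)) * dt                # ultimate period
    Ku = 4.0 * d / (np.pi * a)                           # describing-function estimate
    return Ku, Tu

def plant_step(u, state, dt):
    """Made-up plant: three first-order lags in series (forward Euler)."""
    x = np.zeros(3) if state is None else state
    x[0] += dt * (u - x[0])
    x[1] += dt * (x[0] - x[1])
    x[2] += dt * (x[1] - x[2])
    return x[2], x

Ku, Tu = relay_autotune(plant_step)
# classic Ziegler-Nichols PID gains from the relay experiment
Kp, Ti, Td = 0.6 * Ku, 0.5 * Tu, 0.125 * Tu
print(Kp, Ti, Td)
```

On the robot you would run the relay for a few seconds on each new surface, recompute the gains, and switch back to the PID; simpler alternatives are gain scheduling (pre-tuned sets per surface) or just tuning conservatively enough to work everywhere.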
I am developing a robot for a maze-solving competition, and I am split between using an accelerometer or the motor encoders for feedback. I have been having some issues syncing both motors, and I have been debating whether I should use the accelerometer on the bot for speed control and the encoders just to measure distance, or the opposite. What do you think is best?
Hi guys! I recently watched Brian Douglas' YouTube video on control theory, and although it's been a long time since I studied control theory in my undergrad, I couldn't really grasp how a feedback controller changes the stability of the system. It certainly changes the dynamics, making more information available for correction (in a good way). So where does the instability come from? If feedback is self-correcting and always feeds back the error for improvement, how does an unstable closed-loop system happen?
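A concrete way to see it (a sketch with a made-up plant): the feedback reacts to the error with a delay (phase lag), and past some gain the "correction" arrives so late that it reinforces the error instead of cancelling it. For P(s) = 1/(s+1)^3 under proportional unity feedback, the closed-loop poles are the roots of (s+1)^3 + K, and a Routh check says this goes unstable for K > 8:

```python
import numpy as np

def closed_loop_poles(K):
    # P(s) = 1/(s+1)^3, proportional gain K, unity negative feedback:
    # characteristic polynomial (s+1)^3 + K = s^3 + 3 s^2 + 3 s + (1 + K)
    return np.roots([1.0, 3.0, 3.0, 1.0 + K])

for K in (1.0, 5.0, 20.0):
    poles = closed_loop_poles(K)
    print(K, "stable" if np.all(poles.real < 0) else "UNSTABLE")
# K = 1 and 5: stable; K = 20: UNSTABLE (the threshold is K = 8)
```

So the same error-correcting loop that works at low gain pushes poles into the right half-plane at high gain; that is exactly what Nyquist/root-locus analysis quantifies.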