/r/ControlTheory
Welcome to r/ControlTheory
Link to Subreddit wiki for useful resources
Official Discord : https://discord.gg/CEF3n5g
It's a stupid question to be honest, but I realized that when working with control theory in class, I never had to worry much about what goes where as the system was usually predefined.
I had to code something a while ago and the question went as follows:
The robot is considered a simple point translating in the environment (3 DOF). The robot has a maximum velocity, but we suppose that infinite acceleration is possible and that the system has no delay. The environment imposes no constraints, so robot movements in the world perfectly reflect the command. At the start, the robot is at a predefined initial position in the environment and does not move.
Implement a simulation of this system that:
at program launch time, allows the user to set the initial position of the robot in the world, its maximum velocity and the time step of the simulation.
If the robot is static, it waits for input from the user. The user can then enter a target position the robot has to go to. The robot then starts moving from its current position to that target position.
When I handed in the assignment, I was very conflicted about whether to apply the PID to the acceleration or to the velocity. Since the acceleration can be anything, so can the velocity, and the relationship between the two is linear. But in a physical system, what I actually affect is the force, which then translates into acceleration, which is why I'm so confused.
I had two versions of the code, one controlling the velocity and the other controlling the acceleration, and I eventually handed in the one controlling the velocity, since the two systems seemed equivalent and it was the simpler approach. But I want to know if what I did was correct.
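For what it's worth, under the stated assumptions (infinite acceleration, no delay) velocity is arguably the natural control input: whatever velocity you command is realised instantly, so a saturated proportional controller on position error already does the job. A minimal sketch under those assumptions; the gain `kp`, tolerance, and example numbers are made up for illustration:

```python
import numpy as np

def simulate(p0, target, v_max, dt, kp=1.0, tol=1e-3, max_steps=10_000):
    """Drive a 3-DOF point robot to `target` by commanding velocity directly.

    With infinite acceleration and no delay, the commanded velocity is
    reached instantly, so a saturated P controller on position error is
    sufficient; integral and derivative terms have nothing to correct.
    """
    p = np.asarray(p0, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(max_steps):
        err = target - p
        if np.linalg.norm(err) < tol:
            break
        v = kp * err                      # proportional velocity command
        speed = np.linalg.norm(v)
        if speed > v_max:                 # saturate at the robot's max velocity
            v *= v_max / speed
        p = p + v * dt                    # Euler step: the plant integrates velocity
    return p

final = simulate(p0=[0, 0, 0], target=[1, 2, 0], v_max=0.5, dt=0.01)
```

If the plant were a real mass, the command would instead be a force/acceleration, and you would typically cascade a velocity loop inside the position loop rather than choose one or the other.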
Hi! I am looking to analyse a system featuring an N-to-1 analog multiplexer, but I cannot seem to formalize its behaviour into a proper transfer function (or approximation) of it. Some help or ideas would be greatly appreciated! I've sketched the multiplexer below, where x1, x2, .., xN are the N inputs, and y is the output.
The multiplexer (MUX) periodically cycles over the N input channels, tracking each channel for T seconds. For example, consider the simple case of two input channels. Then, the MUX output y(t) can be described by
where the weighting functions w1(t), w2(t) are visualized below.
What I've tried so far:
Starting off, the weighting functions can be expressed as sums of step functions. For example, considering again the case of N = 2, w1(t) is described by
and with Laplace transform
Then, of course I could write
but this seems like quite a bad expression to evaluate, and doesn't directly yield a transfer function.
I also tried to apply the weighting functions directly to the inputs and take the Laplace transform of that. However, this effectively leads to multiplying a signal f(t) with a delayed step function u(t-a), for which the Laplace transform becomes
and this does not seem to directly relate to the unshifted Laplace transform of f(t).
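For readers without the images: assuming the MUX dwells on x1 for T seconds and then x2 for T seconds (period 2T), a hedged reconstruction of the algebra described above reads:

```latex
y(t) = w_1(t)\,x_1(t) + w_2(t)\,x_2(t), \qquad w_2(t) = w_1(t - T)

w_1(t) = \sum_{k=0}^{\infty} \bigl[ u(t - 2kT) - u(t - (2k+1)T) \bigr]

W_1(s) = \frac{1}{s} \sum_{k=0}^{\infty} \bigl( e^{-2kTs} - e^{-(2k+1)Ts} \bigr)
       = \frac{1 - e^{-Ts}}{s\,\bigl(1 - e^{-2Ts}\bigr)}
       = \frac{1}{s\,\bigl(1 + e^{-Ts}\bigr)}

\mathcal{L}\{ f(t)\,u(t-a) \} = e^{-as}\, \mathcal{L}\{ f(t+a) \}
```

One caveat consistent with the difficulty described: since y(t) multiplies the inputs by time-varying weights, the system is linear but periodically time-varying, so no single LTI transfer function from x_i to y exists; frequency-domain tools for such systems are usually found under the heading LPTV (e.g. harmonic transfer functions).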
Good morning,
I'm working on 3D ball trajectory prediction and I'm having some problems. I have the ball's absolute position with some error (the position is provided by the robot, and a transformation is then applied to obtain the absolute coordinates), and I'm trying to estimate the ball's trajectory over the next steps in order to move the robot near it. My main problem is that I only have the position coordinates; I don't know the velocity or the acceleration (I tried to compute the velocity by dividing the displacement by the time step, but the error is too high).
Do you have any idea what to try? I ran a test with a Kalman filter to reduce the error, but it doesn't seem to work; maybe a neural network could work? I have never tried to implement one, so I have no idea how to start.
Sorry if there are some errors but English is not my native language
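A minimal per-axis constant-velocity Kalman filter sketch, in case it helps pin down the tuning; all noise values below are made up, so `q` and `r` must be matched to the real data. The point is that the filter estimates velocity from positions for you, which is usually much less noisy than finite differences:

```python
import numpy as np

def make_cv_kf(dt, q=1.0, r=0.05):
    """Per-axis constant-velocity Kalman filter matrices.

    State [position, velocity]; only position is measured, so the filter
    itself produces a smoothed velocity estimate instead of noisy
    finite differences.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # we measure position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],      # white-acceleration process noise
                      [dt**3 / 2, dt**2]])
    R = np.array([[r**2]])                         # measurement noise variance
    return F, H, Q, R

def kf_track(zs, dt, q=1.0, r=0.05):
    F, H, Q, R = make_cv_kf(dt, q, r)
    x = np.array([[zs[0]], [0.0]])                 # initialise at first measurement
    P = np.eye(2)
    for z in zs[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x  # [position; velocity] estimate after the last measurement

# synthetic data: noisy measurements of a ball moving at 2 m/s along one axis
rng = np.random.default_rng(0)
dt = 0.02
truth = 2.0 * dt * np.arange(200)
zs = truth + rng.normal(0.0, 0.05, size=200)
est = kf_track(zs, dt)
```

Once the filter has locked on, predicting k steps ahead is just `np.linalg.matrix_power(F, k) @ x`; for a real ball you would extend the vertical axis state with a constant gravity acceleration term.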
Exactly what the title says. For people who focus on controls, particularly in aerospace/robotics applications, what does your average day look like? Is there a lot of theory work? Implementation? Testing? Fine-tuning? What kind of software is a must-have?
My supervisor has assigned me the task of finding a research problem related to robotics and control engineering for my undergrad thesis.
I want to do research work in legged robotics control. Those of you who are working in relevant fields, could you please help me by suggesting some research problems?
Hi! I've been trying to prove this, but I couldn't manage it. I tried using conjugates of the eigenvalue matrices specified in the question, but I couldn't get any further and got stuck. Could you help me with this, please?
Is there a recommended text that is hands-on, in the sense of explaining theory, followed by implementation in code, followed by problems with solutions?
I would prefer a text that does not use Simulink, as I would like to re-implement the provided code in a different language for better understanding.
Hello everyone.
I have some problems where I need to find a gain value K that gives all poles of the characteristic equation a negative real part. But I'm confused by the way the characteristic equation is presented. For example:
8s^4 + 5s^3 + 6s^2 + 5s + 2
This is one of the problems, and it only presents the polynomial expansion of the characteristic equation. I know this should be related to the form:
1 + K·G(s)H(s) = 0
So my intuition tells me that in this case K should be the independent (constant) term. How could I approach this problem, and similar ones, when only this information is given?
Thanks for all the help.
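If K really is the constant term, one quick numeric check is to sweep K and test the closed-loop roots directly; a Routh table on 8s^4 + 5s^3 + 6s^2 + 5s + K gives the same answer symbolically. A sketch, assuming that interpretation of where K enters (an assumption, so check it against your problem statement):

```python
import numpy as np

def stable_for_k(k):
    """True if 8s^4 + 5s^3 + 6s^2 + 5s + k has all roots in the open LHP.

    Assumes K enters as the constant term, i.e. the characteristic
    equation 1 + K*G(s)H(s) = 0 clears to
    8s^4 + 5s^3 + 6s^2 + 5s + K = 0.
    """
    return bool(np.all(np.roots([8.0, 5.0, 6.0, 5.0, k]).real < 0))

# coarse sweep to bracket any stabilizing range of K
ks = np.linspace(0.01, 5.0, 200)
stable_ks = [k for k in ks if stable_for_k(k)]
```

Worth noting: for this particular polynomial the s^2 row of the Routh table works out to -2 regardless of K, so the sweep comes back empty. If that happens in your problem, it is usually a sign that K enters the loop somewhere other than the constant term, and it is worth re-deriving G(s)H(s).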
I'm doing research on numerical techniques for designing controllers. I'm evaluating one algorithm right now and looking for interesting, hard problems to benchmark and validate its performance.
Target systems range from simple linear systems to high-dimensional nonlinear systems.
Please share suggestions on tough control problems to use for validation, interesting problems for engaging audiences, and real engineering problems. If you can point to papers or public information on the dynamics, that would be great.
Hi, I'm not a controls engineer, I'm a bioengineering major who is now working with a simple robotic arm and has taken some classes on control theory. We covered all the basics plus optimal and predictive control and intro to reinforcement learning which were quite theoretical - the class didn't teach us how to apply these things. The professor showed us a little MATLAB which I've seen widely used (esp Simulink) in control system design, but not much more in the way of practical applications.
I have not used MATLAB much myself, instead I much prefer Python which I have a lot more experience with, and know a little C++ too.
What should I focus on to get competent at implementing control systems with appropriate hardware? Are these three languages all-encompassing in controls, and do I need to 'gitgud' at MATLAB? Thanks.
Hello everyone! First time posting. I'm an engineering student who needs help with a Control Theory assignment. I have to build a state-space model based on a simplified differential equation that gives the vertical (lean) angle of a bicycle as a function of the rider's lean angle and the handlebar angle. My system is second order, and my question is whether it is possible to design a state feedback loop so I can control the system with both inputs. I have separated it into two systems, with the same output but each with one input, and I can get the feedback gain with Ackermann's formula for each of them (the gain is the same for both, as they both come from the same differential equation, so same A and B matrices), but I don't know how to model the combined system. I'm using MATLAB and Simulink for this.
Simulink model of combined system
I just used the same A and B matrices, then stacked both C matrices and joined the D matrices together. Any tips are appreciated! Thanks in advance.
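One common way to handle this is to keep a single system whose B matrix has one column per input, and use a multi-input pole-placement routine (Ackermann's formula only applies to single-input systems). A sketch in Python with SciPy, where the A matrix and input columns are entirely made-up placeholders for the real linearised bicycle equation:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2nd-order lean-angle model: the numbers below are invented
# for illustration only; substitute the A and input columns from your own
# linearised differential equation.
A = np.array([[0.0, 1.0],
              [9.0, 0.0]])             # inverted-pendulum-like, unstable
b_rider = np.array([[0.0], [1.0]])     # input 1: rider lean angle
b_steer = np.array([[0.5], [2.0]])     # input 2: handlebar angle
B = np.hstack([b_rider, b_steer])      # ONE system, with a two-column B

# Multi-input pole placement: K has one row per input
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
A_cl = A - B @ K
cl_poles = np.linalg.eigvals(A_cl)
```

The control law u = -K x then feeds both inputs from the same state vector, so there is no need to split the plant into two SISO systems. In MATLAB the analogue is `place(A, B, p)`, which also accepts a multi-column B.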
I am working on some class K function related proofs, and I was wondering if anyone has stumbled across resources that collect properties of class K (and/or class K_inf) functions? I'm looking for possible inequalities and the like. There doesn't seem to be much explicit work on them, as far as I can find.
Aspiring to become an automation and embedded systems engineer, I'm torn between focusing on practical skills like PLC programming, understanding sensors, motors, and variable speed drives, versus diving deep into theoretical concepts like phase lead, phase lag controllers, and predictive control. Additionally, I'm unsure about the necessity of learning Python in this field. Can anyone in the industry shed light on which skills are most essential for success in this career path, and how much emphasis should be placed on each area?
To the question "What kind of engineer are you?" I always have trouble answering, to the point that today I just reply: "I am, in fact, an applied mathematician".
This is because every time I say "control theory", people get curious and follow up with questions that I find difficult to answer. And they never get it. The next time you meet them, they may ask the same question again: "Oh, I really didn't get it…". To me it's annoying, and I'm not really interested in them getting it right. But of course I have to give an answer.
I tried saying that I work with "control systems", and it went a bit better. But then people assume that I am some sort of electric-gate technician, that I design home-surveillance installations, or that I am a PLC expert.
For a while I used to say "I am a failed mathematician" and well… you can guess the follow-up question.
I tried saying "I study decision strategies", and then they believe that I work in HR or in some management position.
To circumnavigate the problem, sometimes I just answer: “I sell drugs”. Such an answer works in a surprisingly high number of cases.
Now I say "I am an applied mathematician" when I cannot use the previous answer. It is not accurate, but it is probably closer to reality than the definitions above.
The point is that if you say mechanical, chemical, civil, or building engineer, people immediately relate. But what about our case?
Hello everybody!
I'm currently practicing some basic control theory (and learning MATLAB too); the idea is to learn how to develop a simple controller that implements a path-following algorithm.
I have been told to use this block from https://it.mathworks.com/help/vdynblks/ref/vehiclebody3dof.html
as "test vehicle".
In order to identify a mathematical model of this vehicle (a transfer function), I want to run a series of step-steer maneuvers at different speeds and log the outputs (such as yaw rate, lateral velocity, ...). Using the tfest function https://it.mathworks.com/help/ident/ref/tfest.html#btkf8hm-6 I should be able to find the transfer function by also specifying the number of zeros and poles.
The question is: how do I know the number of zeroes and poles of that model? Should I linearize the model first?
thank you for your help
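One hedged starting point: at a fixed longitudinal speed, the 3DOF body behaves approximately like the linear single-track ("bicycle") model, whose lateral dynamics have two states (lateral velocity and yaw rate). That suggests trying np = 2 poles and nz = 1 zero in tfest for the steer-to-yaw-rate channel, and the order can be confirmed by linearising the Simulink model (e.g. with `linearize` from Simulink Control Design) at each operating speed:

```latex
% Linear single-track approximation at constant forward speed:
% two states (lateral velocity v_y, yaw rate r)  =>  2 poles,
% and the steer-to-yaw-rate channel has one zero.
\frac{r(s)}{\delta(s)} \;\approx\; \frac{b_1 s + b_0}{s^2 + a_1 s + a_0}
```

Because the coefficients a_i, b_i depend on the forward speed, it makes sense to fit a separate transfer function for each test speed, as planned.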
I am using a nonlinear MPC and have 4 states (the concentrations in a reactor), with temperature and flow rate as the manipulated variables. Can I control more than 2 states at any point in time? I do not have enough degrees of freedom, and my PI was against the idea. I used RL to control more than 2 set points for comparison with the NMPC.
You look at DCS integrators, and the best they offer is always an MPC or a fuzzy PID. What happens when the process changes, like a vendor going bankrupt so you have to use another reagent, or the oil you're drilling being an unexpected grade, etc.? Now your model is useless.
Admittedly, I don't know what's within these MPC optimizations, but it just doesn't sound like a long term solution
Note: could not edit my previous post where additional information about the model was requested.
I'm looking for a way to solve the cost function circled in blue in image no. 7, with the additional term at time instant t+. I've never come across a function like this.
This is from the research paper "A Powertrain LQR-Torque Compensator with Backlash Handling" by P. Templin and B. Egardt.
I have attached snaps of the entire paper for more information and background on the topic. The simple LQR cost function on page 4 is solved and a control law is derived on page 5. The state-feedback control law for the optimization in question (page 7) is what I need to form/derive.
I could use some help from the reddit community
Hey guys, in my lecture we had the formula of a PT2 element: G(s) = w0^2 / (s^2 + 2·D·w0·s + w0^2), with D being the damping ratio and w0 being the natural frequency (English isn't my first language ;)). Then on the next slide we have the formula for the step response of this PT2: h(t) = 1 - e^(-D·w0·t)·[cos(wd·t) + D/sqrt(1-D^2)·sin(wd·t)], with wd being the damped frequency.
The lecturer said we could get there by multiplying G(s) by a step (1/s) and then using the correspondence table to transform back to the time domain. I tried to get to the formula by splitting the fraction into partial fractions and then finding the corresponding entries in the table, but I haven't been able to. When I searched for my problem on the internet, there was always a case distinction on whether D is 0, smaller than 1, bigger than 1, etc.
My question: how do I analytically get to this step response in the time domain? Especially since the solution contains a product of time functions, which would seem to imply a convolution in the s-domain?
PS: if there is a way to write nicer formulas on Reddit lmk.
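For the underdamped case 0 < D < 1 (the case the slide's formula covers), one way through the algebra, sketched here with no convolution needed; the product e^(-D·w0·t)·cos(wd·t) comes from the frequency-shift entry of the correspondence table, not from a convolution:

```latex
\frac{\omega_0^2}{s\left(s^2 + 2D\omega_0 s + \omega_0^2\right)}
  = \frac{1}{s} - \frac{s + 2D\omega_0}{(s + D\omega_0)^2 + \omega_d^2},
\qquad \omega_d = \omega_0\sqrt{1 - D^2}

\frac{s + 2D\omega_0}{(s + D\omega_0)^2 + \omega_d^2}
  = \frac{s + D\omega_0}{(s + D\omega_0)^2 + \omega_d^2}
  + \frac{D\omega_0}{\omega_d} \cdot \frac{\omega_d}{(s + D\omega_0)^2 + \omega_d^2}

h(t) = 1 - e^{-D\omega_0 t}\left[\cos(\omega_d t)
       + \frac{D}{\sqrt{1 - D^2}}\,\sin(\omega_d t)\right]
```

The first line is an ordinary partial-fraction split after completing the square in the denominator; the two terms in the second line are the table entries for the exponentially shifted cosine and sine, using D·w0/wd = D/sqrt(1 - D^2).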
I'm putting together a mass excel sheet with all types of control, their applications, pros, cons, etc, so I can understand how to choose which control type to use in a given scenario, but I am having trouble determining broad category titles.
I've separated them into general feedback (bang-bang, PID, state feedback, robust), general feedforward (input shaping), optimal (LQR, LQG, MPC, reinforcement learning), adaptive (MRAC, gain scheduling, self-tuning regulator, adaptive least squares), and intelligent (fuzzy logic, NN).
Questions:
Is there any resource out there that already does this?
Are these categories appropriate? Many control types seem to overlap across categories, so I'm finding it difficult to truly categorize them correctly.
Hi folks,
Suppose we have a discrete-time linear state-space model of the form x_t = A x_{t-1} + B u_t + w_t, y_t = C x_t + v_t. Here x_t is the state vector, y_t the observation vector, u_t the control vector, w_t the process noise, v_t the observation noise, and A, B, C are the model parameters (matrices). Where can I find literature on identifying the model parameters A, B, C, and also the distributions of the noises, given the observations and controls? I have come across the maximum-likelihood method, which identifies an "equivalent model" (A', B', C', noise distribution) that produces the same output for the same input. But for my problem it is necessary to identify the original (A, B, C, noise distribution). I believe we would need to impose some structure on the matrices A, B, C and the noise for unique identification, but I could not find the latest results on this. Thanks in advance for any help!
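On why extra structure is needed: the sketch below (with made-up numbers) shows two different (A, B, C) triples with identical input/output behaviour, which is exactly the ambiguity the maximum-likelihood "equivalent model" result refers to. Search keywords that target this setting include grey-box (structured) identification and prediction-error methods with a physically parametrised model.

```python
import numpy as np

# Any invertible similarity transform T yields a different parameter set
# (A2, B2, C2) with exactly the same input/output behaviour as (A, B, C),
# so I/O data alone cannot distinguish them.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

T = np.array([[2.0, 1.0],
              [0.5, 1.5]])              # arbitrary invertible transform
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti  # the "equivalent model"

def simulate(A, B, C, us):
    """Noise-free rollout of x_t = A x_{t-1} + B u_t, y_t = C x_t."""
    x = np.zeros((A.shape[0], 1))
    ys = []
    for u in us:
        x = A @ x + B * u
        ys.append((C @ x).item())
    return np.array(ys)

us = np.sin(0.3 * np.arange(50))        # some exciting input sequence
y1 = simulate(A, B, C, us)
y2 = simulate(A2, B2, C2, us)           # identical output, different matrices
```

Because y1 and y2 agree to numerical precision while A2 differs from A, pinning down the original matrices requires fixing the state coordinates, e.g. through a physical parametrisation, known zero entries, or a canonical form.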