/r/ControlTheory


Welcome to r/ControlTheory

Link to Subreddit wiki for useful resources


Official Discord : https://discord.gg/CEF3n5g



12

Need Assistance in creating a linear model for a non-linear system

Hi, I hope I've come to the right place with this. I feel the need to talk it through with other people.

I want to model a physical system with a set of ODEs. I have already set up the necessary nonlinear equations and linearized them with a Taylor expansion, but the result is sobering.

Let's start with the system:

Given is a (cylindrical) body in water, which has a propeller at one end. The body can turn in the water with this propeller. The output of the system is the angle that describes the orientation of the body. The input of the system is the angular velocity of the propeller.

To illustrate this, I have drawn a picture in Paint:

https://preview.redd.it/zq1gd15p1pyd1.jpg?width=304&format=pjpg&auto=webp&s=60ab7ff6819babc51de2eeb52f1a30220f8a6934

Let's move on to the non-linear equations of motion:

The angular acceleration of the body is given by the following equation:

https://preview.redd.it/z02mcmdr1pyd1.jpg?width=208&format=pjpg&auto=webp&s=97e5cabcefb8e5631229bd315709c0c17b181c94

where

https://preview.redd.it/5efdnshu1pyd1.jpg?width=208&format=pjpg&auto=webp&s=6671efd76751bcce00fc58a60b225e2707650402

is the thrust force (k_T abstracts physical variables such as viscosity, propeller area, etc.), and

https://preview.redd.it/3ed83hew1pyd1.jpg?width=208&format=pjpg&auto=webp&s=05082875f5eccf5024249c247e81180b8734d08b

is the drag force (k_D also abstracts physical variables such as drag coefficient, linear dimension, etc.).

Now comes the linearization:

I linearize the system about the steady state, i.e. at rest (omega_ss = 0 and dot{theta}_ss = 0). The following applies:

https://preview.redd.it/83e93q9y1pyd1.jpg?width=409&format=pjpg&auto=webp&s=a72a6f8beec675515179269cd85ab5e6a78ead10

https://preview.redd.it/i5xv0qrz1pyd1.jpg?width=261&format=pjpg&auto=webp&s=b3e195d68841d09d0002e9ba3901b4510d9da72f

https://preview.redd.it/dt6gxwg02pyd1.jpg?width=255&format=pjpg&auto=webp&s=bdef5d59ebd80764954d1e3241645cb2db5f82cc

This gives that the angular acceleration is identically 0 at the steady state.

Finally, the representation in the state space:

https://preview.redd.it/g5w7w1v12pyd1.jpg?width=351&format=pjpg&auto=webp&s=a85f2bb37b7a31cad12cc2f7df6baed99c4d7938

Obviously, the Taylor expansion is not the method of choice to linearize the present system. How can I proceed here? Many thanks for your replies!

Some Edits:

  • The linearization above is most probably correct. The question is more about how to model the system so that B is not all zeros.
  • I'm not a physicist. It is very likely that the force equations are not that accurate. I tried to keep it simple to focus on the control-theoretical problem. It may help to use different equations; if you know of some, please let me know.
  • The background of my question is that I want to control the body with a PWM-driven motor. I added some motor dynamics equations to the motion equations and stumbled across the point where the thrust is not linear in the angular velocity of the motor.

Best solution (so far):

Assuming the thrust F_T to be directly controllable, i.e. converting w to F_T (thanks @ColloidalSuspenders). This may also work by converting PWM signals to F_T.
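For anyone who finds this later, here is a minimal sketch of that idea in Python. The quadratic thrust map F_T = k_T * w * |w| and the numbers for J, k_T, k_D are assumptions standing in for the image equations above, not my exact model; the point is only that with F_T as the input the dynamics become affine in the input, so the linearized B is no longer all zeros.

import numpy as np

# assumed stand-ins for the (image) equations above
J, k_T, k_D = 0.05, 1e-4, 2e-3

def dynamics(theta, theta_dot, F_T):
    # with thrust as the input, the angular acceleration is affine in F_T
    theta_ddot = (F_T - k_D * theta_dot * abs(theta_dot)) / J
    return theta_dot, theta_ddot

def omega_from_thrust(F_T):
    # invert the assumed static thrust map F_T = k_T * w * |w| to get the speed command
    return np.sign(F_T) * np.sqrt(abs(F_T) / k_T)

print(omega_from_thrust(0.01))   # propeller speed needed for 0.01 N of thrust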

PS: Sorry for the big images. In the preview they looked nice :/

25 Comments
2024/11/03
14:09 UTC

6

System identification advice for novel multi-rotor designs

I have a novel multi-rotor design and am currently doing flight tests. However, I would like to implement a seamless system identification and control parameter optimization workflow for better performance. Can someone advise or link relevant resources for those without hands-on experience in system identification? Also, if you are a flight control engineer, how have you done it before to get a model that's close to the real aircraft?
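Not a full answer, but as a concrete first step: a least-squares fit of a simple discrete-time ARX model from logged input/output data is often where people start before moving to grey-box or frequency-domain identification. The sketch below assumes hypothetical logged arrays u_log (actuator command) and y_log (measured rate); the second-order model structure is purely illustrative.

import numpy as np

def fit_arx(u, y):
    # fit y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] by least squares
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta                                        # [a1, a2, b1]

# usage on logged flight-test data (hypothetical names):
# a1, a2, b1 = fit_arx(np.asarray(u_log), np.asarray(y_log))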

3 Comments
2024/11/02
11:36 UTC

14

What programs do you use for projects?

Hi guys,

I worked with MATLAB and Simulink when I designed a field-oriented control for a small BLDC.

I now want to switch to Python. The main reason I stayed with MATLAB/Simulink is that I could send real-time sensor data via UART to my PC and use it directly in MATLAB to do whatever I wanted. And drawing a control loop in Simulink is very easy.

Do you know any boards with which I can do the same in Python?

I need to switch because I want to buy an Apple MacBook. The blockset I'm using in Simulink to program everything doesn't support MacBooks.
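For the UART part specifically, here is a minimal sketch of what that can look like in Python with the pyserial package; the port name, baud rate, and comma-separated message format are assumptions about the board, not a specific recommendation.

import serial  # pip install pyserial

with serial.Serial("/dev/tty.usbmodem1101", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            values = [float(v) for v in line.split(",")]
        except ValueError:
            continue                 # skip malformed lines
        print(values)                # feed into your control loop or plotting here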

Thank you

24 Comments
2024/11/01
21:25 UTC

2

Is there a streamlined way of deriving equations of motion using the Euler-Lagrange formalism?

As far as I understand, the Euler-Lagrange formalism presents an easier and vastly more applicable way of deriving the equations of motion of systems used in control. It involves constructing the Lagrangian L and deriving the Euler-Lagrange equations from L by taking derivatives with respect to the generalized variables q.

For a simple pendulum, I understand that you can find the kinetic and potential energy of the pendulum mass via standard equations (high-school physics), such as T = 1/2 m \dot x^2 and V = mgh. From there, you can calculate the Lagrangian L = T - V pretty easily. I can do the same for many other simple systems.

However, I am unsure how to go about doing this for more complicated systems. I wish to develop a step-by-step method to find the Lagrangian for more complicated types of systems. Here is my idea so far; feel free to critique my method.

Step-by-step way to derive L

Step 1. Figure out how many bodies there are in your system and divide them into translational bodies and rotational bodies. (The definition of a body is a bit vague to me.)

Step 2. For each translational body, write the kinetic energy T_i = 1/2 m \dot x^2, where x is the linear translation variable (position). For each rotational body, write T_j = 1/2 J w^2, where J is the moment of inertia and w is the angular velocity. (The moment of inertia is usually very mysterious to me for anything that's not a pendulum rotating around a pivot.) There seem to be no other possible kinetic energies besides these two.

Step 3. For each body (translational/rotational), the potential energy will either be mgh or be associated with a spring. There are no other possible potential energies. So for each body, you check whether it is above ground level; if it is, you add a V_i = mgh. Similarly, check whether there is a spring attached to the body somewhere; if there is, use V_j = 1/2 k x^2, where k is the spring constant and x is the displacement of the spring, to get the potential energy.

Step 4. Form the Lagrangian L = T - V, where T and V are the sums of the kinetic and potential energies, and take derivatives according to the Euler-Lagrange equation. You get the equations of motion.

Are there any issues with this approach? Thank you for your help!
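As a sanity check of steps 1 to 4, here is the simple pendulum worked symbolically, a sketch using sympy under the assumption that the pendulum is a single rotational body with J = m*l**2 about the pivot:

import sympy as sp
from sympy.physics.mechanics import dynamicsymbols

t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
theta = dynamicsymbols("theta")          # generalized coordinate q

# step 2: kinetic energy of the single rotational body (J = m*l**2 about the pivot)
T = sp.Rational(1, 2) * m * l**2 * sp.diff(theta, t)**2
# step 3: gravitational potential energy (height measured from the pivot)
V = -m * g * l * sp.cos(theta)

# step 4: Lagrangian and Euler-Lagrange equation d/dt(dL/dq_dot) - dL/dq = 0
L = T - V
q_dot = sp.diff(theta, t)
eom = sp.diff(sp.diff(L, q_dot), t) - sp.diff(L, theta)
print(sp.simplify(eom))                  # m*l**2*theta'' + g*l*m*sin(theta)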

3 Comments
2024/11/01
13:29 UTC

9

Connecting ML model to nonlinear system model?

How can we combine a data-driven (machine learning) model of hair growth (for example) with a nonlinear system model to improve accuracy?

And for a more complex case: what concepts should I look into to connect an ML hair-treatment model (not just hair growth) to the nonlinear system of hair growth?

Sorry if my question is vague. The goal is to understand which treatments work better, relying on both mathematical modeling and ML models.
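One set of concepts worth searching for is grey-box or hybrid modeling, where the known nonlinear dynamics are kept and an ML model only predicts the residual between model and measurements. A very rough sketch of that structure is below; the logistic growth term, the placeholder residual, and all numbers are illustrative, not an actual hair-growth model.

import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.1, 1.0                                 # assumed mechanistic parameters

def ml_residual(x, u):
    # stand-in for a trained regressor, e.g. model.predict([[x, u]])[0]
    return 0.02 * u - 0.01 * x

def hybrid_rhs(t, x, u):
    f_physics = r * x[0] * (1 - x[0] / K)       # known nonlinear growth dynamics
    return [f_physics + ml_residual(x[0], u)]   # physics plus learned correction

sol = solve_ivp(hybrid_rhs, (0.0, 100.0), [0.05], args=(1.0,), max_step=1.0)
print(sol.y[0, -1])                             # state after simulating the hybrid model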

4 Comments
2024/11/01
12:36 UTC

7

Digital NLMPC text?

Hey all,

Would anyone have recommendations for the design/implementation of a digital nonlinear MPC? I've built linear MPCs in the past; however, I'm interested in upgrading to full nonlinear.

I would bias towards texts on the subject rather than pre-built libraries.

Appreciate your guidance!

5 Comments
2024/10/31
19:42 UTC

4

DFIG-WT Slow simulation in Simulink

Hello guys, I'm working on a project where I have to model and control a DFIG-based wind turbine using different methods like sliding mode, adaptive control using neural networks, backstepping, etc. I've successfully modeled the system and tested vector control and a simple adaptive backstepping on it; the simulation slows down a bit, but it's okay. When I try other advanced techniques, though, the simulation is either far too slow, or, if I try a discrete-time solver, it doesn't work at all, even though I changed gains and tried different kinds of models using different tools like blocks, S-functions, and interpreted MATLAB functions. It's frustrating; I've been working on it for 3 months. If anyone has ever worked on such a system, please help me out. Thanks.

4 Comments
2024/10/31
16:23 UTC

15

Control Theory and Biology: Academic and/or Practical?

Hello guys and gals,

I am very curious about the intersection of control theory and biology. I have now graduated, but I still have the above question, which went unanswered during my studies.

In a previous similar post, I read a comment mentioning applications in treatment optimization, specifically modeling diseases to control medication and artificial organs.

I see many researchers focus on areas like systems biology or synthetic biology, both of which seem to fall under computational biology or biology engineering.

I skimmed this book on the topic, which introduces classical and modern control concepts (e.g. state space, transfer functions, feedback, robustness) alongside a brief dive into biological dynamic systems.

Most of the research I read emphasizes understanding the biological process, often resulting in complex nonlinear systems that are then simplified or linearized to make them more manageable. The control part takes a couple of pages and is fairly simple (PID, basic LQR), which makes sense given the difficulties of actuation and sensing at these scales.

My main questions are as follows:

  1. Are sensing and actuation feasible at this scale and in these settings?

  2. Is this field primarily theoretical, or have you seen practical implementations?

  3. Is the research actually identification- and control-related, or does it rely mainly on existing biology knowledge (which is what I would expect)?

  4. Are there industries currently positioned to value or apply this research?

I understand that some of the work may be more academic at this stage, which is, of course, essential.

I would like to hear your thoughts.

**My research was brief, so I may have missed essential parts.

8 Comments
2024/10/31
15:42 UTC

9

What do the job opportunities look like in Robotics/Medical Robotics?

I'm someone with a keen interest in Robotics, Semiconductors, as well as Biology. I'm currently pursuing an undergrad in Computer Engineering but am pretty torn at this point on what to do next. I have a pretty diverse set of interests, as mentioned above. I can code in Python, C++, Java, and C. I'm familiar with ROS and have worked on a few ML projects, but nothing too crazy in that area yet. I was initially very interested in CS, but the job market right now is awful for entry-level people.

I'm open to grad school as well, to specialize in something, but choosing what is where I feel stuck right now. I also have research experience in Robotics and Bioengineering labs.

Any help would be greatly appreciated!

8 Comments
2024/10/31
11:52 UTC

2

I need help solving a technical issue using CasADi.

Hello, I am using the CasADi library to implement variable impedance control with MPC.

To ensure the stability of the variable impedance controller, I intend to use a control Lyapunov function (CLF). Therefore, I created a CLF function and added it as a constraint, but when I run the code, the following error occurs:

CasADi - 2024-10-31 19:18:40 WARNING("solver:nlp_g failed: NaN detected for output g, at (row 84, col 0).") [.../casadi/core/oracle_function.cpp:377]

After debugging the code, I discovered that the error occurs in the CLF constraints, and the cause lies in the arguments passed to the CLF function. When I pass the parameters as constant vectors to the function, the code works fine.

Additionally, even if I force the CLF function's result to always be negative, the same error occurs when using the predicted states and inputs.

Am I possibly using the CasADi library incorrectly? If so, how should I fix this?

Below is my full code.

#include "vmpic/VMPICcontroller.h"
#include <Eigen/Dense>
#include <casadi/casadi.hpp>
#include <iostream>
#include <vector>

using namespace casadi;

VMPIController::VMPIController(int horizon, double dt) : N_(horizon), dt_(dt)
{
    xi_ref_ = DM::ones(6) * 0.8;

    xi = SX::sym("xi", 6);
    z = SX::sym("z", 12);
    v = SX::sym("v", 6);
    H = SX::sym("H", 6, 6);
    F_ext = SX::sym("F_ext", 6);
    v_ref_ = SX::sym("v_ref", 6);
    slack_ = SX::sym("s", 1);

    w_ = SX::vertcat({0.01, 0.1, 0.1, 1, 1, 1});
    eta_ = 1400;
    epsilon_ = SX::ones(num_state) * 0.01;

    set_dynamics(xi, z, v, H, F_ext);
    set_CLFs(z, v, xi, H, F_ext, slack_);

    initializeNLP();
}

// change input lambda to H and v_ref
std::pair<Eigen::VectorXd, Eigen::VectorXd> VMPIController::solveMPC(const Eigen::VectorXd &z0,
                                                                     const Eigen::MatrixXd &H,
                                                                     const Eigen::MatrixXd &v_ref,
                                                                     const Eigen::VectorXd &F_ext)
{
    std::vector<double> z0_std(z0.data(), z0.data() + z0.size());
    DM z_init = DM(z0_std);

    // give parameter to NLP
    std::vector<double> H_std(H.data(), H.data() + H.size());
    DM H_dm = DM::reshape(H_std, H.rows(), H.cols());

    std::vector<double> F_ext_std(F_ext.data(), F_ext.data() + F_ext.size());
    DM F_ext_dm = DM(F_ext_std);

    std::vector<double> v_ref_std(v_ref.data(), v_ref.data() + v_ref.size());
    DM v_ref_dm = DM(v_ref_std);

    DM u_init = vertcat(v_ref_dm, DM(xi_ref_));

    // set inequality constraints (num)
    std::vector<double> lbx(num_state * (N_ + 1) + num_input * N_ + N_, -inf); // state + input
    std::vector<double> ubx(num_state * (N_ + 1) + num_input * N_ + N_, inf);  // state + input
    std::fill(lbx.end() - N_, lbx.end(), 0.0);                                 // slack
    std::fill(ubx.end() - N_, ubx.end(), inf);                                 // slack

    // set constraint bounds
    int n_dyn = num_state * (N_ + 1); // Num of dynamics constraints
    int n_clf = N_;                   // Num of CLFs constraints
    std::vector<double> lbg(n_dyn + n_clf);
    std::vector<double> ubg(n_dyn + n_clf);

    // 1. Initial state constraints
    for (int i = 0; i < num_state; ++i)
    {
        lbg[i] = z0[i];
        ubg[i] = z0[i];
    }

    // 2. Dynamics constraints
    for (int i = num_state; i < n_dyn; ++i)
    {
        lbg[i] = 0;
        ubg[i] = 0;
    }

    // 3. CLFs constraints
    for (int i = n_dyn; i < n_dyn + n_clf; ++i)
    {
        lbg[i] = -inf;
        ubg[i] = 0;
    }

    std::map<std::string, DM> arg;
    arg["lbx"] = DM(lbx);
    arg["ubx"] = DM(ubx);
    arg["lbg"] = DM(lbg);
    arg["ubg"] = DM(ubg);
    arg["p"] = horzcat(H_dm, F_ext_dm, v_ref_dm);

    DM x0 = DM::zeros(num_state * (N_ + 1) + num_input * N_ + N_, 1);
    // input initial guess
    x0(Slice(num_state * (N_ + 1), num_state * (N_ + 1) + num_input * N_)) = DM::ones(num_input * N_, 1);

    arg["x0"] = x0;

    auto res = solver_(arg);

    // Extract the solution
    DM sol = res.at("x");

    auto u_opt = sol(Slice(num_state * (N_ + 1), num_state * (N_ + 1) + num_input * N_));

    std::vector<double> u_opt_std = std::vector<double>(u_opt);
    Eigen::VectorXd u_opt_eigen = Eigen::Map<Eigen::VectorXd>(u_opt_std.data(), u_opt_std.size());

    Eigen::VectorXd v_opt = u_opt_eigen.head(6);
    Eigen::VectorXd xi_opt = u_opt_eigen.tail(6);

    return std::make_pair(v_opt, xi_opt);
}

void VMPIController::initializeNLP()
{
    SX Z = SX::sym("Z", num_state * (N_ + 1)); // state = {x, \dot{x}}
    SX U = SX::sym("U", num_input * N_);       //  u = {v; xi}
    SX P = horzcat(H, F_ext, v_ref_);          // parameters
    SX S = SX::sym("S", N_);                   // slack variable
    SX obj = 0;
    SX g = SX::sym("g", 0);

    g = vertcat(g, Z(Slice(0, 12)));

    for (int i = 0; i < N_; i++) // set constraints about dynamics
    {
        SX z_i = Z(Slice(num_state * i, num_state * (i + 1)));
        SX u_i = U(Slice(num_input * i, num_input * (i + 1)));

        SX v_i = u_i(Slice(0, 6));
        SX xi_i = u_i(Slice(6, 12));

        std::vector<SX> args = {v_i, z_i, xi_i, H, F_ext};
        SX z_next = F_(args)[0];

        g = vertcat(g, z_next - Z(Slice(num_state * (i + 1), num_state * (i + 2))));

        obj += cost(v_i, xi_i, xi_ref_, v_ref_, z_i);
    }

    for (int i = 0; i < N_; i++) // Control Lyapunov Function
    {
        SX z_i = Z(Slice(num_state * i, num_state * (i + 1)));
        SX u_i = U(Slice(num_input * i, num_input * (i + 1)));
        SX s_i = S(i);
        SX v_i = u_i(Slice(0, 6));
        SX xi_i = u_i(Slice(6, 12));

        std::vector<SX> args_clf = {z_i, v_i, xi_i, H, F_ext, s_i};
        SX clfs = CLFs_(args_clf)[0];

        g = vertcat(g, clfs);

        obj += mtimes(s_i, s_i);
    }

    SXDict nlp = {{"x", vertcat(Z, U, S)}, {"f", obj}, {"g", g}, {"p", P}};

    Dict config = {{"calc_lam_p", true},     {"calc_lam_x", true},  {"ipopt.sb", "yes"},
                   {"ipopt.print_level", 0}, {"print_time", false}, {"ipopt.warm_start_init_point", "yes"},
                   {"expand", true}};

    solver_ = nlpsol("solver", "ipopt", nlp, config);
}

void VMPIController::set_dynamics(const casadi::SX &v, const casadi::SX &z, const casadi::SX &xi, const casadi::SX &H,
                                  const casadi::SX &F_ext)
{
    SX A = SX::zeros(12, 12);
    SX sqrt_v = sqrt(v);
    SX H_T_inv = inv(H.T());

    A(Slice(6, 12), Slice(0, 6)) = -mtimes(H_T_inv, mtimes(diag(v), H.T()));
    A(Slice(6, 12), Slice(6, 12)) = -2 * mtimes(H_T_inv, mtimes(diag(xi * sqrt_v), H.T()));
    A(Slice(0, 6), Slice(6, 12)) = SX::eye(6);

    SX B = SX::zeros(12, 1);
    B(Slice(6, 12)) = mtimes(inv(mtimes(H, H.T())), F_ext);

    auto f = [&](const SX &z_current) { return mtimes(A, z_current) + B; };

    // RK4 implementation
    SX k1 = f(z);
    SX k2 = f(z + dt_ / 2 * k1);
    SX k3 = f(z + dt_ / 2 * k2);
    SX k4 = f(z + dt_ * k3);

    // Calculate next state using RK4
    SX z_next = z + dt_ / 6 * (k1 + 2 * k2 + 2 * k3 + k4);

    // Create CasADi functions
    F_ = Function("F_", {v, z, xi, H, F_ext}, {z_next});
}
casadi::SX VMPIController::cost(const casadi::SX &v, const casadi::SX &xi, const casadi::SX &xi_ref,
                                const casadi::SX &v_ref, const casadi::SX &z)
{

    SX cost = mtimes((w_ * (v - v_ref)).T(), w_ * (v - v_ref)) + eta_ * mtimes(((xi - xi_ref)).T(), (xi - xi_ref));

    return cost;
}

void VMPIController::set_CLFs(const casadi::SX &z, const casadi::SX &v, const casadi::SX &xi, const casadi::SX &H,
                              const casadi::SX &F_ext, const casadi::SX &slack)
{

    // lyapunov function (constraints)
    SX M = mtimes(H, H.T());
    SX x = z(Slice(0, 6));
    SX xdot = z(Slice(6, 12));

    SX sqrt_v = sqrt(v);
    SX K = mtimes(mtimes(H, diag(v)), H.T());
    SX D = 2 * mtimes(H, mtimes(diag(xi * sqrt_v), H.T()));

    SX V = 0.5 * mtimes(xdot.T(), mtimes(M, xdot)) + 0.5 * mtimes(x.T(), mtimes(K, x));

    SX V_dot = -mtimes(xdot.T(), mtimes(D, xdot)) + mtimes(xdot.T(), F_ext);

    SX lambda = 0.3;

    SX clf = V_dot + lambda * V - slack;

    CLFs_ = Function("CLFs_", {z, v, xi, H, F_ext, slack}, {clf});
}

Additionally, the dynamics I implemented are as follows:

https://preview.redd.it/viru2s1dt2yd1.png?width=490&format=png&auto=webp&s=c6d2e62308eff34cc0665f11d68e9ba170a4dfad

4 Comments
2024/10/31
11:21 UTC

4

Hobby robot project

Hi !

I would like to start a hobby project of building a small robot using vision technology. Eventually I would like to program it myself in python and learn to apply some ML to detect targets/objects to drive to.

But firstly I need something to easily built it. I thought about some Lego but I want something that is easily integrated with the a micro controller of some sort and that has weels, motors etc . Any ideas ?

2 Comments
2024/10/30
18:55 UTC

5

Predictor Feedback - Backstepping Transformation

Screenshot from Delay Compensation for Nonlinear, Adaptive, and PDE Systems

Hello everyone,

I'm studying input-delay nonlinear systems and I'm having trouble understanding this specific page. I have gone through the book, as well as the more recent Predictor Feedback for Delay Systems: Implementations and Approximations; this idea is present in both, and there is something I'm missing.

The proposed solution to the problem of input time delay is to have a control law such that u(t-D) = k(x(t)). Since it would violate causality to have u(t) = k(x(t+D)) directly, we build a predictor that obtains the trajectory solution x(t+D) given x(t) by computing:

x(t+D) = \int_{t}^{t+D} f(x(s), u(s-D)) ds + x(t)

which we call the predictor P(t); thus our causal control law is k(P(t)).
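For concreteness, this is how I picture the predictor being evaluated numerically: forward-integrate the plant over the last D seconds of stored inputs. The sketch below uses plain explicit Euler with made-up dynamics and step size, not the book's exact scheme.

import numpy as np

def predictor(x_t, u_hist, f, dt):
    # u_hist holds the inputs applied on [t - D, t], oldest first
    P = np.asarray(x_t, dtype=float)
    for u in u_hist:
        P = P + dt * f(P, u)         # explicit Euler step of xdot = f(x, u)
    return P                         # approximates x(t + D)

# toy scalar example: xdot = x**2 + u with delay D = 0.5 s
dt, D = 0.01, 0.5
u_hist = [0.0] * int(D / dt)         # stored input history, initialized to zero
f = lambda x, u: x**2 + u
print(predictor([0.1], u_hist, f, dt))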

So my question here is: how did we get (11.4)? I can see that it is similar to the rule I derived, but I don't understand why the integral runs from t-D to t and what the Z(t) is doing there. I understand the initial condition as the evolution of the system from -D to 0.

Finally, I don't understand the backstepping transformation quite yet:

If U(t) = k(P(t)) as in (11.3), then (11.6) implies that W(t) = 0 and that U(t) = k(\Pi(t)). I'm sure that if that were all there is, (11.6) wouldn't be there. Why is \Pi(t) there? If someone can point out what I'm missing, I'd be infinitely grateful.

0 Comments
2024/10/30
14:43 UTC

7

MPC for tracking a time varying reference

EDIT: I more or less found what I was looking for in "A nonlinear model predictive control framework using reference generic terminal ingredients" by Kohler, Muller and Allgower; thanks to anyone who helped. I wrote the post on my phone, and rereading it now, it's indeed not very clear what I was asking. My issue was: what kind of assumptions would I need on my problem to guarantee that my MPC is always feasible and stable, even if the reference is a non-constant trajectory that might change suddenly? E.g. I might want to track a sequence of states whose values I know for the next N steps, so x_0, x_1, ..., x_N, but this sequence might have sudden changes that make my MPC infeasible. And in the case of feasibility, how could I prove that, starting from a different initial state, I am able to converge to a dynamic trajectory?

11 Comments
2024/10/30
08:44 UTC

7

Numerical optimal control over a large horizon

If the system of interest, like a ship or satellite, has a long horizon or final time, applying direct transcription requires a large number of grid points, which increases the computational complexity. How is this tackled?

8 Comments
2024/10/30
04:56 UTC

24

How relevant is square root filtering in the modern era of computing?

I am working on a project at work that involves inertial navigation and have some questions about square root Kalman Filters.

From what I have read, the main benefit of using a square-root Kalman filter (or factored, or whatever) is greater numerical precision for the same amount of memory, at the expense of increased computational complexity. The Apollo flight computer used this strategy because it only had a 16-bit word length.

Modern computers and even microprocessors usually have 32-bit or even 64-bit instruction sets. Does this mean that square-root filtering isn't needed except for the most extreme cases?
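For context, the difference in the time update looks roughly like this, a numpy sketch of one QR-based square-root propagation; the toy two-state model values are arbitrary.

import numpy as np

def predict_full(P, F, Q):
    # standard covariance time update: P+ = F P F^T + Q
    return F @ P @ F.T + Q

def predict_sqrt(S, F, Sq):
    # square-root time update: propagate S with P = S S^T via a QR factorization,
    # so P is never formed explicitly and stays symmetric positive definite
    M = np.hstack((F @ S, Sq))        # P+ = M M^T
    _, R = np.linalg.qr(M.T)          # reduced QR: R is n x n upper triangular
    return R.T                        # new square root (lower triangular up to sign)

# quick consistency check on a toy 2-state model
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
P = np.diag([1.0, 2.0])

P_full = predict_full(P, F, Q)
S_new = predict_sqrt(np.linalg.cholesky(P), F, np.linalg.cholesky(Q))
print(np.allclose(P_full, S_new @ S_new.T))   # True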

8 Comments
2024/10/29
22:44 UTC

10

Course on Digital/Discrete Controls

Can someone suggest good coursework/textbooks/YouTube playlists related to discrete controls? I would like to learn topics such as sampling theory, the z-transform, and other tools used to analyze and design digital control systems: state-space and input/output representations, and the modeling and analysis of digital control systems.
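For a feel of the kind of material such a course covers, the basic continuous-to-discrete step looks like this: a zero-order-hold discretization of a double integrator with an arbitrary sample time, sketched with scipy.

import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator in state space
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
Ts = 0.1                                  # sample time in seconds (arbitrary)

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method="zoh")
print(Ad)   # [[1, 0.1], [0, 1]]
print(Bd)   # [[0.005], [0.1]]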

5 Comments
2024/10/28
00:20 UTC

12

Math Pathway for control theory question

I basically have 2 choices for math progressions in college after calc 3 and I'm debating which to go for. Looking for what would be more useful in the long run for controls. The main options are:

  1. Linear, then ODEs

  2. Linear+diff eqs, then partial diff eqs (but linear and diff are combined into a single faster paced course which skips some topics, so I would get less in depth knowledge)

Basically, is a class on partial differential equations more important than deeper knowledge of linear algebra and ODEs?

14 Comments
2024/10/27
21:37 UTC

5

Need some help/advice

Hello all,

So I'm a soon-to-be graduate with a Bachelor's in Mechanical Engineering, and I have had some research experience in Control Theory and its applications as an undergrad. I plan on pursuing a PhD or a master's soon, but it just doesn't seem to be working out due to logistical/financial issues. However, I still want to work on my research profile (in Control Theory/Robotics/Optimization (maybe?)), and I am not sure how to go about it without enrolling at a university. I've thought about reaching out to professors at local universities who do research in the field and maybe working on a project of theirs, but that doesn't seem like it would work out. Can anyone offer advice on what I can do?

0 Comments
2024/10/26
17:08 UTC

2

ESC - Bachelor's thesis idea

I would like to design an ESC for a brushed motor for my bachelor's thesis, but I'm afraid it would be too simple. What features could I add to make it different from an AliExpress ESC that can be bought for $15?

Ideally I would like for it to have a hardware implementation, not only a software part.

5 Comments
2024/10/26
09:10 UTC

1

Singularity in fixed-time controllers

Hello,

Lately I have been focusing on designing fixed-time controllers. One drawback, alongside the good performance of these controllers, is that as time approaches the settling time, the controller runs into a singularity, e.g. u = h(x)/(Ts - t), where h(x) is some feedback control function and Ts is the settling time.

How can this be avoided, please?

Thanks.

1 Comment
2024/10/25
19:16 UTC

7

Books for Inverter Design

Hi guys, I'm looking for good starter/semi-professional books on controls for electrical power system components like inverters, rectifiers, and such. Thank you.

9 Comments
2024/10/25
16:58 UTC

4

Setting up a High-level and a Mid-level controller to pass commands to Low-level controller

Hello fellow Control-enthusiasts,

I have a set-up that I want to implement in ROS 1 (my actual robot is on ROS 1), and I was wondering if any experts here could donate some wisdom regarding how to implement it.

So, I have a high-level controller (let's call it HLC) running at 2 Hz that gives me the end-effector poses, and a mid-level controller (let's call it MLC) running at 20 Hz which takes these and performs differential IK to give the joint velocities of the manipulator. These calculated joint velocities are then sent to their respective joint-velocity-controller topics to be handled by ROS's low-level controllers.

Now, some hurdles that I foresee are as follows:

  1. Both the HLC and the MLC need to be provided with the current system state at the time of their execution. So, if the system-state ROS topics are being published at 200 messages a second and we start at the 0th message, then the HLC must receive the 100th and 200th messages, while the MLC must receive the 10th, 20th, …, 190th, and 200th messages.
  2. This act of rejecting the 1st through 9th messages and then choosing the 10th message to send to the MLC needs to be done by one worker, while another chooses the messages for the HLC.

So, all-in-all, I need 4 workers for this job:

  • Worker A: One worker to sort messages for the HLC,
  • Worker B: One worker to carry-out the task of the HLC,
  • Worker C: One worker to sort messages for the MLC,
  • Worker D: One worker to carry-out the task of the MLC.

Now, I am planning to use the multiprocessing-package of python to ensure that the timing is maintained:

  1. Workers A and C will each receive a queue, which will be populated at 200 Hz. Every time they receive a message, they will check whether it is time to accept the current message or not. If it is, they will pass it on to their outgoing queue (A outputs at 2 Hz, C at 20 Hz).
  2. Worker B will be receiving data at a single rate (2 Hz) and will be sending out data at 2 Hz.
  3. Worker D will be receiving data at two different rates: 2 Hz and 20 Hz. Before starting its work, it will compare the current time with the expected time of arrival of each message.

Now, I was wondering if there is a better way of doing this, especially with regard to having two workers, A and C, just for sorting messages. Any suggestions are welcome.
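For what it's worth, a minimal sketch of how workers A and C could look with plain multiprocessing queues; the names and the hard-coded decimation factors of 100 and 10 are illustrative.

import multiprocessing as mp

def decimator(in_q, out_q, keep_every):
    # Worker A / C: read state messages arriving at 200 Hz and forward only every
    # keep_every-th one (100 for the 2 Hz HLC, 10 for the 20 Hz MLC)
    count = 0
    while True:
        msg = in_q.get()             # blocks until the next state message
        count += 1
        if count % keep_every == 0:
            out_q.put(msg)

if __name__ == "__main__":
    state_q_hlc, state_q_mlc = mp.Queue(), mp.Queue()
    hlc_q, mlc_q = mp.Queue(), mp.Queue()
    mp.Process(target=decimator, args=(state_q_hlc, hlc_q, 100), daemon=True).start()
    mp.Process(target=decimator, args=(state_q_mlc, mlc_q, 10), daemon=True).start()
    # the 200 Hz subscriber callback would put each incoming state into both queues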

2 Comments
2024/10/25
15:19 UTC

13

Pole-Zero Cancellation

I recently read about pole-zero cancellation in a feedback loop: we should never cancel an unstable pole in the plant with an unstable zero in the controller, as any disturbance would blow up the response. I also have a MATLAB simulation that shows this perfectly.

Now my question is: can we cancel a non-minimum-phase zero with an unstable pole in the controller? And how can we check in MATLAB whether the response becomes unbounded, and with what disturbance or noise?
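To make the setup concrete, here is a rough Python analogue of that check using the python-control package; the plant and controller are illustrative, not the ones from the MATLAB simulation mentioned above. The reference-to-output transfer function hides the cancelled pole, while the disturbance-to-output one keeps it, which is why a small input disturbance blows up the response.

import control as ct

P = ct.tf([1], [1, -1])            # unstable plant P(s) = 1/(s - 1)
C = ct.tf([1, -1], [1, 10])        # controller whose zero "cancels" the unstable pole

T_ry = ct.feedback(C * P, 1).minreal()   # reference -> output: looks stable, 1/(s + 11)
T_dy = ct.feedback(P, C).minreal()       # input disturbance -> output: pole at +1 remains

print("r -> y poles:", ct.poles(T_ry))
print("d -> y poles:", ct.poles(T_dy))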

12 Comments
2024/10/25
10:37 UTC

32

Good/best book to start with?

I am very new to control theory (I have math, physics, and programming backgrounds), and I am searching for a good book to start from. Currently, I am looking toward Ogata's "Modern Control Engineering." Is it a good book to start with or not?

25 Comments
2024/10/24
17:49 UTC

15

Job diversity in controls

Hey all,

The title might be a bit misleading, but my question basically is: how flexible is someone with a rigorous education in rather advanced control methods when it comes to working in different fields? I myself am about to finish a degree in chemical engineering, but I have had a strong focus on control theory during my studies, to the point where more than half the courses I took were controls-related. How difficult would it be to get a job in another sector (e.g. robotics, automotive, aerospace)? I would guess the only problem would be the system-modeling ability. I do have some mechanical-systems expertise from my bachelor's, but it is limited. Would this fact deter potential employers? I think I would be able to pick those things up rather quickly. Anyway, I hope you can share your experiences here :)

Have a great day!

9 Comments
2024/10/24
15:05 UTC

4

I need ideas for a capstone project - something to mix controls with machine learning

I want to mix controls and machine learning for my capstone project, but I am lacking ideas.

I was even thinking of maybe some reinforcement learning, but while I have experience with more traditional machine learning applications, reinforcement learning would be new for me. It's either an opportunity to learn or a terrible idea to pick something I don't know for a capstone project. Or both.

4 Comments
2024/10/23
19:50 UTC

2

TwinCAT Cascade Controller for a servomotor

Hey.

I am working on my first college project in controls engineering. The project consists of an industrial robot (a 3-axis robot arm), where each axis is driven by a servomotor and controlled using TwinCAT's cascade controller. In my previous controls classes we didn't really discuss cascade control and focused more on state space, stability criteria, observer design, and nonlinearity.

After following the model used in previous projects for the servomotor, 2 out of the 3 servomotors function properly. The third one, though (the one at the base), has this peculiarity where it drives well until it reaches a low point on either side, and then the current controller starts oscillating. The current doesn't oscillate if the arm is perpendicular to the base (most likely because the motor doesn't have to overcome the moment created by gravitational forces, which are quite considerable for this motor). Once you turn off the control, the motor produces an alarm sound due to the oscillating current. I have tried reducing the gain of the velocity controller; it did reduce the current oscillations but considerably increased the velocity oscillations. After calling Beckhoff tech support, the guy recommended using a notch filter at 200 Hz with a bandwidth of 300 Hz. This seemed to work at the beginning, but once I drove the arm to almost ground level, the oscillations were back. I have seen a couple of videos on filtering; it seems to fix the symptoms rather than the causes, and I am quite perplexed about how to go further.
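For reference, the suggested filter looks roughly like this in code, a scipy sketch; the sample rate fs is an assumption, so check the actual cycle time of the drive's velocity loop before reading anything into the numbers.

import numpy as np
from scipy.signal import iirnotch, freqz

fs = 4000.0                        # Hz, assumed loop sample rate (illustrative)
f0, bw = 200.0, 300.0              # notch centre and bandwidth from tech support
b, a = iirnotch(f0, Q=f0 / bw, fs=fs)

w, h = freqz(b, a, fs=fs)
idx = np.argmin(np.abs(w - f0))
print(f"gain near {f0:.0f} Hz: {20 * np.log10(abs(h[idx])):.1f} dB")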

I will appreciate any advice!

(Attached screenshots: the last scope shows the set and actual velocity; the other shows the control parameters.)

0 Comments
2024/10/23
14:13 UTC
