/r/ROS
This subreddit is for discussions around the Robot Operating System, or ROS.
Note that ROS (aka ROS 1) and ROS 2 are different. Please mention which one you're talking about when asking for help or starting a discussion.
If you're looking for help, please read the support guidelines before asking your question. Following those guidelines really helps in getting your question answered.
It is also recommended to post your question on Robotics Stack Exchange, since that is meant to be the central place for questions about ROS. You can of course post here and link to your Robotics Stack Exchange question.
Rules
Also see reddit's rules and the ROS etiquette.
I just finished the ROS 1 and ROS 2 tutorials from RoboticsBackEnd and like them a lot. As my next step in learning, I am thinking of purchasing a robot kit. I have found that the Duckiebot and TurtleBot3 fit my budget. Is there anything else I should consider?
What would you choose if you were in my position?
I want to build a prototype for a personal project. Do you have any technologies to recommend that would make my project possible?
Thanks
I am working on a robot that I'm trying to control with Nav2. I am using SmacPlannerHybrid in my planner server and Regulated Pure Pursuit in my controller server. My robot has issues navigating curves, and the final pose ends up a bit off from my Nav2 goal. I need help configuring my nav2_params.yaml file, especially the planner and controller servers. Feel free to point me to reading materials to help with the tuning.
Below is a link to my nav2_params.yaml: https://github.com/codecraftersknust/pathfinders/blob/main/src/wro_ws/src/robot_description/config/nav2_params.yaml
And here are the controller server and planner server configs, respectively:
Controller
```yaml
controller_server:
  ros__parameters:
    use_sim_time: False
    controller_frequency: 20.0
    min_x_velocity_threshold: 0.001
    min_y_velocity_threshold: 0.5
    min_theta_velocity_threshold: 0.001
    progress_check_plugin: "progress_checker"
    goal_checker_plugins: ["goal_checker"]
    controller_plugins: ["FollowPath"]
    # speed_limit_topic: "/speed_limit"

    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      required_movement_radius: 0.5
      movement_time_allowance: 10.0

    goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.5
      yaw_goal_tolerance: 0.5
      stateful: True

    FollowPath:
      plugin: "nav2_regulated_pure_pursuit_controller::RegulatedPurePursuitController"
      desired_linear_vel: 10.0
      lookahead_dist: 0.8
      min_lookahead_dist: 0.4
      max_lookahead_dist: 1.0
      lookahead_time: 1.5
      rotate_to_heading_angular_vel: 2.2
      transform_tolerance: 0.1
      use_velocity_scaled_lookahead_dist: true
      min_approach_linear_velocity: 0.2
      approach_velocity_scaling_dist: 0.6
      use_collision_detection: true
      max_allowed_time_to_collision_up_to_carrot: 1.5
      use_regulated_linear_velocity_scaling: true
      use_fixed_curvature_lookahead: false
      curvature_lookahead_dist: 0.5
      use_cost_regulated_linear_velocity_scaling: false
      regulated_linear_scaling_min_radius: 0.1
      regulated_linear_scaling_min_speed: 0.2
      use_rotate_to_heading: false
      allow_reversing: false
      rotate_to_heading_min_angle: 0.785
      max_angular_accel: 4.0
      max_robot_pose_search_dist: 10.0
```
Planner
```yaml
planner_server:
  ros__parameters:
    planner_plugins: ["GridBased"]
    use_sim_time: False

    GridBased:
      plugin: "nav2_smac_planner/SmacPlannerHybrid"
      downsample_costmap: false
      downsampling_factor: 1
      tolerance: 0.25
      allow_unknown: true
      max_iterations: 1000000
      max_on_approach_iterations: 1000
      max_planning_time: 5.0
      motion_model_for_search: "DUBIN"
      angle_quantization_bins: 72
      analytic_expansion_ratio: 3.5
      analytic_expansion_max_length: 3.0
      analytic_expansion_max_cost: 200.0
      analytic_expansion_max_cost_override: false
      minimum_turning_radius: 0.10
      reverse_penalty: 2.0
      change_penalty: 0.0
      non_straight_penalty: 1.0
      cost_penalty: 1.3
      retrospective_penalty: 0.015
      lookup_table_size: 20.0
      cache_obstacle_heuristic: false
      debug_visualizations: false
      use_quadratic_cost_penalty: False
      downsample_obstacle_heuristic: True
      allow_primitive_interpolation: False
      smooth_path: True

      smoother:
        max_iterations: 1000
        w_smooth: 0.3
        w_data: 0.2
        tolerance: 1.0e-10
        do_refinement: true
        refinement_num: 2
```
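A few things in this config stand out and are worth checking before deeper tuning (hedged suggestions, not a definitive fix): `desired_linear_vel: 10.0` is far beyond what a small robot can track and will overshoot every curve; `xy_goal_tolerance` / `yaw_goal_tolerance` of 0.5 mean Nav2 will happily declare success half a meter from the goal, which matches the final-pose offset; and for curve tracking, a fixed curvature lookahead often behaves better than the velocity-scaled one. The Nav2 documentation also has a dedicated tuning guide covering RPP and the Smac planners. A minimal sketch of layering experimental overrides on top of the existing YAML from a Python launch file (the path and values are assumptions to start from):

```python
# Sketch: layer tuning overrides on top of the existing nav2_params.yaml.
# In launch_ros, later entries in `parameters` override earlier ones, which
# makes A/B-testing values easy without editing the YAML each time.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    params_file = '/path/to/nav2_params.yaml'  # placeholder path
    overrides = {
        'FollowPath': {
            'desired_linear_vel': 0.5,              # 10.0 m/s is unrealistic
            'use_fixed_curvature_lookahead': True,  # steadier on curves
            'curvature_lookahead_dist': 0.4,
        },
        'goal_checker': {
            'xy_goal_tolerance': 0.15,   # tighter final pose
            'yaw_goal_tolerance': 0.15,
        },
    }
    return LaunchDescription([
        Node(
            package='nav2_controller',
            executable='controller_server',
            output='screen',
            parameters=[params_file, overrides],
        ),
    ])
```

In a full bringup you would normally just edit the YAML; the layering trick is only convenient while experimenting. Note too that the controller server is a lifecycle node, so it still needs the usual lifecycle manager to activate it.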
I want to build a centralised database in ROS 2 for a robot swarm, where multiple bots share their data in a single database and can all access it. The database holds maps and object locations, and all bots are autonomous. Which database should I use, and what should my approach be to build it?
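One common pattern is to keep the storage itself off the DDS graph (SQLite for a single central machine, Redis or Postgres if many processes write concurrently) and wrap it in one ROS 2 node that all bots talk to via topics or services; maps are usually better stored as files or blobs keyed by ID than as rows. A minimal sketch, where the `/swarm/object_poses` topic and the schema are made up for illustration:

```python
# Sketch: a central node that records object poses from all bots into SQLite.
# Topic name and table schema are assumptions for illustration.
import sqlite3
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped

class SwarmDatabase(Node):
    def __init__(self):
        super().__init__('swarm_database')
        self.db = sqlite3.connect('swarm.db')
        self.db.execute(
            'CREATE TABLE IF NOT EXISTS objects '
            '(stamp REAL, frame TEXT, x REAL, y REAL, z REAL)')
        self.sub = self.create_subscription(
            PoseStamped, '/swarm/object_poses', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped):
        # Flatten the stamp to seconds and store the position row.
        t = msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9
        p = msg.pose.position
        self.db.execute('INSERT INTO objects VALUES (?, ?, ?, ?, ?)',
                        (t, msg.header.frame_id, p.x, p.y, p.z))
        self.db.commit()

def main():
    rclpy.init()
    rclpy.spin(SwarmDatabase())

if __name__ == '__main__':
    main()
```

Reads can then be exposed through a ROS service on the same node, or the bots can query the database directly if it is a networked one like Redis.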
I am currently working on an Ackermann steering robot and have most functionality working: I am able to navigate to specific poses. But I'm trying to use the "follow dynamic point" feature of Nav2, and although I have followed the tutorials, I'm not getting it to work as expected. My final goal is to navigate using a virtual carrot that keeps updating once the robot almost reaches the previous one. Can anyone give me any assistance on this?
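For reference, Nav2's dynamic-point following works by having the behavior tree's GoalUpdater node swap in new goals from a topic while a single NavigateToPose action stays active. A minimal sketch of a "virtual carrot" publisher, assuming the BT includes GoalUpdater and that it listens on `goal_update` (check the topic name in your BT XML; the carrot-advancing logic here is only a placeholder):

```python
# Sketch: republish a "virtual carrot" as a goal update for the BT's
# GoalUpdater node. Topic name 'goal_update' is assumed from the stock
# follow_point behavior tree.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped

class CarrotPublisher(Node):
    def __init__(self):
        super().__init__('carrot_publisher')
        self.pub = self.create_publisher(PoseStamped, 'goal_update', 10)
        self.timer = self.create_timer(0.5, self.publish_carrot)
        self.x = 0.0

    def publish_carrot(self):
        # Placeholder: advance the carrot 0.1 m per tick along x.
        # Replace with logic that advances only when the robot is
        # within some distance of the previous carrot (e.g. via TF).
        self.x += 0.1
        goal = PoseStamped()
        goal.header.frame_id = 'map'
        goal.header.stamp = self.get_clock().now().to_msg()
        goal.pose.position.x = self.x
        goal.pose.orientation.w = 1.0
        self.pub.publish(goal)

def main():
    rclpy.init()
    rclpy.spin(CarrotPublisher())

if __name__ == '__main__':
    main()
```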
I am building a robot with a LiDAR and an Intel RealSense D435i. I am currently using just the LiDAR for odometry, but it drifts when I move at higher speeds. I need assistance setting up sensor fusion between the LiDAR and the IMU for better odometry. I know I have to use the robot_localization package, probably with an EKF, but I need help writing the YAML file and choosing which parameters to tune for the best odometry. It's a simple miniature Ackermann steering robot.
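A minimal sketch of a robot_localization `ekf_node` launch for this kind of setup; the topic names are assumptions, and the usual trick against drift at speed is to fuse the lidar odometry's velocities (not its absolute pose) together with the IMU's yaw rate:

```python
# Sketch: ekf_node fusing lidar odometry velocities with the D435i IMU.
# Topic names '/odom_lidar' and '/camera/imu' are assumptions. The
# 15-element *_config vectors are
# [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az].
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[{
                'frequency': 30.0,
                'two_d_mode': True,          # planar Ackermann robot
                'publish_tf': True,
                'odom_frame': 'odom',
                'base_link_frame': 'base_link',
                'world_frame': 'odom',
                # Lidar odometry: trust vx, vy and yaw rate rather than
                # the absolute pose, since the pose is what drifts.
                'odom0': '/odom_lidar',
                'odom0_config': [False, False, False,
                                 False, False, False,
                                 True,  True,  False,
                                 False, False, True,
                                 False, False, False],
                # IMU: yaw rate and forward acceleration.
                'imu0': '/camera/imu',
                'imu0_config': [False, False, False,
                                False, False, False,
                                False, False, False,
                                False, False, True,
                                True,  False, False],
                'imu0_remove_gravitational_acceleration': True,
            }],
        ),
    ])
```

From there, tuning is mostly about the process noise covariance and which axes you trust from each sensor; the robot_localization docs cover both.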
I'm looking for a simple way to start Gazebo with the simulation paused. I'm trying to implement some effort control, and initially the robot must not fall due to gravity.
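If this is Gazebo Classic started through gazebo_ros, the stock launch file already exposes a `pause` argument, so `ros2 launch gazebo_ros gazebo.launch.py pause:=true` should do it. The same thing from your own launch file (assuming Gazebo Classic; the world path is a placeholder):

```python
# Sketch: include the stock gazebo_ros launch with the simulation paused.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource

def generate_launch_description():
    gazebo_launch = os.path.join(
        get_package_share_directory('gazebo_ros'), 'launch', 'gazebo.launch.py')
    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(gazebo_launch),
            launch_arguments={
                'pause': 'true',                 # start with physics paused
                'world': '/path/to/my.world',    # placeholder
            }.items(),
        ),
    ])
```

You can then unpause once your controllers are loaded by calling the `/unpause_physics` service.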
I'm using a container to run my ROS 2 project, and each time I create a new container, Gazebo takes about 3 minutes to launch, even though I only have one table model inside the .world file.
So I captured Gazebo's network traffic and found that //table/model.tar.gz
is not requested at startup; first the model config files (from //database.config
) are fetched, then //table/model.tar.gz
.
It's also odd that the requests are not made in parallel, which is why loading all the model configs takes about 3 minutes.
What can I do about this???
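This sounds like Gazebo Classic querying its online model database at startup. The usual fix is to bake the models you need into the container image (e.g. under `~/.gazebo/models`) and stop Gazebo from going online at all. A sketch of the launch-side part, assuming Gazebo Classic:

```python
# Sketch: point Gazebo Classic at local models only, so nothing is fetched
# over the network at startup. Paths are assumptions for a root container.
from launch import LaunchDescription
from launch.actions import SetEnvironmentVariable

def generate_launch_description():
    return LaunchDescription([
        # An empty URI stops Gazebo from querying the online model database.
        SetEnvironmentVariable('GAZEBO_MODEL_DATABASE_URI', ''),
        # Make sure every model the .world references exists here.
        SetEnvironmentVariable('GAZEBO_MODEL_PATH', '/root/.gazebo/models'),
        # ... include your Gazebo launch after this ...
    ])
```

The same two variables can be set in the Dockerfile instead, so every new container starts with the cache already warm.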
Hi everyone,
I'm trying to play an MCAP file using the following command:
ros2 bag play -s mcap diagnostic_0.mcap
Expected Behavior:
The bag should play as expected.
Actual Behavior:
I receive the following error:
yaml-cpp: error at line 1, column 12: bad conversion
System Details:
Does anyone know what could be causing this issue or how to resolve it? Thanks in advance for your help!
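Not a definitive diagnosis, but a yaml-cpp "bad conversion" at line 1 usually points at rosbag2 failing to parse metadata rather than at the MCAP payload itself. One way to isolate it is to confirm the file is readable with the standalone mcap Python library (`pip install mcap`), independent of ROS:

```python
# Sketch: verify the .mcap file itself is intact, independent of rosbag2.
# Requires the 'mcap' package (pip install mcap).
from mcap.reader import make_reader

with open('diagnostic_0.mcap', 'rb') as f:
    reader = make_reader(f)
    summary = reader.get_summary()
    if summary is not None:
        # List the recorded topics and their encodings.
        for channel in summary.channels.values():
            print(channel.topic, channel.message_encoding)
```

If this reads fine, the problem is likely on the rosbag2 side (for example a version mismatch between the recorder and the player), not a corrupt file.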
Does anybody know? I bought it from a third-party seller and cannot figure out the maintenance code.
Hello everyone,
I am currently learning ROS 2 + MoveIt 2 (with the Python API) to create a 6-DOF robotic arm.
I managed to run the tutorials and broadly understood motion planning with MoveIt. I created my custom URDF and MoveIt configuration using the MoveIt Setup Assistant, and I'm currently creating my own controller package for my custom arm with moveit_py. It's not finished yet, but it's going in the right direction, and I think I'll soon reach the part where I'll need to physically build the thing. (Yeah!)
A computer only has USB ports, and USB-GPIO boards seem to be "limited". A Raspberry Pi seems limited in computing capability and GPIO count.
What kind of hardware is commonly used to run the latest ROS 2 LTS effectively while having a lot of GPIO? Is it always an SBC, or do big USB-GPIO interfaces exist? (I'd like to have it running on a computer so I can do other things in parallel.)
I found the BeagleBone boards; are they effective and able to run MoveIt?
What are your recommendations?
Thanks ;-]
Hello, has anyone here worked with a Schneider Lexium cobot?
I need to ask some questions.
TIA
Why does my terminal show an "invalid frame_id of odom" error when I pass it a frame_id of base_link, given that the view_frames output indicates that both odom and base_link are present?
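For what it's worth, that error usually means a transform was requested at a time for which odom to base_link did not exist yet (or the frame string has a stray "/" or typo), not that the frame is missing from the tree; view_frames only proves the frames existed at some point. A minimal ROS 2 check that the lookup actually succeeds at runtime:

```python
# Sketch: repeatedly look up odom -> base_link with a timeout to see
# whether the transform is actually available when requested.
import rclpy
from rclpy.node import Node
from rclpy.duration import Duration
from tf2_ros import Buffer, TransformListener

class TfCheck(Node):
    def __init__(self):
        super().__init__('tf_check')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.timer = self.create_timer(1.0, self.check)

    def check(self):
        try:
            t = self.buffer.lookup_transform(
                'odom', 'base_link', rclpy.time.Time(),  # Time() = latest
                timeout=Duration(seconds=0.5))
            self.get_logger().info(f'OK: {t.transform.translation}')
        except Exception as e:  # tf2 LookupException, ExtrapolationException, ...
            self.get_logger().warn(str(e))

def main():
    rclpy.init()
    rclpy.spin(TfCheck())

if __name__ == '__main__':
    main()
```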
Hi,
I have an external LIDAR system running ROS 2 on a 'robot' that cannot move by itself (the poor robot is not crippled; I dismantled the motion parts for a technical reason). This 'robot of disability' is placed on a second robot that CAN move, an iRobot Create 3. I control the Create 3's motion myself with simple Python scripts, without even using its ROS 2 capabilities.
I am running Gmapping on the 'riding robot', and even though it currently gets no odometry at all, the basic particle filtering in Gmapping succeeds in 'adapting' the pose in response to the actual physical movement. But this requires small, discontinuous motions, giving the algorithm enough time to 'adapt'. It is also not the 'proper' way to do this, as we rely entirely on scan matching rather than using odometry as well.
Since I control the motion, I can generate an odometry stream in any format. My question is: what would be the fastest, dirtiest, hackiest way to do so in ROS 2, so that Gmapping is fed the motion info rather than constantly being 'surprised' by it and having to respond in the pose estimation?
Thanks in advance;
even if I don't get a complete answer, I always learn from the replies!
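Since the commanded motion is known, the hackiest ROS 2 route is a little dead-reckoning node: integrate the velocities you command and publish nav_msgs/Odometry plus the odom to base_link TF. A sketch, assuming you can mirror your commands as a Twist on /cmd_vel (that topic name is an assumption):

```python
# Quick-and-dirty dead-reckoning odometry: integrate commanded velocities
# and publish nav_msgs/Odometry plus the odom->base_link transform.
import math
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist, TransformStamped
from nav_msgs.msg import Odometry
from tf2_ros import TransformBroadcaster

class FakeOdom(Node):
    def __init__(self):
        super().__init__('fake_odom')
        self.x = self.y = self.yaw = 0.0
        self.v = self.w = 0.0
        self.sub = self.create_subscription(Twist, '/cmd_vel', self.on_cmd, 10)
        self.pub = self.create_publisher(Odometry, '/odom', 10)
        self.tf = TransformBroadcaster(self)
        self.dt = 0.05
        self.timer = self.create_timer(self.dt, self.step)

    def on_cmd(self, msg):
        self.v, self.w = msg.linear.x, msg.angular.z

    def step(self):
        # Euler-integrate a unicycle model; good enough to feed a mapper.
        self.x += self.v * math.cos(self.yaw) * self.dt
        self.y += self.v * math.sin(self.yaw) * self.dt
        self.yaw += self.w * self.dt
        now = self.get_clock().now().to_msg()

        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation.z = math.sin(self.yaw / 2.0)
        odom.pose.pose.orientation.w = math.cos(self.yaw / 2.0)
        self.pub.publish(odom)

        t = TransformStamped()
        t.header.stamp = now
        t.header.frame_id = 'odom'
        t.child_frame_id = 'base_link'
        t.transform.translation.x = self.x
        t.transform.translation.y = self.y
        t.transform.rotation = odom.pose.pose.orientation
        self.tf.sendTransform(t)

def main():
    rclpy.init()
    rclpy.spin(FakeOdom())

if __name__ == '__main__':
    main()
```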
title
I know it says it requires at least a 3070, but I don't have that :(
Hi everyone, for the first time I want to use a gripper, so for now I'm looking into how to control one. The gripper model is the Schunk EGP 40-N-N-IOL: https://schunk.com/ch/de/greiftechnik/parallelgreifer/egp/egp-40-n-n-iol/p/000000000001372735 . I'm new to this gripper thing. Could anyone give me any tips or information on how to control this gripper?
TIA
I'm learning ROS 2 from RoboticsBackEnd's YouTube tutorials. My question is about the "ros2 run demo_nodes_cpp" talker and listener. I understand that talker is a publisher, listener is a subscriber, and chatter is a topic. My specific questions are:
Thanks.
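For reference, a minimal rclpy equivalent of the talker may make the pieces concrete: one node, one publisher, and the chatter topic:

```python
# Minimal rclpy equivalent of demo_nodes_cpp's talker: one node publishing
# std_msgs/String on the 'chatter' topic once per second.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.count = 0
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = f'Hello World: {self.count}'
        self.count += 1
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Talker())

if __name__ == '__main__':
    main()
```

Running this alongside `ros2 run demo_nodes_cpp listener` shows the listener printing the messages, since both share the chatter topic.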
As the title mentions, are there any tips or advice you wish you'd known back when you were learning ROS? I'm interested because I am learning ROS 2, and sometimes I wonder how I can code "better" and more efficiently.
One tip from me first: find a tool to keep track of the packages, nodes, and message types as your project scales up. For this I'm using Figma's FigJam.
Hello Everyone,
Yesterday I was helping a couple of friends set up a simulated robot using Gazebo. I noticed this seemed to be a bit of an issue for newcomers to the community, so I quickly put together this repo to help with it.
This package provides two simulated robots: a 2-wheeled and a 4-wheeled differential-drive robot. There are currently four sensors available: camera, depth camera, 2D lidar, and 3D lidar. The simulation also comes with SLAM and navigation set up, so it's easy to get going without having to change the source code. There are a few launch arguments available for different use cases as well.
The package currently works on Foxy & Humble (tested on both). Jazzy support, more robot types, and ros2_control will be added soon.
Feel free to use this package to get started with robot simulation, learn the basics of working with Gazebo, or even as a basic template. Let me know if there is anything else that should be added or improved.
Code and more information is available here
I have recently started using Docker in ROS development. My approach is simple, basic terminal use, and most things work. I have difficulty with VS Code dev containers, visualisation, etc. What is your experience so far? Do you feel that developing on the host is like living in the Stone Age?
I’m a computer vision engineer with a strong interest in drones and robotics. In my experience, most of my CV work has been fairly standalone, with limited integration into robotic systems. For instance, I worked on a warehouse project where cameras mounted above the cargo area recognized package dimensions and damage. While we used ROS for some parts, it was primarily a CV-focused project (one of the most interesting I’ve worked on).
I’m curious - are there interesting projects right now at the intersection of CV and ROS? Has the industry matured to the point where CV plays a significant role in robotics workflows?
Would love to hear your thoughts or experiences. Thanks!
Hello everyone! I need help with the following problem:
I'm developing a program on an ESP32 using ESP-IDF with micro-ROS and FreeRTOS.
Basically, I need to create a "best effort" publisher that publishes an odometry message; after several very basic tests, I can't get it to work.
The strangest thing of all is that when the publisher is of the "default" (reliable) type, it publishes the odometry message correctly, but that's not what I need since, as I mentioned before, I need a "best effort" publisher for its speed.
Another thing I found is that the "best effort" publisher works correctly when it publishes, for example, an Int32 message or an array.
Any suggestions on what could be happening? Here is my code:
```c
#include <string.h>
#include <stdio.h>
#include <unistd.h>

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_system.h"

#include <uros_network_interfaces.h>
#include <rcl/rcl.h>
#include <rcl/error_handling.h>
#include <nav_msgs/msg/odometry.h>
#include <rclc/rclc.h>
#include <rclc/executor.h>

void PublishWheelOdom();

#ifdef CONFIG_MICRO_ROS_ESP_XRCE_DDS_MIDDLEWARE
#include <rmw_microros/rmw_microros.h>
#endif

#define RCCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){printf("Failed status on line %d: %d. Aborting.\n",__LINE__,(int)temp_rc);vTaskDelete(NULL);}}
#define RCSOFTCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){printf("Failed status on line %d: %d. Continuing.\n",__LINE__,(int)temp_rc);}}

rcl_publisher_t publisher;
nav_msgs__msg__Odometry odom_msg;

void timer_callback(rcl_timer_t * timer, int64_t last_call_time)
{
    RCLC_UNUSED(last_call_time);
    if (timer == NULL) {
        return;
    }
    PublishWheelOdom();
}

void micro_ros_task(void * arg)
{
    rcl_allocator_t allocator = rcl_get_default_allocator();
    rclc_support_t support;

    rcl_init_options_t init_options = rcl_get_zero_initialized_init_options();
    RCCHECK(rcl_init_options_init(&init_options, allocator));

#ifdef CONFIG_MICRO_ROS_ESP_XRCE_DDS_MIDDLEWARE
    rmw_init_options_t * rmw_options = rcl_init_options_get_rmw_init_options(&init_options);
    // Static Agent IP and port can be used instead of autodiscovery.
    RCCHECK(rmw_uros_options_set_udp_address(CONFIG_MICRO_ROS_AGENT_IP, CONFIG_MICRO_ROS_AGENT_PORT, rmw_options));
    //RCCHECK(rmw_uros_discover_agent(rmw_options));
#endif

    // create init_options
    RCCHECK(rclc_support_init_with_options(&support, 0, NULL, &init_options, &allocator));

    // create node
    rcl_node_t node;
    RCCHECK(rclc_node_init_default(&node, "esp32_int32_publisher", "", &support));

    // create publisher
    RCCHECK(rclc_publisher_init_best_effort(
        &publisher,
        &node,
        ROSIDL_GET_MSG_TYPE_SUPPORT(nav_msgs, msg, Odometry),
        "/wheel/odometry"));

    // create timer
    rcl_timer_t timer;
    const unsigned int timer_timeout = 1000;
    RCCHECK(rclc_timer_init_default(
        &timer,
        &support,
        RCL_MS_TO_NS(timer_timeout),
        timer_callback));

    // create executor
    rclc_executor_t executor;
    RCCHECK(rclc_executor_init(&executor, &support.context, 1, &allocator));
    RCCHECK(rclc_executor_add_timer(&executor, &timer));

    while(1){
        rclc_executor_spin_some(&executor, RCL_MS_TO_NS(100));
        usleep(10000);
    }

    // free resources
    RCCHECK(rcl_publisher_fini(&publisher, &node));
    RCCHECK(rcl_node_fini(&node));
    vTaskDelete(NULL);
}

void PublishWheelOdom()
{
    // fill the header (timestamp left at zero for this test)
    rosidl_runtime_c__String__assign(&odom_msg.header.frame_id, "odom");
    rosidl_runtime_c__String__assign(&odom_msg.child_frame_id, "base_link");
    odom_msg.header.stamp.sec = 0;
    odom_msg.header.stamp.nanosec = 0;

    // set the position
    odom_msg.pose.pose.position.x = 5;   // x position
    odom_msg.pose.pose.position.y = 5;   // y position
    odom_msg.pose.pose.position.z = 0.0; // z position

    // set the orientation
    odom_msg.pose.pose.orientation.x = 0.0;
    odom_msg.pose.pose.orientation.y = 0.0;
    odom_msg.pose.pose.orientation.z = 0.0;
    odom_msg.pose.pose.orientation.w = 1.0;

    // set the linear velocity from the cmd_vel topic
    odom_msg.twist.twist.linear.x = 4;
    odom_msg.twist.twist.linear.y = 0.0;
    odom_msg.twist.twist.linear.z = 2;

    // covariance matrix

    // publish odometry message
    RCSOFTCHECK(rcl_publish(&publisher, (const void *)&odom_msg, NULL));
}

void app_main(void)
{
#if defined(CONFIG_MICRO_ROS_ESP_NETIF_WLAN) || defined(CONFIG_MICRO_ROS_ESP_NETIF_ENET)
    ESP_ERROR_CHECK(uros_network_interface_initialize());
#endif
    // pin the micro-ROS task to APP_CPU so PRO_CPU can deal with Wi-Fi:
    xTaskCreate(micro_ros_task,
        "uros_task",
        CONFIG_MICRO_ROS_APP_STACK,
        NULL,
        CONFIG_MICRO_ROS_APP_TASK_PRIO,
        NULL);
}
```
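One thing worth ruling out on the PC side: a best-effort publisher will not match a reliable subscription, so a default `ros2 topic echo` can show nothing even while the ESP32 is publishing. A minimal subscriber with explicitly best-effort QoS to verify reception:

```python
# Sketch: subscribe to /wheel/odometry with BEST_EFFORT reliability so the
# QoS is compatible with the micro-ROS best-effort publisher.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy
from nav_msgs.msg import Odometry

class OdomCheck(Node):
    def __init__(self):
        super().__init__('odom_check')
        qos = QoSProfile(depth=10, reliability=ReliabilityPolicy.BEST_EFFORT)
        self.sub = self.create_subscription(
            Odometry, '/wheel/odometry', self.on_odom, qos)

    def on_odom(self, msg):
        self.get_logger().info(f'got odom: x={msg.pose.pose.position.x:.2f}')

def main():
    rclpy.init()
    rclpy.spin(OdomCheck())

if __name__ == '__main__':
    main()
```

If nothing arrives here either, the next suspect is the message size: an Odometry message (two 36-element covariance arrays) is much larger than an Int32, which matters more on a constrained transport than the QoS itself.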
I'm writing here just to say that I finished updating the Scout Mini Gazebo package from Gazebo Classic to Gazebo Fortress. In the future I will also update the Bunker package from AgileX Robotics.
If you are working with any of the AgileX robots and are planning to leave Gazebo Classic, you can contribute at the following link: https://github.com/mattiadutto/ugv_gazebo_sim
This can also be used as a reference guide for moving from Gazebo Classic to Gazebo Fortress with ROS 2 Humble. The starting point was this one: https://github.com/agilexrobotics/ugv_gazebo_sim/tree/humble
I'm working on a project that involves a custom 6-axis robotic arm with a laser as the end effector.
I'll be using it for laser powder directed energy deposition: additive manufacturing of large metal parts.
The robot vendor has given me a custom ROS environment to toolpath the robot.
There's gotta be a tool in ROS for additive manufacturing, right?
I saw this RAM project, but it doesn't look like it works with Humble?
I'm completely new to this, so any help is appreciated!
I currently use Fanuc, ABB, and KUKA robots for additive, but this new robot is a VERY special, one-of-a-kind machine...
Hello everyone, I am trying to get into the community a bit so I can fully understand how far I can go with ROS. I started a project to use a Kinect on a UAV to map an indoor environment; I have worked with TurtleBot before and got to know most of the conventions ROS uses.
I am currently struggling with the sheer size of rtabmap itself; I can't figure out how to get it running using the Kinect and IMU readings.
I have tried the "Setup your Robot" ROS wiki page and came up with the launch file below. I use robot_localization to use the IMU as an external odometry source and feed it in, so as not to rely on visual odometry only.
```xml
<launch>
  <!-- Kinect Depth and RGB Topics -->
  <node pkg="nodelet" type="nodelet" name="depth_to_pointcloud"
        args="load depth_image_proc/point_cloud_xyzrgb rtabmap_ros/rgbd_sync" output="screen">
    <remap from="depth_registered/image_rect" to="/kinect/depth/image_raw"/>
    <remap from="rgb/image_rect_color" to="/kinect/rgb/image_raw"/>
    <remap from="rgb/camera_info" to="/kinect/rgb/camera_info"/>
    <param name="approx_sync" value="true"/>
  </node>

  <!-- RTAB-Map Visual-Inertial Odometry Node -->
  <node pkg="rtabmap_ros" type="odometry" name="rtabmap/odometry" output="screen">
    <param name="frame_id" value="base_link"/>
    <param name="odom_frame_id" value="odom"/>
    <param name="publish_tf" value="true"/>
    <param name="wait_for_transform" value="true"/>

    <!-- Enable Visual-Inertial Odometry -->
    <param name="Odom/Strategy" value="2"/> <!-- 2 enables visual-inertial odometry -->
    <param name="Odom/IMUUsed" value="true"/>

    <!-- Kinect RGB-D data remapping -->
    <remap from="rgb/image" to="/kinect/rgb/image_raw"/>
    <remap from="depth/image" to="/kinect/depth/image_raw"/>
    <remap from="rgb/camera_info" to="/kinect/rgb/camera_info"/>

    <!-- IMU data remapping -->
    <remap from="imu" to="/imu"/>
  </node>

  <!-- RTAB-Map SLAM Node -->
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
    <param name="frame_id" value="base_link"/>
    <param name="odom_frame_id" value="odom"/>
    <param name="subscribe_depth" value="true"/>
    <param name="subscribe_rgbd" value="true"/>
    <param name="subscribe_scan" value="false"/>
    <remap from="odom" to="/rtabmap/odom"/>
    <remap from="rgb/image" to="/kinect/rgb/image_raw"/>
    <remap from="depth/image" to="/kinect/depth/image_raw"/>
    <remap from="rgb/camera_info" to="/kinect/rgb/camera_info"/>
  </node>
</launch>
```
However, I keep getting distorted odometry. How can I debug this? If there is anything wrong with my launch file, I would appreciate the help.
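To put a number on the distortion independent of RViz, something like this rospy snippet can watch the odometry output for jumps (the topic name is taken from the remap above; the threshold is a guess to tune):

```python
# Sketch: watch /rtabmap/odom and warn on pose jumps between consecutive
# messages, to quantify how distorted the visual-inertial odometry is.
import math
import rospy
from nav_msgs.msg import Odometry

last = [None]

def cb(msg):
    p = msg.pose.pose.position
    if last[0] is not None:
        q = last[0]
        jump = math.hypot(p.x - q.x, p.y - q.y)
        if jump > 0.2:  # meters; tune for your platform
            rospy.logwarn('odom jumped %.2f m between frames', jump)
    last[0] = p

rospy.init_node('odom_jump_check')
rospy.Subscriber('/rtabmap/odom', Odometry, cb)
rospy.spin()
```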
Do Nav2 planners support 3D planning? Or is there any package that supports that?
I worked with ROS 1, as well as a little bit of ROS 2, in my undergrad. It's been a year and I haven't looked at or used ROS anywhere since, so I am back at the basic level. I want to get back to the level I was at by doing a crash course or basic projects.
Any suggestions on tutorials, GitHub projects, or free resources I can follow to learn ROS 2? Also, which language is preferred in ROS projects, Python or C++? If there are any specific videos or resources for learning the C++ used in ROS, do suggest them.