Hi,
I'm trying to get odometry from hector_slam using a Hokuyo laser, because the odometry I get from the wheel encoders is not very good. I configured hector_slam and I am getting good odometry data, but the problem is that, despite the parameter I set, it is still publishing the transform from map to odom. I don't want that, because amcl is already publishing it; I want only the odometry data, not the tf. Can someone tell me how to deal with this?
Here is the launch file I'm using to get odometry from hector_slam:
↧
hector_slam: how to disable tf?
↧
Where can I find the launch files for hector_mapping after installation?
Hello everyone
I used `sudo apt-get install ros-kinetic-hector-slam` so I could generate a map using my rplidar.
After running my rplidar successfully (I can see the red points from my lidar in rviz) and launching `roslaunch hector_slam_launch tutorial.launch`, rviz opens up, but I don't see anything on the grid squares.
I'm trying to follow this tutorial [here](https://hackaday.io/project/7284-oscar-omni-service-cooperative-assistant-robot/log/26164-first-foray-into-ros) to edit my launch file so I can see the map.
However, I'm not sure how to find the launch file the tutorial mentions.
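For reference, launch files installed through apt do not end up in a user workspace; they live under the package's share directory (on Kinetic typically /opt/ros/kinetic/share/hector_slam_launch/launch). A small sketch that uses rospkg to print the install locations, so the files can be copied into a workspace before editing:

    #!/usr/bin/env python
    # Print where apt-installed hector packages live, so their launch
    # files can be found (and copied into a workspace before editing).
    import rospkg

    rp = rospkg.RosPack()
    for pkg in ['hector_slam_launch', 'hector_mapping']:
        print(pkg + ': ' + rp.get_path(pkg))

Editing files directly under /opt/ros requires root and gets overwritten on updates, so copying the launch file into your own package is usually the safer route.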
↧
No tf data. Actual error: Fixed Frame [map] does not exist
Hello
I am trying to use hector_slam with rplidar
After following the steps on this tutorial [here](https://github.com/NickL77/RPLidar_Hector_SLAM)
I only see a grid without any map in rviz, with this error in the panel on the left:
No tf data. Actual error: Fixed Frame [map] does not exist
↧
Hector-Slam not producing map, lookupTransform timed out error
I am running ROS Kinetic on Ubuntu 16.04. I am currently using an Ouster lidar that produces PointCloud2 messages, and I am using the pointcloud_to_laserscan package to obtain LaserScan messages from them. After running rviz, I see that the package is running correctly and showing pointcloud and laserscan data.
pointcloud_to_laserscan:
rosrun pointcloud_to_laserscan pointcloud_to_laserscan_node cloud_in:=/os1_node/points
Next I launch the hector_slam package using the following launch file:
After running "rosrun tf view_frames", my tree shows:
odom -> base_link -> os1
with the most recent transform being about 0.005 seconds old, so I believe my transforms are correct.
After hector_slam is running, I obtain the following:
[ INFO] [1534352731.363850888]: lookupTransform base_link to os1 timed out. Could not transform laser scan into base_frame.
[ INFO] [1534352731.869131126]: lookupTransform base_link to os1 timed out. Could not transform laser scan into base_frame.
[ INFO] [1534352732.372954411]: lookupTransform base_link to os1 timed out. Could not transform laser scan into base_frame.
[ INFO] [1534352732.874232147]: lookupTransform base_link to os1 timed out. Could not transform laser scan into base_frame.
If I run "rostopic echo map", all I obtain is -1 repeating
Any thoughts on what I am doing wrong?
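One way to narrow this down, independent of hector_mapping, is to attempt the same lookup from a small script and see whether it resolves and how old the data is; a diagnostic sketch, using the frame names from the tf tree above:

    #!/usr/bin/env python
    # Check whether the base_link -> os1 transform can actually be
    # resolved, and how far its timestamp lags behind ROS time.
    import rospy
    import tf2_ros

    rospy.init_node('tf_lookup_check')
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)

    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            t = buf.lookup_transform('base_link', 'os1',
                                     rospy.Time(0), rospy.Duration(1.0))
            age = (rospy.Time.now() - t.header.stamp).to_sec()
            rospy.loginfo('transform ok, %.3f s old', age)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException) as e:
            rospy.logwarn('lookup failed: %s', e)
        rate.sleep()

If this succeeds while hector_mapping still times out, the problem is more likely the timestamps on the scan topic or hector's base_frame/odom_frame parameters than the tf tree itself.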
↧
Fusing absolute position information and IMU-data using the ekf_localization_node
Hello,
I have a robot car with a laser-scanner and an imu and would like to fuse the position information of the two sensors.
For laser based localization I am using the hector_mapping node which produces a /poseupdate topic. Additionally I am using an imu producing the /imu/data topic.
Those two shall be fused to provide a more accurate position estimate at a higher rate.
After incorporating the suggestions from Tom Moore my launch files look like this:
**Edit 1**
I am getting better behaviour now. What I did is based on the following reasoning: according to REP 105, the odom->base_link transformation should only be computed from odometry sources (by definition relative/differential information). However, I previously had differential set to false for the /imu0 input of the odom->base_link ekf, so that filter got the absolute heading from the imu. Could this have caused the orientation errors?
As I understand REP 105, the map->odom transform is computed indirectly from absolute sensor information, so I thought it needs the absolute information from all sources. That is why I set differential to false for the imu topic of the map->odom ekf.
Here are my updated launch files (the mess with the hector frames reverted):
**The odom-baselink ekf-instance:**
[true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
[false, false, false,
false, false, true,
false, false, false,
false, false, true,
true, true, false]
**The map-odom ekf:**
[true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
[false, false, false,
false, false, true,
false, false, false,
false, false, true,
true, true, false]
Is there any way to check whether the ekfs are really fusing the information (and *not* throwing away the imu data)?
To test this, I set all poseupdate values to false for both ekfs so that only the imu is used. Then of course everything drifted wildly, but I could see the rotations and accelerations of the vehicle.
Although the base_link frame is now stable and the odom frame only drifts very little (10-20 cm), I sometimes get jumps of a couple of centimetres in the base_link position. I read that the ekf does this because of improper covariances? Setting the differential option to true in the map->odom ekf did not change anything though.
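A rough way to check whether the imu is actually being fused, rather than judging by eye, is to compare the imu yaw against the yaw of the filtered output while rotating the robot in place. A sketch, assuming the filter output is on /odometry/filtered (the robot_localization default) and the imu on /imu/data:

    #!/usr/bin/env python
    # Print IMU yaw next to the filtered yaw; if the filter is fusing the
    # IMU, the filtered yaw should track rotations with little lag.
    import rospy
    from nav_msgs.msg import Odometry
    from sensor_msgs.msg import Imu
    from tf.transformations import euler_from_quaternion

    last_imu_yaw = [None]   # wrapped in a list so the callbacks can update it

    def yaw_of(q):
        return euler_from_quaternion([q.x, q.y, q.z, q.w])[2]

    def imu_cb(msg):
        last_imu_yaw[0] = yaw_of(msg.orientation)

    def odom_cb(msg):
        if last_imu_yaw[0] is not None:
            rospy.loginfo('imu yaw %+.3f   filtered yaw %+.3f',
                          last_imu_yaw[0], yaw_of(msg.pose.pose.orientation))

    rospy.init_node('fusion_check')
    rospy.Subscriber('/imu/data', Imu, imu_cb)
    rospy.Subscriber('/odometry/filtered', Odometry, odom_cb)
    rospy.spin()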
↧
move_base not executing, robot does not receive the goal
Hello,
I am doing SLAM using a Husky A200 (ROS 1 Kinetic on both ends, robot and remote PC). I am using **hector_mapping** for my mapping. I want to do SLAM without saving the map, just by using the 2D Nav Goal in rviz, and I am using the Husky's **move_base** for that purpose. The laser is a **pepperl_fuchs_r2000 scanner**.
**My ROS master is set in my remote PC's .bashrc as:**
export ROS_MASTER_URI=http://cpr-a200-0468:11311
Network config:
Husky:
ROS_IP: 192.168.131.1
ROS_HOSTNAME:cpr-a200-0468
Remote PC:
ROS_IP:192.168.131.10
HOSTNAME: rosi
**SETUP:**
**All my Husky bringup scripts (navsat.launch, um7.launch, base.launch) are added as a robot_upstart script. Only the r2000 (laser), hector_mapping, and move_base are launched from my remote PC.
I can `rostopic echo /tf` from my remote PC on Husky boot-up.**
I had some issues with the tf frames and a time mismatch between the systems:
TF_OLD_DATA ignoring data from the past for frame base_link
I solved those with the help of other ROS posts and also used [chrony](http://wiki.ros.org/ROS/NetworkSetup) to sync the Husky's time with my remote PC's time. Now I can get the scanned map, but when I send the nav goal it does not execute.
I published the goal from rviz and tried listening on the Husky end:
> rostopic echo /move_base_simple/goal
but no message appears on the husky terminal (sshed over wifi).
Just to make sure, I sent some cmd_vel messages across the system from the terminal:
husky:
rostopic pub /cmd_vel geometry_msgs/Twist "linear:
x: 0.1
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0"
publishing and latching message. Press ctrl-C to terminate
remote pc:
rostopic echo /cmd_vel
The echo shows nothing on the remote PC, but the robot moves.
Vice versa:
> remote pc: published
>> husky listening: nothing received
>> robot does not move.
The launch file launching the laser, hector_mapping and move_base is as follows:
Images:
TF tree view before launching laser, hector_mapping, move_base : [C:\fakepath\before.png](/upfiles/15362256786426905.png)
rqt_graph before launching laser, hector_mapping, move_base : [C:\fakepath\before_launch_laser.png](/upfiles/1536225740559899.png)
TF tree view after launching laser, hector_mapping, move_base: [C:\fakepath\after.png](/upfiles/15362257804520006.png)
rqt_graph after launching laser, hector_mapping, move_base: [C:\fakepath\after_laser.png](/upfiles/15362258195711496.png)
I cannot figure out why the communication is not happening across the system. Any help will be appreciated. Please let me know if I need to give any other information.
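One note that may or may not apply here: symmetric topic loss between two machines (each side publishes, neither side's subscriber receives) is very often a name-resolution problem, because topic connections are made against each publisher's ROS_HOSTNAME rather than against the master. A quick check, using the hostnames from the configuration above, might look like this:

    #!/usr/bin/env python
    # Run on each machine: every host listed must resolve, otherwise
    # topic connections (which use ROS_HOSTNAME) will silently fail.
    import socket

    for host in ['cpr-a200-0468', 'rosi']:   # hostnames from the setup above
        try:
            print(host + ' -> ' + socket.gethostbyname(host))
        except socket.error as e:
            print(host + ' -> unresolved (' + str(e) + ')')

If a hostname fails to resolve, adding it to /etc/hosts on the machine that fails (or exporting ROS_IP with a plain address on both sides) is the usual fix.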
↧
endless spinning with move_base and hector mapping
I am using a Husky A200 with ROS Kinetic and a P&F R2000 laser scanner. I am getting a map using hector_mapping, and I am using move_base for simple navigation from rviz.
I launch my scanner and the move_base node on the Husky and visualize everything in rviz on my remote PC. When I provide a 2D Nav Goal from rviz, the mobile base keeps spinning and I get this error:
[ERROR] [1538035032.225474602]: Extrapolation Error: Lookup would require extrapolation into the future. Requested time 1538035032.285401881 but the latest data is at time 1538035032.281328883, when looking up transform from frame [odom] to frame [map]
[ERROR] [1538035032.225506623]: Global Frame: odom Plan Frame size 80: map
[ WARN] [1538035032.225563373]: Could not transform the global plan to the frame of the controller
[ERROR] [1538035032.225597427]: Could not get local plan
[ INFO] [1538035032.425304326]: Got new plan
[ERROR] [1538035032.425386196]: Extrapolation Error: Lookup would require extrapolation into the future. Requested time 1538035032.485577569 but the latest data is at time 1538035032.481471291, when looking up transform from frame [odom] to frame [map]
I tried the suggestions provided [here](https://answers.ros.org/question/38507/why-so-much-spinning-in-place-with-move_base/?comment=261349#post-id-261349)
My launch file looks like this:
It could be because my remote PC's and the Husky's clocks are not in sync. I tried the suggestions in the ROS network setup guide but could not sync the time.
Any help will be appreciated.
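Since the error says the requested time is only a few milliseconds ahead of the newest transform, it may be worth measuring, on the machine running move_base, how the stamps arriving on /tf compare to the local clock; a small sketch, assuming the standard /tf topic:

    #!/usr/bin/env python
    # Print how far each incoming transform's stamp is from local ROS time.
    # Large or negative offsets point at clock skew between the machines.
    import rospy
    from tf2_msgs.msg import TFMessage

    def cb(msg):
        now = rospy.Time.now()
        for t in msg.transforms:
            offset = (now - t.header.stamp).to_sec()
            rospy.loginfo('%s -> %s: stamp is %+.3f s behind local time',
                          t.header.frame_id, t.child_frame_id, offset)

    rospy.init_node('tf_clock_check')
    rospy.Subscriber('/tf', TFMessage, cb)
    rospy.spin()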
↧
UnicodeEncodeError while running roslaunch (kinetic)
Hi there, I'm very new to ROS.
I'm trying to connect my lidar (EAI ydlidar X4) with hector_mapping, and downloaded hector_slam with:
git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam
I've created a launch file (from a tutorial online):
However, when I roslaunch it, I get:
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128)
The traceback for the exception was written to the log file
and in the log:
.......
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 365, in resolve_args
resolved = _resolve_args(arg_str, context, resolve_anon, commands)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 376, in _resolve_args
for a in _collect_args(arg_str):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 434, in _collect_args
buff.write(c)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128)
[rospy.core][INFO] 2018-10-03 12:28:17,248: signal_shutdown [atexit]
Does anyone know how to solve this?
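For what it's worth, u'\xa0' is a non-breaking space, which commonly sneaks into a launch file when XML is copy-pasted from a web page, and roslaunch's substitution parser cannot encode it. A small sketch to locate any non-ASCII characters in a file (the file name below is a placeholder for the actual launch file):

    #!/usr/bin/env python
    # Report every non-ASCII character in a launch file so it can be
    # replaced with a normal space. The path below is a placeholder.
    import io

    path = 'my_hector.launch'
    with io.open(path, encoding='utf-8') as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                if ord(ch) > 127:
                    print('line %d, column %d: %r' % (lineno, col, ch))

Replacing the reported characters with ordinary spaces (or retyping those lines by hand) normally lets the file parse again.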
↧
How to map with hector_mapping and just a Lidar
Hi, I just got a lidar (ydlidar X4) and would like to use it to map (just holding the lidar and Raspberry Pi myself and walking around the room, without odometry). I'm very new to ROS. I have read quite a lot of posts, but still couldn't fix the errors.
The lidar is connected to a Raspberry Pi 3, and the scan data is successfully shown in rviz. But when I tried hector_slam, I got an error:
[ INFO] [1539142958.281154496]: lookupTransform base_link to laser_frame timed out. Could not transform laser scan into base_frame.
[ WARN] [1539142958.855603123]: No transform between frames /map and /base_link available after 20.009138 seconds of waiting. This warning only prints once.
To start with hector_slam, I roslaunch the all_node.launch file (from another tutorial) below:
While for the mapping default.launch:
For your information, here's my tf tree:

Does anyone know how to solve this?
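The first message says hector_mapping cannot find a base_link → laser_frame transform, so something needs to publish one (a static_transform_publisher in the launch file, or alternatively hector's base_frame parameter can be pointed at the laser frame for a hand-held setup). A minimal sketch of such a static transform in Python, with an identity offset as a placeholder:

    #!/usr/bin/env python
    # Publish a static base_link -> laser_frame transform (identity here;
    # the real offset depends on where the lidar is mounted).
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('laser_static_tf')
    br = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'
    t.child_frame_id = 'laser_frame'
    t.transform.rotation.w = 1.0   # identity rotation, zero translation

    br.sendTransform(t)
    rospy.spin()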
=========================================================
After changing the all_nodes.launch file:

↧
Simultaneous Hector Mapping and Gas Distribution Mapping with RVIZ
Good day, I am new to ROS Answers and a beginner with ROS.
As the title says, this [Image A](https://imgur.com/IARuwMf) illustrates the goal I would like to achieve. It was done in LabVIEW, and now I would like to achieve a similar result using rviz: Hector mapping and sensor-value mapping running simultaneously in real time.
Currently, I have successfully implemented Hector SLAM, and this [Image B](https://imgur.com/XIb4QfG) shows the result of my Hector SLAM implementation.
My question is: which message type should my data be published as so it can be visualized like Image A? For example, should it be visualization_msgs/Marker, visualization_msgs/MarkerArray, a nav_msgs type, or a sensor_msgs type?
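For an overlay like Image A, visualization_msgs/Marker (or MarkerArray, if every past reading should stay on screen) is usually the most direct choice, since the Hector map itself remains a separate nav_msgs/OccupancyGrid underneath. A rough sketch that publishes one colour-coded marker per reading; the topic name, frame, pose, and the way the gas value maps to a colour are all assumptions:

    #!/usr/bin/env python
    # Publish gas readings as coloured cube markers in the map frame.
    # gas_value() is a stand-in for however the sensor is actually read.
    import rospy
    from visualization_msgs.msg import Marker

    def gas_value():
        return 0.5   # placeholder: real sensor reading scaled to [0, 1]

    rospy.init_node('gas_markers')
    pub = rospy.Publisher('gas_markers', Marker, queue_size=10)
    rate = rospy.Rate(2.0)
    marker_id = 0

    while not rospy.is_shutdown():
        m = Marker()
        m.header.frame_id = 'map'
        m.header.stamp = rospy.Time.now()
        m.ns = 'gas'
        m.id = marker_id
        marker_id += 1
        m.type = Marker.CUBE
        m.action = Marker.ADD
        # The pose would normally come from the robot's current position.
        m.pose.orientation.w = 1.0
        m.scale.x = m.scale.y = m.scale.z = 0.2
        v = gas_value()
        m.color.r = v          # higher concentration -> redder
        m.color.g = 1.0 - v
        m.color.a = 1.0
        m.lifetime = rospy.Duration(0)   # keep the marker forever
        pub.publish(m)
        rate.sleep()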
Advice and recommendation are very welcome.
Thank you for the assistance!
↧
hector-mapping Lookup would require extrapolation into the future
Hello guys,
I'm generating a map with hector_mapping. I have the following system:
- Ubuntu 16.04 on Virtualbox;
- ROS kinetic;
- Laser scanner Sick LMS111;
- hector slam;
- Ixxat usb to can v2;
- IMU Memsic (6 DOF --> 3 accel, 3 gyro) canbus communication.
I installed all the drivers, launched the system, and hector_mapping generated a good map.
----------
I copied this same system onto a PC with native Ubuntu 16.04. When launching the system there, I have a problem: in rviz the LaserScan topic blinks with this error:
For frame [laser]: No transform to fixed frame [map]. TF error: [Lookup would require extrapolation into the future. Requested time 1542901381.561356359 but the latest data is at time 1542901381.541110389, when looking up transform from frame [laser] to frame [map]
I've been trying to understand why this is happening:
- The CPU usage is low;
- map-to-odom transformation is published by hector_mapping;
- odom-to-base_link is a dynamic transformation and is published by a node at 20hz;
- base_link-to-laser is a static transformation:
I read that this could be a synchronization problem across multiple PCs, but in this case I have only one PC.
I don't understand why everything works fine in the virtual machine, while on the native Ubuntu machine the topic flickers.
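Given that only the machine changed, one thing worth measuring is how old (or how far in the future) the laser messages already are when they arrive, relative to the local clock; if the driver stamps scans ahead of ROS time, the fixed-frame lookup in rviz fails exactly like this. A small sketch, assuming the scan topic is /scan:

    #!/usr/bin/env python
    # Print the delay between each LaserScan's stamp and local ROS time.
    # Negative values mean the scan is stamped in the future.
    import rospy
    from sensor_msgs.msg import LaserScan

    def cb(scan):
        delay = (rospy.Time.now() - scan.header.stamp).to_sec()
        rospy.loginfo('scan delay: %+.3f s', delay)

    rospy.init_node('scan_delay_check')
    rospy.Subscriber('/scan', LaserScan, cb)
    rospy.spin()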
Thanks in advance.
↧
Problem with setting origin of hector map
Hello!
I want the start position of my robot to be in the middle of the map when running hector_slam, so I have set map_start_x and map_start_y to 0.5. However, the result is that the middle point of the map ends up half a grid cell length down and to the right, not exactly in the middle. Why is it not in the middle?
↧
hector_mapping odometry
Hi,
Is it possible to use the odometry/tf published by hector_mapping to localize in a known map, instead of doing SLAM and updating the map?
Thanks in advance
↧
How does `hector_mapping` build the coordinate system?
Now I want to determine the localization accuracy of [hector_mapping](http://wiki.ros.org/hector_mapping), so I need to measure the actual position of the robot and compare it with the position `hector_mapping` publishes on `slam_out_pose`. As far as I know, after [cartographer](https://github.com/googlecartographer/cartographer) starts, the direction in which the robot is facing is defined as the positive x-axis, and the robot's position is then output in that coordinate system. Is `hector_mapping` based on the same mechanism to establish its coordinate system? If not, what does `hector_mapping` use to establish the coordinate system?
↧
Get the localization track of hector_mapping
I want to use the localization part of [hector_mapping](http://wiki.ros.org/hector_mapping), so I want to test its accuracy. I am going to set up a straight physical track for the robot, so the path of the robot is a very straight line. Then I want to get the `hector_mapping` localization trajectory (a track containing all the localization points) and compare the actual trajectory with the one from `hector_mapping`. How can I get this track from `hector_mapping`? Or is there a better way to test the localization accuracy of `hector_mapping`?
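One straightforward option is to record /slam_out_pose and dump it to a file for comparison with the physical track (the hector_trajectory_server node from the hector_slam stack can also assemble the path for you). A sketch that writes each pose estimate to a CSV file; the output file name is a placeholder:

    #!/usr/bin/env python
    # Append every hector pose estimate to a CSV file for offline
    # comparison against the measured ground-truth track.
    import rospy
    from geometry_msgs.msg import PoseStamped

    out = open('hector_track.csv', 'w')   # placeholder output path
    out.write('time,x,y\n')

    def cb(msg):
        out.write('%.6f,%.4f,%.4f\n' % (msg.header.stamp.to_sec(),
                                        msg.pose.position.x,
                                        msg.pose.position.y))

    rospy.init_node('track_recorder')
    rospy.Subscriber('/slam_out_pose', PoseStamped, cb)
    rospy.spin()
    out.close()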
↧
Improve the localization accuracy of `hector_mapping`
I have an [rplidar A2](https://www.slamtec.com/en/Lidar/A2) and want to use it to provide localization information for my robot. I designed a track like the photo below, and the lidar is moved 1 meter along the track for every test.

If the `hector_mapping` localization information were accurate, the trajectory observed in `rviz` would coincide closely with the x-axis. However, I tested it many times and got the following trajectories; the results show that the localization error is relatively large.



And this is the output of `slam_out_pose` (the ideal value should be around (1, 0), and the scan angle of the lidar is 360 degrees):

Is there a way to improve the localization accuracy of `hector_mapping`?
↧
Intersection is not mapped properly in hector_mapping
Hi all,
I am trying to map a tunnel using hector_mapping. While the mapping works well throughout the tunnel, it misses the junction. The tunnel is not planar; there are some elevation changes along its path. Is this why the map is not generated properly? Any clue how to fix this?
The generated map is attached (intersection is marked). I am using ros melodic with Ubuntu 18.04.

↧
Can I perform Hector Navigation using ROS kinetic?
Hello,
I am trying to perform navigation on my robot using ROS. So far I am able to do SLAM using Hector SLAM. Now, to perform navigation, I am considering hector_navigation. But on the ROS wiki page of [hector_navigation](http://wiki.ros.org/hector_navigation), the top of the page says "Only released in EOL distros: Electric, Fuerte, Groovy, Hydro". So I wonder whether it is possible to use hector_navigation in Kinetic?
↧
Visualization on multiple machines
I'm trying to visualize my lidar data and hector_mapping on my main pc.
Everything works fine when using the following hector_mapping launch files, tutorial.launch and mapping_default.launch (see code below), on my Raspberry Pi.
But when I try to show the same data with rviz on my main PC, I cannot display the map, for example, unless I check the Unreliable option (even though echoing the map data works fine), and within no time the following message appears:
a message of over a gigabyte was predicted in tcpros. that seems highly unlikely, so I'll assume protocol synchronization is lost.
Running everything on my main PC except the lidar node (the /scan publisher) also does not work; it is not able to get the transform data:
No transform between frames /map and scanmatcher_frame available after 20.002555 seconds of waiting. This warning only prints once.
I feel like in my first problem maybe too much data is being sent to display correctly? And in the second problem something is going wrong with my tf data: when running `rosrun tf tf_monitor` on my main PC no tf data is available, while it does show my tf data on the Raspberry Pi. Could this be a timing problem?
Can someone steer me in the right direction to tackle this problem?
env | grep ROS # on my raspberry pi
ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/home/ubuntu/lidar_ws/src:/home/ubuntu/catkin_ws/src:/opt/ros/kinetic/share
ROS_MASTER_URI=http://IP_MAIN:11311
ROS_VERSION=1
ROS_PARALLEL_JOBS=-j2
ROS_HOSTNAME=IP_pi
ROSLISP_PACKAGE_DIRECTORIES=/home/ubuntu/lidar_ws/devel/share/common-lisp:/home/ubuntu/catkin_ws/devel/share/common-lisp
ROS_DISTRO=kinetic
ROS_IP=IP_pi
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros
env | grep ROS # on my main pc
ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/home/ubuntu/lidar_ws/src:/home/ubuntu/catkin_ws/src:/opt/ros/kinetic/share
ROS_MASTER_URI=http://IP_MAIN:11311
ROS_VERSION=1
ROS_PARALLEL_JOBS=-j2
ROS_HOSTNAME=IP_MAIN
ROSLISP_PACKAGE_DIRECTORIES=/home/ubuntu/lidar_ws/devel/share/common-lisp:/home/ubuntu/catkin_ws/devel/share/common-lisp
ROS_DISTRO=kinetic
ROS_IP=IP_MAIN
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros
tutorial.launch
mapping_default.launch
↧
How to use /poseupdate or /slam_out_pose from hector_mapping as /odom for base_local_planner
My aim is to implement autonomous navigation using hector_slam.
I have an RPLidar, through which I get laser scans, and I can see the real-time position of the lidar in rviz. I need to know how I can use this information and serve it as odom to move_base, or to base_local_planner to be precise. /odom is of type nav_msgs/Odometry, while /slam_out_pose is of type geometry_msgs/PoseStamped and /poseupdate is of type geometry_msgs/PoseWithCovarianceStamped.
I have read that hector_slam is a good approach in the absence of odometry information, but I could not find any source on how to implement it. The answer to my specific question might not be exactly what I need in the end, so please help me implement autonomous navigation.
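One common workaround is a small relay node that republishes hector's /poseupdate (geometry_msgs/PoseWithCovarianceStamped) as nav_msgs/Odometry for move_base; a sketch, with the caveat that the twist is left at zero because hector_mapping does not estimate velocity:

    #!/usr/bin/env python
    # Republish hector's poseupdate as nav_msgs/Odometry so that
    # move_base / base_local_planner can consume it. Velocities stay
    # at zero because hector_mapping does not provide them.
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped
    from nav_msgs.msg import Odometry

    rospy.init_node('pose_to_odom')
    pub = rospy.Publisher('odom', Odometry, queue_size=10)

    def cb(msg):
        odom = Odometry()
        odom.header = msg.header            # stamped in hector's map frame
        odom.child_frame_id = 'base_link'   # assumed robot base frame
        odom.pose = msg.pose                # pose + covariance passed through
        pub.publish(odom)

    rospy.Subscriber('poseupdate', PoseWithCovarianceStamped, cb)
    rospy.spin()

Since the twist stays zero, local-planner settings that rely on measured velocity may need to be conservative.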
↧