The robot mower project has been frustrating lately. I find myself working on problems that are easy to solve but ultimately unimportant. Like balancing the mower blades.
Problems like these are easy to focus on because solving them creates the illusion of progress. But in the big scheme of things, they’re peripheral to the main goal of the mower project: how do we make an autonomous robot lawn mower?
The robot I’ve built cuts grass autonomously if it has a clear view of the sky. It relies entirely on RTK GNSS to know where it is. If you put it under the maple tree on my front lawn, it will lose the RTK fix, and as it exists today, the robot can’t mow my lawn autonomously. So there remains work to do to achieve the primary goal of the mower project.
The robot needs another way to know its position when RTK GNSS isn’t available. That positioning method needs to be at least as accurate as the RTK GNSS position. There are a few ways this could be accomplished.
It’s possible we could rely on wheel encoders and the Pixhawk IMU to estimate the robot’s position by dead reckoning until we reestablish an RTK fix. But the longer you go without an RTK fix, the greater the position drift you’ll have to reconcile once you do reestablish the robot’s actual position.
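For concreteness, here’s a toy sketch of that dead-reckoning update. Everything here is my own illustration, not Ardupilot’s actual estimator: the function name, the ticks-per-meter convention, and especially the assumption that the IMU yaw is exact, which it never is.

```python
import math

def dead_reckon(pose, left_ticks, right_ticks, ticks_per_meter, yaw):
    """Advance an (x, y) position estimate from wheel-encoder ticks and an
    IMU-supplied heading. Toy model: it averages the two wheels and trusts
    the yaw reading completely -- real encoders slip and real IMUs drift."""
    x, y = pose
    dist = (left_ticks + right_ticks) / (2.0 * ticks_per_meter)
    return (x + dist * math.cos(yaw), y + dist * math.sin(yaw))

# 1000 ticks on each wheel at 1000 ticks/m, heading due "east" (yaw = 0):
# the estimate advances 1 m along x.
pose = dead_reckon((0.0, 0.0), 1000, 1000, 1000.0, 0.0)
```

Every call compounds a little encoder slip and heading error, which is exactly the drift that has to be reconciled once the RTK fix comes back.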
You could minimize the drift by using some kind of visual odometry with a very high quality IMU. But this only conceals the underlying problem: you still don’t know exactly where the robot is, and you don’t know when in the future you’ll rediscover its location with a new RTK fix.
Other solutions like LIDAR or depth-sensing cameras require a companion computer to analyze the sensor data and send position estimates to the Ardupilot software. Which puts me at a big crossroads.
I’ve really enjoyed running Ardupilot on the Pixhawk due to its simplicity. But adding a companion computer complicates things greatly. If I’m going to be using ROS on the companion computer to do SLAM and send position estimates to Ardupilot, isn’t that just using ROS, but with more steps? I might as well just jump headfirst into trying to figure out ROS if that’s the choice I’m faced with.
The learning curve with Ardupilot was steep, but the learning curve with ROS appears almost insurmountable to me as a mechanical engineer. The upside, though, is that there are some things that you can only do with ROS at the moment, like using LIDAR for position estimation and integrating a depth sensing camera. So the education investment appears worthwhile.
From my cursory research, it appears that a 360° LIDAR sensor would complement the RTK GNSS positioning method very well. When something occludes the robot’s view of the sky, chances are it will be detectable on the LIDAR.
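As a toy illustration of that complementary arrangement (the names and the hard-switching logic here are mine; a real system would fuse the sources in an EKF rather than flip between them):

```python
def choose_position_source(rtk_fix_ok, lidar_match_ok):
    """Pick which position estimate feeds navigation. The complement idea:
    the same obstruction that blocks the sky (a tree, a wall, the house)
    tends to be a feature the LIDAR can localize against."""
    if rtk_fix_ok:
        return "rtk_gnss"       # clear sky: centimeter-level RTK wins
    if lidar_match_ok:
        return "lidar_slam"     # sky occluded, but the occluder is visible to LIDAR
    return "dead_reckoning"     # last resort: drift accumulates from here
```

The point isn’t the code, it’s the coverage: the failure mode of one sensor is close to the success mode of the other.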
I haven’t even started considering obstacle avoidance. This is another area where ROS seems to have more capability than Ardupilot. But that’s a can of worms for another day. Let’s build a robot that can actually mow my lawn before we worry about it crashing into my sprinkler system well again.
Can anyone speak to the accuracy of these thoughts? Any ROS users out there willing to chime in?