As members of the Sense-South project, we are glad to announce that our proposal has been accepted for funding by the IRD. The project targets innovative sensors and IoT telecommunication networks for environmental monitoring in southern countries. The consortium gathers 27 partners from 4 countries (Cameroon, France, Senegal, Vietnam) spread over 3 continents (Africa, Asia, Europe).

Recently, we released several ROS packages for multi-robot exploration, including:

  1. explore_multirobot (http://wiki.ros.org/explore_multirobot): a multi-robot version of the explore package.
  2. map_merging (http://wiki.ros.org/map_merging): merges multiple maps, with knowledge of the initial relative positions of the robots.
  3. tf_splitter (http://wiki.ros.org/tf_splitter): decomposes the /tf topic into multiple ones.
  4. pose_publisher (http://wiki.ros.org/pose_publisher): provides the current position and orientation of the robot in the map.

These packages have been tested in ROS Groovy. However, Groovy has reached its end of life, and no documentation or release jobs are running for it anymore. We will test the packages on more recent ROS distributions and update our wiki accordingly.

In traditional robot behavior programming, the edit-compile-simulate-deploy-run cycle creates a large mental disconnect between program creation and the eventual robot behavior. This significantly slows down behavior development because there is no immediate mental connection between the program and the resulting behavior. Live programming makes the development cycle extremely tight, realizing such an immediate connection. In our work on programming ROS robots in a more dynamic fashion through PhaROS, we have experimented with the Live Robot Programming language. This has given rise to a number of requirements for such live programming of robots. In this text we introduce these requirements and illustrate them using an example robot behavior.

This presentation was given at the DSLRob workshop, held as part of the SIMPAR 2014 conference (Bergamo, Italy).

In this tutorial we will show you how to program robot behaviour using Live Robot Programming (LRP) over PhaROS.

Setup

  1. Follow steps 1 to 4 of this post.
  2. Create a ROS node that consumes /kompai2/pose and /kompai/scan and publishes to /command_velocity. To do this, just execute:

    LrpharosPackage uniqueInstance
    
  3. Create an instance of the RobulabBridge class

    RobulabBridge uniqueInstance
    
  4. To make sure that everything is fine, inspect the instance of RobulabBridge and check that its instance variable laserData is not nil and that its values change over time (see the playground snippet after this list).
  5. Open the LRP UI by right-clicking the World and selecting Live Robot Programming.
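
As an alternative to the inspector in step 4, you can check the laser data from a playground. This is a minimal sketch using Pharo's reflective instVarNamed: accessor (laserData is an instance variable of RobulabBridge):

    | bridge |
    bridge := RobulabBridge uniqueInstance.
    "Should not be nil, and should change between evaluations"
    Transcript show: (bridge instVarNamed: 'laserData') printString; cr.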

Stop when an obstacle is detected

  1. Now we can start writing the behavior. First we need some variables: robulab, to manage the robot, and some constants: f_vel for the linear velocity, t_vel for the angular velocity, and min_distance for the minimum allowed distance between the robot and an obstacle.

    (var robulab := [RobulabBridge uniqueInstance])
    (var min_distance := [0.5])
    (var f_vel := [0.25])
    (var t_vel := [0.5])
    
  2. We define the state machine called Tito.
    What we want is for the robot to go forward unless there is an obstacle in front of it, in which case it should stop and turn to avoid it.
    This can be modelled abstractly as two states: forward and stop.

    (machine Tito
        ;; States
        (state forward
            (onentry [robulab value forward: f_vel value])
        )
        (state stop
            (onentry [robulab value stop])
        )
    
        ;; Transitions
        (on obstacle forward -> stop t-avoid)
        (on noObstacle stop -> forward t-forward)
    
        ;; Events
        (event obstacle [robulab value isThereAnObstacle: min_distance value])
        (event noObstacle [(robulab value isThereAnObstacle: min_distance value) not])
    )
    
  3. Finally, to run it, just start the machine in the forward state:

    (spawn Tito forward)

  4. The robot should move forward and stop when it detects an obstacle.
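
If you ever need to halt the robot by hand, you can send the same stop message the stop state uses, directly from a playground:

    RobulabBridge uniqueInstance stop.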

Avoiding obstacles

Let’s add an avoidance behavior. A simple one is to turn until the robot detects that there is no obstacle, and then go forward again.
A simple behavior that meets this requirement is:

  • If the obstacle is on the left side of the front: turn right.
  • If the obstacle is on the right side of the front: turn left.

RobulabBridge provides two methods to detect obstacles on the left and right parts of the front of the robot: RobulabBridge>>isThereARightObstacle: and RobulabBridge>>isThereALeftObstacle:.
The idea is then to turn left if there is an obstacle at the front-right, or turn right if there is an obstacle at the front-left.
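
Before wiring these detectors into the machine, you can try them out interactively in a playground. A minimal sketch, reusing the bridge instance and the 0.5 threshold we stored in min_distance:

    | robulab |
    robulab := RobulabBridge uniqueInstance.
    "Both expressions answer a Boolean; re-evaluate while moving an object in front of the robot"
    Transcript show: (robulab isThereARightObstacle: 0.5) printString; cr.
    Transcript show: (robulab isThereALeftObstacle: 0.5) printString; cr.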

  1. Add the following states

            (state turnLeft
                (onentry [robulab value turn: t_vel value])
            )
    
            (state turnRight
                (onentry [robulab value turn: t_vel value negated])
            )
    
  2. Add the corresponding transitions

            (on rightObstacle stop -> turnLeft t-lturn)
            (on leftObstacle stop -> turnRight t-rturn)
            (on noObstacle turnLeft -> stop t-tlstop)
            (on noObstacle turnRight -> stop t-trstop)
    
  3. And add the events

            (event rightObstacle [robulab value isThereARightObstacle: min_distance value])
            (event leftObstacle [robulab value isThereALeftObstacle: min_distance value])
    
  4. Now the robot will turn to avoid obstacles and resume going forward once the path is clear. The complete machine is shown below.
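
For reference, here is the complete machine after these additions, consolidating the snippets above into one listing:

    (var robulab := [RobulabBridge uniqueInstance])
    (var min_distance := [0.5])
    (var f_vel := [0.25])
    (var t_vel := [0.5])

    (machine Tito
        ;; States
        (state forward
            (onentry [robulab value forward: f_vel value])
        )
        (state stop
            (onentry [robulab value stop])
        )
        (state turnLeft
            (onentry [robulab value turn: t_vel value])
        )
        (state turnRight
            (onentry [robulab value turn: t_vel value negated])
        )

        ;; Transitions
        (on obstacle forward -> stop t-avoid)
        (on noObstacle stop -> forward t-forward)
        (on rightObstacle stop -> turnLeft t-lturn)
        (on leftObstacle stop -> turnRight t-rturn)
        (on noObstacle turnLeft -> stop t-tlstop)
        (on noObstacle turnRight -> stop t-trstop)

        ;; Events
        (event obstacle [robulab value isThereAnObstacle: min_distance value])
        (event noObstacle [(robulab value isThereAnObstacle: min_distance value) not])
        (event rightObstacle [robulab value isThereARightObstacle: min_distance value])
        (event leftObstacle [robulab value isThereALeftObstacle: min_distance value])
    )

    (spawn Tito forward)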

Note

In updated versions of LRP it is no longer necessary to send value to a variable.
For example,

    (onentry [robulab value turn: t_vel value negated])

becomes

    (onentry [robulab turn: t_vel negated])

making it more readable.