The Road to RTT 2.0

This chapter collects all information about the migration to RTT 2.0. Nothing here is final; it is a scratch book to get us there. There are talk pages to discuss the contents of these pages.

These are the major work areas:

  • New Data Flow API, proposed by S. Joyeux
  • Streamlined Execution Flow API, proposed by P. Soetens (RTT::Message)
  • Full distribution support and cleanup (Events in CORBA)
  • Alternative Data Flow transport layer (non blocking).
  • Small tools for interacting with Components

If you want to contribute, you can post your comments on the following wiki pages. They will (hopefully) be more concise and straightforward than the developers forum.

  • Which weaknesses have you detected in RTT?
  • Which features would you like to have in RTT 2.0?

These items are worked out on separate Wiki pages.

RTT and OCL 2.0 have been merged on the master branches of all official git repositories.

Stable releases are on separate branches, for example toolchain-2.0.

Goals for RTT 2.0

The sections below formulate the major goals which RTT 2.0 wishes to attain.

Simplicity

The Real-Time Toolkit shouldn't be in the way of building complex applications; instead, it should make building them easier. We're improving on different fronts to make it simpler to use for both beginners and experienced power users.

API: user oriented

The API is clearly separated into a public (RTT user) and a private (RTT internal) part. The number of concepts is reduced, and a sane default is chosen where alternatives are possible. Policies allow users to deviate from the default behavior.

Tooling: enhancing the experience

The RTT is a very extensible library. When users require an extension, they don't need to write much or any additional code. Tools assist in generating helper libraries for adding user types (type plugins) or user interfaces (service plugins) to the RTT. The generated code is readable, understandable and documented. If required, it can be overridden by hand-written code, such that tools still in development do not block user development.

Component model: components are simple

RTT 2.0 components are simple to understand and explain. In essence they are stateful input/output systems that offer services to supervisors.

The input/output is offered by means of port based communication between data processing algorithms. An input port receives data, an output port sends data. The algorithms in the component define the transformation from input to output.

Service based communication offers operations such as configuration or task execution. A component always specifies if a service is provided or requested. This allows run-time dependency and system state checking, but also automatic connection/disconnection management which is important in distributed environments.

Components are stateful. They don't just start processing data right away. They can validate their preconditions, be queried for their current state and be started and stopped in a controlled manner. Although there is a standard state machine in each component that regulates these transitions, users can extend these without limitations.
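
As a concrete illustration, here is a minimal sketch of such a component using the RTT 2.x C++ API; the class, port and member names are made up, but the hook structure and port calls follow that API:

#include <rtt/TaskContext.hpp>
#include <rtt/Port.hpp>
 
class Scaler : public RTT::TaskContext
{
    RTT::InputPort<double>  in;   // receives data
    RTT::OutputPort<double> out;  // sends data
    double gain;
public:
    Scaler(const std::string& name)
        : RTT::TaskContext(name), gain(1.0)
    {
        this->addPort("in", in);
        this->addPort("out", out);
    }
    bool configureHook() { return gain > 0.0; }      // validate preconditions
    bool startHook()     { return in.connected(); }  // refuse to start when unconnected
    void updateHook()                                // the input-to-output transformation
    {
        double sample;
        if ( in.read(sample) == RTT::NewData )
            out.write( gain * sample );
    }
    void stopHook()      { /* controlled shut-down */ }
};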

Acceptable Upgrade Path

The first users of RTT 2.0 will be current users, seeing solutions for problems they have today. The upgrade path will be documented and assistive tools will be provided. Whenever possible, backwards compatibility is maintained.

Interoperability

The field knows a number of successful robotics frameworks, languages and operating systems. RTT 2.0 is designed to allow bridges to these.

Other frameworks

RTT 2.0 can easily interoperate with other robotics frameworks that provide the concepts of port based data flow communication and functional services.

Other languages

RTT 2.0 offers the 1.x real-time scripting language, but in addition binds to other languages as well. A real-time language binding to Lua is offered. Non-real-time bindings are offered over a language-independent CORBA interface.

Other operating systems

RTT 2.0 runs on Linux, RTAI, Xenomai, Mac OS X and Windows. These are the main operating systems of the current advanced robotics domain.

Robustness

Complex systems are hard to start up, shut down or recover when components become dysfunctional. RTT 2.0 aids the system architect in maintaining a robust machine controller, even in distributed setups.

Service oriented architectures

Components are aware of the available services and have a chance to execute fall-back scenarios when these disappear. They are notified in time such that they can take proper action, and recover and resume when a service becomes available again. Local and global supervisors keep track of these state changes, such that such mechanisms do not need to be hard-coded into each component.

Separation between real-time and not real-time processes

A real-time component cannot be disturbed by the addition of a lower-priority communicating peer. This allows systems to be built incrementally around a hard real-time core. The RTT decouples the communication between sender and receiver and allows real-time data transports to assure delivery.

Contribute! Which weaknesses have you detected in RTT?

INTRODUCTION

You can edit this page to post your contribution to Orocos RTT 2.0. Please keep your comment concise and clear: if you want to launch a long debate, you can still use the Developers Forum! Short examples can help other people understand what you mean.


A) According to section 4 of the Orocos Component Builder's Manual, the callback of a synchronous event is executed inside the thread of the event's emitter. Imagine that TaskA emits an event and TaskB, which subscribes to it synchronously, has a handler with an infinite loop: the behavior of TaskA would be jeopardized (see the sketch after this list). Keep in mind that:
  • TaskA has no way of knowing what will happen inside TaskB's callback.
  • It can't prevent TaskB from connecting synchronously.
  • Once blocked, there is nothing it can do.
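
A minimal sketch of this scenario, assuming the RTT 1.x Event API (a synchronous connect() and operator() to emit); the class and callback names are made up for illustration:

#include <rtt/Event.hpp>
#include <boost/bind.hpp>
#include <unistd.h>
 
// Stand-in for the offending subscriber: its callback never returns.
struct TaskB_stub
{
    void onSample() { for (;;) usleep(1000); }
};
 
int main()
{
    RTT::Event<void(void)> sampleReady("sampleReady"); // owned and emitted by TaskA
    TaskB_stub taskB;
 
    // TaskB subscribes *synchronously*: the callback will run in the emitter's thread.
    RTT::Handle h = sampleReady.connect( boost::bind(&TaskB_stub::onSample, &taskB) );
 
    // When TaskA emits the event from its own thread, it gets stuck inside
    // TaskB's callback, even though TaskA itself did nothing wrong.
    sampleReady();
 
    return 0; // never reached
}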

B) What would happen if a TaskContext is attached to a PeriodicActivity, but internally it was designed to run as a NonPeriodicActivity? What would happen if a sensor with a refresh rate of 10 Hz is read from a component deployed at 1000 Hz? Maybe the Activity of the TC should be defined by the TC itself, even if this would mean having it hard-coded in the TC.
C) Because of single thread serialization, a sleep in one task can affect other tasks that are neither aware of it nor responsible for it. See the source code in the sub page.

Problems with single thread serialization

Because of single thread serialization, something unexpected for the programmer happens.

1) You expect TaskA to be independent from TaskB, but it isn't. If you think it is a problem of computer resources, change the activity frequency of one of the two tasks.

Suggestion: A) let the programmer choose whether single thread serialization is used or not. B) keep a one-thread-per-activity policy by default. It will help less experienced users avoid common errors. Experienced users can decide to "unleash" the power of STS if they want to.

2) after the "block" for 0.5 seconds, the "lost cycles" are executed all at once. In other words, updateHook is called 5 times in a row. This may have very umpredictable results. It could be desirable for some applications (filter with data buffer) or catastrophic in other applications (motion control loop).

Suggestion: C) let the user decide whether the "lost cycles" of the PeriodicActivity need to be executed later or are definitively lost.

// Includes needed to compile this example against RTT 1.x:
#include <rtt/TaskContext.hpp>
#include <rtt/PeriodicActivity.hpp>
#include <rtt/TimeService.hpp>
#include <rtt/os/main.h>
#include <cstdio>
#include <unistd.h>
 
using namespace std;
using namespace RTT;
 
TimeService::ticks _timestamp;
double getTime() { return TimeService::Instance()->getSeconds(_timestamp); }
 
class TaskA
    : public TaskContext
{
protected:
     PeriodicActivity act1;
public:
 
    TaskA(std::string name)
    : TaskContext(name),
      act1(1, 0.10, this->engine() )
    {
     //Start the component's activity:
    this->start();
    }  
    void updateHook()
    {
    printf("TaskA  [%.2f] Loop\n", getTime());
    }
};
 
class TaskB
    : public TaskContext
{
protected:
    int num_cycles;
    PeriodicActivity act2;
public:
     TaskB(std::string name)
    : TaskContext(name),
      act2(2, 0.10, this->engine() )
    {
    num_cycles = 0;            
    //Start the component's activity:
    this->start();
    }   
    void updateHook()
    {
    num_cycles++;
    printf("TaskB  [%.2f] Loop\n", getTime());
 
    // once every 20 cycles (2 seconds), a long calculation is done
    if(num_cycles%20 == 0)
    {
        printf("TaskB  [%.2f] before calling long calculation\n", getTime());
 
        // calculation takes longer than expected (0.5 seconds). 
        // it could be something "unexpected", desired or even a bug... 
        // it would not be relevant for this example.
        for(int i=0; i<500; i++) usleep(1000);
 
        printf("TaskB  [%.2f] after calling long calculation\n", getTime());
    }
    }
};
 
int ORO_main(int argc, char** argv)
{
    TaskA    tA("TaskA");
    TaskB    tB("TaskB");
 
    // notice: the tasks have not been connected; there is no relationship between them.
    // In the mind of the programmer, each of them is independent, because each has its own activity.
 
    // if the frequency of one of the two PeriodicActivities is changed, there isn't any problem, since they then run in 2 separate threads.
    getchar();
    return 0;
}

Contribute! Suggest a new feature to be included in RTT 2.0.

INTRODUCTION

Please be concise and provide a short example and your motivation to include it in RTT. First ask yourself:

  • "Am I the only beneficiary of this new feature?"
  • "Can this feature be obtained with a simple layer on the top of RTT ?"

If you answered "no" to both the questions and you have already debated the new future in the Developers forum, please post here your suggestion.

Create Reference Application Architectures

In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control, application configuration from a central database, or topic based data flow topologies.

1 Central Property Service (ROS like) This task sets up components such that they get the system-wide configuration from a dedicated property server. The property server loads an XML file with all the values and other components query these values. Advanced components can even extend the property server in places. A GUI is not included in this work package.

2 Universal Robot Controller (Using KDL, OCL, standard components) This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. A GUI is not included in this work package.

3 Topic based data flow (ROS and CORBA EventService like) A deployer can configure components such that their ports are connected to 'global' topics for sending and receiving. This is similar to what many existing frameworks do today and may demonstrate how compatibility with these frameworks can be accomplished.

4 GUI communication with Orocos How a remote GUI could connect to a running application.

Please add yours

Detailed Roadmap

These pages outline the roadmap for RTT-2.0 in 2009. We aim to have a release candidate by December 2009, with the release following in January 2010.

  • A work package is divided in tasks with deliverables.
  • All deliverables are public and are made public without delay.
  • All development is done in git repositories.
  • For each change committed to the local git repository, that change is committed to a public repository hosted at github.com within 24 hours.
  • For each task and at the end of each work package, all unit tests are expected to pass. In case additional unit tests are required for a work package, these are listed explicitly as deliverables.
  • The order of execution of tasks within a work package is suggestive and may differ from the actual order.
  • In case a task modifies the RTT API or structure, the task's deliverable implicitly includes adapting the following parts of OCL to those modifications: the CMake build system and the directories taskbrowser, deployment, ocl, hardware, reporting, helloworld, timer, doc, debian.
  • These changes are collected in the ocl-2.0 git repository.
  • When the form of a deliverable is 'Patch set', this is equivalent to one or more commits on the public git repository.

WP1 RTT Cleanup

This work package contains structural clean-ups for the RTT source code, such as the CMake build system, portability, and making the public interface slimmer and explicit. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community. This package is put up front so that early adopters only have to switch to the new code structure once, at the beginning, and all subsequent packages are executed in the new structure.

Links : (various posts on Orocos mailing lists)

Allocated Work : 15 days

Tasks:

1.1 Partition in name spaces and hide internal classes in subdirectories.

A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are meant for internal use only but are in fact never used.

Deliverable Title Form
1.1.1 Internal headers are in subdirectories Patch set
1.1.2 Internal classes are in nested namespaces of the RTT namespace Patch set

1.2 Improve CMake build system

Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.

Deliverable Title Form
1.2.1 Standardized on CMake 2.6 Patch set
1.2.2 Use CMake lists instead of strings Patch set
1.2.3 No more use of Linux specific include paths Patch set
1.2.4 Separate finding from using libraries for all RTT dependencies Patch set

1.3 Group user contributed code in rtt/extras.

This directory offers variants of implementations found in the RTT, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT API, these contributions are organized in a separate directory. Users are warned that these extras might not be of the same quality as native RTT classes.

Deliverable Title Form
1.3.1 Orocos rtt-extras directory Directory in RTT

1.4 Improve portability

Some GNU/GCC/Linux specific constructs have entered the source code, which makes maintenance and portability to other platforms harder. To structurally support other platforms, the code will be compiled with another (non-GNU) compiler and a build flag ORO_NO_ATOMICS (or similar) is added to exclude all compiler- and assembler-specific code and replace it with ISO-C/C++ or RTT-FOSI compliant constructs.

Deliverable Title Form
1.4.1 Code compiles on non-gnu compiler Patch set
1.4.2 Code compiles without assembler constructs Patch set

1.5 Default to activity with one thread per component

The idea is to provide each component with a robust default activity object which maps to exactly one thread. This thread can periodically execute or be non periodic. The user can switch between these modes at configuration or run-time.
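
As an illustration, a short sketch of switching such a default activity between non-periodic and periodic execution, using the Activity and setActivity calls as they exist in the released RTT 2.x (the component and the values are only examples):

#include <rtt/TaskContext.hpp>
#include <rtt/Activity.hpp>
#include <rtt/os/main.h>
 
int ORO_main(int argc, char** argv)
{
    RTT::TaskContext hello("hello");
 
    // Give the component its own thread; without a period it is non-periodic,
    // i.e. updateHook() only runs when the component is triggered.
    hello.setActivity( new RTT::Activity() );
    hello.start();
 
    // Switch to periodic execution (100 Hz) at run-time.
    hello.getActivity()->setPeriod( 0.01 );
 
    hello.stop();
    return 0;
}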

Deliverable Title Form
1.5.1 Generic Activity class which is by default present in every component. Patch set
1.5.2 Unit test for this class Patch set

1.6 Standardize on Boost Unit Testing Framework

Before the other work packages are started, the RTT must standardize on a unit test framework. Until now, this is the CppUnit framework. The more portable and configurable Boost UTF has been chosen for unit testing of RTT 2.0.

Deliverable Title Form
1.6.1 CppUnit removed and Boost UTF in place Patch set

1.7 Provide CMake macros for applications and components

When users want to build Orocos components or applications, they require flags and settings from the installed RTT and OCL libraries. A CMake macro which gathers these flags for compiling an Orocos component or application is provided. This is inspired by how ROS components are compiled.

Deliverable Title Form
1.7.1 CMake macro CMake macro file
1.7.2 Unit test that tests this macro Patch set

1.8 Allow lock-free policies to be configured

Some RTT classes use hard-coded lock-free algorithms, which may be in the way (due to resource restrictions) on some embedded systems. It should be possible to change the policy such that no lock-free algorithm is used in that class (cf. the 'strategy' design pattern). An example is the use of AtomicQueue in the CommandProcessor.

Deliverable Title Form
1.8.1 Allow to set/override lock-free algorithm policy patch

CMake Rework

This page collects all the data and links used to improve the CMake build system, such that you can find quick links here instead of scrolling through the forum.

Thread on Orocos-dev : http://www.orocos.org/node/1073 (in case you like to scroll)

CMake manual on how to use and create Findxyz macros : http://www.vtk.org/Wiki/CMake:How_To_Find_Libraries

List of many alternative modules : http://zi.fi/cmake/Modules/

An alternative solution for users of RTT and OCL is installing the Orocos-RTT-target-config.cmake macros, which serve a purpose similar to the pkgconfig .pc files: they accumulate the flags used to build the library. This may be a solution for Windows systems. Also, CMake suggests that .pc files are only 'suggestive' and that the standard CMake macros must still be used to fully capture and store all information about the dependency you're looking at.

Directories and namespace rework

The orocos/src directory reflects the /usr/include/rtt directory structure. I'll post it here from the user's point of view, so this is what she finds in the include dir:

Abbrevs: (N)BC: (No) Backwards Compatibility guaranteed between 2.x.0 and 2.y.0. Backwards compatibility is always guaranteed between 2.x.y and 2.x.z. In case of NBC, a class might disappear or change, as long as it is not a base class of a BC qualified class.

Directory Namespace BC/NBC Comments Header File list
rtt/*.hpp RTT BC Public API: maintains BC, a limited set of classes and interfaces. This is the most important list to get right. A header not listed in here goes into one of the subdirectories. Please add/complete/remove. TaskContext.hpp Activity.hpp SequentialActivity.hpp SlaveActivity.hpp DataPort.hpp BufferPort.hpp Method.hpp Command.hpp Event.hpp Property.hpp PropertyBag.hpp Attribute.hpp Time.hpp Timer.hpp Logger.hpp
rtt/plugin/*.hpp RTT::plugin BC All plugin creation and loading stuff. Plugin.hpp
rtt/types/*.hpp RTT::types BC All type system stuff (depends partially on plugin). Everything you (or a tool) need(s) to add your own types to the RTT. Toolkit.hpp ToolkitPlugin.hpp Types.hpp TypeInfo.hpp TypeInfoName.hpp TypeStream.hpp TypeStream-io.hpp VectorComposition.hpp TemplateTypeInfo.hpp Operators.hpp OperatorTypes.hpp BuildType.hpp
rtt/interface/*.hpp RTT::interface BC Most interfaces/base classes used by classes in the RTT namespace. ActionInterface.hpp, ActivityInterface.hpp, OperationInterface.hpp, PortInterface.hpp, RunnableInterface.hpp, BufferInterface.hpp
rtt/internal/*.hpp RTT::internal NBC Supportive classes that don't fit another category but are definitely not for users to use directly. ExecutionEngine.hpp CommandProcessor.hpp DataSource*.hpp Command*.hpp Buffer*.hpp Function*.hpp *Factory*.hpp Condition*.hpp Local*.hpp EventC.hpp MethodC.hpp CommandC.hpp
rtt/scripting/*.hpp RTT::scripting NBC Users should not include these directly.
rtt/extras/*.hpp RTT::extras BC Alternative implementations of certain interfaces in the RTT namespace. May contain stuff useful for embedded or other specific use cases.
rtt/dev/*.hpp RTT::dev BC Minimal Device Interface, As-is in RTT 1.x AnalogInInterface.hpp AnalogOutInterface.hpp AxisInterface.hpp DeviceInterface.hpp DigitalInput.hpp DigitalOutput.hpp EncoderInterface.hpp PulseTrainGeneratorInterface.hpp AnalogInput.hpp AnalogOutput.hpp CalibrationInterface.hpp DigitalInInterface.hpp DigitalOutInterface.hpp DriveInterface.hpp HomingInterface.hpp SensorInterface.hpp
rtt/corba/*.hpp RTT::corba BC CORBA transport files. Users include some headers, some not. Should this also have the separation between rtt/corba and rtt/corba/internal ? I would rename the IDL modules to RTT::corbaidl in order to clear out compiler/doxygen confusion. Also note that current 1.x namespace is RTT::Corba.
rtt/property/*.hpp RTT::property BC Formerly 'rtt/marsh'. Marshalling and loading classes for properties. CPFDemarshaller.hpp CPFDTD.hpp CPFMarshaller.hpp
rtt/dlib/*.hpp RTT::dlib BC As-is static distribution library files. They are actually a form of 'extras'. Maybe they belong in there... DLibCommand.hpp
rtt/boost/*.hpp boost ? We'll try to get rid of this in 2.x
rtt/os/*.hpp RTT::OS BC As-is. (Rename to RTT::os ?) Atomic.hpp fosi_internal_interface.hpp MutexLock.hpp rt_list.hpp StartStopManager.hpp threads.hpp CAS.hpp MainThread.hpp oro_allocator.hpp rtconversions.hpp rtstreambufs.hpp Semaphore.hpp Thread.hpp Time.hpp fosi_internal.hpp Mutex.hpp OS.hpp rtctype.hpp rtstreams.hpp ThreadInterface.hpp
rtt/targets/* - BC We need this for allowing to install multiple -dev versions (-gnulinux+-xenomai for example) in the same directory. rtt-target.h <target>

Will go: 'rtt/impl' and 'rtt/boost'.

Open question to be answered: Interfaces like ActivityInterface, PortInterface, RunnableInterface etc. -> Do they go into rtt/, rtt/internal or maybe rtt/interface ?

!!! PLEASE add a LOG MESSAGE when you edit this wiki to motivate your edit !!!

WP2 Data Flow API and Implementation Improvement

Context: Because the current data flow communication primitives in RTT limit the reusability and potential implementations, Sylvain Joyeux proposed a new, but fairly compatible, design. It is intended that this new implementation can almost transparently replace the current code base. Additionally, this package extends the DataFlow transport to support out-of-band real-time communication using Xenomai IPC primitives.

Link : http://www.orocos.org/wiki/rtt/rtt-2.0/dataflow

Estimated work : 45 days for a demonstrable prototype.

Tasks:

2.1 Review and merge proposed code and improve/fix where necessary

Sylvain's code is clean and of a high standard; however, it has not been unit-tested yet and needs a second look.

Deliverable Title Form
2.1.1 Code reviewed and imported in RTT-2.0 branch Patch set
2.1.2 Unit tests for reading, writing, connecting and disconnecting in-process communication Patch set

2.2 Port CORBA type transport to new code base

Sylvain's code has initial CORBA support. The plan is to cooperate on the implementation and offer the same or better features as the current CORBA implementation does. Also the DataFlowInterface.idl will be cleaned up to reflect the new semantics.

Deliverable Title Form
2.2.1 CORBA enabled data flow between proxies and servers which uses the RTT type system merged on RTT-2.0 branch Patch set

2.3 Allow Real-Time data port access with CORBA Proxy

A disadvantage of the current data port is that ports connected over CORBA may cause stalls when reading or writing them. The Proxy or Server implementation should, if possible, do the communication in the background and not let the other component's task block.

Deliverable Title Form
2.3.1 Event driven network-thread allocated in Proxy code to receive and send data flow samples Patch set

2.4 Reduce footprint of data connections

The current lock-free data connections allocate memory for allowing access by 16 threads, even if only two threads connect. One solution is to let the allocated memory grow with the number of connections, such that no more memory is allocated than necessary.

Deliverable Title Form
2.4.1 Let lock-free data object and buffer memory grow proportional to connected ports Patch set

2.5 Out-of-band data flow review

It is often argued that CORBA is excellent for setting up and configuring services, but not for continuous data transmission. There are for example CORBA standards that only mediate setup interfaces but leave the data communication connections up to the implementation. This task looks at how ROS and other frameworks set up out-of-band data flow and how such a client-server architecture can be added to RTT/CORBA.

Deliverable Title Form
2.5.1 Report on out of band implementations and similarities to RTT. Email on Orocos-dev

2.6 Create automatic marshalling of user types

Since the out-of-band communication will require objects to be transformed to a byte stream and back, a marshalling system must be in place. The idea is to let the user specify his data types as IDL structs (or equivalent) and to generate a toolkit from that definition. The toolkit will re-use the generated CORBA marshalling/demarshalling code to provide this service to the out-of-band communication channels.

Deliverable Title Form
2.6.1 Marshalling/demarshalling in toolkits Patch set
2.6.2 Tool to convert data specification into toolkit Executable

2.7 Create out-of-band data flow communication

The first communication mechanism to support is data flow. This will be demonstrated with a Xenomai RTPIPE implementation (or equivalent) which is set up between a network of components.

Deliverable Title Form
2.7.1 Real-time inter-process communication of data flow values on Xenomai Patch set
2.7.2 Unit test for setting up, connecting and validating Real-Time properties of data ports in RT IPC setting. Patch set

2.8 Update documentation and Examples

In keeping with modern programming practice, the unit tests should always exercise the implementation and pass. Documentation and examples are provided for the users and complement the unit tests.

Deliverable Title Form
2.8.1 Unit tests updated Patch set
2.8.2 rtt-examples, rtt-exercises updated Patch set
2.8.3 orocos-corba manual updated Patch set

2.9 Organize and Port OCL deployment, reporting and taskbrowsing

RTT 2.0 data ports will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new data ports. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.

Deliverable Title Form
2.9.1 Deployment, Reporting and TaskBrowser updated Patch set

WP3 Method / Message / Event Unified API

Context: Commands are too complex for both users and framework/transport implementers. However, current day-to-day use confirms the usability of an asynchronous and thread-safe messaging mechanism. It was proposed to reduce the command API to a message API and unify the synchronous / asynchronous relation between methods and messages with synchronous / asynchronous events. This will lead to simpler implementations, simpler usage scenarios and reduced concepts in the RTT.

The registration and connection API of these primitives also falls under this WP.

Link: http://www.orocos.org/wiki/rtt/rtt-2.0/executionflow

Estimated work : 55 days for a demonstrable prototype.

Tasks:

3.1 Provide a real-time memory allocator for messages

In contrast to commands, each message invocation leads to a new message sent to the receiver. This requires heap management from a real-time memory allocator, such as the highly recommended TLSF (Two-Level Segregate Fit) allocator, which must be integrated in the RTT code base. If the RTOS provides one, the native RTOS memory allocator is used instead, as is the case in Xenomai.

Deliverable Title Form
3.1.1 Real-time allocation integrated in RTT-2.0 Patch set

3.2 Message implementation

Unit test and implement the new Message API for use in C++ and scripts. This implies a MessageProcessor (replaces CommandProcessor), a 'messages()' interface and using it in scripting.

Deliverable Title Form
3.2.1 Message implementation for C++ Patch set
3.2.2 Message implementation for Scripting Patch set

3.3 Demote the Command implementation

Commands (as they are now) become second-class because they no longer appear in the interface, being replaced by messages. Users may still build Command objects at the client side, both in C++ and in scripting. The need for, and the feasibility of, identical functionality with today's Command objects is yet to be investigated.

Deliverable Title Form
3.3.1 Client side C++ Command construction Patch set
3.3.2 Client side scripting command creation Patch set

3.4 Unify the C++ Event API with Method/Message semantics

Events today duplicate much of the method/command functionality, because they also allow synchronous / asynchronous communication between components. It is the intention to replace much of the implementation with interfaces to methods and messages and let events cause Methods to be called or Messages to be sent. This change will remove the EventProcessor, which will be replaced by the MessageProcessor. This will greatly simplify the event API and semantics for new users. Another change is that exposing an Event in the component's interface can only be done by registering it as a method or message.

Deliverable Title Form
3.4.1 Connection of only Method/Message objects to events Patch set
3.4.2 Adding events as methods or messages to the TaskContext interface. Patch set

3.5 Allow event delivery policies

Adding a callback to an event puts a burden on the event emitter. The owner of the event must be allowed to impose a policy on the event such that this burden can be bounded. One such policy can be that all callbacks must be executed outside the thread of the owning component. This task is to extend the RTT such that it contains such a policy.

Deliverable Title Form
3.5.1 Allow to set the event delivery policy for each component Patch set

3.6 Allow to specify requires interfaces

Today one can connect data ports automatically because both the provided and the required data are presented in the interface. This is not so for methods, messages or events. This task makes it possible to describe which of these primitives a component requires from a peer, such that they can be automatically connected during application deployment. The required primitives are grouped in interfaces, such that they can be connected as a group from provider to requirer.

Deliverable Title Form
3.6.1 Mechanism to list the requires interface of a component Patch set
3.6.2 Feature to connect interfaces in deployment component. Patch set

3.7 Improve and create Method/Message CORBA API

With the experience of the RTT 1.0 IDL API, the existing API is improved to reduce the danger of memory leaks and allow easier access to Orocos components when using only the CORBA IDL. The idea is to remove the Method and Command interfaces and change the create methods in CommandInterface and MethodInterface to execute functions.

Deliverable Title Form
3.7.1 Simplify CORBA API Patch set

3.8 Port new Event mechanism to CORBA

Since the new Event mechanism will seamlessly integrate with the Method/Message API, a CORBA port, which allows remote components to subscribe to component events must be straightforward to make.

Deliverable Title Form
3.8.1 CORBA idl and implementation for using events. Patch set

3.9 Update documentation, unit tests and Examples

In keeping with modern programming practice, the unit tests should always exercise the implementation and pass. Documentation and examples are provided for the users and complement the unit tests.

Deliverable Title Form
3.9.1 Unit tests updated Patch set
3.9.2 rtt-examples, rtt-exercises updated Patch set
3.9.3 Orocos component builders manual updated Patch set

3.10 Organize and Port OCL deployment, taskbrowsing

The new RTT 2.0 execution API will require a coordinated action from all OCL component maintainers to port and test the components to OCL 2.0 in order to use the new primitives. This work package is only concerned with the upgrading of the Deployment, Reporting and TaskBrowser components.

Deliverable Title Form
3.10.1 Deployment, Reporting and TaskBrowser updated Patch set

WP4 Create Reference Application Architecture

In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control. This work package fleshes out that example.

Links : (various posts on Orocos mailing lists)

Estimated Work : 5 days for the application architecture with documentation

Tasks:

4.1 Universal Robot Controller (Using KDL, OCL, standard components)

This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. Both axes and end effector can be controlled in position and velocity mode. A state machine switches between these modes. A GUI is not included in this work package.

Deliverable Title Form
4.1.1 Robot Controller example tar ball

Full distribution support

There are two major changes required in the CORBA IDL interface.

  1. A new interface for attaching callbacks to events in the component
  2. A rewrite of the
    1. DataFlowInterface,
    2. MethodInterface,
    3. CommandInterface / MessageInterface.

The first point will be relatively straightforward, as events attach methods and messages, which will be represented in the CORBA interface as well.

The DataFlowInterface will be adapted to reflect the rework on the new Data flow api. Much will depend on the out-of-band or through-CORBA nature of the data flow.

The MethodInterface should no longer work with 'session' objects, and all calls are related to the main interface, such that a method object can be freed after invocation.

The CommandInterface might be removed, in case it can be 'reconstructed' from lower level primitives. A MessageInterface will replace it which allows sending messages, analogous to the existing MethodInterface.

The 'ControlTask' interface will remain mostly as is, extended with events() and messages().

RTT 2.0.0-beta1

This page is for helping you understand what's in RTT/OCL 2.0.0-beta1 release and what's not.

Caveats

First the bad things:
  • Do not use this release on real machines !
  • There are *no* guarantees for real-time operation yet.
  • CORBA transport does not work yet and needs to change drastically
  • The API is 'pretty' stable, but the transport rework might have influences. This release will certainly not be binary compatible with the final 2.0.0 release.
  • OCL has not completely caught up, and also needs to be restructured further into a leaner repository.
  • Do not manually upgrade your code ! Use the rtt2-converter script found on this site first.
  • RTT::Command is gone ! See Replacing Commands
  • RTT::Event is gone ! See Replacing Events
  • Reacting to Operations (former Event) is not yet possible in state machine scripts.
  • RTT::DataPort,BufferPort etc are gone ! See RTT 2.0 Data Flow Ports
  • In case you have patches on the orocos-rtt source tree, all files have moved drastically. First all went into rtt/ instead of src/. Next, all non-API files went into subdirectories.

For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0

Missing things

The final release will have these, but this one has not:
  • A plugin system in RTT to load types (type kits) and plugins (like scripting, marshalling,...)
  • A tool/workflow to create type kits automatically
  • A working CORBA transport
  • RT-Logging framework
  • Service deployment in the DeploymentComponent
  • Misc fixes/minor feature additions and better documentation
  • Repackaged OCL tree. Especially, in OCL, only TaskBrowser, Reporting and DeploymentComponent are fully operational.
  • Debian packages have not been updated yet
  • A couple of unit tests still fail. You should see at the end:

88% tests passed, 3 tests failed out of 25
 
The following tests FAILED:
          6 - mqueue-test (Failed)
         19 - types_test (Failed)
         22 - function_test (Failed)

New Features

Updated Examples and Documentation

Most documentation (manuals and online API reference) is up-to-date, but sometimes a bit rough or lacking illustrations. The rtt-exercises have been upgraded to support RTT 2.0 API.

New style Data Ports

The data flow ports have been reworked to allow far more flexible component development and system deployment. Details are at RTT 2.0 Data Flow Ports. Motivation can be found at Redesign of the data flow interface

Improved TaskBrowser

Allows you to declare new variables, shows what a component requires and provides and if these interfaces are connected.

Improved Deployment

Specify port connection properties using XML, connect provided to required services.

Improved Reporting

Data flow logs are now sample based, such that you can trace the flow and state of connections.

Method vs Operation

The RTT 1.x Method, Command and Event APIs have been removed and replaced by Method/Operation. Details are at Methods vs Operations

Real-Time Allocation

RTT includes a copy of the TLSF library for supporting places where real-time allocation is beneficial. The RT-Logger infrastructure and the Method/Operation infrastructure take advantage of this. Normal users won't use this feature directly.

A real-time MQueue transport

Data flow between processes is now possible in real-time. The real-time MQueue transport allows transporting data between processes using POSIX MQueues, and also works in Xenomai.

For each type to be transported using the MQueue transport, a separate transport typekit must be available (this may change in the final 2.0 release).

Simplified API

Creating a component has been greatly simplified and the amount of code to write reduced to the absolute minimum. Documentation of operations or ports is now optional. Attributes and properties can be added using a plain C++ class variable; the need to specify templates has been removed in some places.
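
For instance, a sketch of the reduced boiler-plate, using the addProperty/addAttribute/addPort calls of the 2.0 API (the component and its members are made up):

#include <rtt/TaskContext.hpp>
#include <rtt/Port.hpp>
 
class Camera : public RTT::TaskContext
{
    double exposure;              // plain C++ members...
    int    frame_counter;
    RTT::OutputPort<int> frames;
public:
    Camera(const std::string& name)
        : RTT::TaskContext(name), exposure(0.01), frame_counter(0)
    {
        // ...become properties, attributes and ports without template clutter.
        this->addProperty("exposure", exposure).doc("Exposure time [s]");
        this->addAttribute("frame_counter", frame_counter);
        this->addPort("frames", frames);   // documentation is optional
    }
};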

Services

Component interfaces are now defined as services, and a component can 'provide' or 'require' a service. These can be used to connect methods to operations at run-time without the lookup code that was previously necessary. For example:
 Method<bool(int,int)> setreso;
 setreso = this->getPeer("Camera")->getMethod<bool(int,int)>("setResolution");
 if ( setreso.ready() == false )
    log(Error) << "Could not find setResolution Method." <<endlog();
 else
    setreso(640,480);
becomes:
 Method<bool(int,int)> setreso("setResolution");
 this->requires("Camera")->addMethod(mymethod);
 
 // Deployment component will setup setResolution for us...
 setreso(640,480);

RTT 2.0.0-beta2

This page is for helping you understand what's in RTT/OCL 2.0.0-beta2 release and what's not.

See the RTT 2.0.0-beta1 page for the notes of the previous beta, these will not be repeated here.

Caveats

Like in any beta, first the bad things:
  • Do not use this release on real machines !
  • There are *no* guarantees for real-time operation yet.
  • The API is 'pretty' stable, but the type system rework might have influences, especially on RTT 2.0 typekits (aka RTT 1.0 toolkits). This release will certainly not be binary compatible with the final 2.0.0 release.
  • Do not manually upgrade your code ! Use the rtt2-converter script found on this site first.
  • Reacting to Operations (former Event) is not yet possible in state machine scripts.
  • This release requires CMake 2.6-patch3 or later

For all upgrade-related notes, see Upgrading from RTT 1.x to 2.0

Missing things

The final release will have these, but this one has not:
  • A plugin system in RTT to load types (type kits) and plugins (like scripting, marshalling,...)
  • A tool/workflow to create type kits automatically
  • RT-Logging framework
  • Service deployment in the DeploymentComponent
  • Misc fixes/minor feature additions and better documentation
  • Repackaged OCL tree. Especially, in OCL, only TaskBrowser, Reporting and DeploymentComponent are fully operational.
  • Debian packages have not been updated yet
  • A couple of unit tests still fail. You should see at the end:

97% tests passed, 1 tests failed out of 31
 
The following tests FAILED:
         24 - types_test (Failed)
If other tests fail, this may be because of too strict timing checks, but you can report them anyway on the orocos-dev mailing list or rtt-dev website forum.

New Features

See the RTT 2.0.0-beta1 page for the features added in beta1. Most features below relate to the CORBA transport.

Feature compatibility with RTT 1.x

This release is able to build the same type of applications as RTT 1.x. It may be rough around the edges, but no big chunks of functionality (or unit tests) have been left out.

Updated CORBA IDL

Want to use an Orocos component from another language or computer ? The simplified CORBA IDL gives quick access to all properties, operations and ports.

Transparent remote or inter-process communication

The corba::TaskContextProxy and corba::TaskContextServer allow fully transparent communication between components, providing the same semantics as in-process communication. The full TaskContext C++ API is available in IDL.

Improved memory usage and reduced bandwidth/callbacks

Calling an operation, setting a parameter, all these tasks are done with a single call from client to server. No callbacks from server to client are done as in RTT 1.x. This saves a lot of memory on both client and server side and eliminates virtually all memory leaks related to the CORBA transport.

Adapted OCL components

TaskBrowser and (Corba)Deployment code is fully operational and feature-equivalent to RTT 1.x. One can deploy Orocos components using a CORBA deployer and connect to them using other deployers or taskbrowsers.

RTT and OCL Cleanup

This work package claims all remaining proposed clean-ups for the RTT source code. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community.

1 Partition in name spaces and hide internal classes in subdirectories. A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are meant for internal use only but are in fact never used.

2 Improve CMake build system Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.

3 Group user contributed code in rtt-extras and ocl-extras packages. These packages offer variants of implementations found in the RTT and OCL, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT and OCL APIs, these contributions are organized in separate packages. Other users are warned that these extras might not be of the same quality as native RTT and OCL classes.

Real time logging

Recent ML posts indicate the desire for a real-time (RT) capable logging framework, to supplement/replace the existing non-RT RTT::Logger. See http://www.orocos.org/forum/rtt/rtt-dev/logging-replacement for details.

NB Work in progress. Feedback welcomed

See https://www.fmtc.be/bugzilla/orocos/show_bug.cgi?id=708 for progress and patches.

Initial requirements

Approximately in order of priority (in my mind at least)

0) Completely disable all logging

1) Able to log variable-sized string messages

2) Able to log from non-realtime and realtime code

3) Minimize (as reasonably practicable) the effect on runtime performance (eg minimize CPU cycles consumed)

4) Support different log levels

5) Support different "storage mediums" (ie able to log messages to file, to socket, to stdout)

Except for 3, and the "realtime" part of 2, the above is the functionality of the existing RTT::Logger

6) Support different log levels within a deployed system (ie able to log debug in one area, and info in another)

7) Support multiple storage mediums simultaneously at runtime

8) Runtime configuration of storage mediums and logging levels

9) Allow the user to extend the possible storage mediums at deployment-time (ie user can provide new storage class)

Optional IMHO

10) Support nested diagnostic contexts [1] [2] (a more advanced version of the Logger::In() that RTT's logger currently supports)

Logging framework

I see 3 basic choices, all of which are log4j ports (none of which support real-time right now)
  1. log4cplus - does not appear to be maintained.
  2. log4cxx - Apache license, well maintained, large, up to date functionality, heavy dependencies (APR, etc)
  3. log4cpp - LGPL license, moderately maintained, medium size, fairly up to date (re log4j and logback), no dependencies

I prefer 3) as it has the basic functionality we need, is license compatible, has a good design, and we've been offered developer access to modify it. I also think modifying a slightly less-well-known framework will be easier than getting some of our mods into log4cxx.

NOTE on the ML I was using the logback term logger, but log4cpp calls it a category. I am switching to category from now on!

Preliminary design

Add TLSF to RTT (a separate topic).

Fundamentally, replace std::string, wrap one class, and override two functions. :-)

Typedef/template in a real-time string to the logging framework, instead of std::string (also any std::map, etc).

Create an OCL::Category class derived from log4cpp::Category. Add an (optionally null) association to an RTT::BufferDataPort< log4cpp::LoggingEvent > (which uses rt_strings internally). Override the callAppenders() function to push to the port instead of directly calling appenders.

Modify the getCategory() function in the hierarchy maintainer to return our OCL::Category instead of log4cpp::Category. Alternatively, leave it producing log4cpp::Category but contain that within the OCL::Category object (a has-a instead of is-a relationship, in OO speak). The alternative means less modification to log4cpp, but worse performance and potentially more wrapping code.
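
A rough sketch of the wrapped category, assuming that log4cpp's callAppenders() is virtual (as the override above requires) and that an RTT 1.x buffer port is used as proposed; all names and the exact port type are tentative:

#include <log4cpp/Category.hh>
#include <log4cpp/LoggingEvent.hh>
#include <rtt/BufferPort.hpp>
 
namespace OCL {
 
class Category : public log4cpp::Category
{
public:
    Category(const std::string& name, log4cpp::Category* parent,
             log4cpp::Priority::Value priority)
        : log4cpp::Category(name, parent, priority), log_port(0)
    {}
 
    // Created and connected by the Logger component at deployment time;
    // stays null for categories without an associated appender.
    RTT::BufferPort<log4cpp::LoggingEvent>* log_port;
 
protected:
    // Instead of running the appenders in the caller's (possibly real-time)
    // thread, push the event onto the port and let the appender component
    // handle it in its own thread.
    virtual void callAppenders(const log4cpp::LoggingEvent& event) throw()
    {
        if ( log_port && log_port->connected() )
            log_port->Push( event );
        else
            log4cpp::Category::callAppenders( event ); // fall back to the default behaviour
    }
};
 
} // namespace OCL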

Deployment

I have a working prototype of the OCL deployment for this (without the actual logging though), and it is really ugly. As in Really Ugly! To simplify the format and number of files involved, and reduce duplication, I suggest extending the OCL deployer to better support logging.

Sample system

Component C1 - uses category org.me.myapp
Component C2 - uses category org.me.myapp.c2
 
Appender A - console
Appender B - file
Appender C - serial
 
Logger org.me.myapp has level=info and appender A
Logger org.me.myapp.C2 has level=debug and appenders B, C

Configuration file for log4cpp

log4j.logger.org.me.myapp=info, AppA
log4j.logger.org.me.myapp.C2=debug, AppB, AppC
 
 
log4j.appender.AppA=org.apache.log4j.ConsoleAppender
log4j.appender.AppB=org.apache.log4j.FileAppender
log4j.appender.AppC=org.apache.log4j.SerialAppender
 
# AppA uses PatternLayout.
log4j.appender.AppA.layout=org.apache.log4j.PatternLayout
log4j.appender.AppA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
# AppB uses SimpleLayout.
log4j.appender.AppB.layout=org.apache.log4j.SimpleLayout
# AppC uses PatternLayout with a different pattern from AppA
log4j.appender.AppC.layout=org.apache.log4j.PatternLayout
log4j.appender.AppC.layout.ConversionPattern=%d [%t] %-5p %c %x - %m%n

One possible Orocos XML deployer configuration

File: AppDeployer.xml

<struct name="ComponentC1" 
    ... />
<struct name="ComponentC2" 
    ... />
 
<struct name="AppenderA" type="ocl::ConsoleAppender"> 
    <simple name="PropertyFile" ...><value>AppAConfig.cpf</value></simple>
    <struct name="Peers"> <simple>Logger</simple>
</struct>
 
<struct name="AppenderB" type="ocl::FileAppender"> 
    <simple name="PropertyFile" ... />
    <struct name="Peers"> <simple>Logger</simple>
</struct>
 
<struct name="AppenderC" type="ocl::SerialAppender"> 
    <simple name="PropertyFile" ... />
    <struct name="Peers"> <simple>Logger</simple>
</struct>
 
<struct name="Logger" type="ocl::Logger"> 
    <simple name="PropertyFile" ...><value>logger.org.me.myapp.cpf</value></simple>
</struct>

File: AppAConfig.cpf

<properties>
  <simple name="LayoutClass" type="string"><value>ocl.PatternLayout</value>
  <simple name="LayoutConversionPattern" type="string"><value>%-4r [%t] %-5p %c %x - %m%n</value>
</properties>

… other appender .cpf files …

File: logger.org.me.myapp.cpf

<properties>
    <struct name="Categories" type="PropertyBag">
        <simple name="org.me.myapp" type="string"><value>info</value></simple>
        <simple name="org.me.myapp.C2" type="string"><value>debug</value></simple>
    </struct>
    <struct name="Appenders" type="PropertyBag">
        <simple name="org.me.myapp" type="string"><value>AppenderA</value></simple>
        <simple name="org.me.myapp.C2" type="string"><value>AppenderB</value></simple>
        <simple name="org.me.myapp.C2" type="string"><value>AppenderC</value></simple>
    </struct>
</properties>

The logger component is no more than a container for ports. Why special case this? Simply to make life easier for the deployer and to keep the deployer syntax and semantic model similar to what it currently is. A deployer deploys components - the only real special casing here is the connecting of ports (by the logger code) that aren't mentioned in the deployment file. If you use the existing deployment approach, you have to create a component per category, and mention the port in both the appenders and the category. This is what I currently have, and as I said, it is Really Ugly.

Example logger functionality (error checking elided)

Logger::configureHook()
 
    // create a port for each category with an appender
    for each appender in property bag
        find existing category
        if category not exist
            create category
            create port
            associate port with category
        find appender component
        connect category port with appender port
 
    // configure categories
    for each category in property bag
        if category not exist
            create category
        set category level

Important points

There will probably need to be a restriction that to maintain real-time, categories are found prior to a component being started (e.g. in configureHook() or startHook() ).

Note that not all OCL::Category objects contain a port. Only those category objects with associated appenders actually have a port. This is how the hierarchy works. If you have category "org.me.myapp.1.2.3" and it has no appenders but your log level is sufficient, then the logging action gets passed up the hierarchy. Say that category "org.me.myapp" has an appender (and that no logging level stops this logging action in the hierarchy in between), then that appender will actually log this event.

We should also create toolkit and transport plugins to deal with the log4cpp::LoggingEvent struct. This will allow for remote appenders, as well as viewing within the taskbrowser.

Port names would perhaps be something like "org.me.myapp.C1" => "log_org_me_myapp_C1".

Real-Time Strings ?

It's not so much the string that needs to be real-time, but the stringstream, which converts our data (strings, ints, ...) into a string buffer. Conveniently, the boost::iostreams library allows creating a real-time string stream with two lines of code:

#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <cstring>   // for memset()
 
namespace io = boost::iostreams;
 
int main()
{
  // prepare static sink
  const int MAX_MSG_LENGTH = 100;
  char sink[MAX_MSG_LENGTH];
  memset( sink, 0, MAX_MSG_LENGTH);
 
  // create 'stringstream' 
  io::stream<io::array_sink>  out(sink);
 
  out << "Hello World! "; // space required to avoid stack smashing abort.
 
  // close and flush stringstream
  out.close();
 
  // re-open from position zero.
  out.open( sink );
 
  // overwrites old data.
  out << "Hello World! ";
}
If user code 'only' uses const& to strings or C-strings, there is no need for an rt_string, but there is for an rt_stringstream. The above code allows realizing that with a statically allocated (and non-expandable!) char buffer. Replacing this buffer with a dynamically growing buffer will probably need an rt_string after all.

Unfortunately, the log4cpp::LoggingEvent is passed through RTT buffers, and this has std::string members. So, we need rt_string also, but rt_stringstream will be very useful also.

Warning For anyone using the boost::iostreams like above, either clear the array to 0's first, or ensure you explicitly write the string termination character ('\0'). The out << "..."; statement does not terminate the string otherwise. Also, I did not need the "space ... to avoid stack smashing abort" bit on Snow Leopard with gcc 4.2.1.

Using boost::iostream repeatedly ... you need to reset the stream between each use

#include <boost/iostreams/device/array.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/iostreams/seek.hpp>
 
namespace io = boost::iostreams;
 
...
 
char            str[500];
io::stream<io::array_sink>    ss(str);
 
ss << "cartPose_desi " << vehicleCartPosition_desi << '\0';
logger->debug(OCL::String(&str[0]));
 
// reset stream before re-using
io::seek(ss, 0, BOOST_IOS::beg);        
ss << "cartPose_meas " << vehicleCartPosition_meas << '\0';
logger->debug(OCL::String(&str[0]));

Problems/Questions/Issues

If before the Logger is configured (and hence, the buffer ports and appender associations are created), a component logs to a category, the logging event is lost. At that time no appenders exist. It also means that for any component that logs prior to configure time, by default, those logging events are lost. I think that this requires further examination, but would likely involve more change to the OCL deployer.

The logger configure code presumes that all appenders already exist. Is this an issue?

Is the port-category association a shared_ptr<port> style, or does the category simply own the port?

If the logger component has the ports added to it as well as to the category, then you could peruse the ports within the taskbrowser. Is this useful? If this is useful, is it worth making the categories and their levels available somehow for perusal within the taskbrowser?

References

[1] http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html

[2] Patterns for Logging Diagnostic Messages Abstract

[3] log4j and a short introduction to it.

[4] logback - log4j successor

[5] log4cpp

[6] log4cxx

[7] log4cplus

Redesign of the data flow interface

(Copied from http://github.com/doudou/orocos-rtt/commit/dc1947c8c1bdace90cf0a3aa2047ad248619e76b)

  • write ports are now common to all types of connections, and writing is "send and forget"
  • read ports still specify their type (data or buffer). The management of the connection type is offloaded onto the port object (i.e. there is no intermediate ConnectionInterface object anymore)
  • the ports maintain a list of "connected" ports. It is therefore possible to do some connection management, i.e. one knows who is listening to what.

Here is the mail that led to this implementation:

The problems

  • the current implementation is not about data connections (getting data flowing from one port to another). It is about managing shared memory places, where different ports read and write. That is quite obvious for the data ports (i.e. there is a shared data sample that anyone can read or write), and is IMO completely meaningless for buffer ports. Buffer ports are really in need of a data flow model (see the more specific critique of multi-output buffers elsewhere in this mail)
  • Per se, this does not seem a problem. Data is getting transmitted from one port to the other, isn't it ?
Well, actually it is a problem because it forbids a clean connection management implementation. Why ? Because there is no way to know who is reading and who is writing ... Thus, the completely useless disconnect() call. Why useless ? Because if you do: (this is pseudo-code of course)

     connect(source, dest)
     source.disconnect()
Then dest.isConnected() returns true, even though dest will not get any data from anywhere (there is no writer anymore on that connection).

This is more general, as it is for instance very difficult to implement proper connection management in the CORBA case.
  • Because of this connection management issue, it is very difficult to implement a "push" model. It leads to huge problems with the CORBA transport when wireless is bad, because each pop or get needs a few calls.
  • It makes the whole implementation a huge mess. There is at least twice the number of classes normally needed to implement a connection model *and* code is not reused (DataPort is actually *not* a subclass of both ReadDataPort and WriteDataPort, same for buffers).
  • We already had a long thread about multiple-output buffered connections. I'll summarize what for me was the most important points:
    • the current implementation allows distributing workload seamlessly between different task contexts.
    • it does not allow sending the same set of samples to different task contexts. There is a hack allowing to read buffer connections as if they were data connections, but it is a hack, given that the reader cannot know if it is really reading a sample or reading a default value because the buffer is empty.
IMO the first case is actually rare in robotic control (and you can implement a generic workload-sharing component with nicer features, like for instance keeping the ordering between input and output), as in the following example:

                             => A0 A3 [PROCESSING] => A'0 A'3
 A0 A1 A2 A3 => [WORK SHARING                                  WORK SHARING] => A'0 A'1 A'2 A'3
                             => A1 A2 [PROCESSING] => A'1 A'2
The second case is much more common. For instance, in my robot, I want to have a safety component that monitors a laser scanner (near-obstacle detection for the purpose of safety) and the same laser scans to go to a SLAM algorithm. I cannot do that for now, because I need a buffered connection to the SLAM algorithm. I cannot use the aforementioned hack either, because for now I plan to put a network connection between the scanner driver and the two targets, and therefore I cannot really guarantee which component will get what.

Proposal

What I'm proposing is getting back to a good'ol data flow model, namely:

  • making write ports "send and forget". If the port fails to write, then it is the problem of the reader ! I really don't see what the writer can do about it anyway, given that it does not know what the data will be used for (principle of component separation). The reader can still detect that its input buffer is full and that it did not get some samples and do something about it.

  • making write ports "connection-type less". I.e. no WRITE data ports and WRITE buffer ports anymore, only write ports. This will allow to connect a write port to a read port with any kind of connections. Actually, I don't see a use case where the port designer can actually decide what kind of connection is best for its OUTPUT ports. Some examples:
    • in the laser scanner example above, the safety component would like a data port and the slam a buffer port
    • in position filtering, some components just want the latest positions and other components all the position stream (for interpolation purposes for instance)
    • in general, GUI vs. X. GUIs want most of the time the latest values.
    • ... I'm sure I can come up with other examples if you want them
  • locating the sample on the read ports (i.e. no ConnectionInterface and subclasses anymore). The bad: one copy of each sample per read port. The good: you implement the point above (write ports do not have a connection type), and you fix buffer connections once and for all.
  • removing (or deprecating) read/write ports. They really have no place in a data flow model.
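
As a rough sketch of what this model looks like in code (all class and method names below are hypothetical and for illustration only; the final RTT 2.0 port API is described later in this chapter):

// Hypothetical sketch of the proposed model, not an actual RTT API.
WritePort<double> out("position");                   // no data/buffer type on the write side
 
ReadPort<double>  safety_in("position");             // data semantics: only the last value matters
ReadPort<double>  slam_in("position", Buffer(50));   // buffer semantics: keep a backlog of 50 samples
 
out.connectTo(safety_in);                            // the writer knows who is listening...
out.connectTo(slam_in);
 
out.write(1.0);                                      // ...but writing stays send-and-forget
 
double sample;
if (slam_in.read(sample)) {
    // a new sample was taken from this reader's own buffer
}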

Simplified, more robust default activities

From RTT 1.8 on, an Orocos component is created with a default 'SequentialActivity', which uses ('piggy-backs on') the calling thread to execute its asynchronous functions. It has been argued that this is not a safe default, because a component with a faulty asynchronous function can terminate the thread of a calling component, in case the 'caller' emits an asynchronous event (this is quite technical, you need to be on orocos-dev for a while to understand this).

Furthermore, in case you do want to assign a thread, you need to select a 'PeriodicActivity' or 'NonPeriodicActivity', which have their quirks as well. For example, PeriodicActivity serialises all activities with equal priority and period in one thread, and NonPeriodicActivity says what it isn't instead of what it is.

The idea is to create a new activity type which allocates one thread, and which can be periodic or non-periodic. The other activity types remain (and/or are renamed) for specialist users that know what they want.
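
In the eventual RTT 2.x releases this became RTT::Activity, which allocates one thread and behaves periodically when given a period and non-periodically otherwise. A minimal sketch (the exact constructor arguments are an assumption, check the API documentation):

  #include <rtt/Activity.hpp>
 
  // Give the component its own thread; non-periodic, runs when triggered:
  my_component.setActivity( new RTT::Activity( /* priority = */ 5 ) );
 
  // The same activity type, now periodic at 100 Hz:
  my_component.setActivity( new RTT::Activity( /* priority = */ 5, /* period = */ 0.01 ) );

Here 'my_component' is assumed to be a TaskContext instance.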

Streamlined Execution Flow API

It started with an idea on FOSDEM. It went on as a long mail (click link for full text and discussion) on the Orocos-dev mailing list.

Here's the summary:

  • RTT interoperates badly with other software, for example, any external process needs to go through a convoluted CORBA layer. There are also no tools that could ease the job (except the ctaskbrowser), for example some small shell commands that can query/change a component.
  • RTT has remaining usability issues. Sylvain already identified the shortcomings of data/buffer ports and proposed a solution. But any user wrestling with the 'Should I use an Event (syn/asyn), Method, Command or DataPort?' question only got the answer: 'Well, we have Events (syn/asyn), Methods, Commands and DataPorts!'. It's not coherent. There are other frameworks doing a better job. We can do a far better job.
  • RTT has issues with its current distribution implementation: programs can be constructed such that they cause memory leaks on the remote side, Events never got into the CORBA interface (there is a reason for that), and our data ports over CORBA are just as weak as the C++ implementation.
  • And then there are also the untaken opportunities to reduce RTT & component code size drastically and remove complex features.

The pages below analyse and propose new solutions. The pages are in chronological order, so later pages represent more recent views.

First analysis

I've seen people using the RTT for inter-thread communication in two major ways: either implement a function as a Method, or as a Command, where the Command was the thread-safe way to change the state of a component. The adventurous used Events as well, but I can't say they're a huge success (we got like only one 'thank you' email in their whole existence...). But anyway, Commands are complex for newbies, and Events (syn/asyn) aren't better. So for all these people, here it comes: the RTT::Message object.

Remember, Methods allow a peer component to _call_ a function foo(args) of the component interface. Messages will have the meaning of _sending_ another component a message to execute a function foo(args). Contrary to Methods, Messages are 'send and forget': they return void. The only guarantee you get is that, if the receiver was active, it processed the message. For now, forget that Commands exist. We have two inter-component messaging primitives now: Messages and Methods. And each component declares: you can call these methods and send these messages. They are the 'Level 0' primitives of the RTT. Any transport should support these.

Note that, conveniently, the transport layer may implement messages with the same primitive as data ports. But we, users, don't care. We still have Data Ports to 'broadcast' our data streams, and now we have Messages as well to send directly to component X.

Think about it. The RTT would be already usable if each component only had data ports and a Message/Method interface. Ask the AUTOSAR people, it's very close to what they have (and can live with).

There's one side effect of the Message: we will need a real-time memory allocator to reserve a piece of memory for each message sent, and free it when the message is processed. Welcome TLSF. In case such a thing is not possible or not wanted by the user, Messages can fall back to using pre-allocated memory, but at the cost of reduced functionality (similar to what Commands can do today). Also, we'll have a MessageProcessor, which replaces and is a slimmed-down version of today's CommandProcessor.

So where does this leave Events? Events are among the last primitives I explain in courses because they are so complex. They don't need to be. Today you need to attach a C/C++ function to an event and optionally specify an EventProcessor. Depending on some this-or-thats, the function is executed in this or the other thread. Let's forget about that. In essence, an Event is a local thing that others like to know about: something happened 'here', who wants to know? Events can be changed such that you can say: if event 'e' happens, then call this Method. And you can say: if event 'e' happens, send me this Message. You can subscribe as many callbacks as you want. Because of the lack of this mechanism, the current Event implementation has a huge footprint. There's a lot to win here.

Do you want to allow others to raise the event ? Easy: add it to the Message or Method interface, saying: send me this Message and I'll raise the event, or call this Method and you'll raise it, respectively. But whether someone can raise it is your component's choice. That's what the event interface should look like. It's a Level 1 primitive. A transport should do no more than allow connecting Methods and Messages (which it already supports, Level 1) to Events. No more. Even our CORBA layer could do that.

The implementation of Event can benefit from an rt_malloc as well, indirectly. Each raised Event which causes Messages to be sent out will use the Message's rt_malloc to store the event data, by just sending the Message. In case you don't have/want an rt_malloc, you fall back to what events can roughly do today, but with a lot less code ( Goodbye RTT::ConnectionC, Goodbye RTT::EventProcessor ).

And now comes the climax: Sir Command. How does he fit in the picture? He'll remain in some form, but mainly as a 'Level 2' citizen. He'll be composed of Methods, Messages and Events and will be dressed down to be no more than a wrapper, keeping related objects together, or maybe not even that. Replacing a Command with a Message hardly changes anything on the C++ side. For scripts, Commands were damn useful, but we will come up with something satisfactory. I'm sure.

How does all this interface shuffling allow us to get 'towards a sustainable distributed component model'? That's because we're seriously lowering the requirements on the transport layer:

  • It only needs to implement the Level 0 primitives. How proxies and servers are built depends on the transport. You can do so manually (dlib like) or automatically (CORBA like)
  • It allows the transport to control memory better, share it between clients and clean it up at about any time.
  • The data flow changes Sylvain proposes strengthen our data flow model, and I'm betting that it won't use CORBA as a transport. Who knows.

And we are at the same time lowering the learning curve for new users:

  • You can easily explain the basic primitives: Properties=>XML, DataPorts=>process data, Methods/Messages=>client/server requests. When they're familiar with these, users can start playing with Events (which build on top of Methods/Messages and play a role in DataPorts as well). And finally, if they ever need it, the Convoluted Command can encompass the most complex scenarios.
  • You can more easily connect with other middleware or external programs. People with other middleware will see the opportunities for 1-to-1 mappings or even implement it as a transport in the RTT.

Dissecting Command and Method: blocking/non-blocking vs synchronous/asynchronous

(Please feel free to edit/comment etc. This is a community document, not a personal document)

Notes on naming

The word service is used to name the offering of a C/C++ function for others to call. Today, Orocos components offer services in the form of 'RTT::Method' or 'RTT::Command' objects. Both lead to the execution of a function, but in a different way. Also, despite the title of this page, it is advised to refrain from using the terms synchronous/asynchronous, because they are relative terms and may cause confusion if the context is not clear.

An alternative naming is possible: the offering of a C/C++ function could be named 'operation' and the collection of a given set of operations in an interface could be called a 'service'. This definition would line up better with service oriented architectures like OSGi.

Purpose

This page collects the ideas around the new primitives that will replace/enhance Method and/or Command. Although Method is a clearly understood primitive by users, Command isn't, because of its multi-threaded nature. It is too complex to set up and use, and can lead to unsafe applications (segfaults) if used incorrectly. To improve these primitives, we take another look at what users want to do and how to map this onto RTT primitives.

What users want to do

Users want to control which thread executes which function, and if they want to wait(block) on the result or not. This all in order to meet deadlines in real-time systems. In practice, this boils down to:

  • When calling services (i.e. functions) of other components, one may opt to wait until the service returns the result, or not, and optionally collect the result later. This is often best decided at the caller side, because both cases will cause different client code for sending/receiving the results.
  • When implementing services in a component, the component may decide that the caller's thread executes the function, or that it will execute the function in its own thread. Clearly, this can only be decided at the receiver side, because both cases will cause a different implementation of the function to be executed, especially with respect to thread-safety.

Dissecting the cases

When putting the above in a table, you get:
Calling a service (a function)
Wait? \ Thread?  | Caller   | Component
Yes              | (Method) | (?)
No               | X        | (Command)

For reference, the current RTT 1.x primitives are shown. There are two remarkable spots: the X and the (?).

  • The X is a practically impossible situation. It would mean that the caller does not wait, but its own thread still executes the function. This could only be resolved if a 'third' thread executes the service on behalf of the caller. It is unclear at which priority this thread should execute, what its lifetime and exclusivity are, and so on.
  • The (?) marks a hole in the current RTT API. Users could only implement this behaviour by busy-waiting on the Command's done() function (see the sketch below). However, that is disastrous in real-time systems, because of the starvation or priority inversion issues that crop up with such techniques.
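
For illustration, that busy-wait workaround looks roughly like this in RTT 1.x (a sketch only; 'the_command' is assumed to be an RTT 1.x Command object, and this is exactly the pattern to avoid):

  // Caller wants the component to execute the work, but also wants to wait:
  if ( the_command( 1.0 ) ) {        // queued for the component's thread
      while ( !the_command.done() )
          ;                          // busy-waiting: starves lower priority threads
  }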

Another thing you should be aware of is that, in the current implementation, caller and component must agree on how the service is invoked. If the component defines a Method, the caller must execute it in its own thread and wait for the result. There's no way for the caller to deviate from this. In practice, this means that the component's interface dictates how the caller can use its services. This is consistent with how UML defines operations, but other frameworks, like ICE, allow any function that is part of the interface to be called blocking or non-blocking. Clearly, ICE has some kind of thread-pool behind the scenes that does the dispatching and collects the results on behalf of the caller.

Backwards compatibility - Or how it is now

Orocos users have written many components, and the primary idea of RTT 2.0 is to solve issues these components still have due to defects in the current RTT 1.x design. Things that do work satisfactorily should keep working without modification of the user's design.

Method

It is very likely that the RTT::Method primitive will remain to exist as it is today. Few problems have been reported and it is easy to understand. The only disadvantage is that it can not be called 'asynchronously'. For example: if a component defines a Method, but the caller does not have the resources to invoke it (due to a deadline), it needs to set up a separate thread to do the call on its behalf. This is error prone. Orocos users often solve this by defining a command and trying to get the result data back somehow (also error prone).

Command

Commands serve multiple purposes in today's Orocos programming:
  • First, they allow thread-safe execution of a piece of code in a component. Because the component thread executes the function, no locking or synchronization primitives are required.
  • Second, they allow a caller to dispatch work to another component, in case the caller does not have the time or resources to execute a function.
  • Third, they allow tracking the status of the execution. The caller can poll to see if the function has been queued, executed, what it returned (a boolean), etc.
  • Fourth, they allow tracking the status of the 'effect' of the command, past its execution. This is done by attaching a completion condition, which returns a bool and can indicate if the effect of the command has been completed or not. For example, if the command is to move to a position, the completion condition would return true once the position is reached, while the command function would only have programmed the interpolator to reach that position. Completion conditions are not used that much, and must be polled.

A simpler form of Command will be provided that does not contain the completion condition. It is too seldom used.

It is up to the proposals to show how to emulate the old behaviour with the new primitives.

Proposals

Each proposal should try to solve these issues:

The ability to let caller and component choose which execution semantics they want when calling or offering a service (or motivate why a certain choice is limited):

  • The ability to wait for a service to be completed
  • The ability to invoke a service and not wait for the result
  • The ability to specify in the component implementation if a function is executed in the component's thread
  • The ability to specify in the component implementation if a function is executed in the caller's thread

And regarding easy use and backwards compatibility:

  • Show how old-time behavior can be emulated with the new proposal
  • Show which semantics changed
  • How these primitives will be used in the scripting languages and in C++

And finally:

  • Define proper names for each behavior.

Proposal 1: Method/Message

This is one of the earliest proposals. It proposes to keep Method as-is, remove Command and replace it with a new primitive: RTT::Message. The Message is a stripped-down Command. It has no completion condition and is send-and-forget: one can not track the status or retrieve arguments. It also uses a memory manager to allow invoking the same Message object multiple times with different data.

Emulating a completion condition is done by defining the completion condition as a Method in the component interface and requiring that the sender of the Message checks that Method to evaluate progress. In scripting this becomes:

// Old:
  do comp.command("hello"); // waits (polls) here until complete returns true
 
// New: Makes explicit what above line does:
  do comp.message("hello"); // proceeds immediately
  while ( comp.message_complete("hello") == false ) // polling
     do nothing;

In C++, the equivalent is slightly different:

// Old:
  if ( command("hello") ) {
     //... user specific logic that checks command.done() 
  }
 
// New:
  if ( message("hello") ) { // send and forget, returns immediately
     // user specific logic that checks message_complete("hello")
  }

Users have indicated that they also wanted to be able to specify in C++:

  message.wait("hello"); // send and block until executed.

It is not clear yet how the wait case can be implemented efficiently.

The user visible object names are:

  • RTT::Method to add a 'client thread' C/C++ function to the component interface or call one.
  • RTT::Message to add a 'component thread' C/C++ function to the component interface or call one.

This proposal solves:

  • A simpler replacement for Command
  • Acceptable emulation capacities of old user code
  • Invoking the same message object multiple times in a row.

This proposal omits:

  • The choice of caller/component to choose independently
  • Solving case 'X' (see above)
  • How message.wait() can be implemented

Other notes:

  • It has been mentioned that 'Message' is not a good and too confusing name.

Proposal 2: Method/Service

This proposal focuses on separating the definition of a Service (component side) from the calling of a Method (caller side).

The idea is that components only define services, and assign properties to these services. The main property to toggle is 'executed in my thread, the caller's thread, or even another thread'. But other properties could be added too. For example: a 'serialized' property which causes the locking of a (recursive!) mutex during the execution of the service. The user of the service can not and does not need to know how these properties are set. He only sees a list of services in the interface.

It is the caller that chooses how to invoke a given service: waiting for the result ('call') or not ('send'). If he doesn't want to wait, he has the option to collect the results later ('collect'). The default is blocking ('call'). Note that this waiting or not is completely independent of how the service was defined by the component, the framework will choose a different 'execution' implementation depending on the combination of the properties of service and caller.

This means that this proposal covers all four quadrants of the table above. It does not detail yet how to implement case (X) though, which requires a third thread to do the actual execution of the service (neither component nor caller wish to execute the function themselves).

This would result in the following scripting code on caller side:

//Old:
  do comp.the_method("hello");
 
//New:
  do comp.the_service.call("hello"); // equivalent to the_method.
 
//Old:
  do comp.the_command("hello");
 
//New:
  do comp.the_service.send("hello"); // equivalent to the_command, but without completion condition.

This example shows two use cases for the same 'the_service' functionality. The first case emulates an RTT 1.x method. It is called and the caller waits until the function has been executed. You can not see here which thread effectively executes the call. Maybe it's 'comp's thread, in which case the caller's thread blocks until the function is executed. Maybe it's the caller's thread, in which case it is effectively executing the function itself. The caller doesn't care actually. The only thing that has an effect is that it takes a certain amount of time to complete the call, *and* that if the call returns, the function has effectively been executed.

The second case emulates an RTT 1.x command. The send returns immediately and there is no way of knowing when the function has been executed. The only guarantee you have is that the request arrived at the other side and that, barring crashes and infinite loops, it will complete some time in the future.

A third example is shown below where another service is used with a 'send' which returns a result. The service takes two arguments: a string and a double. The double is the answer of the service, but is not yet available when the send is done. So the second argument is just ignored during the send. A handle 'h' is returned which identifies your send request. You can re-use this handle to collect the results. During collection, the first argument is now ignored, and the second argument is filled in with the result of the service. Collection may be blocking or not.

//New, with collecting results:
  var double ignored_result, result;
 
  set h = comp.other_service.send("hello", ignored_result);
 
  // some time later :
  comp.other_service.collect(h, "ignored", result); // blocking !
 
  // or poll for it:
  if ( comp.other_service.collect_if_done( h, "ignored", result ) == true ) then {
     // use result...
  }

In C++ the above examples are written as:

//New calling:
  the_service.call("hello", result); // also allowed: the_service("hello", result);
 
//New sending:
  the_service.send("hello", ignored_result);
 
//New sending with collecting results:
  h = other_service.send("hello", ignored_result);
 
  // some time later:
  other_service.collect(h, "ignored", result); // blocking !
 
  // or poll for it:
  if ( other_service.collect_if_done( h, "ignored", result ) == true ) {
     // use result...
  }

Completion condition emulation is done like in Proposal 1.

The definition of the service happens at the component's side. The component decides for each service whether it is executed in its own thread or in the caller's thread:

  // by default creates a service executed by caller, equivalent to defining a RTT 1.x Method  
  RTT::Service the_service("the_service", &foo_service );
 
  // sets the service to be executed by the component's thread, equivalent to Command
  the_service.setExecutor( this );
 
  //above in one line:
  RTT::Service the_service("the_service", &foo_service, this );

The user visible object names are:

  • RTT::Service to add a C/C++ function to the component interface (replaces use of Method/Command).
  • RTT::CallMethod or similar to call a service, please discuss a good/better name.
  • RTT::SendMethod or similar to send (and collect results from) a service, please discuss a good/better name.

This proposal solves:

  • Allows specifying threading parameters in the component, independent of call/send semantics.
  • Removes user method/command dilemma.
  • Aligns better with 3rd party frameworks that also offer 'services'.

This proposal omits:

  • What the exact collection semantics are.
  • How to resolve a 'send' to a 'service executed in the thread of the caller' (case X). Should a send indicate which thread must do the send on its behalf ? Is the execution deferred to another point in time in the caller's thread ?

Your Proposal here

...

Provides vs Requires interfaces

Users can express the 'provides' interface of an Orocos Component. However, there is no easy way to express which other components a component requires. The notable exception is data flow ports, which have in-ports (requires) and out-ports (provides). It is however not possible to express this requires interface for the execution flow interface, thus for methods, commands/messages and events. This omission makes the component specification incomplete.

One of the first questions raised is if this must be expressed in C++ or during 'modelling'. That is, UML can express the requires dependency, so why should the C++ code also contain it in the form of code ? It should only contain it if you can't generate code from your UML model. Since this is not yet available for Orocos components, there is no other choice than expressing it in C++.

A requires interface specification should be optional and only be present for:

  • completing the component specification, allowing better review and understanding
  • automatically connecting component 'execution' interfaces, such that the manual lookup work which you need to write today can be omitted.

We apply this in code examples to various proposed primitives in the pages below.

New Command API

Commands are no longer a part of the TaskContext API. They are helper classes which replicate the old RTT 1.0 behaviour. In order to set up commands more easily, it is allowed to register them in the 'requires()' interface.

This is all very experimental.

/**
 * Provider of a Message with command-like semantics
 */
class TaskA    : public TaskContext
{
    Message<void(double)>   message;
    Method<bool(double)>    message_is_done;
    Event<void(double)>     done_event;
 
    void mesg(double arg1) {
        return;
    }
 
    bool is_done(double arg1) {
        return true;
    }
 
public:
 
    TaskA(std::string name)
        : TaskContext(name),
          message("Message",&TaskA::mesg, this),
          message_is_done("MessageIsDone",&TaskA::is_done, this),
          done_event("DoneEvent")
    {
        this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides()->addMethod(&message_is_done, "Is the Message done?", "arg1", "Argument 1");
        this->provides()->addEvent(&done_event, "Emited when the Message is done.", "arg1", "Argument 1");
    }
 
};
 
class TaskB   : public TaskContext
{
    // RTT 1.0 style command object
    Command<bool(double)>   command1;
    Command<bool(double)>   command2;
 
public:
 
    TaskB(std::string name)
        : TaskContext(name),
          command1("command1"),
          command2("command2")
    {
        // the commands are now created client side, you
        // can not add them to your 'provides' interface
        command1.useMessage("Message");
        command1.useCondition("MessageIsDone");
        command2.useMessage("Message");
        command2.useEvent("DoneEvent");
 
        // this allows automatic setup of the command.
        this->requires()->addCommand( &command1 );
        this->requires()->addCommand( &command2 );
    }
 
    bool configureHook() {
        // setup is done during deployment.
        return command1.ready() && command2.ready();
    }
 
    void updateHook() {
        // calls TaskA:
        if ( command1.ready() && command2.ready() )
            command1( 4.0 );
        if ( command1.done() && command2.ready() )
            command2( 1.0 );
    }
};
 
int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");
 
    connectPeers(ta, tb);
    // connects interfaces.
    connectInterfaces(ta, tb);
    return 0;
}

New Event API

The idea of the new Event API is that:

  1. Only the owner of the event can emit the event (unless the event is also added as a Method or Message).
  2. Only Method or Message objects can subscribe to events.

/**
 * Provider of Event
 */
class TaskA    : public TaskContext
{
    Event<void(string)>   event;
 
public:
 
    TaskA(std::string name)
        : TaskContext(name),
          event("Event")
    {
        this->provides()->addEvent(&event, "The Event", "arg1", "Argument 1");
        // OR:
        this->provides("FooInterface")->addEvent(&event, "The Event", "arg1", "Argument 1");
 
        // If you want the user to let him emit the event:
        this->provides()->addMethod(&event, "Emit The Event", "arg1", "Argument 1");
    }
 
    void updateHook() {
        event("hello world");
    }
};
 
/**
 * Subscribes a local Method and a Message to Event
 */
class TaskB   : public TaskContext
{
    Message<void(string)>   message;
    Method<void(string)>    method;
 
    // Message callback, matches Message<void(string)>
    void mesg(string arg1) {
        return;
    }
 
    // Method callback, matches Method<void(string)>
    void meth(string arg1) {
        return;
    }
 
public:
 
    TaskB(std::string name)
        : TaskContext(name),
          message("Message",&TaskB::mesg, this),
          method("Method",&TaskB::meth, this)
    {
        // optional:
        // this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        // this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");
 
        // subscribe to event:
        this->requires()->addCallback("Event", &message);
        this->requires()->addCallback("Event", &method);
 
        // OR:
        // this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
        // this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
 
        // subscribe to event:
        this->requires("FooInterface")->addCallback("Event", &message);
        this->requires("FooInterface")->addCallback("Event", &method);
    }
 
    bool configureHook() {
        // setup is done during deployment.
        return message.ready() && method.ready();
    }
 
    void updateHook() {
        // we only receive
    }
};
 
int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");
 
    connectPeers(ta, tb);
    // connects interfaces.
    connectInterfaces(ta, tb);
    return 0;
}

New Message API

This use case shows how one can use messages in the new API. The unchanged method is added for comparison. Note that I have also added the provides() and requires() mechanism such that the RTT 1.0 construction:

  method = this->getPeer("PeerX")->getMethod<int(double)>("Method");

is no longer required. The connection is made in a similar way to how data flow ports are connected.

/**
 * Provider
 */
class TaskA    : public TaskContext
{
    Message<void(double)>   message;
    Method<int(double)>     method;
 
    void mesg(double arg1) {
        return;
    }
 
    int meth(double arg1) {
        return 0;
    }
 
public:
 
    TaskA(std::string name)
        : TaskContext(name),
          message("Message",&TaskA::mesg, this),
          method("Method",&TaskA::meth, this)
    {
        this->provides()->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides()->addMethod(&method, "The Method", "arg1", "Argument 1");
        // OR:
        this->provides("FooInterface")->addMessage(&message, "The Message", "arg1", "Argument 1");
        this->provides("FooInterface")->addMethod(&method, "The Method", "arg1", "Argument 1");
    }
 
};
 
class TaskB   : public TaskContext
{
    Message<void(double)>   message;
    Method<int(double)>     method;
 
public:
 
    TaskB(std::string name)
        : TaskContext(name),
          message("Message"),
          method("Method")
    {
        this->requires()->addMessage( &message );
        this->requires()->addMethod( &method );
        // OR:
        this->requires("FooInterface")->addMessage( &message );
        this->requires("FooInterface")->addMethod( &method );
    }
 
    bool configureHook() {
        // setup is done during deployment.
        return message.ready() && method.ready();
    }
 
    void updateHook() {
        // calls TaskA:
        method( 4.0 );
        // sends two messages:
        message( 1.0 );
        message( 2.0 );
    }
};
 
int ORO_main( int, char** )
{
    // Create your tasks
    TaskA ta("Provider");
    TaskB tb("Subscriber");
 
    connectPeers(ta, tb);
    // connects interfaces.
    connectInterfaces(ta, tb);
    return 0;
}

New Method, Operation, Service API

This page shows some use cases on how to use the newly proposed services classes in RTT 2.0.

WARNING: This page assumes the reader has familiarity with the current RTT 1.x API.

First, we introduce the new classes that would be added to the RTT:

#include <rtt/TaskContext.hpp>
#include <string>
 
using RTT::TaskContext;
using std::string;
 
/**************************************
 * PART I: New Orocos Classes
 */
 
/**
 * An operation is a function a component offers to do.
 */
template<class T>
class Operation {};
 
/**
 * A Service collects a number of operations.
 */
class ServiceProvider {
public:
    ServiceProvider(string name, TaskContext* owner);
};
 
/**
 * Is the invocation of an Operation.
 * Methods can be executed blocking or non blocking,
 * in the latter case the caller can retrieve the results
 * later on.
 */
template<class T>
class Method {};
 
/**
 * A ServiceRequester collects a number of methods
 */
class ServiceRequester {
public:
    ServiceRequester(string name, TaskContext* owner);
 
    bool ready();
};

What is important to notice here is the symmetry:

 (Operation, ServiceProvider) <-> (Method, ServiceRequester).
The left hand side is offering services, the right hand side is using the services.

First we define that we provide a service. The user starts from his own C++ class with virtual functions. This class is then implemented in a component. A helper class ties the interface to the RTT framework:

/**************************************
 * PART II: User code for PROVIDING a service
 */
 
/**
 * Example Service as abstract C++ interface (non-Orocos).
 */
class MyServiceInterface {
public:
    /**
     * Description.
     * @param name Name of thing to do.
     * @param value Value to use.
     */
    virtual int foo_function(std::string name, double value) = 0;
 
    /**
     * Description.
     * @param name Name of thing to do.
     * @param value Value to use.
     */
    virtual int bar_service(std::string name, double value) = 0;
};
 
/**
 * MyServiceInterface exported as Orocos interface.
 * This could be auto-generated from reading MyServiceInterface.
 *
 */
class MyService {
protected:
    /**
     * These definitions are not required in case of 'addOperation' below.
     */
    Operation<int(const std::string&,double)> operation1;
    Operation<int(const std::string&,double)> operation2;
 
    /**
     * Stores the operations we offer.
     */
    ServiceProvider provider;
public:
    MyService(TaskContext* owner, MyServiceInterface* service)
    : provider("MyService", owner),
      operation1("foo_function"), operation2("bar_service")
    {
                // operation1 ties to foo_function and is executed in caller's thread.
        operation1.calls(&MyServiceInterface::foo_function, service, Service::CallerThread);
        operation1.doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
                provider.addOperation( operation1 );
 
        // OR: (does not need operation1 definition above)
        // Operation executed by caller's thread:
        provider.addOperation("foo_function", &MyServiceInterface::foo_function, service, Service::CallerThread)
                .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
 
        // Operation executed in component's thread:
        provider.addOperation("bar_service", &MyServiceInterface::bar_service, service, Service::OwnThread)
                .doc("Description", "name", "Name of thing to do.", "value", "Value to use.");
    }
};

Finally, any component is free to provide the service defined above. Note that it shouldn't be that hard to autogenerate most of the above code.

/**
 * A component that implements and provides a service.
 */
class MyComponent : public TaskContext, protected MyServiceInterface
{
    /**
     * The class defined above.
     */
    MyService serv;
public:
    /**
     * Just pass on TaskContext and MyServiceInterface pointers:
     */
    MyComponent() : TaskContext("MC"), serv(this,this)
    {
 
    }
 
protected:
    // Implements MyServiceInterface
    int foo_function(std::string name, double value)
    {
        //...
        return 0;
    }
    // Implements MyServiceInterface
    int bar_service(std::string name, double value)
    {
        //...
        return 0;
    }
};

The second part is about using this service. It creates a ServiceRequester object that stores all the methods it wants to be able to call.

Note that both ServiceRequester below and ServiceProvider above have the same name "MyService". This is how the deployment can link the interfaces together automatically.

/**************************************
 * PART III: User code for REQUIRING a service
 */
 
/**
 * We need something like this to define which services
 * our component requires.
 * This class is written explicitly, but it can also be done
 * automatically, as the example below shows.
 *
 * If possible, this class should be generated too.
 */
class MyServiceUser {
    ServiceRequester rservice;
public:
    Method<int(const string&, double)> foo_function;
    MyServiceUser( TaskContext*  owner )
    : rservice("MyService", owner), foo_function("foo_function")
      {
        rservice.requires(foo_function);
      }

      // Convenience: report whether the requested service has been connected.
      bool ready() { return rservice.ready(); }
};
 
/**
 * Uses the MyServiceUser helper class.
 */
class UserComponent2 : public TaskContext
{
    // also possible to (privately) inherit from this class.
    MyServiceUser mserv;
public:
    UserComponent2() : TaskContext("User2"), mserv(this)
    {
    }
 
    bool configureHook() {
        if ( ! mserv.ready() ) {
            // service not ready
            return false;
        }
        return true;
    }
 
    void updateHook() {
        // blocking:
        mserv.foo_function.call("name", 3.14);
        // etc. see updateHook() below.
    }
};

The helper class can again be omitted, but the Method<> definitions must remain in place (in contrast, the Operation<> definitions for providing a service could be omitted).

The code below also demonstrates the different use cases for the Method object.

/**
 * A component that uses a service.
 * This component doesn't need MyServiceUser, it uses
 * the factory functions instead:
 */
class UserComponent : public TaskContext
{
    // A definition like this must always be present because
    // we need it for calling. We also must provide the function signature.
    Method<int(const string&, double)> foo_function;
public:
    UserComponent() : TaskContext("User"), foo_function("foo_function")
    {
        // creates this requirement automatically:
        this->requires("MyService")->add(&foo_function);
    }
 
    bool configureHook() {
        if ( !this->requires("MyService")->ready() ) {
            // service not ready
            return false;
        }
        return true;
    }
 
    /**
     * Use the service
     */
    void updateHook() {
        // blocking:
        foo_function.call("name", 3.14);
        // short/equivalent to call:
        foo_function("name", 3.14);
 
        // non blocking:
        foo_function.send("name", 3.14);
 
        // blocking collect of return value of foo_function:
        int ret = foo_function.collect();
 
        // blocking collect of any arguments of foo_function:
        string ret1; double ret2;
        ret = foo_function.collect(ret1, ret2); // re-uses 'ret' declared above
 
        // non blocking collect:
        int returnval;
        if ( foo_function.collectIfDone(ret1,ret2,returnval) ) {
            // foo_function was done. Any argument that needed updating has
            // been updated.
        }
    }
};

Finally, we conclude with an example of requiring the same service multiple times, for example, for controlling two stereo-vision cameras.

/**
 * Multi-service case: use same service multiple times.
 * Example: stereo vision with two cameras.
 */
class UserComponent3 : public TaskContext
{
    // also possible to (privately) inherit from this class.
    MyVisionUser vision;
public:
    UserComponent3() : TaskContext("User2"), vision(this)
    {
        // requires a service exactly two times:
        this->requires(vision)["2"];
        // OR any number of times:
        // this->requires(vision)["*"];
        // OR range:
        // this->requires(vision)["0..2"];
    }
 
    bool configureHook() {
        if ( ! vision.ready() ) {
            // only true if both are ready.
            return false;
        }
        return true;
    }
 
    void updateHook() {
        // blocking:
        vision[0].foo_function.call("name", 3.14);
        vision[1].foo_function.call("name", 3.14);
        // or iterate:
        for(int i=0; i != vision.interfaces(); ++i)
            vision[i].foo_function.call("name",3.14);
        // etc. see updateHook() above.
 
        /* Scripting equivalent:
         * for(int i=0; i != vision.interfaces(); ++i)
         *   do vision[i].foo_function.call("name",3.14);
         */
    }
};

Upgrading from RTT 1.x to 2.0

For upgrading, conversion scripts and a renaming table are available. More details are split into several child pages.

Methods vs Operations

RTT 2.0 has unified events, commands and methods in the Operation interface.

Purpose

To allow one component to provide a function and other components, located anywhere, to call it. This is often called 'offering a service'. An Orocos component can offer many functions to any number of components.

Component interface

In Orocos, a C or C++ function is managed by the 'RTT::Operation' object. So the first task is to create such an operation object for each function you want to provide.

This is how a function is added to the component interface:

  #include <rtt/Operation.hpp>
  using namespace RTT;
 
  class MyTask
    : public RTT::TaskContext
  {
    public:
    string getType() const { return "SpecialTypeB"; }
    // ...
 
    MyTask(std::string name)
      : RTT::TaskContext(name)
    {
       // Add the C++ method to the operation interface:
       addOperation( "getType", &MyTask::getType, this )
                .doc("Read out the name of the system.");
     }
     // ...
  };
 
  MyTask mytask("ATask");

The writer of the component has written a function 'getType()' which returns a string that other components may need. In order to add this operation to the Component's interface, you use the TaskContext's addOperation function. This is a short-hand notation for:

       // Add the C++ method to the operation interface:
       provides()->addOperation( "getType", &MyTask::getType, this )
                .doc("Read out the name of the system.");

Meaning that we add 'getType()' to the component's main interface (also called 'this' interface). addOperation takes a number of parameters: the first one is always the name, the second one a pointer to the function and the third one is the pointer to the object of that function, in our case, MyTask itself. In case the function is a C function, the third parameter may be omitted.

If you don't want to pollute the component's this interface, put the operation in a sub-service:

       // Add the C++ method objects to the operation interface:
       provides("type_interface")
            ->addOperation( "getType", &MyTask::getType, this )
                .doc("Read out the name of the system.");

The code above dynamically created a new service object 'type_interface' to which one operation was added: 'getType()'. This is similar to creating an object oriented interface with one function in it.

Calling an Operation in C++

Now another task wants to call this function. There are two ways to do this: from a script or in C++. This section explains how to do it in C++.

Your code needs a few things before it can call a component's operation:

  • It needs to be a peer of instance 'ATask' of MyTask.
  • It needs to know the signature of the operation it wishes to call: string (void) (this is the function's declaration without the function's name).
  • It needs to know the name of the operation it wishes to call: "getType"

Combining these three givens, we must create an OperationCaller object that will manage our call to 'getType':

#include <rtt/OperationCaller.hpp>
//...
 
  // In some other component:
  TaskContext* a_task_ptr = getPeer("ATask");
 
  // create a OperationCaller<Signature> object 'getType':
  OperationCaller<string(void)> getType
       = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'
 
  // Call 'getType' of ATask:
  cout << getType() <<endl;

A lot of work for calling a function, no? The advantages you get are these:

  • ATask may be located on any computer, or in any process.
  • You didn't need to include the header of ATask, so it's very decoupled.
  • If ATask disappears, the OperationCaller object will let you know, instead of crashing your program.
  • The exposed operation is directly available from the scripting interface.

Calling Operations in scripts

In scripts, operations are accessed far more easily. The C++ part above is reduced to:

var string result = "";
set result = ATask.getType();
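
If the operation had been added to the 'type_interface' sub-service instead of to the component's this interface, the call would presumably be prefixed with the service name, i.e. set result = ATask.type_interface.getType();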

Tweaking Operation's Execution

In real-time applications, it is important to know which thread will execute which code. By default the caller's thread will execute the operation's function, but you can change this when adding the operation by specifying the ExecutionType:

       // Add the C++ method to the operation interface:
       // Execute function in component's thread:
       provides("type_interface")
            ->addOperation( "getType", &MyTask::getType, this, OwnThread )
                .doc("Read out the name of the system.");

This causes getType(), when called, to be queued for execution in the ATask component and executed by its ExecutionEngine; when done, the caller resumes. The caller (i.e. the OperationCaller object) will not notice this change of execution path. It will wait for the getType function to complete and return the results.

Not blocking when calling operations

In the examples above, the caller always blocked until the operation returns the result. This is not mandatory. A caller can 'send' an operation execution to a component and collect the returned values later. This is done with the 'send' function:

// This first part is equal to the example above:
 
#include <rtt/OperationCaller.hpp>
//...
 
  // In some other component:
  TaskContext* a_task_ptr = getPeer("ATask");
 
  // create an OperationCaller<Signature> object 'getType':
  OperationCaller<string(void)> getType
       = a_task_ptr->getOperation("getType"); // lookup 'string getType(void)'
 
// Here it is different:
 
  // Send 'getType' to ATask:
  SendHandle<string(void)> sh = getType.send();
 
  // Collect the return value 'some time later':
  sh.collect();             // blocks until getType() completes
  cout << sh.retn() <<endl; // prints the return value of getType().

Other variations on the use of SendHandle are possible, for example polling for the result or retrieving more than one result if the arguments are passed by reference. See the Component Builder's Manual for more details.
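
For instance, polling instead of blocking could look like this (a sketch; check the SendHandle documentation for the exact collect variants and SendStatus values):

  SendHandle<string(void)> sh = getType.send();
 
  // collectIfDone() returns immediately instead of blocking:
  while ( sh.collectIfDone() != SendSuccess ) {
      // the operation has not completed yet; do other useful work here
  }
  // once SendSuccess is returned, read the return value as in the blocking
  // example above.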

RTT 2.0 Data Flow Ports

RTT 2.0 has a more powerful, simpler and more flexible system to exchange data between components.

Renames

Every instance of ReadDataPort and ReadBufferPort must be renamed to 'InputPort' and every instance of WriteDataPort and WriteBufferPort must be renamed to OutputPort. 'DataPort' and 'BufferPort' must be renamed according to their function.

The rtt2-converter tool will do this renaming for you, or at least, make its best guess.

Usage

InputPort and OutputPort have a read() and a write() function respectively:

using namespace RTT;
double data;
 
InputPort<double> in("name");
FlowStatus fs = in.read( data ); // was: Get( data ) or Pull( data ) in 1.x
 
OutputPort<double> out("name");
out.write( data );               // was: Set( data ) or Push( data ) in 1.x

As you can see, Get() and Pull() are mapped to read(), Set() and Push() to write(). read() returns a FlowStatus object, which can be NoData, OldData, NewData. write() does not return a value (send and forget).

Writing to an unconnected port is not an error. Reading from an unconnected (or never written) port returns NoData.

Your component can no longer see if a connection is buffered or not. It doesn't need to know. It can always inspect the return value of read() to see if a new data sample arrived or not. In case multiple data samples are ready to read in a buffer, read() will fetch each sample in order and each time return NewData, until the buffer is empty, in which case it returns the last data sample read with 'OldData'.

Whether data exchange is buffered or not is now determined by 'Connection Policies', or 'RTT::ConnPolicy' objects. This allows you to be very flexible in how components are connected, since you only need to specify the policy at deployment time. It is possible to define a default policy for each input port, but it is not recommended to count on a certain default when building serious applications. See the 'RTT::ConnPolicy' API documentation for which policies are available and what the defaults are.
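
The same policies can be used when connecting ports directly from C++ code, for instance in a test program. A sketch, assuming the RTT 2.x ConnPolicy helpers and the connectTo() call:

  #include <rtt/ConnPolicy.hpp>
 
  RTT::OutputPort<double> out("out");
  RTT::InputPort<double>  in("in");
 
  // buffered connection holding up to 12 samples; neither port 'knows' this.
  out.connectTo( &in, RTT::ConnPolicy::buffer(12) );
 
  double sample;
  // drain whatever is available right now:
  while ( in.read(sample) == RTT::NewData ) {
      // process sample
  }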

Deployment

The DeploymentComponent has been extended such that it can create new-style connections. You only need to add sections to your XML files, you don't need to change existing ones. The sections to add have the form:

  <!-- You can set per data flow connection policies -->
  <struct name="SensorValuesConnection" type="ConnPolicy">
    <!-- Type is 'shared data' or buffered: DATA: 0 , BUFFER: 1 -->
    <simple name="type" type="short"><value>1</value></simple>
    <!-- buffer size is 12 -->
    <simple name="size" type="short"><value>12</value></simple>
  </struct>
  <!-- You can repeat this struct for each connection below ... -->

Where 'SensorValuesConnection' is a connection between data flow ports, like in the traditional 1.x way.

Consult the deployment component manual for all allowed ConnPolicy XML options.

Real-time with Complex data

The data flow implementation tries to pass on your data in as real-time a way as possible. This requires that the operator=() of your data type is hard real-time. In case your operator=() is only real-time if enough storage is allocated beforehand, you can inform your output port of the amount of storage to pre-allocate. You can do this by using:

  std::vector<double> joints(10, 0.0);
  OutputPort<std::vector<double> > out("out");
 
  out.setDataSample( joints ); // initialises all current and future connections to hold a vector of size 10.
 
  // modify joint values... add connections etc.
 
  out.write( joints );  // always hard real-time if joints.size() <= 10

As the example shows, a single call to setDataSample() is enough. This is not the same as write() ! A write() will deliver data to each connected InputPort, a setDataSample() will only initialize the connections, but no actual writing is done. Be warned that setDataSample() may clear all data already in a connection, so it is better to call it before any data is written to the OutputPort.

In case your data type is always hard real-time copyable, there is no need to call setDataSample. For example:

  KDL::Frame f = ... ; // KDL::Frame never (de-)allocates memory during copy or construction.
 
  OutputPort< KDL::Frame > out("out");
 
  out.write( f );  // always hard real-time

Further reading

Please also consult the Component Builder's Manual and the Doxygen documentation for further reference.

RTT 2.0 Renaming table

This page lists the renamings/relocations done on the RTT 2.0 branch (available through gitorious on http://www.gitorious.org/orocos-toolchain/rtt/commits/master) and also offers the conversion scripts to do the renaming.

A note about headers/namespaces: if a header is in rtt/extras, the namespace will be RTT::extras and vice versa. A header in rtt/ has namespace RTT. Note: the OS namespace has been renamed to lowercase os, and the Corba namespace has been renamed to lowercase corba.
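
For example, a 1.x use of the TimeService changes like this (sketch):

  // RTT 1.x:
  #include <rtt/TimeService.hpp>
  RTT::TimeService::ticks t = RTT::TimeService::Instance()->getTicks();
 
  // RTT 2.0: header under rtt/os/, class in the lowercase 'os' namespace:
  #include <rtt/os/TimeService.hpp>
  RTT::os::TimeService::ticks t2 = RTT::os::TimeService::Instance()->getTicks();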

Scripts

The script attached to the bottom of this page converts RTT 1.x code according to the renaming table below. It does so in a quasi-intelligent way and catches most cases correctly. Some changes require additional manual intervention because the script can not guess the missing content. You will also need to download the rtt2-converter program from here.

Namespace conversions and simple renames

Many other files moved into sub-namespaces. For all these renames and more, a script is attached to this wiki page. You need to download headers.txt, classdump.txt and to-rtt-2.0.pl.txt and rename the to-rtt-2.0.pl.txt script to to-rtt-2.0.pl:
mv to-rtt-2.0.pl.txt to-rtt-2.0.pl
chmod a+x to-rtt-2.0.pl
./to-rtt-2.0.pl $(find . -name "*.cpp" -o -name "*.hpp")
The script will read headers.txt and class-dump.txt to do its renaming work for every changed RTT header and class on the list of files you give as an argument. Feel free to report problems on the orocos-dev mailing list or RTT-dev forum.

Minor manual fixes may be expected after running this script. Be sure to have your sources version controlled, such that you can first test what the script does before permanently changing files.

Flow port and services conversions

A second program, rtt2-converter, is required to convert flow ports and the method/command -> operation constructs that go further than a simple rename. It requires Boost 1.41.0 or newer, including the Boost regex library (it will link with the regex library and include the Boost regex headers). You can build rtt2-converter from Eclipse or with the Makefile using 'make all'. See the rtt2-converter README.txt for instructions and download the converter sources from the toolchain/upgrading link.

tar xjf rtt2-converter-1.1.tar.bz2
cd rtt2-converter-1.1
make
./rtt2-converter Component.hpp Component.cpp

The program preferably takes both the header and the implementation of your component, but will also accept a single file. It needs both the class definition and the implementation to make its best guesses on how to convert. If all your code is in a single .hpp or .cpp file, you only need to specify that file. If nothing is to be done, the file will remain the same, so you may 'accidentally' feed it non-Orocos files, or the same file twice.

To run this on a large codebase, you can do something similar to:

# Calls : ./rtt2-converter Component.hpp Component.cpp for each file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.cpp"); do ./rtt2-converter $(dirname $i)/$(basename $i cpp)hpp $i; done
 
# Calls : ./rtt2-converter Component.cpp for each .cpp file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.cpp"); do ./rtt2-converter $i; done
 
# Calls : ./rtt2-converter Component.hpp for each .hpp file in orocos-app
for i in $(find /home/user/src/orocos-app -name "*.hpp"); do ./rtt2-converter $i; done
The first loop looks up all .cpp files in the orocos-app directory and calls rtt2-converter on each hpp/cpp pair; the dirname/basename construct replaces the .cpp extension with .hpp. If your repository mixes hpp+cpp, cpp-only and hpp-only files, run all three loops as shown above. The converter is robust against being called multiple times on the same file.

Core API

RTT 1.0 | RTT 2.0 | Comments
RTT::PeriodicActivity | RTT::extras::PeriodicActivity | Use of RTT::Activity is preferred.
RTT::Timer | RTT::os::Timer |
RTT::SlaveActivity, SequentialActivity, SimulationThread, IRQActivity, FileDescriptorActivity, EventDrivenActivity, SimulationActivity, ConfigurationInterface, Configurator, TimerThread | RTT::extras::... | EventDrivenActivity has been removed.
RTT::OS::SingleThread, RTT::OS::PeriodicThread | RTT::os::Thread | Can run periodically or non-periodically and switch at run-time.
RTT::TimeService | RTT::os::TimeService |
RTT::DataPort, RTT::BufferPort | RTT::InputPort, RTT::OutputPort | Buffered/unbuffered is decided at connection time. Only input/output is hardcoded.
RTT::types() | RTT::types::Types() | The function name collided with the namespace name.
RTT::Toolkit* | RTT::types::Typekit* | More logical name.
RTT::Command | RTT::Operation | Create an 'OwnThread' operation type.
RTT::Method | RTT::Operation | Create a 'ClientThread' operation type.
RTT::Event | RTT::internal::Signal | Events are replaced by OutputPort or Operation; the Signal class is a synchronous-only callback manager.
commands()->getCommand<T>() | provides()->getOperation() | Get a provided operation; no template argument required.
commands()->addCommand() | provides()->addOperation().doc("Description") | Add a provided operation; document using .doc("doc").doc("a1","a1 doc")...
methods()->getMethod<T>() | provides()->getOperation() | Get a provided operation; no template argument required.
methods()->addMethod() | provides()->addOperation().doc("Description") | Add a provided operation; document using .doc("doc").doc("a1","a1 doc")...
attributes()->getAttribute<T>() | provides()->getAttribute() | Get a provided attribute; no template argument required.
attributes()->addAttribute(&a) | provides()->addAttribute(a) | Add a provided attribute, passed by reference; can now also add a normal member variable.
properties()->getProperty<T>() | provides()->getProperty() | Get a provided property; no template argument required.
properties()->addProperty(&p) | provides()->addProperty(p).doc("Description") | Add a provided property, passed by reference; can now also add a normal member variable.
events()->getEvent<T>() | ports()->getPort() OR provides()->getOperation<T>() | Event<T> was replaced by OutputPort<T> or Operation<T>.
ports()->addPort(&port, "Description") | ports()->addPort( port ).doc("Description") | Takes the argument by reference and documents using .doc("text").
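
As a sketch of the interface rows above, a typical constructor body converts roughly as follows; 'param', 'maxValue' and 'outPort' are hypothetical Property, Attribute and OutputPort members of the component:

// RTT 1.x
properties()->addProperty( &param );
attributes()->addAttribute( &maxValue );
ports()->addPort( &outPort, "Computed output" );
 
// RTT 2.x: objects are passed by reference, documentation is added with .doc()
provides()->addProperty( param ).doc( "Control parameter" );
provides()->addAttribute( maxValue );
ports()->addPort( outPort ).doc( "Computed output" );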

Scripting

RTT 1.0 | RTT 2.0 | Comments
scripting() | getProvider<Scripting>("scripting") | Returns a RTT::Scripting object. Also add #include <rtt/scripting/Scripting.hpp>
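
A minimal sketch of the new lookup, written inside a TaskContext member function; the loadPrograms() call is only an illustration of the RTT::Scripting interface:

#include <rtt/scripting/Scripting.hpp>
 
boost::shared_ptr<RTT::Scripting> scripting = this->getProvider<RTT::Scripting>("scripting");
if ( scripting )
    scripting->loadPrograms( "myprogram.ops" );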

Marshalling

RTT 1.0 | RTT 2.0 | Comments
marshalling() | getProvider<Marshalling>("marshalling") | Returns a RTT::Marshalling object. Also add #include <rtt/marsh/Marshalling.hpp>
RTT::Marshaller | RTT::marsh::MarshallingInterface | Not needed by most users.
RTT::Demarshaller | RTT::marsh::DemarshallingInterface | Not needed by most users.
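
Analogous to the scripting service, a minimal sketch of the new lookup inside a TaskContext member function; the writeProperties() call is only an illustration of the RTT::Marshalling interface:

#include <rtt/marsh/Marshalling.hpp>
 
boost::shared_ptr<RTT::Marshalling> marsh = this->getProvider<RTT::Marshalling>("marshalling");
if ( marsh )
    marsh->writeProperties( "mytask.cpf" );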

CORBA Transport

RTT 1.0 | RTT 2.0 | Comments
RTT::Corba::* | RTT::corba::C* | Each proxy class or IDL interface starts with a 'C' to avoid confusion with the identically named RTT C++ classes.
RTT::Corba::ControlTaskServer | RTT::corba::TaskContextServer | Renamed for consistency.
RTT::Corba::ControlTaskProxy | RTT::corba::TaskContextProxy | Renamed for consistency.
RTT::Corba::Method, Command | RTT::corba::COperationRepository, CSendHandle | No need to create these helper objects; call the COperationRepository directly.
RTT::Corba::AttributeInterface, Expression, AssignableExpression | RTT::corba::CAttributeRepository | No need to create expression objects; query/use the CAttributeRepository directly.
Attachments:
  • class-dump.txt (7.89 KB)
  • headers.txt (10.17 KB)
  • to-rtt-2.0.pl.txt (4.78 KB)

Replacing Commands

RTT 2.0 has dropped support for the RTT::Command class. It has been replaced by the more powerful Methods vs Operations construct.

The rtt2-converter tool will automatically convert your Commands to Method/Operation pairs. Here's what happens:

// RTT 1.x code:
class ATask: public TaskContext
{
  bool prepareForUse();
  bool prepareForUseCompleted() const;
public:
  ATask(): TaskContext("ATask")
  {
    this->commands()->addCommand(RTT::command("prepareForUse",&ATask::prepareForUse,&ATask::prepareForUseCompleted,this),
                                             "prepares the robot for use");
  }
};

After:

// After rtt2-converter: RTT 2.x code:
class ATask: public TaskContext
{
  bool prepareForUse();
  bool prepareForUseCompleted() const;
public:
  ATask(): TaskContext("ATask")
  {
    this->addOperation("prepareForUse", &ATask::prepareForUse, this, RTT::OwnThread).doc("prepares the robot for use");
    this->addOperation("prepareForUseDone", &ATask::prepareForUseCompleted, this, RTT::ClientThread).doc("Returns true when prepareForUse is done.");
  }
};

What has happened is that the RTT 1.0 Command is split into two RTT 2.0 Operations: "prepareForUse" and "prepareForUseDone". The first will be executed in the component's thread ('OwnThread'), analogous to the RTT::Command semantics. The second function, prepareForUseDone, is executed in the caller's thread ('ClientThread'), analogous to the behaviour of the RTT::Command's completion condition.

The old behavior can be simulated at the caller's side by these constructs:

Calling a 2.0 Operation as a 1.0 Command in C++

Polling for commands in RTT 1.x was very rudimentary. One way of doing it would have looked like this:
  Command<bool(void)> prepare = atask->commands()->getCommand<bool(void)>("prepareForUse");
  prepare(); // sends the Command object.
  while (prepare.done() == false)
    sleep(1);
You look up the command with the signature bool(void) and invoke it. Next, you poll until done() returns true.

In RTT 2.0, the caller's code looks up the prepareForUse Operation and then 'sends' the request to the ATask Component. Optionally, the completion condition is looked up manually and polled for as well:

  Method<bool(void)> prepare = atask->getOperation("prepareForUse");
  Method<bool(void)> prepareDone = atask->getOperation("prepareForUseDone");
  SendHandle h = prepare.send();
 
  while ( !h.collectIfDone() && prepareDone() == false )
     sleep(1);

The collectIfDone() and prepareDone() checks are now made explicit, while they were called implicitly by RTT 1.x's prepare.done() function. Writing your code like this will cause the exact same behaviour in RTT 2.0 as in RTT 1.x.

In case you don't care for the 'done' condition, the above code may just be simplified to:

  Method<bool(void)> prepare = atask->getOperation("prepareForUse");
  prepare.send();

In that case, you may ignore the SendHandle; the underlying object will clean itself up at the appropriate time.

Calling a 2.0 Operation as a 1.0 Command in Scripting

Scripting was very convenient for using commands. A typical RTT 1.x script would have looked like:

program foo {
  do atask.prepareForUse();
  // ... rest of the code
}
The script would wait at the prepareForUse() line (using polling) until the command's completion.

To have the same behaviour in RTT 2.x using Operations, you need to make the 'polling' explicit. Furthermore, you need to 'send' the method to indicate that you do not wish to block:

program foo {
  var SendHandle h;
  set h = atask.prepareForUse.send();
  while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
     yield;
  // ... rest of the code
}
Just like in the C++ code, you need to create a SendHandle variable and store the result of the send in it. You can then use h to check whether the operation has finished, and poll prepareForUseDone() as well. It may be convenient to wrap this in a function in RTT 2.x:

function prepare_command() {
  var SendHandle h;
  set h = atask.prepareForUse.send();
  while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
     yield;
}
program foo {
   call prepare_command(); // note: using 'call'
  // ... rest of the code
}
In order to avoid blocking in the 'foo' program, you need to prefix prepare_command with 'call'. This 'inlines' the function such that 'foo' does not block the ExecutionEngine until prepare_command returns. For comparison, if you omitted the 'call' prefix, the program would need to loop on prepare_command()'s SendHandle in turn:

export function prepare_command()  // note: we must export the function
{
  var SendHandle h;
  set h = atask.prepareForUse.send();
  while (h.collectIfDone() == false && atask.prepareForUseDone() == false)
     yield;
}
program foo {
  var SendHandle h;
  set h = prepare_command(); // note: not using 'call'
  while (h.collectIfDone() == false)
     yield;
  // ... rest of the code
}
In RTT 2.x, a program script will only yield when the word 'yield' (equivalent to RTT 1.x 'do nothing') is seen. Both function and program must yield in order to not spin in an endless loop in the ExecutionEngine.

Note

Code without 'yield' can spin forever. This blocking 'trap' in 2.0 can be very inconvenient. It's likely that an alternative system will be provided to allow 'transparent' polling for a given function. For example, this syntax could be introduced:
program foo {
  prepare_command.call(); // (1) calls and blocks for result.
  prepare_command.send(); // (2) send() and forget.
  prepare_command.poll(); // (3) send() and poll with collectIfDone().
}
Syntaxes (1) and (2) are already present. Syntax (3) would make the script send the prepare_command operation and poll it (using collectIfDone() behind the scenes). This would always work and would save users from writing the bulky SendHandle code.

Replacing Events

RTT 2.0 no longer supports the RTT::Event class. This page explains how to adapt your code for this.

Rationale

RTT::Event was broken in some subtle ways: in particular, the unreliable asynchronous delivery and the danger of untrusted clients made it fragile. It was therefore replaced by an OutputPort or an Operation, depending on the use case.
  • Replace an Event by an OutputPort if you want to broadcast to many components. Any event sender->receiver connection can set the buffering policy, or encapsulate a transport to another (remote) process.
  • Replace an Event by an Operation if you want to react to an interface call *inside* your component, for example, in a state machine script or in C++.

Replacing by an OutputPort

Output ports differ from RTT::Event in that they can take only one value as an argument. If your 1.x Event had multiple arguments, they need to be combined into a new struct that you create yourself. Both sender and receiver must know and understand this struct.

For the simple case, when your Event only had one argument:

// RTT 1.x
class MyTask: public TaskContext
{
   RTT::Event<void(int)> samples_processed;
 
   MyTask() : TaskContext("task"), samples_processed("samples_processed") 
   {
      events()->addEvent( &samples_processed );
   }
   // ... your other code here...
};  
Becomes:
// RTT 2.x
class MyTask: public TaskContext
{
   RTT::OutputPort<int> samples_processed;
 
   MyTask() : TaskContext("task"), samples_processed("samples_processed") 
   {
      ports()->addPort( samples_processed ); // note: RTT 2.x dropped the '&'
   }
   // ... your other code here...
};  

Note: the rtt2-converter tool does not do this replacement, see the Operation section below.

Components wishing to receive the number of processed samples need to define an InputPort<int> and connect their input port to the output port above, for example:
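
A minimal sketch of such a receiving component, following the conventions of the examples above; the class and member names are hypothetical:

// RTT 2.x
class MyObserver: public TaskContext
{
   RTT::InputPort<int> samples_in;
public:
   MyObserver() : TaskContext("observer"), samples_in("samples_processed")
   {
      ports()->addPort( samples_in ).doc("Receives the number of processed samples.");
   }
 
   void updateHook()
   {
      int total = 0;
      if ( samples_in.read( total ) == RTT::NewData ) {
         // react to the newly received value here
      }
   }
};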

Reacting to event data in scripts

When using the RTT scripting service's state machine, you can react to data arriving on the port. You could for example load this script in the above component:
StateMachine SM {
 
   var int total = 0;
 
   initial state INIT {
     entry {
     }
     // Reads samples_processed and stores the result in 'total'.
     // Only if the port returns 'NewData' will this transition be evaluated.
     transition samples_processed( total ) if (total > 0 ) select PROCESSING;
   }
 
   state PROCESSING {
     entry { /* processing code, use 'total' */
     }
   }
 
   final state FINI {}
}

The transition from state INIT to state PROCESSING will only be taken if samples_processed.read( total ) == NewData and if total > 0. Note: when your TaskContext executes periodically, the read( total ) statement is retried on each execution and total is overwritten in case of both OldData and NewData. Only if the connection of samples_processed is completely empty (never written to, or reset) will total not be overwritten.

Replacing by an Operation

Operations can take the same signature as RTT::Event. The difference is that only the component itself can attach callbacks to an Operation, by means of the signals() function.

For example:

// RTT 1.x
class MyTask: public TaskContext
{
   RTT::Event<void(int, double)> samples_processed;
 
   MyTask() : TaskContext("task"), samples_processed("samples_processed") 
   {
      events()->addEvent( &samples_processed );
   }
   // ... your other code here...
};  
Becomes:
// RTT 2.x
class MyTask: public TaskContext
{
   RTT::Operation<void(int,double)> samples_processed;
 
   MyTask() : TaskContext("task"), samples_processed("samples_processed") 
   {
      provides()->addOperation( samples_processed ); // note: RTT 2.x dropped the '&'
 
      // Attaching a callback handler to the operation object:
      Handle h = samples_processed.signals( &MyTask::react_foo, this );
   }
   // ... your other code here...
 
   void react_foo(int i, double d) {
       cout << i <<", " << d <<endl;
   }
};  

Note: the rtt2-converter tool only performs this replacement (to Operation) automatically, i.e. it assumes all your Event objects were only used in the local component. See the RTT 2.0 Renaming table for this tool.

Since an Operation object is always local to the component, no other components can attach callbacks. If your Operation returns a value, the callback function needs to return one too, but that return value is ignored and not received by the caller.

The callback will be executed in the same thread as the operation's function (i.e. OwnThread or ClientThread).

Reacting to operations in scripts

When using the RTT scripting service's state machine, you can react to calls on the Operation. You could for example load this script in the above component:
StateMachine SM {
 
   var int total = 0;
   var double avg = 0.0;
 
   initial state INIT {
     entry {
     }
     // Reacts to an invocation of the samples_processed operation
     // and stores its arguments in 'total' and 'avg'. If the Operation takes
     // multiple arguments, multiple arguments must also be given here.
     transition samples_processed( total, avg ) if ( total > 0 ) select PROCESSING;
   }
 
   state PROCESSING {
     entry { /* processing code, use 'total' */
     }
   }
 
   final state FINI {}
}

The transition from state INIT to state PROCESSING will only be taken if samples_processed was called by another component (using a Method object, see Methods vs Operations) and if the first argument in that call, stored in 'total', is > 0. Note: if samples_processed returns a value, your script cannot influence that return value, since it is determined by the function tied to the Operation, not by the signal handlers.

NOTE: RTT 2.0.0-beta1 does not yet support this script syntax.