RTT 2.0 Development update

On the 2.0 front, I'm now working full-time on the new Data Flow work
package. This week, I paid a visit to the DFKI, where Sylvain is
pushing Orocos into their robots. As one could expect, I returned with
more questions than answers, but it gave me insight into how to
proceed over the next month.

I merged Sylvain's patches locally, on top of all the changes we had
up till now. The way Sylvain implemented things solves a bunch of
problems in one stroke. For example, the lock-free data flow
algorithms no longer need to guess how many threads are going to
access them: they know it's exactly two. There's also a mechanism to
avoid unnecessary copies in the data flow, for example when using
images. It's still a bit experimental and needs documentation (heck,
nothing is documented in the manuals yet, but the API documentation is
quite fine). Sylvain and I will do some further work on the data flow
classes once the namespace/directory shuffle is over.
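
To illustrate why knowing the thread count matters, here is a sketch
(not the actual RTT 2.0 code, and `LatestSample` is a hypothetical
name) of a lock-free "latest value" buffer that only works because
there is exactly one writer thread and one reader thread: three slots
and a single atomic suffice, with no locking and no allocation.

```cpp
#include <array>
#include <atomic>

// Sketch only: with one writer and one reader, three buffer slots and one
// atomic index are enough to always hand the reader the newest complete
// sample. A third thread would break the slot-ownership invariant.
template <typename T>
class LatestSample {
  static const int kFresh = 4;   // flag bit packed next to the slot index
  static const int kMask  = 3;
  std::array<T, 3> slots_{};
  std::atomic<int> latest_{0};   // slot holding the newest sample (+ fresh bit)
  int write_ = 1;                // writer-private slot, never read concurrently
  int read_  = 2;                // reader-private slot, never written concurrently
public:
  void Set(const T& sample) {    // called by the single writer thread
    slots_[write_] = sample;                          // fill a private slot
    write_ = latest_.exchange(write_ | kFresh,        // publish it atomically
                              std::memory_order_acq_rel) & kMask;
  }
  T Get() {                      // called by the single reader thread
    if (latest_.load(std::memory_order_acquire) & kFresh)
      read_ = latest_.exchange(read_, std::memory_order_acq_rel) & kMask;
    return slots_[read_];
  }
};
```

Each `exchange` swaps two members of the index set {0,1,2}, so writer
and reader never touch the same slot; the fresh bit keeps a repeated
`Get()` from wandering onto a stale slot.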

Besides the 'basic design', there are two major open issues: toolkit
generation from user types and hard real-time data flow.

1. Toolkit generation
The initial idea was to let users describe their data types in IDL,
and let a tool generate the C++ classes and the toolkit library. In a
perfect world, we would be able to re-use existing IDL parsers/code
generators and just generate code from an abstract class
representation. But we came to the conclusion that this is too
limiting an approach: IDL cannot represent every C++ data structure,
and we might end up tied to the CORBA classes instead of the std
ones. The golden rule was that 'the transport shouldn't be a
dependency for the C++ types.' ROS and TAO violate this principle,
because they interweave the user's structs with their own functions,
and even base classes. The type system we have is just better: it
doesn't touch the user's type a single bit, and we wanted to keep it
that way.
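
To make the 'golden rule' concrete, here is a toy sketch (hypothetical
names, not the RTT API) of non-intrusive type support: the user's
struct stays a plain aggregate with no base class and no transport
headers, and everything a toolkit needs to know about it lives in free
functions registered beside it.

```cpp
#include <cstring>
#include <map>
#include <string>
#include <vector>

// The user's type: no base class, no generated methods, no transport headers.
struct JointState { double pos; double vel; };

// All the toolkit knows about a type lives *outside* the type itself.
struct TypeEntry {
  std::size_t size;
  std::vector<char> (*marshal)(const void*);
};

static std::map<std::string, TypeEntry>& type_registry() {
  static std::map<std::string, TypeEntry> r;
  return r;
}

template <typename T>
std::vector<char> marshal_pod(const void* p) {  // good enough for flat structs
  std::vector<char> buf(sizeof(T));
  std::memcpy(buf.data(), p, sizeof(T));
  return buf;
}

// What a generated toolkit library would do at load time, once per type.
template <typename T>
void register_type(const std::string& name) {
  type_registry()[name] = TypeEntry{sizeof(T), &marshal_pod<T>};
}
```

A generated toolkit is then just a pile of such registrations; swapping
the transport means swapping the registered functions, never the
user's struct.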

Fortunately, there is a solution, and yes, once again it didn't come
from the 'sweet' emacs/eclipse guy but from the 'happy' vim guy. As
posted earlier on this list, Sylvain created a tool for generating
toolkits from C++ classes. What I didn't fully realize is that behind
this tool is a very powerful library, called 'typelib', hosted on
github: http://github.com/doudou/typelib/tree/master . It's built to
do type analysis for any 'backend'. It currently understands C++ (to a
degree), but it could parse other formats/languages as well. This
information is then used by 'orogen' to generate the toolkits and the
CORBA bindings. Using this tool, we can automate and do what ROS and
CORBA failed to do: use the user's native type and let him decide on
the transport later on, without any extra cost. This allows data types
from any library you might use to be transported. There are some
tricks involved, but I'm sure we can smooth things out (SWIG might
help us). And even if the user decides on CORBA or ROS data types,
typelib comes to the rescue and could (given enough patches) integrate
these as the native types to use in the RTT type toolkits.

To summarize, these are the three possible workflows for RTT 2.0 users:

A. Using native C++ classes (RTT way)
MyClass.h -> orogen -> MyClass-toolkit.so + MyClass-toolkit-corba.so
The user uses MyClass from MyClass.h as the data type to use in his
Orocos components.

B. Using a ROS type
MyMessage.msg -> orogen -> MyMessage-toolkit.so +
MyMessage-toolkit-ros.so + MyMessage.h (generated by ros tools)
The user uses MyMessage from MyMessage.h as the data type to use in
his Orocos components.

C. Using an IDL type
MyStruct.idl -> orogen -> MyStruct-toolkit.so +
MyStruct-toolkit-corba.so + MyStructC.h (generated by idl tools)
The user uses MyStruct from MyStructC.h as the data type to use in his
Orocos components.

I didn't dare to propose fully transparent interoperability C++ <->
ROS <-> CORBA, but it might be a logical extension. For clarity:
orogen only supports A today.

2. Real-time Data Flow using Xenomai
I have been probing the Xenomai list to find out which primitive
could be used in combination with a real-time select(). The only
candidate was the Xenomai POSIX message queue, provided that all
threads that access it are Xenomai threads (which is the case in RTT).
Nevertheless, it is a bit of a frightening idea to rewrite the whole
ORB thing in Orocos: it starts with data flow, and before you know it,
we've overshadowed CORBA 4. On the other hand, we currently see no way
to influence TAO's memory allocation behaviour and ask it nicely to
use POSIX message queues (that would involve writing a new TAO
transport, similar to COIOP; ACE supports POSIX message queues).

I'm still keeping both options open, but I'll certainly implement a
proof-of-concept using pure Xenomai. We need to have something to look
at in order to be able to evaluate it.
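
The core of such a proof-of-concept can be shown on plain Linux, where
(as in Xenomai's POSIX skin, but without the real-time guarantees) a
message queue descriptor happens to be select()-able. A sketch, with a
hypothetical queue name, of sending one sample and waiting for it with
select():

```cpp
#include <cstring>
#include <fcntl.h>
#include <mqueue.h>
#include <sys/select.h>
#include <sys/stat.h>

// Sketch: a POSIX message queue watched with select(). On Linux and in
// Xenomai's POSIX skin, mqd_t behaves as a file descriptor; plain POSIX
// does not guarantee this, which is exactly the portability catch.
int mq_select_demo() {
  struct mq_attr attr{};
  attr.mq_maxmsg  = 4;
  attr.mq_msgsize = 64;
  mq_unlink("/rtt_mq_demo");                     // hypothetical queue name
  mqd_t q = mq_open("/rtt_mq_demo", O_CREAT | O_RDWR | O_NONBLOCK,
                    0600, &attr);
  if (q == (mqd_t)-1) return -1;                 // mqueue not available

  const char* msg = "sample";
  mq_send(q, msg, std::strlen(msg) + 1, 0);      // writer side

  fd_set rd;                                     // reader side: wait for data
  FD_ZERO(&rd);
  FD_SET((int)q, &rd);
  struct timeval tv{1, 0};                       // 1 s timeout
  int ready = select((int)q + 1, &rd, nullptr, nullptr, &tv);

  char buf[64];                                  // must be >= mq_msgsize
  ssize_t n = ready > 0 ? mq_receive(q, buf, sizeof buf, nullptr) : -1;
  mq_close(q);
  mq_unlink("/rtt_mq_demo");
  return (n > 0 && std::strcmp(buf, "sample") == 0) ? ready : -1;
}
```

Under Xenomai the same calls run in primary mode when all participants
are Xenomai threads, which is the property RTT would rely on.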

Peter

RTT 2.0 Development update

On Fri, 17 Jul 2009, Peter Soetens wrote:

[...]
> 1. Toolkit generation
> The initial idea was to allow users to describe their data types in
> idl, and let a tool generate the C++ classes and the toolkit library.
> In a perfect world, we would be able to re-use current idl
> parsers/code generators and just generate code from an abstract class
> representation. But we came to the conclusion that this is a too
> limiting approach. IDL can not represent any C++ data structure, and
> we might be tied to using the CORBA classes instead of the std
> stuff.The golden rule was that 'the transport shouldn't be a
> dependency for the C++ types.' ROS and TAO violate this principle,
> because they interweave the user's structs with their own functions,
> and even base classes. The Type system we have is just better. It
> doesn't touch the user type a single bit. We wanted to keep it that
> way.
>
> Fortunately there is a solution, and yes, once again it didn't come
> from the 'sweet' emacs/eclipse guy but from the 'happy' vim guy. As
> posted earlier on this list, Sylvain created a tool for generating
> toolkits from C++ classes. What I didn't fully realize is that behind
> this tool is a very powerful library, called 'typelib', hosted on
> github http://github.com/doudou/typelib/tree/master . It's created to
> do type analysis on any 'backend'. It currently understands C++ (to a
> degree) but it could parse other formats/languages as well. This
> information is then used by 'orogen' to generate the toolkits and the
> CORBA bindings. Using this tool, we can automate and do what ROS and
> CORBA failed to do: use the user's native type and let him decide on
> transport lateron, without any extra cost. This allows data types from
> any library you might use to be transported. There are some tricks
> involved, but I'm sure we can smooth things out (swig might help us).
> But even if the user decided on CORBA or ROS data types, typelib comes
> to the rescue and could (given enough patches) integrate these as the
> native types to use in the RTT type toolkits.
>
> To summarize, these would be the 3 possible workflows for RTT 2.0 users
>
> A.Using native C++ classes (RTT way)
> MyClass.h -> orogen -> MyClass-toolkit.so + MyClass-toolkit-corba.so
> The user uses MyClass from MyClass.h as the data type to use in his
> Orocos components.
>
> B. Using a ROS type
> MyMessage.msg -> orogen -> MyMessage-toolkit.so +
> MyMessage-toolkit-ros.so + MyMessage.h (generated by ros tools)
> The user uses MyMessage from MyMessage.h as the data type to use in
> his Orocos components.
>
> C. Using an IDL type
> MyStruct.idl -> orogen -> MyStruct-toolkit.so +
> MyStruct-toolkit-corba.so + MyStructC.h (generated by idl tools)
> The user uses MyStruct from MyStructC.h as the data type to use in his
> Orocos components.
>
> I didn't dare to propose full transparent interoperability C++ <-> ROS
> <-> CORBA, but it might be a logical extension. For clarity, orogen
> only supports A. today.

Thanks for this nice feature addition! I think it fits well in the
"toolchain" support idea that some of us are pursuing :-) What about
automatic deduction of NetCDF messages? The latter have the nice
property of being self-descriptive; does the typelib approach allow
adding such 'semantic information' too?

> 2. Real-time Data Flow using Xenomai
> I have been probing the Xenomai list to find out which primitive could
> be used in combination with real-time select(). The only candidate was
> the Xenomai Posix message queue, provided that all threads that access
> it are Xenomai threads (the case in RTT). Nevertheless, it is a bit a
> freightning idea to rewrite the whole ORB thing in Orocos. It starts
> with data flow and before you know it, we overshadowed CORBA 4. On the
> other hand, we see currently no way on how to influence TAO's memory
> allocation behavior and ask it nicely to use Posix message queues
> (that would involve writing a new TAO transport, similar to COIOP, ACE
> supports Posix Message queues).
>
> I'm still keeping both options open, but I'll certainly implement a
> proof-of-concept using pure Xenomai. We need to have something to look
> at in order to be able to evaluate it.

What about attending the Xenomai workshop at the next Real-Time Linux
Workshop in Dresden, coming September...?

Herman

>
> Peter

RTT 2.0 Development update

On Mon, Aug 10, 2009 at 13:56, Herman
Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
> On Fri, 17 Jul 2009, Peter Soetens wrote:
>>
>> To summarize, these would be the 3 possible workflows for RTT 2.0 users
>>
>> A.Using native C++ classes (RTT way)
>> MyClass.h -> orogen -> MyClass-toolkit.so + MyClass-toolkit-corba.so
>> The user uses MyClass from MyClass.h as the data type to use in his
>> Orocos components.
>>
>> B. Using a ROS type
>> MyMessage.msg -> orogen -> MyMessage-toolkit.so +
>> MyMessage-toolkit-ros.so + MyMessage.h (generated by ros tools)
>> The user uses MyMessage from MyMessage.h as the data type to use in
>> his Orocos components.
>>
>> C. Using an IDL type
>> MyStruct.idl -> orogen -> MyStruct-toolkit.so +
>> MyStruct-toolkit-corba.so + MyStructC.h (generated by idl tools)
>> The user uses MyStruct from MyStructC.h as the data type to use in his
>> Orocos components.
>>
>> I didn't dare to propose full transparent interoperability C++ <-> ROS
>> <-> CORBA, but it might be a logical extension. For clarity, orogen
>> only supports A. today.
>
> Thanks for this nice feature addition! I think it fits well in the
> "toolchain" support idea that some of us are pursuing :-) What about
> automatic deduction of NetCDF messages? The latter have the nice property
> of being self-descriptive; does the typelib approach allows to add such
> 'semantic information' too?

I respect NetCDF very much and believe that people should indeed be
encouraged to use the NetCDF reporting Steven put in place. It's
however mainly an OCL/Reporting matter, so the RTT can only offer
support to enable such automatic encoding, not do the encoding to
NetCDF itself. I'll take a closer look at this issue once the dataflow
refactoring is done, but the self-descriptive properties of the RTT
property system may be all that we need to get what you're aiming at.
It would certainly make a nice picture :-)

>
>> 2. Real-time Data Flow using Xenomai
...
>> I'm still keeping both options open, but I'll certainly implement a
>> proof-of-concept using pure Xenomai. We need to have something to look
>> at in order to be able to evaluate it.
>
> What about attending the Xenomai workshop at the next Real-Time Linux
> Workshop in Dresden, coming September...?

I'm trying to, but I'm still figuring out if I will be able to make it.

Peter

RTT 2.0 Development update

On Mon, 10 Aug 2009, Peter Soetens wrote:

> On Mon, Aug 10, 2009 at 13:56, Herman
> Bruyninckx<Herman [dot] Bruyninckx [..] ...> wrote:
>> On Fri, 17 Jul 2009, Peter Soetens wrote:
>>>
>>> To summarize, these would be the 3 possible workflows for RTT 2.0 users
>>>
>>> A.Using native C++ classes (RTT way)
>>> MyClass.h -> orogen -> MyClass-toolkit.so + MyClass-toolkit-corba.so
>>> The user uses MyClass from MyClass.h as the data type to use in his
>>> Orocos components.
>>>
>>> B. Using a ROS type
>>> MyMessage.msg -> orogen -> MyMessage-toolkit.so +
>>> MyMessage-toolkit-ros.so + MyMessage.h (generated by ros tools)
>>> The user uses MyMessage from MyMessage.h as the data type to use in
>>> his Orocos components.
>>>
>>> C. Using an IDL type
>>> MyStruct.idl -> orogen -> MyStruct-toolkit.so +
>>> MyStruct-toolkit-corba.so + MyStructC.h (generated by idl tools)
>>> The user uses MyStruct from MyStructC.h as the data type to use in his
>>> Orocos components.
>>>
>>> I didn't dare to propose full transparent interoperability C++ <-> ROS
>>> <-> CORBA, but it might be a logical extension. For clarity, orogen
>>> only supports A. today.
>>
>> Thanks for this nice feature addition! I think it fits well in the
>> "toolchain" support idea that some of us are pursuing :-) What about
>> automatic deduction of NetCDF messages? The latter have the nice property
>> of being self-descriptive; does the typelib approach allows to add such
>> 'semantic information' too?
>
> I respect NetCDF very much and believe that people should only be
> recommended to use the NetCDF reporting Steven put in place. It's
> however mainly an OCL/Reporting thing, so the RTT can only offer
> support to allow such automatic encoding, but not the encoding to
> netcdf itself.
Of course, I agree with this. I was asking more in the direction of
automatically generating computer-language-specific bindings from a
_semantic_ description of the data structure, instead of from a
(semantics-free) C++ version of the data structure...

> I'll take a closer look to this issue once the dataflow
> refactoring is done, but the self-descriptive properties of the RTT
> property system may all that we need to get what you're aiming at. It
> would certainly make a nice picture :-)

:-)

>>> 2. Real-time Data Flow using Xenomai
> ...
>>> I'm still keeping both options open, but I'll certainly implement a
>>> proof-of-concept using pure Xenomai. We need to have something to look
>>> at in order to be able to evaluate it.
>>
>> What about attending the Xenomai workshop at the next Real-Time Linux
>> Workshop in Dresden, coming September...?
>
> I'm trying to, but I'm still figuring out if I will be able to make it.

Ok.

Herman

RTT 2.0 Development update

On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
> Of course, I agree with this. I was more asking in the direction of
> automatically transforming computer language specific bindings from a
> _semantic_ description of the data structure, instead of from a
> (semantic-less) C++ version of the data structure...
I don't know NetCDF, so could you explain how exactly NetCDF adds
semantics to the data structure?

Sylvain

RTT 2.0 Development update

On Fri, 14 Aug 2009, Sylvain Joyeux wrote:

> On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
>> Of course, I agree with this. I was more asking in the direction of
>> automatically transforming computer language specific bindings from a
>> _semantic_ description of the data structure, instead of from a
>> (semantic-less) C++ version of the data structure...
> I don't know NetCDF, so could you explain how is NetCDF adding semantics to
> the data structure exactly ?
>
NetCDF is a "self-describing" message/file format: it starts with a
_header_ that gives names to the data structures stored in the rest of
the message/file.
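
As a toy illustration of the self-describing idea (nothing like
NetCDF's actual on-disk format, and these names are made up): the
stream leads with the field names, so a reader that has never seen the
producer's structs can still decode every record.

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Toy self-describing stream: a text header naming the fields, followed
// by raw binary records. Real NetCDF headers also carry types, dimensions
// and attributes; this only shows the principle.
struct Header { std::vector<std::string> fields; };

std::string encode(const Header& h,
                   const std::vector<std::vector<double>>& records) {
  std::ostringstream out;
  out << h.fields.size() << '\n';                 // header: field count...
  for (const auto& f : h.fields) out << f << '\n';//         ...and names
  for (const auto& rec : records)                 // body: raw doubles
    for (double v : rec)
      out.write(reinterpret_cast<const char*>(&v), sizeof v);
  return out.str();
}

// The reader recovers names and values from the stream alone.
std::pair<Header, std::vector<std::vector<double>>> decode(const std::string& s) {
  std::istringstream in(s);
  std::size_t n; in >> n; in.get();               // field count + newline
  Header h;
  for (std::size_t i = 0; i < n; ++i) {
    std::string f; std::getline(in, f); h.fields.push_back(f);
  }
  std::vector<std::vector<double>> recs;
  while (true) {
    std::vector<double> rec(n);
    in.read(reinterpret_cast<char*>(rec.data()), n * sizeof(double));
    if (!in) break;
    recs.push_back(rec);
  }
  return {h, recs};
}
```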

Herman

RTT 2.0 Development update

On Aug 14, 2009, at 10:41 , Herman Bruyninckx wrote:

> On Fri, 14 Aug 2009, Sylvain Joyeux wrote:
>
>> On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
>>> Of course, I agree with this. I was more asking in the direction of
>>> automatically transforming computer language specific bindings
>>> from a
>>> _semantic_ description of the data structure, instead of from a
>>> (semantic-less) C++ version of the data structure...
>> I don't know NetCDF, so could you explain how is NetCDF adding
>> semantics to
>> the data structure exactly ?
>>
> NetCDF is a "self descripting" message/file format, that means that it
> starts with a _header_ that gives names to the data structures
> stored in the
> rest of the message/file.

Sounds interesting. What kind of tools are you using on the receiving
end for analysis and display? I presume that this would be an
immediate benefit of using such a "standard" data language?

Stephen

RTT 2.0 Development update

On Fri, 14 Aug 2009, S Roderick wrote:

> On Aug 14, 2009, at 10:41 , Herman Bruyninckx wrote:
>
>> On Fri, 14 Aug 2009, Sylvain Joyeux wrote:
>>
>>> On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
>>>> Of course, I agree with this. I was more asking in the direction of
>>>> automatically transforming computer language specific bindings
>>>> from a
>>>> _semantic_ description of the data structure, instead of from a
>>>> (semantic-less) C++ version of the data structure...
>>> I don't know NetCDF, so could you explain how is NetCDF adding
>>> semantics to
>>> the data structure exactly ?
>>>
>> NetCDF is a "self descripting" message/file format, that means that it
>> starts with a _header_ that gives names to the data structures
>> stored in the
>> rest of the message/file.
>
> Sounds interesting. What kind of tools are you using on the receiving
> end for analysis and display? I presume that this would be an
> immediate benefit of using such a "standard" data language?
>

First, a confession: we don't use NetCDF a lot in-house yet,
unfortunately (for reasons of human inertia). But you can take a look
at <http://www.unidata.ucar.edu/software/netcdf/> for an overview of
supporting tools.
Also <http://en.wikipedia.org/wiki/Hierarchical_Data_Format> and
<http://www.hdfgroup.org/HDF5/> are worth a look if you want to see
what exists in the domain of efficient, platform-neutral storage and
messaging of complex data structures.

Herman

RTT 2.0 Development update

On Friday 14 August 2009 16:52:31 you wrote:
> On Aug 14, 2009, at 10:41 , Herman Bruyninckx wrote:
> > On Fri, 14 Aug 2009, Sylvain Joyeux wrote:
> >> On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
> >>> Of course, I agree with this. I was more asking in the direction of
> >>> automatically transforming computer language specific bindings
> >>> from a
> >>> _semantic_ description of the data structure, instead of from a
> >>> (semantic-less) C++ version of the data structure...
> >>
> >> I don't know NetCDF, so could you explain how is NetCDF adding
> >> semantics to
> >> the data structure exactly ?
> >
> > NetCDF is a "self descripting" message/file format, that means that it
> > starts with a _header_ that gives names to the data structures
> > stored in the
> > rest of the message/file.
OK. Well... that makes it nice for logging (our own typelib-based
logger does the same), but not for data transmission (a lot of
overhead).

> Sounds interesting. What kind of tools are you using on the receiving
> end for analysis and display? I presume that this would be an
> immediate benefit of using such a "standard" data language?
Yes. First, your logfiles can be read at all times (no problem if the
data structures changed...). Moreover, it makes it very suitable for
scripting the data analysis. Typelib has a Ruby binding that lets you
transparently change the in-memory data structures it describes.

I'll soon release an Orocos-Ruby bridge based on these concepts.

Sylvain

RTT 2.0 Development update

On Fri, 14 Aug 2009, Sylvain Joyeux wrote:

> On Friday 14 August 2009 16:52:31 you wrote:
>> On Aug 14, 2009, at 10:41 , Herman Bruyninckx wrote:
>>> On Fri, 14 Aug 2009, Sylvain Joyeux wrote:
>>>> On Monday 10 August 2009 16:15:41 Herman Bruyninckx wrote:
>>>>> Of course, I agree with this. I was more asking in the direction of
>>>>> automatically transforming computer language specific bindings
>>>>> from a
>>>>> _semantic_ description of the data structure, instead of from a
>>>>> (semantic-less) C++ version of the data structure...
>>>>
>>>> I don't know NetCDF, so could you explain how is NetCDF adding
>>>> semantics to
>>>> the data structure exactly ?
>>>
>>> NetCDF is a "self descripting" message/file format, that means that it
>>> starts with a _header_ that gives names to the data structures
>>> stored in the
>>> rest of the message/file.
> OK. Well .. that makes it nice for logging (our own logger based on typelib do
> the same), but not nice for data transmission (a lot of overhead).

As ever, the trade-off between robustness and efficiency plays out
differently in different use cases, so please don't make such overly
generic statements :-) One example of a use case where such headers
are definitely not overhead is where each message can have different
content. And I challenge you to beat the efficiency of the binary
encodings and supporting software that the NetCDF community has
developed over the years. _Plus_, it's completely platform-neutral.
This is, once more, a domain where the robotics community is (poorly)
reinventing wheels...

Herman

>> Sounds interesting. What kind of tools are you using on the receiving
>> end for analysis and display? I presume that this would be an
>> immediate benefit of using such a "standard" data language?
> Yes. First, your logfiles can be read at all times (no problem that the data
> structures changed ...). Moreover, it makes it very suitable for scripting the
> data analyzis. Typelib has a Ruby binding that allow to transparently change
> in-memory data structures that it describes.
>
> I'll release soon a Orocos-Ruby bridge that is based on this kind of concepts.
>
> Sylvain

--
K.U.Leuven, Mechanical Eng., Mechatronics & Robotics Research Group
<http://people.mech.kuleuven.be/~bruyninc> Tel: +32 16 328056
EURON Coordinator (European Robotics Research Network) <http://www.euron.org>
Open Realtime Control Services <http://www.orocos.org>
Associate Editor JOSER <http://www.joser.org>, IJRR <http://www.ijrr.org>

RTT 2.0 Development update

> > OK. Well .. that makes it nice for logging (our own logger based on
> > typelib do the same), but not nice for data transmission (a lot of
> > overhead).
>
> As ever, the trade-off between robustness and efficiency makes sense, in
> different ways, in different use cases. So, please don't make this kind of
> too generic statements :-) One example of a use case where such headers are
> definitely not overhead is where each message can have a different content.
> And I challenge you to beat the efficiency of the binary encodings and
> supporting software that the NetCDF community has developed over the years.
> _Plus_, its completely platform-neutral. This is, once more, a domain where
> the robotics community is (poorly) reinventing wheels...
Well... just for the record: NetCDF is definitely overkill for *my*
logging case. Why? Typelib logs (most of) the data types by doing a
pure write(), i.e. no marshalling and no copies whenever possible
(i.e. for all types that don't contain a variable-length vector),
while still keeping the self-contained requirement. Moreover, the
logging is completely streamed (no seeking).
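
The "pure write()" idea boils down to this sketch (illustrative names,
buffered stdio standing in for the raw write() call): a fixed-size,
trivially copyable sample goes to the log as one dump of its bytes,
and a separately stored type description (typelib's job) makes those
bytes decodable later.

```cpp
#include <cstdio>
#include <type_traits>

// Zero-marshalling logging: no per-field encoding, no intermediate copy.
// This only works for flat, fixed-size types, which is exactly the
// restriction mentioned above (no variable-length vectors).
struct Sample { double t; double value[3]; };
static_assert(std::is_trivially_copyable<Sample>::value,
              "raw dumping requires a trivially copyable type");

bool log_sample(std::FILE* f, const Sample& s) {
  return std::fwrite(&s, sizeof s, 1, f) == 1;   // one raw dump per sample
}

bool read_sample(std::FILE* f, Sample* s) {
  return std::fread(s, sizeof *s, 1, f) == 1;    // decodable given the layout
}
```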

Logging takes 10 to 20% of the CPU on our system. I don't see how
adding marshalling would improve that situation.

*Now*, what I would definitely like to do is use typelib's marshalled
data as a temporary format and convert it offline to NetCDF or HDF5.
The issues there (for me) are the lack of resources to do it and that
NetCDF/HDF5 is currently poorly supported in Ruby. Switching to Python
or something else is not doable, because that would require switching
the rest of the toolchain. I don't have those man-years available (and
I don't like Python, which is definitely a bad incentive).

On the issue of runtime data exchange, the most efficient way to do
it is to agree *once* on the layout of your types (i.e. negotiating
what each type name *is in practice*); if the two peers agree, they go
on by referring to types by name. I did that once with Genom and Ruby,
but since I'm not on the Genom front anymore, it died.
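
The negotiate-once handshake can be sketched like this (hypothetical
names, FNV-1a as an arbitrary choice of hash): each peer condenses its
layout description per type name into a hash; if the hashes match
during negotiation, later messages need only carry a type name or
small id plus the raw payload.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Condense a textual layout description into a fingerprint (FNV-1a).
inline std::uint64_t layout_hash(const std::string& layout) {
  std::uint64_t h = 1469598103934665603ull;
  for (unsigned char c : layout) { h ^= c; h *= 1099511628211ull; }
  return h;
}

// Handshake: true if every type name known to both peers means the same
// layout on both sides. Types only one peer knows are simply unusable.
inline bool layouts_compatible(const std::map<std::string, std::uint64_t>& mine,
                               const std::map<std::string, std::uint64_t>& theirs) {
  for (const auto& kv : mine) {
    auto it = theirs.find(kv.first);
    if (it != theirs.end() && it->second != kv.second) return false;
  }
  return true;
}
```

After a successful handshake, neither peer ever re-sends a type
description, which is where the efficiency comes from.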

Finally, from what I saw, NetCDF seems to have been designed as a
logging tool, not as a runtime transport.

Sylvain

RTT 2.0 Development update

On Mon, 17 Aug 2009, Sylvain Joyeux wrote:

>>> OK. Well .. that makes it nice for logging (our own logger based on
>>> typelib do the same), but not nice for data transmission (a lot of
>>> overhead).
>>
>> As ever, the trade-off between robustness and efficiency makes sense, in
>> different ways, in different use cases. So, please don't make this kind of
>> too generic statements :-) One example of a use case where such headers are
>> definitely not overhead is where each message can have a different content.
>> And I challenge you to beat the efficiency of the binary encodings and
>> supporting software that the NetCDF community has developed over the years.
>> _Plus_, its completely platform-neutral. This is, once more, a domain where
>> the robotics community is (poorly) reinventing wheels...

> Well... Just for the record: NetCDF is definitely overkill for *my* logging
> case. Why ? Typelib logs (most of) the data types by doing a pure write().
> I.e. no marshalling and no copy involved as long as it is possible (i.e. for
> all types that don't have a variable-length vector). While still keeping the
> self-contained requirement. Moreover, the logging is completely streamed (no
> seeking).
>
> Logging takes 10 to 20% CPU on our system. I don't see how adding marshalling
> would improve the situation.

I agree.

> *Now*. What I would definitely like to do is use typelib's marshalled
> data as a temporary and convert it offline to NetCDF or HDF5. The issue
> there (for me) is the lack of ressource to do it and that NetCDF/HDF5 is
> currently poorly supported on Ruby.

<http://ruby.gfd-dennou.org/products/ruby-netcdf/> ? (Last update 2007...)
But it is being used in <http://ruby.gfd-dennou.org/products/gphys/> which
is still very much alive...

> Switching to Python or something else
> is not doable because that would require switching the rest of the
> toolchain. I don't have those man-years available (and I don't like
> Python, which is definitely a bad incentive).

What exactly is a "bad incentive"?

Of course, I understand the practical consequences of "language lock-in".
Everybody is confronted with that problem, to some extent...

> On the issue of runtime data exchange, the most efficient way to it is to
> agree *once* on the layout of your types (i.e. negociating what each type
> name *is in practice*), and if the two peers agrees go on by referring to
> types by name. I did that once with Genom and Ruby, but since I'm not on
> the genom front anymore, it got dead.

This is OK for always streaming the same data over a 100% reliable
communication channel. This _is_ a valid use case, of course, but not
the only one. So I think it would not be a good idea to design this
use case, and only this use case, into RTT.

> Finally, from what I saw, NetCDF seemed to be designed as a logging tool, not
> as a runtime

The initial motivation was indeed logging only, but the design does
not at all prevent reuse as a runtime messaging infrastructure.

Herman

RTT 2.0 Development update

> > On the issue of runtime data exchange, the most efficient way to it is to
> > agree *once* on the layout of your types (i.e. negociating what each type
> > name *is in practice*), and if the two peers agrees go on by referring to
> > types by name. I did that once with Genom and Ruby, but since I'm not on
> > the genom front anymore, it got dead.
>
> This is ok for streaming of always the same data, over a 100% reliable
> communication channel.
Absolutely not true. The only case where it does not work is if one
peer *changes* its type definitions on the fly (i.e. if the mapping
from type name to type definition changes on one given peer). And in
that case, NetCDF won't help you, nor will Typelib, because they don't
represent how types change (Google's stream API might help you
there...).

Sylvain