eventports and call-backs

Both 1.x and 2.x contain event ports to which one can attach a
callback when data is received:

void foo(PortInterface* p) { }
 
addEventPort( inport, &foo );

Is anyone relying on that feature? I have removed the updateHook(
updated_ports ) variant as discussed at the -dev meeting, and I'm inclined to
also remove the callback argument, so we only have:

addEventPort( inport );

The callback adds a lot of management code (it is called in the
thread of the component), while it is probably hardly used in practice.

By the way: the 2.0 code on master ignored this callback by accident.

Peter

eventports and call-backs

So you will end up polling again? I'd rather not.

2010/8/20 Peter Soetens <peter [..] ...>

> Both 1.x and 2.x contain event ports to which one can attach a
> callback when data is received:
>
> void foo(PortInterface* p) { }
>
> addEventPort( inport, &foo );
>
> Is anyone relying on that feature? I have removed the updateHook(
> updated_ports ) variant as discussed at the -dev meeting, and I'm inclined to
> also remove the callback argument, so we only have:
>
> addEventPort( inport );
>
> The callback adds a lot of management code (it is called in the
> thread of the component), while it is probably hardly used in practice.
>
> By the way: the 2.0 code on master ignored this callback by accident.
>
> Peter

eventports and call-backs

On Fri, Aug 20, 2010 at 11:37 PM, Butch Slayer <butch [dot] slayers [..] ...> wrote:
> So you will end up polling again? I'd rather not.

That's not the idea. An EventPort wakes up the TaskContext (trigger())
for each data sample that arrives. That will never go away. In
addition, when doing addEventPort(), you can specify a callback that
will be executed before updateHook() is executed. It is that feature I
was questioning, not the removal of EventPort's triggering
capabilities... The callback is only useful if you have multiple event
ports and you want a different function to be executed for each port, in
addition to updateHook(). So to see if you use that feature, you'd have
something like this:

 // in TaskContext Bar, 1.x style:
 ReadPort<int> rpi;
 ReadPort<double> rpd;
 // per-port callbacks, executed in Bar's own thread just before updateHook():
 void data_on_rpi(PortInterface* p);
 void data_on_rpd(PortInterface* p);

 // in Bar's constructor: register each port with its own callback
 this->addEventPort( &rpi, boost::bind(&Bar::data_on_rpi,this,_1) );
 this->addEventPort( &rpd, boost::bind(&Bar::data_on_rpd,this,_1) );

I was thinking such constructs were rare or even never used... In 2.x
it makes even less sense, since when you wake up you can query every
port individually to see which one got new data.
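
A minimal sketch of that 2.x polling style, assuming the 2.x InputPort /
FlowStatus API (class, port and header names here are illustrative):

 // Sketch: event ports still trigger updateHook(), but each port is
 // polled there instead of registering a per-port callback.
 #include <rtt/TaskContext.hpp>
 #include <rtt/InputPort.hpp>

 class Bar : public RTT::TaskContext
 {
     RTT::InputPort<int>    rpi;
     RTT::InputPort<double> rpd;
 public:
     Bar() : RTT::TaskContext("Bar"), rpi("rpi"), rpd("rpd")
     {
         this->addEventPort( rpi );   // no callback argument
         this->addEventPort( rpd );
     }

     void updateHook()
     {
         int i; double d;
         if ( rpi.read(i) == RTT::NewData ) {
             // react to the new int sample
         }
         if ( rpd.read(d) == RTT::NewData ) {
             // react to the new double sample
         }
     }
 };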

Peter

>
> 2010/8/20 Peter Soetens <peter [..] ...>
>>
>> Both 1.x and 2.x contain event ports to which one can attach a
>> callback when data is received:
>>
>>

>> void foo(PortInterface* p) { }
>>
>> addEventPort( inport, &foo );
>> 

>>
>> Is anyone relying on that feature ? I have removed the updateHook(
>> updated_ports ) as discussed on the -dev meeting, and I'm inclined to
>> also remove the callback argument, so only have:
>>
>>
>> addEventPort( inport );
>> 

>>
>> Since it adds lots of management code (the callback is called in the
>> thread of the component) while It's maybe hardly used in practice.
>>
>> By the way: the 2.0 code on master ignored this callback by accident.
>>
>> Peter
>
>

eventports and call-backs

I'm not sure ...

In principle, this feature is only useful if you have a great number of
ports (i.e. if always looking at all of them is too expensive) or if you
get triggered very often. Keeping the callback would allow us to make sure
we will be able to scale if one of these cases does happen ...

Sylvain

eventports and call-backs

On Sat, 21 Aug 2010, Sylvain Joyeux wrote:

> I'm not sure ...
>
> In principle, this feature is only useful if you have a great number of
> ports (i.e. if always looking at all of them is too expensive) or if you
> get triggered very often. Keeping the callback would allow us to make sure
> we will be able to scale if one of these cases does happen ...
>

Interesting insight... However, RTT's specific trade-off is more towards
providing the best support for hard realtime applications, and not for "web
services". In the realtime context, scalability is not an issue, or even not
a really desired feature: an application in which at one instant in time
there are no events to handle, while at the next instant there are a hundred
of them needing handling, can never be a hard realtime application.

So, if removing the individual call-back feature brings a significant
reduction in the lines of code, and in the handling response latency, I
think RTT _has_ to remove the feature.

Herman

eventports and call-backs

On 08/21/2010 01:40 PM, Herman Bruyninckx wrote:
> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>
>
>> I'm not sure ...
>>
>> In principle, this feature is only useful if you have a great number of
>> ports (i.e. if always looking at all of them is too expensive) or if you
>> get triggered very often. Keeping the callback would allow to make sure
>> we will be able to scale if one of this cases do happen ...
>>
>>
> Interesting insight... However, RTT's specific trade-off is more towards
> providing the best support for hard realtime applications, and not for "web
> services". In the realtime context, scalability is not an issue, or even not
> a really desired feature: an application in which at one instant in time
> there are no events to handle, while at the next instant there are a hundred
> of them needing handling, can never be a hard realtime application.
>
Not relevant.

If, right now, you have N ports, then your worst-case estimate for the
reaction time is proportional to N calls to read(). Each read(), in turn,
depends on the number of incoming connections on each port. So the
response time will be proportional to the total number of incoming
connections to your component.

If you have a way to be triggered on a per-port basis, you can estimate
that time much better, as it becomes proportional to the number of
incoming connections times the number of ports that *have* new data. So,
in the end, the worst-case reaction time depends on the dataflow network
(i.e. the actual "waveform" in your data flow).

In both cases, you can be hard-realtime. It is just that you will on
average have better performance in the second case than in the first.
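
In symbols (a hedged formalization: c_i is the number of incoming connections
on port i, and A is the set of ports that actually received new data):

 T_{\mathrm{poll}} \propto \sum_{i=1}^{N} c_i ,
 \qquad
 T_{\mathrm{event}} \propto \sum_{i \in A} c_i \le T_{\mathrm{poll}}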

Now, obviously, it depends on what "too many connections" and/or
"triggered too fast" means. If it is "more than 100 connections
triggered at 1kHz" then I'm all for forgetting the feature. If it is "10
connections at 1kHz" then I'm not, as that is a case that will happen often.

In practice, I will concur with Steve's conclusion: if making it work
requires little work, then why get rid of it? If it is a problem,
then we should get rid of it right now, as the highest priority should be
getting RTT2 out.

Sylvain

eventports and call-backs

On Sat, 21 Aug 2010, Sylvain Joyeux wrote:

> On 08/21/2010 01:40 PM, Herman Bruyninckx wrote:
>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>
>>> I'm not sure ...
>>>
>>> In principle, this feature is only useful if you have a great number of
>>> ports (i.e. if always looking at all of them is too expensive) or if you
>>> get triggered very often. Keeping the callback would allow to make sure
>>> we will be able to scale if one of this cases do happen ...
>>>
>>>
>> Interesting insight... However, RTT's specific trade-off is more towards
>> providing best support for hard realtime applications, and not for "web
>> services". In the realtime context, scalability is not a issue, or even not
>> a really desired features: an application in which at one instant in time
>> one has no events to handle, while at the other instant there are a hundred
>> of them needing handling, can never be a hard realtime application.
>>
> Not relevant.
>
> If, right now, you have N ports, then your worst-case estimate for the
> reaction time is proportional to N calls to read(). Each read(), in turn,
> depends on the number of incoming connections on each port. So the
> response time will be proportional to the total number of incoming
> connections to your component.

Under the assumptions that:
- what one _does_ in each handler is short and always of the same
complexity
- there are no logical dependencies between the reactions to different
reads.

> If you have a way to be triggered on a per-port basis, you can estimate
> that time much better, as it becomes proportional to the number of
> incoming connections times the number of ports that *have* new data.

You seem to forget that 'someone' must check all ports! It won't be your
application code, but it will be the framework code. In addition, such
"interrupt" behaviour always takes more time (in software-only systems)
than the synchronous "read-N-times" polling approach; in hardware, things
are different of course, since the hardware takes care of the "polling" in
real parallel "computations".

> So,
> in the end, the worst case reaction time depends on the dataflow network
> (i.e. the actual "waveform" in your data flow).

I do not really understand what you mean by "waveform"...

> In both cases, you can be hard-realtime. It is just that you will on
> average have better performance in the second case than in the first.

Only in the case where hardware can help you do the polling!

> Now, obviously, it depends on what "too many connections" and/or
> "triggered too fast" means. If it is "more than 100 connections
> triggered at 1kHz" then I'm all for forgetting the feature. If it is "10
> connections at 1kHz" then I'm not, as that is a case that will happen often.

You cannot make any statement about whether "100" or "10" is fine or not,
because that depends on hardware and application QoS...

> In practice, I will concur with Steve's conclusion: if making it work
> requires little work, then why get rid of it? If it is a problem,
> then we should get rid of it right now as the highest priority should be
> getting RTT2 out.

I tend to agree with this pragmatic suggestion. However, I have no real
idea about what "little work" would mean exactly in this concrete
situation.

> Sylvain

Herman

eventports and call-backs

On 08/21/2010 05:34 PM, Herman Bruyninckx wrote:
> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>
>> On 08/21/2010 01:40 PM, Herman Bruyninckx wrote:
>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>
>>>> I'm not sure ...
>>>>
>>>> In principle, this feature is only useful if you have a great number of
>>>> ports (i.e. if always looking at all of them is too expensive) or if you
>>>> get triggered very often. Keeping the callback would allow to make sure
>>>> we will be able to scale if one of this cases do happen ...
>>>>
>>>>
>>> Interesting insight... However, RTT's specific trade-off is more towards
>>> providing best support for hard realtime applications, and not for "web
>>> services". In the realtime context, scalability is not a issue, or even not
>>> a really desired features: an application in which at one instant in time
>>> one has no events to handle, while at the other instant there are a hundred
>>> of them needing handling, can never be a hard realtime application.
>>>
>> Not relevant.
>>
>> If, right now, you have a number of ports N then your worst case time
>> estimate for a reaction is proportional to N times calling read().
>> Which, in turn, depends on the number of incoming connections on each
>> ports. So, the response time will be proportional to the total number of
>> incoming connections to your component.
>
> Under the assumptions that:
> - what one _does_ in each handler is short and always of the same
> complexity
> - there are no logical dependencies between the reactions to different
> reads.
>
>> If you have a way to be triggered on a per-port basis, you can estimate
>> that time much better, as it becomes proportional to the number of
>> incoming connections time the number of ports that *have* new data.
>
> You seem to forget that 'someone' must check all ports! It won't be your
> application code, but it will be the framework code. In addition, such
> "interrupt" behaviour always takes more time (in software-only systems)
> than the synchronous "read-N-times" polling approach; in hardware, things
> are different of course, since the hardware takes care of the "polling" in
> real parallel "computations".
Actually, no. The RTT already gets one method call (a.k.a. an "interrupt")
per write on each channel. That's the big difference: even though the
RTT gets notified for each write, the component still has to poll each
channel.

But, again, the trade-off is not where you try to put it. It is not
scalability vs. hard-realtime-ness. It is about having a
hard-realtime-compatible, scalability-friendly feature being removed
because it would be too much work to get it to work for RTT2.

Which, again, I *am* fine with if Peter says he can't make it happen in
a realistic time frame (or because its burden is too high).
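
For concreteness, a hedged sketch of the per-port callback variant under
discussion, assuming 2.x keeps an addEventPort() overload taking a callback
as shown at the top of this thread (class and member names are illustrative):

 // Sketch: one callback per event port, run in the component's own thread
 // just before updateHook(), mirroring Peter's 1.x example earlier on.
 #include <rtt/TaskContext.hpp>
 #include <rtt/InputPort.hpp>
 #include <rtt/base/PortInterface.hpp>
 #include <boost/bind.hpp>

 class Bar : public RTT::TaskContext
 {
     RTT::InputPort<int>    rpi;
     RTT::InputPort<double> rpd;

     void data_on_rpi(RTT::base::PortInterface*) { /* react to rpi only */ }
     void data_on_rpd(RTT::base::PortInterface*) { /* react to rpd only */ }
 public:
     Bar() : RTT::TaskContext("Bar"), rpi("rpi"), rpd("rpd")
     {
         this->addEventPort( rpi, boost::bind(&Bar::data_on_rpi, this, _1) );
         this->addEventPort( rpd, boost::bind(&Bar::data_on_rpd, this, _1) );
     }

     void updateHook() { /* per-port work was already done in the callbacks */ }
 };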

eventports and call-backs

On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:

> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>
>> I'm not sure ...
>>
>> In principle, this feature is only useful if you have a great number of
>> ports (i.e. if always looking at all of them is too expensive) or if you
>> get triggered very often. Keeping the callback would allow to make sure
>> we will be able to scale if one of this cases do happen ...
>>
>
> Interesting insight... However, RTT's specific trade-off is more towards
> providing the best support for hard realtime applications, and not for "web
> services". In the realtime context, scalability is not an issue, or even not
> a really desired feature: an application in which at one instant in time
> there are no events to handle, while at the next instant there are a hundred
> of them needing handling, can never be a hard realtime application.

Sorry Herman, but that makes no sense. You are saying that any application that has bursts of data can't be real-time. No way! It simply means your response to worst-case data arrival has to be accounted for.

And if you want to scratch an itch, scalability is one of RTT's major issues ... especially in the deployment area.

> So, if removing the individual call-back feature brings a significant
> reduction in the lines of code, and in the handling response latency, I
> think RTT _has_ to remove the feature.

If this is a seldom used feature, as Peter is asking, then saving the porting effort to get v2 out is worthwhile. If there is no porting effort required, then don't fix what isn't broken and leave it alone.

I'd certainly trade this to have events in general work again ... hint hint ... ;-)
S

eventports and call-backs

On Sat, 21 Aug 2010, S Roderick wrote:

> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>
>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>
>>> I'm not sure ...
>>>
>>> In principle, this feature is only useful if you have a great number of
>>> ports (i.e. if always looking at all of them is too expensive) or if you
>>> get triggered very often. Keeping the callback would allow to make sure
>>> we will be able to scale if one of this cases do happen ...
>>>
>>
>> Interesting insight... However, RTT's specific trade-off is more towards
>> providing best support for hard realtime applications, and not for "web
>> services". In the realtime context, scalability is not a issue, or even not
>> a really desired features: an application in which at one instant in time
>> one has no events to handle, while at the other instant there are a hundred
>> of them needing handling, can never be a hard realtime application.
>
> Sorry Herman, but that makes no sense. You are saying that any application that has bursts of data can't be real-time. No way! It simply means your response to worst-case data arrival has to be accounted for.

What I am saying is that "worst case" is _NOT SCALABLE_ with the number of
"channels" that one has to deal with. (This is textbook knowledge for
realtime students :-)) In other words, "to be accounted for" will become an
ever more impossible trade-off between latency and number of channels.

> And if you want to scratch an itch, scalability is one of RTT's major
> issues ... especially in the deployment area.

Which is again fine! RTT's focus _is_ and should remain on hard realtime.
If you want large-scale deployments (and those _are_ needed in modern
robotic systems, but _not_ at hard realtime!) then please, please, please,
go and use OSGi! (Or something similar, although not many similar things
exist with such wide support as OSGi.) RTT should work seamlessly in an
OSGi system (it's on the medium-term agenda of our BRICS project, btw...),
but RTT should _only_ focus on deployment-in-the-small, basically
single-process use cases. _That_'s where RTT's soft spot lies, and where it
has no competition, for the time being...

I have been seeing too many feature requests over the last two years that want
to scale up RTT to serve the complete set of intelligent, large-scale
robotics systems. While I definitely agree with the need for good software
support for such systems, I _know_ from long experience that it should
_not_ be done by trying to scale up a framework such as RTT whose focus is
on hard realtime, but by making sure RTT can be seamlessly integrated into
_other_ frameworks that take care of the large scale, but with much less
realtime Quality of Service. Ruben is working on such an RTT-ROS integration;
other integrations will come over time, since they can all follow
the _same_ "component" features of the service and data ports.

>> So, if removing the indicidual call-back feature brings a significant
>> reduction in the lines of code, and in the handling response latency, I
>> think RTT _has_ to remove the feature.
>
> If this is a seldom used feature, as Peter is asking, then saving the
> porting effort to get v2 out is worthwhile. If there is no porting effort
> required, then don't fix what isn't broken and leave it alone.
>
> I'd certainly trade this to have events in general work again ... hint
> hint ... ;-)
> S

Herman

eventports and call-backs

On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:

> On Sat, 21 Aug 2010, S Roderick wrote:
>
>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>
>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>
>>>> I'm not sure ...
>>>>
>>>> In principle, this feature is only useful if you have a great number of
>>>> ports (i.e. if always looking at all of them is too expensive) or if you
>>>> get triggered very often. Keeping the callback would allow to make sure
>>>> we will be able to scale if one of this cases do happen ...
>>>>
>>>
>>> Interesting insight... However, RTT's specific trade-off is more towards
>>> providing best support for hard realtime applications, and not for "web
>>> services". In the realtime context, scalability is not a issue, or even not
>>> a really desired features: an application in which at one instant in time
>>> one has no events to handle, while at the other instant there are a hundred
>>> of them needing handling, can never be a hard realtime application.
>>
>> Sorry Herman, but that makes no sense. You are saying that any applicaton that has bursts of data can't be real-time. No way! It simply means your repsonse to worst case data arrival has to be accounted for.
>
> What I am saying is that "worst case" is _NOT SCALABLE_ with the number of
> "channels" that one has to deal with. (This is textbook knowledge for
> realtime students :-)) In other words, "to be accounted for" will become an
> ever more impossible trade-off between latency and number of channels.

Great idea in theory, poor in practice.

>> And if you want to scratch an itch, scalability is one of RTT's major
>> issues ... especially in the deployment area.
>
> Which is again fine! RTT's focus _is_ and should remain on hard realtime.
> If you want large-scale deployments (and those _are_ needed in modern
> robotic systems, but _not_ at hard realtime!) then please, please, please,
> go and use OSGi! (Or something similar, although not many similar things
> exist with such wide support as OSGi.) RTT should work seamlessly in an
> OSGi system (it's on the medium-term agenda of our BRICS project, btw...),
> but RTT should _only_ focus on deployment-in-the-small, basically
> single-process use cases. _That_'s where RTT's soft spot lies, and where it
> has no competition, for the time being...

You are making assumptions on what customers want. I need both a) hard realtime, and b) scalability to systems of the size required to solve my customer's problems. Whether you believe that to be relevant or not is up to you, but that is the practicality of my situation.

And BTW, none of the problems I solve are single process use cases. Not one. I believe that also holds for all the systems I saw presented at the developers workshop in Barcelona.

If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less tool/project/framework that my people need to learn. Also, as far as I can see, OSGi requires use of Java. Yet another tool my people have to learn. Huh!? YMMV.

> I have been seeing too many feature requests over the last two years that want
> to scale up RTT to serve the complete set of intelligent, large-scale
> robotics systems. While I definitely agree with the need for good software
> support for such systems, I _know_ from long experience that it should
> _not_ be done by trying to scale up a framework such as RTT whose focus is
> on hard realtime, but by making sure RTT can be seamlessly integrated into
> _other_ frameworks that take care of the large scale, but with much less
> realtime Quality of Service. Ruben is working on such an RTT-ROS integration;
> other integrations will come over time, since they can all follow
> the _same_ "component" features of the service and data ports.

I'll be very happy to evaluate this, when it is delivered to the community.

For now, I'll continue to use a very good tool (Orocos) and try to make it work in the practical situations that my customers need. Here, scalability is an issue. Again, YMMV.
S

eventports and call-backs

On Sat, 21 Aug 2010, S Roderick wrote:

> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>
>> On Sat, 21 Aug 2010, S Roderick wrote:
>>
>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>
>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>
>>>>> I'm not sure ...
>>>>>
>>>>> In principle, this feature is only useful if you have a great number of
>>>>> ports (i.e. if always looking at all of them is too expensive) or if you
>>>>> get triggered very often. Keeping the callback would allow to make sure
>>>>> we will be able to scale if one of this cases do happen ...
>>>>>
>>>>
>>>> Interesting insight... However, RTT's specific trade-off is more towards
>>>> providing best support for hard realtime applications, and not for "web
>>>> services". In the realtime context, scalability is not a issue, or even not
>>>> a really desired features: an application in which at one instant in time
>>>> one has no events to handle, while at the other instant there are a hundred
>>>> of them needing handling, can never be a hard realtime application.
>>>
>>> Sorry Herman, but that makes no sense. You are saying that any applicaton that has bursts of data can't be real-time. No way! It simply means your repsonse to worst case data arrival has to be accounted for.
>>
>> What I am saying is that "worst case" is _NOT SCALABLE_ with the number of
>> "channels" that one has to deal with. (This is textbook knowledge for
>> realtime students :-)) In other words, "to be accounted for" will become an
>> ever more impossible trade-off between latency and number of channels.
>
> Great idea in theory, poor in practice.

That's a very disappointing "rebuttal" of my statement... And that
statement is grounded in practice, not in theory! Look at what ROS is
currently providing: lots and lots of "event" messages, resulting in
massive livelocks!

>>> And if you want to scratch an itch, scalability is one of RTT's major
>>> issues ... especially in the deployment area.
>>
>> Which is again fine! RTT's focus _is_ and should remain on hard realtime.
>> If you want large-scale deployments (and those _are_ needed in modern
>> robotic systems, but _not_ at hard realtime!) than please, please, please,
>> go and use OSGi! (Or something similar, although not many similar things
>> exist with such a wide support as OSGi.) RTT should work seamlessly in an
>> OSGi system (it's on the medium term agenda of our BRICS project, btw...),
>> but RTT should _only_ focus on deployment-in-the-small, basically
>> single-process use cases. _That_'s where RTT's soft spot lies, and where it
>> has not competition, for the time being...
>
> You are making assumptions on what customers want.

I am making motivated statements about what a framework can _deliver_!

> I need both a) hard
> realtime, and b) scalability to systems of the size required to solve my
> customer's problems. Whether you believe that to be relevant or not is up
> to you, but that is the practicality of my situation.

That's the practicality of every application builder, and I fully
understand and support it. But the fact is that you cannot guarantee any
_scalable_ realtime behaviour anymore.

> And BTW, none of the problems I solve are single process use cases. Not
> one. I believe that also holds for all the systems I saw presented at the
> developers workshop in Barcelona.

Yes, but I am talking about _hard realtime_ scalability, and you are obviously
not. (I have noticed this fundamental difference in 'default' background many
times in the past already...) I have no problems with that, of course, but
I do defend the focus of RTT as being _primarily_ the hard realtime use
case.

> If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less
> tool/project/framework that my people need to learn. Also, as far as I
> can see, OSGi requires use of Java. Yet another tool my people have to
> learn. Huh!? YMMV.

Please, don't introduce arguments that distract from the real issue:
guaranteed hard realtime scalability! Because if _you_ want to have
non-scalable things in RTT for the sole reason of giving your people the
comfort of having to learn only one single framework, then I will object
_if_ that would make RTT less performant for the hard realtime, small-scale
user. This should be the trade-off behind _all_ similar decisions in RTT.

>> I have been seeing too many feature requests the last two years that want
>> to scale up RTT to serve the complete set of intelligent, large-scale
>> robotics systems. While I definitely agree with the need for good software
>> support for such systems, I _know_ by my long experience that it should
>> _not_ be done by trying to scale up a framework such as RTT whose focus is
>> on hard realtime, but by making sure RTT can be seemlessly integrated into
>> _other_ frameworks that take care of the large-scale, but with much less
>> realtime Quality of Service. Ruben is working on such an integration
>> RTT-ROS; other integrations will come over time, since they can all follow
>> the _same_ "component" features of the service and data ports.
>
> I'll be very happy to evaluate this, when it is delivered to the community.
>
> For now, I'll continue to use a very good tool (Orocos) and try to make
> it work in the practical situations that my customers need. Here,
> scalability is an issue. Again, YMMV.

It's not about "YMMV"... It's about the fundamental choice about where the
_first_ focus of a framework must lie. And for RTT, it's _not_ in
scalability.

I am very interested in learning more about your applications'
scalability issues. So, if you could provide a short description, I am sure
other Orocos users (including myself) can learn from the discussion about
whether or not that scalability has to be supported natively in RTT, or
whether it has to come from integration with other, more scalable
middleware frameworks.

Herman

eventports and call-backs

On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:

> On Sat, 21 Aug 2010, S Roderick wrote:
>
>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>
>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>
>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>
>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>>

<snip>

>> If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less
>> tool/project/framework that my people need to learn. Also, as far as I
>> can see, OSGi requires use of Java. Yet another tool my people have to
>> learn. Huh!? YMMV.
>
> Please, don't introduce arguments that distract from the real issue:
> guaranteed hard realtime scalability! Because if _you_ want to have
> non-scalable things in RTT for the sole reason of giving your people the
> comfort of having to learn only one single framework, then I will object
> _if_ that would make RTT less performant for the hard realtime, small-scale
> user. This should be the trade-off behind _all_ similar decisions in RTT.

Having to justify to management two additional tools that must be learned is a real issue.

>>> I have been seeing too many feature requests the last two years that want
>>> to scale up RTT to serve the complete set of intelligent, large-scale
>>> robotics systems. While I definitely agree with the need for good software
>>> support for such systems, I _know_ by my long experience that it should
>>> _not_ be done by trying to scale up a framework such as RTT whose focus is
>>> on hard realtime, but by making sure RTT can be seemlessly integrated into
>>> _other_ frameworks that take care of the large-scale, but with much less
>>> realtime Quality of Service. Ruben is working on such an integration
>>> RTT-ROS; other integrations will come over time, since they can all follow
>>> the _same_ "component" features of the service and data ports.
>>
>> I'll be very happy to evaluate this, when it is delivered to the community.
>>
>> For now, I'll continue to use a very good tool (Orocos) and try to make
>> it work in the practical situations that my customers need. Here,
>> scalability is an issue. Again, YMMV.
>
> It's not about "YMMV"... It's about the fundamental choice about where the
> _first_ focus of a framework must lie. And for RTT, it's _not_ in
> scalability.

Funny, almost all of the developers at the workshop talked about scalability problems.

> I am very interested in learning more about your applications'
> scalability issues. So, if you could provide a short description, I am sure
> other Orocos users (including myself) can learn from the discussion about
> whether or not that scalability has to be supported natively in RTT, or
> whether it has to come from integration with other, more scalable
> middleware frameworks.

Take a deployment scenario with tens of components. You want to vary the scenario on several axes, say simulation vs hardware robot, simulation vs hardware wrench sensor, etc. Currently you have to write a single deployment scenario for each combination, and we have probably 3-6 axes in some cases. You can do the math. Also, do the above when you have multiple deployers involved simultaneously.

The OCL deployer does not handle this - one approach would be supporting variables within the deployment file, a la ant properties. At least one set of developers solved this by dropping the deployer and writing scripts to directly load libraries, select components, property files, etc. Basically, they've reproduced the OCL deployer in some sense - a perfectly valid solution to a problem that Orocos doesn't deal well with.

YMMV
S

eventports and call-backs

On 08/21/2010 06:53 PM, S Roderick wrote:
> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
>
>> On Sat, 21 Aug 2010, S Roderick wrote:
>>
>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>>
>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>
>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>>
>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>>>
>
> <snip>

>
>>> If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less
>>> tool/project/framework that my people need to learn. Also, as far as I
>>> can see, OSGi requires use of Java. Yet another tool my people have to
>>> learn. Huh!? YMMV.
>>
>> Please, don't introduce arguments that distract from the real issue:
>> guaranteed hard realtime scalability! Because if _you_ want to have
>> non-scalable things in RTT for the sole reason of giving your people the
>> comfort of having to learn only one single framework, than I will object
>> _if_ that would make RTT less performant for the hard realtime, small-scale
>> user. This should be the trade-off behind _all_ similar decisions in RTT.
>
> Having to justify to management two additional tools that must be learned is a real issue.
>
>>>> I have been seeing too many feature requests the last two years that want
>>>> to scale up RTT to serve the complete set of intelligent, large-scale
>>>> robotics systems. While I definitely agree with the need for good software
>>>> support for such systems, I _know_ by my long experience that it should
>>>> _not_ be done by trying to scale up a framework such as RTT whose focus is
>>>> on hard realtime, but by making sure RTT can be seemlessly integrated into
>>>> _other_ frameworks that take care of the large-scale, but with much less
>>>> realtime Quality of Service. Ruben is working on such an integration
>>>> RTT-ROS; other integrations will come over time, since they can all follow
>>>> the _same_ "component" features of the service and data ports.
>>>
>>> I'll be very happy to evaluate this, when it is delivered to the community.
>>>
>>> For now, I'll continue to use a very good tool (Orocos) and try to make
>>> it work in the practical situations that my customers need. Here,
>>> scalability is an issue. Again, YMMV.
>>
>> It's not about "YMMV"... It's about the fundamental choice about where the
>> _first_ focus of a framework must lie. And for RTT, it's _not_ in
>> scalability.
>
> Funny, almost all of the developers at the workshop talked about scalability problems.
>
>> I am very interested in learning more about your applications'
>> scalability issues. So, if you could provide a short description, I am sure
>> other Orocos users (including myself) can learn from the discussion about
>> whether or not that scalability has to be supported natively in RTT, or
>> whether it has to come from integration with other, more scalable
>> middleware frameworks.
>
> Take a deployment scenario with tens of components. You want to vary the scenario on several axes, say simulation vs hardware robot, simulation vs hardware wrench sensor, etc. Currently you have to write a single deployment scenario for each combination, and we have probably 3-6 axes in some cases. You can do the math. Also, do the above when you have multiple deployers involved simultaneously.
>
> The OCL deployer does not handle this - one approach would be supporting variables within the deployment file, a la ant properties. At least one set of developers solved this by dropping the deployer and writing scripts to directly load libraries, select components, property files, etc. Basically, they've reproduced the OCL deployer in some sense - a perfectly valid solution to a problem that Orocos doesn't deal well with.

FYI, the model-based deployment system I have is designed to handle exactly
this ...

Sylvain

eventports and call-backs

On Sat, 21 Aug 2010, S Roderick wrote:

> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
>
>> On Sat, 21 Aug 2010, S Roderick wrote:
>>
>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>>
>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>
>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>>
>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>>>
>
> <snip>

>
>>> If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less
>>> tool/project/framework that my people need to learn. Also, as far as I
>>> can see, OSGi requires use of Java. Yet another tool my people have to
>>> learn. Huh!? YMMV.
>>
>> Please, don't introduce arguments that distract from the real issue:
>> guaranteed hard realtime scalability! Because if _you_ want to have
>> non-scalable things in RTT for the sole reason of giving your people the
>> comfort of having to learn only one single framework, than I will object
>> _if_ that would make RTT less performant for the hard realtime, small-scale
>> user. This should be the trade-off behind _all_ similar decisions in RTT.
>
> Having to justify to management two additional tools that must be learned is a real issue.

This issue should not come before the issue of making RTT move towards a
"one size fits all" framework, at the expense of deteriorating what should
remain its primary focus: hard realtime.

>>>> I have been seeing too many feature requests the last two years that want
>>>> to scale up RTT to serve the complete set of intelligent, large-scale
>>>> robotics systems. While I definitely agree with the need for good software
>>>> support for such systems, I _know_ by my long experience that it should
>>>> _not_ be done by trying to scale up a framework such as RTT whose focus is
>>>> on hard realtime, but by making sure RTT can be seemlessly integrated into
>>>> _other_ frameworks that take care of the large-scale, but with much less
>>>> realtime Quality of Service. Ruben is working on such an integration
>>>> RTT-ROS; other integrations will come over time, since they can all follow
>>>> the _same_ "component" features of the service and data ports.
>>>
>>> I'll be very happy to evaluate this, when it is delivered to the community.
>>>
>>> For now, I'll continue to use a very good tool (Orocos) and try to make
>>> it work in the practical situations that my customers need. Here,
>>> scalability is an issue. Again, YMMV.
>>
>> It's not about "YMMV"... It's about the fundamental choice about where the
>> _first_ focus of a framework must lie. And for RTT, it's _not_ in
>> scalability.
>
> Funny, almost all of the developers at the workshop talked about scalability problems.

That is what I am worried about: that the current trend among RTT
developers is scalability, and not hard realtime... Both _can_ be married,
however: the "pure, non-scalable" polling could be the default approach
to working with ports, while the "scalable" interrupt way can still be
offered as an option, such that the designer can decide how to
trade off realtime performance vs scalability. If I am not mistaken,
that was Peter's starting point for this thread: shall we remove the
"interrupt" mode or not?

>> I am very interested in learning more about your applications'
>> scalability issues. So, if you could provide a short description, I am sure
>> other Orocos users (including myself) can learn from the discussion about
>> whether or not that scalability has to be supported natively in RTT, or
>> whether it has to come from integration with other, more scalable
>> middleware frameworks.
>
> Take a deployment scenario with tens of components. You want to vary the
> scenario on several axes, say simulation vs hardware robot, simulation
> vs hardware wrench sensor, etc. Currently you have to write a single
> deployment scenario for each combination, and we have probably 3-6 axes
> in some cases. You can do the math. Also, do the above when you have
> multiple deployers involved simultaneously.

Not difficult to do the math, indeed :-) But: to me it is clear that you
don't care about realtime performance very much: 10s of components,
switching between simulation and hardware, etc. While the golden rules are:
(i) only one component can be _real_ realtime, and (ii) switching from HW
to SW implementations or vice versa typically implies a thorough redesign
of the realtime budget.

And, more importantly, your (very valid) problem is a problem of the
scalability of the deployment _tooling_, not about a lack of scalability of
the RTT functionality. These are two different things, and the support for
both should not be coupled. But maybe (probably) you have been saying this
from the beginning, while I just didn't get it :-) My apologies if that's
the case!

> The OCL deployer does not handle this - one approach would be supporting
> variables within the deployment file, a la ant properties. At least one
> set of developers solved this by dropping the deployer and writing
> scripts to directly load libraries, select components, property files,
> etc. Basically, they've reproduced the OCL deployer in some sense - a
> perfectly valid solution to a problem that Orocos doesn't deal well with.
>
> YMMV

Your previous paragraph confirms one of the points I have been making a
couple of times already: deployment can/should become much more flexible
and scalable than it is now in RTT. But, and this is a big but, not
(necessarily) by scaling up RTT's deployer, but rather by seamless
integration into real deployment software (OSGi, for example). Maybe the best
way out is to design the RTT deployer in such a way that it can be
scripted by external programs.

I must confess that I have no clear idea whether that is already
possible or not... I have been discussing too many deployment
frameworks lately and I lost track of what is (im)possible where... :-)

Herman

eventports and call-backs

On Aug 22, 2010, at 04:38 , Herman Bruyninckx wrote:

> On Sat, 21 Aug 2010, S Roderick wrote:
>
>> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
>>
>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>
>>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>>>
>>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>>
>>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>>>
>>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>>>>
>>
>> <snip>

>>
>>>> If I can tweak RTT to scale as I need, then OSGi (or ROS) is one less
>>>> tool/project/framework that my people need to learn. Also, as far as I
>>>> can see, OSGi requires use of Java. Yet another tool my people have to
>>>> learn. Huh!? YMMV.
>>>
>>> Please, don't introduce arguments that distract from the real issue:
>>> guaranteed hard realtime scalability! Because if _you_ want to have
>>> non-scalable things in RTT for the sole reason of giving your people the
>>> comfort of having to learn only one single framework, than I will object
>>> _if_ that would make RTT less performant for the hard realtime, small-scale
>>> user. This should be the trade-off behind _all_ similar decisions in RTT.
>>
>> Having to justify to management two additional tools that must be learned is a real issue.
>
> This issue should not come before the issue of making RTT move towards a
> "one size fits all" framework, at the expense of deteriorating what should
> remain its primary focus: hard realtime.

The issue is paramount with RTT. If you burden users with lots of additional dependencies, they will find another tool to use, and then Orocos suffers.

>>>>> I have been seeing too many feature requests the last two years that want
>>>>> to scale up RTT to serve the complete set of intelligent, large-scale
>>>>> robotics systems. While I definitely agree with the need for good software
>>>>> support for such systems, I _know_ by my long experience that it should
>>>>> _not_ be done by trying to scale up a framework such as RTT whose focus is
>>>>> on hard realtime, but by making sure RTT can be seemlessly integrated into
>>>>> _other_ frameworks that take care of the large-scale, but with much less
>>>>> realtime Quality of Service. Ruben is working on such an integration
>>>>> RTT-ROS; other integrations will come over time, since they can all follow
>>>>> the _same_ "component" features of the service and data ports.
>>>>
>>>> I'll be very happy to evaluate this, when it is delivered to the community.
>>>>
>>>> For now, I'll continue to use a very good tool (Orocos) and try to make
>>>> it work in the practical situations that my customers need. Here,
>>>> scalability is an issue. Again, YMMV.
>>>
>>> It's not about "YMMV"... It's about the fundamental choice about where the
>>> _first_ focus of a framework must lie. And for RTT, it's _not_ in
>>> scalability.
>>
>> Funny, almost all of the developers at the workshop talked about scalability problems.
>
> That is what I am worried about: that the current trend among RTT
> developers is scalability, and not hard realtime... Both _can_ be married,
> however: the "pure, non-scalable" polling could be the default approach
> to working with ports, while the "scalable" interrupt way can still be
> offered as an option, such that the designer can decide how to
> trade off realtime performance vs scalability. If I am not mistaken,
> that was Peter's starting point for this thread: shall we remove the
> "interrupt" mode or not?

We need both. The system must be realtime (hard or soft, this is project dependent for me) and it must be scalable. The first is covered nicely now, while the second is a qualified success.

What you are actually finding, I think, is that Orocos is a victim of its own success. As more people are using it, and they are pushing the boundaries of the size/complexity of the systems they use it for, you are hitting the scalability/tooling issue.

>>> I am very interested in learning more about your applications'
>>> scalability issues. So, if you could provide a short description, I am sure
>>> other Orocos users (including myself) can learn from the discussion about
>>> whether or not that scalability has to be supported natively in RTT, or
>>> whether it has to come from integration with other, more scalable
>>> middleware frameworks.
>>
>> Take a deployment scenario with 10's of components. You want to vary the
>> scenario on several axes, say simulation vs hardware robot, simulation
>> vs hardware wrench sensor, etc. Currently you have to write a single
>> deployment scenario for each combination, and we have probably 3-6 axis
>> in some cases. You can do the math. Also, do the above when you have
>> multiple deployers involved simultaneously.
>
> Not difficult to do the math, indeed :-) But: to me it is clear that you
> don't care about realtime performance very much: 10s of components,
> switching between simulation and hardware, etc. While the golden rules are:
> (i) only one component can be _real_ realtime, and (ii) switching from HW
> to SW implementations or vice versa typically implies a thorough redesign
> of the realtime budget.

Herman, you are making a lot of assumptions here. Whose rules are those? What systems do they apply to? What do you know of my realtime performance requirements?

> And, more importantly, your (very valid) problem is a problem of the
> scalability of the deployment _tooling_, not about a lack of scalability of
> the RTT functionality. These are two different things, and the support for
> both should not be coupled. But maybe (probably) you have been saying this
> from the beginning, while I just didn't get it :-) My apologies if that's
> the case!

I consider Orocos as one project, in reality. While RTT may be the real-time part, and OCL more the tooling, they are part and parcel of the same solution to me (though not to all developers). So scalability to me w.r.t. deployment tooling, which is my number one scalability issue, is about "Orocos". You are talking about RTT itself, so we are thinking in slightly different terms.

OCL does need to deal with deployment tooling better. I don't think anyone will argue this ...

>> The OCL deployer does not handle this - one approach would be supporting
>> variables within the deployment file, ala ant properties. At least one
>> set of developers solved this by dropping the deployer and writing
>> scripts to directly load libraries, select components, property files,
>> etc. Basically, they've reproduced the OCL deployer in some sense - a
>> perfect valid solution to a problem that Orocos doesn't deal well with.
>>
>> YMMV
>
> Your previous paragraph confirms one of the points I have been making a
> couple of times already: deployment can/should become much more flexible
> and scalable than it is now in RTT. But, and this is a big but, not
> (necessarily) by scaling up RTT's deployer, but rather by seamless
> integration into real deployment software (OSGi, for example). Maybe the best
> way out is to design the RTT deployer in such a way that it can be
> scripted by external programs.

That is one possibility, certainly. Like I said though, my customers and I really don't want to have to learn a couple of new tools to do this, if we can instead extend/expand the existing OCL deployer to cope.

> I must confess that I have no clear idea whether that is already
> possible or not... I have been discussing too many deployment
> frameworks lately and I lost track of what is (im)possible where... :-)

Understood :-)
S

eventports and call-backs

On Aug 23, 2010, at 08:17 , S Roderick wrote:

> On Aug 22, 2010, at 04:38 , Herman Bruyninckx wrote:
>
>> On Sat, 21 Aug 2010, S Roderick wrote:
>>
>>> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
>>>
>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>
>>>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>>>>
>>>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>>>
>>>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>>>>
>>>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>>>>>>>

<snip>

>>> The OCL deployer does not handle this - one approach would be supporting
>>> variables within the deployment file, ala ant properties. At least one
>>> set of developers solved this by dropping the deployer and writing
>>> scripts to directly load libraries, select components, property files,
>>> etc. Basically, they've reproduced the OCL deployer in some sense - a
>>> perfect valid solution to a problem that Orocos doesn't deal well with.
>>>
>>> YMMV
>>
>> Your previous paragraph confirms one of the points I have been making a
>> couple of times already: deployment can/should become much more flexible
>> and scalable than it is now in RTT. But, and this is a big but, not
>> (necessarily) by scaling up RTT's deployer, but rather by seamless
>> integration in real deployment software (OSGi, for example). Maybe the best
>> way out is to make the RTT deployer in such a way that it allows to be
>> scriptable by external programs.
>
> That is one possibility, certainly. Like I said though, my customers and I really don't want to have to learn a couple of new tools to do this, if we can instead extend/expand the existing OCL deployer to cope.

One example of a possible solution to my deployment problems would be for the deployer's underlying XML parser to accept the following valid XML. TinyXML doesn't support it, and neither Xerces-C v2 nor v3 seems to take it. I don't know enough about XML to know (yet) what is lacking. The docs seem to indicate that Xerces understands the "ENTITY" node.

NB if you put the library name back instead of using "&lib", Xerces will accept this and the deployer runs cleanly, but the component is then named "name" instead of "Console". :-(
S

eventports and call-backs

On Wednesday 25 August 2010 02:49:51 S Roderick wrote:
> On Aug 23, 2010, at 08:17 , S Roderick wrote:
> > On Aug 22, 2010, at 04:38 , Herman Bruyninckx wrote:
> >> On Sat, 21 Aug 2010, S Roderick wrote:
> >>> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
> >>>> On Sat, 21 Aug 2010, S Roderick wrote:
> >>>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
> >>>>>> On Sat, 21 Aug 2010, S Roderick wrote:
> >>>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
> >>>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>
> <snip>

>
> >>> The OCL deployer does not handle this - one approach would be
> >>> supporting variables within the deployment file, ala ant properties. At
> >>> least one set of developers solved this by dropping the deployer and
> >>> writing scripts to directly load libraries, select components, property
> >>> files, etc. Basically, they've reproduced the OCL deployer in some
> >>> sense - a perfect valid solution to a problem that Orocos doesn't deal
> >>> well with.
> >>>
> >>> YMMV
> >>
> >> Your previous paragraph confirms one of the points I have been making a
> >> couple of times already: deployment can/should become much more flexible
> >> and scalable than it is now in RTT. But, and this is a big but, not
> >> (necessarily) by scaling up RTT's deployer, but rather by seamless
> >> integration in real deployment software (OSGi, for example). Maybe the
> >> best way out is to make the RTT deployer in such a way that it allows to
> >> be scriptable by external programs.
> >
> > That is one possibility, certainly. Like I said though, my customers and
> > I really don't want to have to learn a couple of new tools to do this, if
> > we can instead extend/expand the existing OCL deployer to cope.
>
> One example of a possible solution to my deployment problems would be
> for the deployer's underlying XML parser to accept the following valid
> XML. TinyXML doesn't support it, and neither Xerces-C v2 nor v3 seems to
> take it. I don't know enough about XML to know (yet) what is lacking. The
> docs seem to indicate that Xerces understands the "ENTITY" node.
>
> NB if you put the library name back instead of using "&lib", Xerces will
> accept this and the deployer runs cleanly, but the component is then named
> "name" instead of "Console". :-( S

Your example contained 2 bugs:

You need to declare the entities *in* the DOCTYPE tag.
You need to close your entity names with ';'.

Can you test the example below:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "cpf.dtd"
[
<!ENTITY name "Console">
<!ENTITY lib "liborocos-rtt">
]

<properties>

  <simple name="Import" type="string">
    <value>&lib;</value>
  </simple>
  <simple name="Import" type="string">
    <value>liborocos-ocl-common</value>
  </simple>

  <struct name="&name;" type="OCL::HMIConsoleOutput">
  </struct>

</properties>

I have validated it with the 'rng-mode' of emacs, which is incredibly smart
and helpful.

Peter

eventports and call-backs

On Aug 25, 2010, at 08:27 , Peter Soetens wrote:

> On Wednesday 25 August 2010 02:49:51 S Roderick wrote:
>> On Aug 23, 2010, at 08:17 , S Roderick wrote:
>>> On Aug 22, 2010, at 04:38 , Herman Bruyninckx wrote:
>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>> On Aug 21, 2010, at 11:25 , Herman Bruyninckx wrote:
>>>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>>>> On Aug 21, 2010, at 09:26 , Herman Bruyninckx wrote:
>>>>>>>> On Sat, 21 Aug 2010, S Roderick wrote:
>>>>>>>>> On Aug 21, 2010, at 07:40 , Herman Bruyninckx wrote:
>>>>>>>>>> On Sat, 21 Aug 2010, Sylvain Joyeux wrote:
>>
>> <snip>

>>
>>>>> The OCL deployer does not handle this - one approach would be
>>>>> supporting variables within the deployment file, ala ant properties. At
>>>>> least one set of developers solved this by dropping the deployer and
>>>>> writing scripts to directly load libraries, select components, property
>>>>> files, etc. Basically, they've reproduced the OCL deployer in some
>>>>> sense - a perfect valid solution to a problem that Orocos doesn't deal
>>>>> well with.
>>>>>
>>>>> YMMV
>>>>
>>>> Your previous paragraph confirms one of the points I have been making a
>>>> couple of times already: deployment can/should become much more flexible
>>>> and scalable than it is now in RTT. But, and this is a big but, not
>>>> (necessarily) by scaling up RTT's deployer, but rather by seamless
>>>> integration in real deployment software (OSGi, for example). Maybe the
>>>> best way out is to make the RTT deployer in such a way that it allows to
>>>> be scriptable by external programs.
>>>
>>> That is one possibility, certainly. Like I said though, my customers and
>>> I really don't want to have to learn a couple of new tools to do this, if
>>> we can instead extend/expand the existing OCL deployer to cope.
>>
>> As one example of a possible solution to my deployment problems, would be
>> for the deployer's underlying XML parser to accept the following valid
>> XML. TinyXML doesn't support it, and neither Xerces-C v2 nor v3 seem to
>> take it. I don't know enough about XML to know (yet) what is lacking. The
>> doc's seem to indicate that Xerces understands the "ENTITY" node.
>>
>> NB if you put the library name back instead of using "&lib", Xerces will
>> accept this and the deployer runs cleanly, but the component is then named
>> "name" instead of "Console". :-( S
>
> Your example contained 2 bugs:
>
> You need to declare the entities *in* the DOCTYPE tag.
> You need to close your entity names with ';'.
>
> Can you test the example below:
>
> <?xml version="1.0" encoding="UTF-8"?>
> <!DOCTYPE properties SYSTEM "cpf.dtd"
> [
> <!ENTITY name "Console">
> <!ENTITY lib "liborocos-rtt">
> ]
>
> <properties>
>
>   <simple name="Import" type="string">
>     <value>&lib;</value>
>   </simple>
>   <simple name="Import" type="string">
>     <value>liborocos-ocl-common</value>
>   </simple>
>
>   <struct name="&name;" type="OCL::HMIConsoleOutput">
>   </struct>
>
> </properties>
>
> I have validated it with the 'rng-mode' of emacs, which is incredibly smart
> and helpful.
>
> Peter

Missing a ">" to close DOCTYPE, but otherwise it works!! Hmm, now I need to install Xerces on all our machines, and redo our deployment scripts ... :-)

Once I've got some experience with the above, I'll write a wiki page on it.

Also, RTT v1 is _not_ compatible with Xerces 3. The depdom library isn't installed (at least under MacPorts). Whether this is a Xerces problem, or just that we need to update our CMake logic, I don't know. Suspect the latter - don't have time to examine right now.

S

eventports and call-backs

>
> Also, RTT v1 is _not_ compatible with Xerces 3. The depdom library isn't
> installed (at least under MacPorts). Whether this is a Xerces problem, or
> just that we need to update our CMake logic, I don't know. Suspect the
> latter - don't have time to examine right now.
>

This has been previously reported on the list:
http://www.orocos.org/forum/rtt/rtt-dev/findxercescmake-broken-xercesc-31
It should be OK to remove the depdom dependency.

Adolfo