Urgent need for RTT/Composition primitive (aka "Warning: the current iTaSC implementation is a realtime threat!")

this is a message that I consider to be _strategic_ for the realtime
reputation of both Orocos/RTT and the BRICS Component Model. It is a rather
condensed email, with the following summary:

1. Need for Composition
2. The problem with execution efficiency
3. Need for adding Computational model to Composition
4. Need for tooling

I hope the Orocos and BRICS developer communities are strong and
forward-looking enough to take action...
I expect several follow-up messages to this "seed", in order to (i) refine
its contents, and (ii) start sharing the development load.

Best regards,

Herman Bruyninckx

===============
1. Need for Composition
In the "5Cs", Composition is singled out as the "coupling" aspect
complementary to the "decoupling" aspects of Computation, Communication,
Configuration and Coordination.
In the BRICS Component Model (BCM), the different Cs come into play at
different phases of the 5-phased development process (functional,
component, composition/system, deployment, runtime); in the context of this
message, I focus on the three phases "in the middle":
- Component phase: developers make components, for maximal reuse and
composability in later systems. Roughly speaking, the "art" here is to
decouple the algorithms/computations inside the component from the access
to the component's functionality (Computation, Communication,
Configuration or Coordination) via Ports (and the "access policies" on
them).
- Composition phase: developers make a system, by composing components
together, via interconnecting Ports, and specifying "buffering policies"
on those Ports.
- Deployment phase: composite components are put into 'activity
containers' (threads, processes,...), and connections between Ports are
given communication middleware implementations.
Although there is no strong or structured tooling support for these
developments (yet) _and_ there is no explicit Composition primitive (in
RTT, or BRIDE), the good developers in the community have the discipline to
follow the outlined workflow to a large extent, resulting in designs that
are well prepared for distributed deployment, and with very few
coordination problems (deadlocks, data inconsistencies,...).

One recent example is the new iTaSC implementation, using Orocos/RTT as
component framework: <http://orocos.org/wiki/orocos/itasc-wiki>. It uses
another standalone-ready toolkit, rFSM, for its Coordination state
machines: <http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.

So far so good, because the _decoupling_ aspects of complex component-based
systems are very well satisfied.

But the _composition_ aspect is tremendously overlooked, resulting in a
massive waste of computational efficiency. (I explain this below.) I
consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
selling point towards serious industrial uptake, and _the_ major
competitive disadvantage with respect to commercial "one-tool-fits-all
lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.

2. The problem with execution efficiency
What is wrong exactly with respect to execution efficiency? The cause of
the problem is that decoupling is taken to the extreme in the
above-mentioned development "tradition", in that each component is deployed
in its own activity (a thread within a process, or, even worse, a separate
process within the operating system). The obvious good result of this is
robustness; the (not so obviously visible) bad results are that:
- events and data are exchanged between components via our very robust
  Port-Connector-Port mechanisms, which implies a lot of buffering, and
  hence requires several context switches before data is really
  delivered from its provider to its consumer.
- activities are triggered via Coordination and/or Port buffering events,
  which has two flaws:
  (i) activities should be triggered by a _scheduler_ (because events are
      semantically only there to trigger changes in _behaviour_, and not in
      _computation_!); result: too many events, and consequently too much
      time lost in event handling that should not be there, _and_ lots of
      context switches.
  (ii) too many context switches are needed to make the data flow robustly
      through our provider Ports, connectors and consumer Ports; result:
      delays of several time ticks.
Conclusion: the majority of applications allow all of their computations
to be deployed in one single thread, without even running the risk of data
corruption, because there is a natural serialization of all computations in
the application. Single-threaded execution does away with _all_ of the
above-mentioned computational waste. But we have no good guidelines yet,
let alone tooling, to support developers with the (not so trivial) task of
efficiently serializing component computations. That's where the
"Composition" phase of the development process comes in, together with the
"Composition" primitive and its associated Computational models.

3. Need for adding Computational model to Composition
The introduction of an explicit Composition phase into the development
process, _and_ the introduction of the corresponding Composition primitive
in BCM/RTT, will lead to the following two extra features which bring
tremendous potential for computational efficiency:

- Scope/closure/context/port promotion: a Composition (= composite
component) is the right place to determine which data/events will only be
used between the components within the composite, and which ones will have
to be "promoted" to be accessible from the outside. The former are the
ones with opportunities of gaining tremendous computational efficiency:
a connection between two Ports within the composite can be replaced by a
shared variable, which can be accessed by both components without delays
and buffering. The same holds for events.

- Computational model:
Of course, this potential gain is only realisable when the execution of
the computations in all components can be _scheduled_ as a _serialized_
list of executions: "first Component A, then Component B, then Component C,
and finally Component A again". Such natural serializations exist in
_all_ robotics applications that I know of, and I have seen many.
Finding the right serialization (i.e., "Computational model") is not
always trivial, obviously. As is finding the right granularity of
"computational codels". The good news is that experts exist for all
specific applications to provide solutions.
(Note: serialization of the computations in components is only _one_
possible Computational model; there are others, but they are outside the
scope of this message.)
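
A minimal sketch, in plain C++ (illustrative structs only, _not_ the
actual RTT API), of what these two features buy: the internal
Port-Connector-Port link becomes a shared variable scoped inside the
composite, and the composite's single activity executes the components in
the serialized schedule "first A, then B":

```cpp
#include <cassert>

// Replaces a Port-Connector-Port link *inside* the composite: no buffering,
// no context switches, and no locking needed, because the serialized
// schedule itself guarantees that accesses never overlap.
struct SharedVar { double value = 0.0; };

struct ComponentA {
    SharedVar* out;
    void update() { out->value += 1.0; }          // producer computation
};

struct ComponentB {
    SharedVar* in;
    double result = 0.0;
    void update() { result = 2.0 * in->value; }   // consumer computation
};

struct Composite {
    SharedVar ab;            // scoped to the composite: not "promoted"
    ComponentA a{&ab};
    ComponentB b{&ab};
    // The Computational model: one activity runs "first A, then B".
    void update() { a.update(); b.update(); }
};
```

One call to Composite::update() delivers A's output to B within the same
tick, instead of after several context switches and buffer copies.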

At deployment time, one has a set of Composite components available, for
which the computational model has already been added and configured at the
composite level (if needed), so that one then has to add activities and
communication middleware only _per composite component_, and not per
individual component.

4. Need for tooling
The above-mentioned workflow in the Composition phase is currently not
supported by any tool at all. This is a major hole in the BRIDE/RTT
frameworks. I envisage something in the direction of what Genom is doing,
since that approach has the concept of a "codel", that is, the atomically
'deployable' piece of computation. Where 'deployment' means: to put it into
a computational schedule within a Composite. (The latter sentence is _not_
Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)

[software-toolchain] Urgent need for RTT/Composition primitive

On Sat, 31 Mar 2012, brugali wrote:

>
>
> Il 3/30/2012 8:41 PM, Herman Bruyninckx ha scritto:
>> On Fri, 30 Mar 2012, brugali wrote:
>>
>>> Dear all,
>>>
>>> I would like to contribute my two cents to the discussion on "The problem
>>> with execution
>>> efficiency".
>>>
>>> As far as I've understood the ongoing discussion, I see that the problem
>>> is formulated in terms
>>> of computational waste due to excessive number of threads, which is
>>> originated by an obsessive
>>> attitude to map even simple functionality to coarse grain components which
>>> interact according to
>>> the data flow architectural model.
>>
>> Indeed. The trade-off between (i) the robustness of decoupled components,
>> and (ii) the efficiency of highly coupled components. (Where "component"
>> means: a piece of software whose functionality one accesses through ports.)

> I prefer to think in terms of well-defined (i.e. harmonized and clearly
> separated from implementation) component interfaces.

This is something _any_ piece of well-designed software (or hardware, for
that matter) should have. But it is not differentiating the component-based
approach from others, more specifically from the object-oriented approach.

>>> If my interpretation of the problem is correct, one possible solution
>>> consists in:
>>> a) classifying concurrency at different levels of granularity, i.e. fine,
>>> medium, and large
>>> grain as in [1]
>>> b) map these levels of concurrency to three units of design, respectively:
>>> sequential component,
>>> service component, and container component.
>>> 3) use different architectural models and concurrency mechanisms for
>>> component interaction (i.e.
>>> data flow, client-server).
>>
>> Strange, the Italian "alfabet of counting": a, b, 3! :-)
>>
> nice interpretation of my typo! :-[ I see here your keen sensitivity to
> alphabets (Cs, Ms, ...) :-)
>> But we add "4) allow to use an application-specific schedule of
>> computations for which one _knows_ that all constraints are satisfied for
>> data integrity".
>>
> A global (i.e. application-wide) scheduler is one possible mechanism that
> ensures data integrity.

That is why "composition" is so important: the "composite" brings in the
clear scope ("application-wide") within which the _coupling_ aspects of a
design must/can be described/modelled/supported by tools. Data integrity is
one, computational efficiency is another one, and authorization yet another
one, etc. Wrt data integrity: components (and classes, for that matter)
already deal with their internal data integrity (by the separation between
interface and implementation ("decoupling") that you mentioned), but at
the composition level one _has_ to do something extra, because data are no
longer "hidden" in the interactions ("couplings"!) between components.

As far as I am concerned, this tension between coupling/decoupling is one
of the essential aspects of architecture... (You see, I am already
preparing some discussion sessions on the next Research Camp! :-))

> Other mechanisms (i.e. connectors implementing interaction protocols) can be
> defined for component-wide constraints.
> The Data-Flow model of computation specifies that a component performs a
> computation when all the input data are available.

That is just one particular policy (imposed on us by the Simulink-etc
legacy), but I generalize this: a composite can decide for itself (read:
have a separate computational schedule for) when any of its ports is
"triggered". The "scene graph" component is a major example that shows this
need: it will be interacting with lots of "clients", and it does not make
sense to let it wait until all these clients have provided new data.

> In order to guarantee data
> consistency in a concurrent system and prevent race conditions, input data
> might be tagged in such a way that the component performs a computation only
> when the full set of "matched input data" are available.

Yes, that is _one_ of the relevant policies for triggering _a_
computational schedule inside the component.
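
A minimal sketch (illustrative C++, hypothetical names, not an existing
framework API) of such a tag-matching trigger policy: each input sample
carries a tag, and the computation may fire only when all required ports
hold samples with the same tag:

```cpp
#include <cassert>
#include <optional>

// An input sample tagged, e.g., with the cycle count of its producer.
struct Tagged { int tag; double value; };

// Two required input ports of a (hypothetical) inverse velocity
// kinematics computation: joint position and Cartesian velocity.
struct MatchedInputs {
    std::optional<Tagged> joint_pos;
    std::optional<Tagged> cart_vel;

    // Trigger policy: a full set of "matched input data" = both samples
    // present *and* carrying the same tag.
    bool ready() const {
        return joint_pos && cart_vel && joint_pos->tag == cart_vel->tag;
    }
};
```

Whether the tag is a cycle count, a timestamp, or something richer is
exactly the policy choice the composite has to make.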

[...]
>>> - A set of sequential, service, and container components all together
>>> form a component assembly (focus on Configuration). N.B. for me
>>> Composition is a kind of Configuration aspect (4Cs are enough)
>>
>> I do not agree here :-) Or rather: the reason to separate Composition from
>> Configuration (because one is _not_ a kind of the other) was my major
>> reason to extend the original 4C paradigm.
> I know that you do not agree here, but this is a remnant of our religious war
> on components ;-)

I am past that religious war stage! _I_ now have the one and only true
religion! :-)))))

> For me Composition is about how a component-based system is organized, i.e.
> configured.

The organization and configuration are two complementary but different
aspects, in my opinion:
- "Organization" = "Composition" = setting the _interactions_ between all
components in the composite.
- "Configuration" = setting the _behaviour_ of each of the components in the
composite (including the composite itself...).

> The arrangement of components and connectors in a flat or hierarchical (i.e.
> composition) way is the system configuration.

Yes, but this is too much semantic overloading for my taste, since the
component and composite "configurations" (as _you_ call them) have different
meanings. I agree that a 4C-simplification can still work, but separating
out the Composition "C" has proven _extremely_ useful for me over the last
year. For example, it was instrumental for the tremendous breakthrough we
realised in the Rosetta project, wrt the specification of the tasks that a
robot system has to execute. Composition, in my semantic meaning, is
really the thing that connects "best practice" design in both parts:
the "BRICS" part (component-based system design) and the "Rosetta" part
(task specification). They only share "Composition".

> A specific component in the system might be in charge of managing (i.e.
> dynamically reconfiguring) the system configuration.

Sure. That's the Configurator of the Composite :-)

> Reconfiguration might consist in adding/removing/replacing components
> to/from/in composites.

Yes, but that _always_ takes place in the context of a larger composite,
in whose scope ("closure") all these components and composites exist, and
can be reasoned about.

Herman

[software-toolchain] Urgent need for RTT/Composition primitive

Il 3/31/2012 11:28 AM, Herman Bruyninckx ha scritto:
> On Sat, 31 Mar 2012, brugali wrote:
>
>>
>>
>> Il 3/30/2012 8:41 PM, Herman Bruyninckx ha scritto:
>>> On Fri, 30 Mar 2012, brugali wrote:
>>>
>>>> Dear all,
>>>>
>>>> I would like to contribute my two cents to the discussion on "The
>>>> problem with execution
>>>> efficiency".
>>>>
>>>> As far as I've understood the ongoing discussion, I see that the
>>>> problem is formulated in terms
>>>> of computational waste due to excessive number of threads, which is
>>>> originated by an obsessive
>>>> attitude to map even simple functionality to coarse grain
>>>> components which interact according to
>>>> the data flow architectural model.
>>>
>>> Indeed. The trade-off between (i) the robustness of decoupled
>>> components,
>>> and (ii) the efficiency of highly coupled components. (Where
>>> "component"
>>> means: a piece of software whose functionality one accesses through
>>> ports.)
>
>> I prefer to think in terms of well-defined (i.e. harmonized and
>> clearly separated from implementation) component interfaces.
>
> This is something _any_ piece of well-designed software (or hardware, for
> that matter) should have. But it is not differentiating the
> component-based
> approach from others, more specifically from the object-oriented
> approach.
No, I was thinking about a more general way of defining components than
your port-based definition.
The term "port" is somehow overloaded even if many component models
refer to data-flow ports.
>
>>>> If my interpretation of the problem is correct, one possible
>>>> solution consists in:
>>>> a) classifying concurrency at different levels of granularity, i.e.
>>>> fine, medium, and large
>>>> grain as in [1]
>>>> b) map these levels of concurrency to three units of design,
>>>> respectively: sequential component,
>>>> service component, and container component.
>>>> 3) use different architectural models and concurrency mechanisms
>>>> for component interaction (i.e.
>>>> data flow, client-server).
>>>
>>> Strange, the Italian "alfabet of counting": a, b, 3! :-)
>>>
>> nice interpretation of my typo! :-[ I see here your keen sensitivity
>> to alphabets (Cs, Ms, ...) :-)
>>> But we add "4) allow to use an application-specific schedule of
>>> computations for which one _knows_ that all constraints are
>>> satisfied for
>>> data integrity".
>>>
>> A global (i.e. application-wide) scheduler is one possible mechanism
>> that ensures data integrity.
>
> That is why "composition" is so important: the "composite" brings in the
> clear scope ("application-wide") within which the _coupling_ aspects of a
> design must/can be described/modelled/supported by tools. Data
> integrity is
> one, computational efficiency is another one, and authorization yet
> another
> one, etc. Wrt data integrity: component (and classes for that matter too)
> deal already with their internal data integrity (by the separation
> between
> interface and implementation ("decoupling") that you mentioned), but at
> composition level one _has_ to do something extra, because data are not
> "hidden" anymore in the interactions ("couplings"!) between components.
>
> As far as I am concerned, this tension between coupling/decoupling is one
> of the essential aspects of architecture... (You see, I am already
> preparing some discussion sessions on the next Research Camp! :-))
Excellent!
>
>
>> Other mechanisms (i.e. connectors implementing interaction protocols)
>> can be defined for component-wide constraints.
>> The Data-Flow model of computation specifies that a component
>> performs a computation when all the input data are available.
>
> That is just one particular policy (imposed on us by the Simulink-etc
> legacy), but I generalize this: a composite can decide for itself (read:
> have a separate computational schedule for) when any of its ports is
> "triggered". The "scene graph" component is a major example that shows
> this
> need: it will be interacting with lots of "clients", and it does not make
> sense to let it wait until all these clients have provided new data.
Actually I had in mind a different example. When I write "all the input
data are available", I mean all the required input data.

I'm facing this problem right now with the refactoring of the Kinematics
Component that has been developed for the RC3 KUL motion stack. I would
like to have a generic component that performs both position and
velocity direct and inverse kinematics. It would make sense to have 4
input ports and 4 output ports (position and velocity in joint and
Cartesian space).
Inverse velocity kinematics requires as input both joint position and
Cartesian velocity. A mechanism is needed that guarantees consistency
between the data arriving on the two input ports when multiple clients
might write data on those ports.
The KUL motion stack for RC3 uses a Kinematics component that implements
only velocity kinematics; that is, the problem is solved by limiting the
functionality (and reusability) of the Kinematics component to that needed
by only one client.

>
>> In order to guarantee data consistency in a concurrent system and
>> prevent race conditions, input data might be tagged in such a way
>> that the component performs a computation only when the full set of
>> "matched input data" are available.
>
> Yes, that is _one_ of the relevant policies for triggering _a_
> computational schedule inside the component.
>
> [...]
>>>> - A set of sequential, service, and container components all together
>>>> form a component assembly (focus on Configuration). N.B. for me
>>>> Composition is a kind of Configuration aspect (4Cs are enough)
>>>
>>> I do not agree here :-) Or rather: the reason to separate
>>> Composition from
>>> Configuration (because one is _not_ a kind of the other) was my major
>>> reason to extend the original 4C paradigm.
>> I know that you do not agree here, but this is a remnant of our
>> religious war on components ;-)
>
> I am past that religious war stage!
What a pity! I was looking forward to celebrating another epiphany!
> _I_ now have the one and only true
> religion! :-)))))
I'm just curious to know whether you are the pope of this true religion or
whether you have an even higher role. :-D
>
>> For me Composition is about how a component-based system is
>> organized, i.e. configured.
>
> The organization and configuration are two complementary but different
> aspects, in my opinion:
> - "Organization" = "Composition" = setting the _interactions_ between all
> components in the composite.
> - "Configuration" = setting the _behaviour_ of each of the components
> in the
> composite (including the composite itself...).
>
>> The arrangement of components and connectors in a flat or
>> hierarchical (i.e. composition) way is the system configuration.
>
> Yes, but this is too much semantic overloading to my taste, since the
> component and composite "configurations" (as _you_ call them) have
> different
> meanings. I agree that a 4C-simplification can still work, but separating
> out the Composition "C" has proven _extremely_ useful for me the last
> year.
> For example, it was instrumental for the tremendous breakthrough we have
> realised in the Rosetta project, wrt the specification of the tasks
> that a
> robot system has to execute. The Composition in my semantic meaning is
> really the thing that connects both "best practice" design in both parts:
> the "BRICS" part (component based system design) and the "Rosetta" part
> (task specification). They only share "Composition".
>
>> A specific component in the system might be in charge of managing (i.e.
>> dynamically reconfiguring) the system configuration.
>
> Sure. That's the Configurator of the Composite :-)
>
>> Reconfiguration might consist in adding/removing/replacing components
>> to/from/in composites.
>
> yes, but that _always_ takes place in the context of a larger
> composite, in
> whose scope ("closure") all these components and composite exist, and can
> be reasoned about.
Indeed, an application can be defined as the largest composite in a
component-based system.
>
> Herman
>
Davide

[software-toolchain] Urgent need for RTT/Composition primitive

On Sat, 31 Mar 2012, brugali wrote:

>
> Il 3/31/2012 11:28 AM, Herman Bruyninckx ha scritto:
>> On Sat, 31 Mar 2012, brugali wrote:
>>>
>>> Il 3/30/2012 8:41 PM, Herman Bruyninckx ha scritto:
>>>> On Fri, 30 Mar 2012, brugali wrote:
>>>>
>>>>> Dear all,
>>>>>
>>>>> I would like to contribute my two cents to the discussion on "The
>>>>> problem with execution
>>>>> efficiency".
>>>>>
>>>>> As far as I've understood the ongoing discussion, I see that the problem
>>>>> is formulated in terms
>>>>> of computational waste due to excessive number of threads, which is
>>>>> originated by an obsessive
>>>>> attitude to map even simple functionality to coarse grain components
>>>>> which interact according to
>>>>> the data flow architectural model.
>>>>
>>>> Indeed. The trade-off between (i) the robustness of decoupled components,
>>>> and (ii) the efficiency of highly coupled components. (Where "component"
>>>> means: a piece of software whose functionality one accesses through
>>>> ports.)
>>
>>> I prefer to think in terms of well-defined (i.e. harmonized and clearly
>>> separated from implementation) component interfaces.
>>
>> This is something _any_ piece of well-designed software (or hardware, for
>> that matter) should have. But it is not differentiating the component-based
>> approach from others, more specifically from the object-oriented approach.
> No, I was thinking about a more general way of defining components than your
> port-based definition.
> The term "port" is somehow overloaded even if many component models refer to
> data-flow ports.

"Port" in my semantics means: a uniquely defined entry point into the
functionality of the component. It is the obvious place to connect the
following thing to: data transformation, buffering, authorization, API,
triggering of computations,...
In other words, it separates the inside from the outside of a component.

[...]
>>> Other mechanisms (i.e. connectors implementing interaction protocols) can
>>> be defined for component-wide constraints.
>>> The Data-Flow model of computation specifies that a component performs a
>>> computation when all the input data are available.
>>
>> That is just one particular policy (imposed on us by the Simulink-etc
>> legacy), but I generalize this: a composite can decide for itself (read:
>> have a separate computational schedule for) when any of its ports is
>> "triggered". The "scene graph" component is a major example that shows this
>> need: it will be interacting with lots of "clients", and it does not make
>> sense to let it wait until all these clients have provided new data.

> Actually I had in mind a different example. When I write "all the input data
> are available", I mean all the required input data.

Ok! But somehow we will have to represent what 'required' means exactly :-)

> I'm facing this problem right now with the refactoring of the Kinematics
> Component that has been developed for the RC3 KUL motion stack. I would like
> to have a generic component that performs both position and velocity direct
> and inverse kinematics. It would make sense to have 4 input ports and 4
> output ports (position and velocity in joint and Cartesian space).

yes, and even many more: "desired", "actual", "error",... variations of
these nominal ports; plus the possibility to monitor the deviations that
occur inside the component in order to fire events when these deviations
are too large; etc.

This "API explosion" is the reason why we are now trying to go the "data
flow component" way.

> Inverse velocity kinematics requires as input both joint position and
> Cartesian velocity. A mechanism is needed that guarantees consistency between
> the data arriving on the two input ports when multiple clients might write
> data on those ports.

This is the responsibility of the system builder: (s)he should make sure
that components are configured to "run" only when a consistent set of data
is on their ports.

> The KUL motion stack for RC3 uses a Kinematics component that implements only
> velocity kinematics, that is the problem is solved by limiting the
> functionality (and reusability) of the Kinematics component to those needed
> by only one client.

Yes! Another reason to go to a data flow architecture: different ports can
give access for different kinds of "clients".

>> [...]
>>>>> - A set of sequential, service, and container components all together
>>>>> form a component assembly (focus on Configuration). N.B. for me
>>>>> Composition is a kind of Configuration aspect (4Cs are enough)
>>>>
>>>> I do not agree here :-) Or rather: the reason to separate Composition
>>>> from
>>>> Configuration (because one is _not_ a kind of the other) was my major
>>>> reason to extend the original 4C paradigm.
>>> I know that you do not agree here, but this is a remnant of our religious
>>> war on components ;-)
>>
>> I am past that religious war stage!
> What a pity! I was looking forward to celebrating another epiphany!
>> _I_ now have the one and only true
>> religion! :-)))))
> I'm just curios to know if you are the pope of this true religion or if you
> have even a higher role. :-D

This kind of indiscretions can only be the subject of private email
conversations... :-)

Herman

[software-toolchain] Urgent need for RTT/Composition primitive

On Mon, Apr 2, 2012 at 12:05, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
>> Actually I had in mind a different example. When I write "all the input data
>> are available", I mean all the required input data.
>
> Ok! But somehow we will have to represent what 'required' means exactly :-)

This problem was already recognized a long, long time ago by R. Brooks [1].
At that time, there were no "components", but only "modules" :) It was called
"Event dispatch" and allowed expressing and/or conditions on data arrivals
and expirations of time delays. (This "feature" does not deal with the
subsumption architecture, but rather with the internal operation of the
modules.)

In general, it is not difficult to find a DSL for expressing what
'required' means.
The question is - what should be allowed as the input data for evaluating
"requirement" conditions? The more is allowed, the more difficult it is to
formalize the domain.

[1] R. Brooks, "A robust layered control system for a mobile robot",
IEEE Journal of Robotics and Automation, 1986.

[software-toolchain] Urgent need for RTT/Composition primitive

On Mon, 2 Apr 2012, Piotr Trojanek wrote:

> On Mon, Apr 2, 2012 at 12:05, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>>> Actually I had in mind a different example. When I write "all the input data
>>> are available", I mean all the required input data.
>>
>> Ok! But somehow we will have to represent what 'required' means exactly :-)
>
> This problem has been already recognized a long, long time ago by R. Brooks.

Now you are bringing in a very suspicious reference into the discussion... :-)
Especially because that reference did not provide any practical answers to
the problem...

> At that time, there were no "components", but only "modules" :)

And one of the major practical problems of the "behaviour-based" approach
that Brooks brought into the robotics domain was its extreme lack of
semantic clarity about what everything meant exactly... Let alone the ways
in which one could make a "design" deterministic and performant.

> It was called "Event dispatch" and allowed to express and/or conditions
> of data arrivals and expirations of time delays (This "feature" does not
> deals with the subsumption, rather with the internal operation of the
> modules).

It was indeed limited to "time delays" on ports, while we know already that
port semantics play a more important role.

> In general, it is not difficult to find a DSL for expressing what
> 'required' means.

Then show me some "best practice" examples, please! I'm dying to get to
know them better...

> The question is - what should be allowed as the input data for evaluating
> "requirement" conditions? The more is allowed, the more difficult it is to
> formalize the domain.
>
> [1] A robust layered control system for a mobile robot, R Brooks -
> IEEE Journal of Robotics and Automation, 1986.

> Piotr Trojanek

Herman

[software-toolchain] Urgent need for RTT/Composition primitive

On Mon, Apr 2, 2012 at 13:23, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>> Actually I had in mind a different example. When I write "all the input
>>>> data
>>>> are available", I mean all the required input data.
>>>
>>>
>>> Ok! But somehow we will have to represent what 'required' means exactly
>>> :-)
>>
>>
>> This problem has been already recognized a long, long time ago by R.
>> Brooks.
>
>
> Now you are bringing in a very suspicious reference into the discussion...
> :-)
> Especially because that reference did not provide any practical answers to
> the problem...

Please, do not get me wrong - I am not advocating Brooks's solution to the
"required data" problem, nor his approach to structuring the control system.

Rather, my goal was to show that each "architecture"/"framework" needs to
deal with this problem (though I have never seen this issue made a
first-class citizen of a framework). It is difficult to talk about an
explicit solution (in the form of a DSL) without first making the domain
explicit (i.e., modeling what data is available for expressing and
evaluating the "required data" condition).

>> At that time, there were no "components", but only "modules" :)
>
> And one of the major practical problems of the "behaviour-based" approach
> that Brooks brought into the robotics domain was its extreme lack of
> semantic clarity of what everything meant exactly.... Let alone the ways in
> which one could make a "design" deterministic and performant.

His solution was to provide a translational semantics of his concepts by means
of a LISP-based "reference implementation" - the 'Behavior language'. Once
again - this is what he did (and this is not what I like). But I guess that the
language was clear enough for the users of Brooks's compiler :)

>> It was called "Event dispatch" and allowed one to express AND/OR conditions
>> on data arrivals and expirations of time delays. (This "feature" does not
>> deal with subsumption, but rather with the internal operation of the
>> modules.)
>
>
> It was indeed limited to "time delays" on ports, while we know already that
> port semantics play a more important role.

Definitely!

>
>
>> In general, it is not difficult to find a DSL for expressing what
>> 'required' means.
>
>
> Then show me some "best practice" examples, please! I'm dying to get to
> know them better...

Me too :-) But until then, I still believe that this is mostly a matter of
what one is allowed to consider within a "required data" condition.

E.g., one can imagine a condition like: "<there are at least N items on
port A> AND <no activity on port B for the last 5 ms> OR <the value on the
integer port C is above parameter P>". Of course, this is freaky - but where
are the borders between *complete*, "easy to use" and "enough for most
use-cases"?
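To make the discussion concrete, the example condition above can be written down directly as a pure Boolean predicate. A minimal C++ sketch, assuming a hypothetical `PortSnapshot` struct (not part of any RTT/Orocos API) that exposes the relevant port state:

```cpp
#include <cstddef>

// Hypothetical snapshot of the port state visible to a guard; the field
// names are illustrative, not taken from RTT/Orocos.
struct PortSnapshot {
    std::size_t items_on_a;   // number of items queued on port A
    double      ms_since_b;   // milliseconds since the last activity on port B
    int         value_on_c;   // last value seen on integer port C
};

// Pure, side-effect-free encoding of the example condition:
// (at least N items on A AND no activity on B for 5 ms) OR (C above P).
bool required_data(const PortSnapshot& s, std::size_t n, int p) {
    return (s.items_on_a >= n && s.ms_since_b >= 5.0) || s.value_on_c > p;
}
```

The point is only that such a predicate is trivially expressible; the hard part, as argued above, is deciding which inputs (queue sizes, timestamps, values, parameters) the DSL is allowed to expose to it.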

Definitely the "best practice" is to use a pure (i.e. without side effects),
Boolean-valued function for expressing these kinds of predicates. On the
implementation side, one of the "best practices" I see is to use the
condition variable concurrency construct (but I am not sure how it matches
lock-free algorithms). For efficiency and determinism, the "guard
expression" should be evaluated only by port writers and the result kept in
a single Boolean variable (similar to the restrictions in the Ravenscar
profile of Ada).

Of course, these are for a general-case scenario, and there are many
patterns for which one can fine-tune a better, dedicated solution (e.g.
FBSched). On the other hand - if the focus is only on the avoidance of
context switches, then it is easy to apply the above solution with a
lock-less condition variable construct in single-threaded composites.
However, it requires making the "required data" conditions accessible to
the "scheduler".
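The Ravenscar-style discipline described above (guard re-evaluated only on the writer side, result cached in one Boolean, reader blocked on a condition variable) could look roughly as follows in C++. This is a made-up sketch for illustration, not RTT code; the "at least 2 items" guard is arbitrary:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Sketch of a buffer whose guard is evaluated only by writers and cached
// in a single Boolean (Ravenscar-like), with the reader blocked on a
// condition variable until the guard holds.
class GuardedBuffer {
public:
    void write(int v) {                       // called by port writers
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(v);
            ready_ = q_.size() >= 2;          // guard: "at least 2 items" (illustrative)
        }
        cv_.notify_one();
    }

    int take_pair_sum() {                     // called by the single reader
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return ready_; });  // sleep until the cached guard is true
        int a = q_.front(); q_.pop_front();
        int b = q_.front(); q_.pop_front();
        ready_ = q_.size() >= 2;              // keep the cached result consistent
        return a + b;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<int> q_;
    bool ready_ = false;
};
```

The reader never re-evaluates the guard expression itself; it only tests the cached Boolean, which is what keeps the evaluation cost on the writer side.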

[software-toolchain] Urgent need for RTT/Composition primitive (

On 02/04/12 21:58, Piotr Trojanek wrote:
> On Mon, Apr 2, 2012 at 13:23, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>>> In general, it is not difficult to find a DSL for expressing what
>>> 'required' means.
>>
>>
>> Then show me some "best practice" examples, please! I'm dying to get to
>> know them better...
>
> Me too :-) But until then, I still believe that this is mostly a matter of
> what one is allowed to consider within a "required data" condition.
>
> E.g., one can imagine a condition like: "<there are at least N items on
> port A> AND <no activity on port B for the last 5 ms> OR <the value on the
> integer port C is above parameter P>". Of course, this is freaky - but where
> are the borders between *complete*, "easy to use" and "enough for most
> use-cases"?
>
> Definitely the "best practice" is to use a pure (i.e. without side effects),
> Boolean-valued function for expressing these kinds of predicates. On the
> implementation side, one of the "best practices" I see is to use the
> condition variable concurrency construct (but I am not sure how it matches
> lock-free algorithms). For efficiency and determinism, the "guard
> expression" should be evaluated only by port writers and the result kept in
> a single Boolean variable (similar to the restrictions in the Ravenscar
> profile of Ada).
>
> Of course, these are for a general-case scenario, and there are many
> patterns for which one can fine-tune a better, dedicated solution (e.g.
> FBSched). On the other hand - if the focus is only on the avoidance of
> context switches, then it is easy to apply the above solution with a
> lock-less condition variable construct in single-threaded composites.
> However, it requires making the "required data" conditions accessible to
> the "scheduler".

Although probably not aimed exactly at what you two want, Erlang uses
its pattern matching syntax (and accompanying semantics, I suppose) to
do something similar. The basic concept is that any function is made up
of one or more bodies, with each body guarded by the pattern of its
arguments. The first one in the specified order that matches the
provided arguments is executed. This is a fairly common concept in
functional programming, as I understand it, so nothing new here. See
here for reference:

http://www.erlang.org/doc/reference_manual/functions.html#id73940

(Although perhaps relevant to what Piotr said above is that Erlang
allows guard expressions as an addition to the pattern matching, and
they *must* be side-effect free tests, which means user functions cannot
be used because those cannot be guaranteed to be side-effect free -
Erlang is non-pure in this regard.)

In Erlang, a *very* common idiom that builds on this is emptying a
process's mailbox. For those who don't know, Erlang supports extremely
lightweight processes and advertises itself as a concurrency-oriented
language. A typical process pattern is a recursive function that empties
the process mailbox, one message at a time, but *not* necessarily in the
arrival order. The messages in the mailbox are tested against a pattern,
which can be pretty much anything, and the first message to match a
pattern is processed. When no messages *that match the patterns* are
available, the process sleeps until a new message arrives in its
mailbox, at which point it tests all the messages again. (The size of a
mailbox obviously has an effect on testing time, so usually you have a
catch-all to catch unexpected messages.) Additionally, it is possible to
have multiple functions like this, and shift between them depending on
messages received, changing how your process responds to messages based
on changing state.

The effect of this is to cause a process to only perform some
functionality when the data necessary for that functionality is
available. Because pattern matching is built into the Erlang syntax,
this idiom is very easy to specify.

There are several good, simple examples on this page:

http://www.erlang.org/course/concurrent_programming.html

Just thought I'd throw that into the conversation for reference. It
seems to me like one possible syntax for specifying "when this data is
available, do that."
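For readers without Erlang at hand, the selective-receive idiom described above can be mimicked in C++. This is a deliberately simplified, single-threaded sketch (where real Erlang would block until a matching message arrives, this version just reports that nothing matched):

```cpp
#include <deque>
#include <functional>
#include <optional>
#include <utility>
#include <vector>

// Minimal sketch of Erlang-style selective receive: scan the mailbox in
// arrival order and consume the FIRST message matching any of the given
// patterns; non-matching messages stay queued for later receives.
template <typename Msg>
class Mailbox {
public:
    void post(Msg m) { box_.push_back(std::move(m)); }

    std::optional<Msg> receive(
        const std::vector<std::function<bool(const Msg&)>>& patterns) {
        for (auto it = box_.begin(); it != box_.end(); ++it)
            for (const auto& match : patterns)
                if (match(*it)) {
                    Msg m = std::move(*it);
                    box_.erase(it);
                    return m;
                }
        return std::nullopt;  // nothing matched: a real process would sleep here
    }

private:
    std::deque<Msg> box_;
};
```

Note how a message that matches no pattern is skipped, not discarded; this is exactly why a catch-all pattern is usually added, so unexpected messages do not accumulate.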

Geoff

[software-toolchain] Urgent need for RTT/Composition primitive (

On Tue, 3 Apr 2012, Geoffrey Biggs wrote:

> On 02/04/12 21:58, Piotr Trojanek wrote:
>> On Mon, Apr 2, 2012 at 13:23, Herman Bruyninckx
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>> In general, it is not difficult to find a DSL for expressing what
>>>> 'required' means.
>>>
>>>
>>> Then show me some "best practice" examples, please! I'm dying to get to
>>> know them better...
>>
>> Me too :-) But until then, I still believe that this is mostly a matter of
>> what one is allowed to consider within a "required data" condition.
>>
>> E.g., one can imagine a condition like: "<there are at least N items on
>> port A> AND <no activity on port B for the last 5 ms> OR <the value on the
>> integer port C is above parameter P>". Of course, this is freaky - but where
>> are the borders between *complete*, "easy to use" and "enough for most
>> use-cases"?
>>
>> Definitely the "best practice" is to use a pure (i.e. without side effects),
>> Boolean-valued function for expressing these kinds of predicates. On the
>> implementation side, one of the "best practices" I see is to use the
>> condition variable concurrency construct (but I am not sure how it matches
>> lock-free algorithms). For efficiency and determinism, the "guard
>> expression" should be evaluated only by port writers and the result kept in
>> a single Boolean variable (similar to the restrictions in the Ravenscar
>> profile of Ada).
>>
>> Of course, these are for a general-case scenario, and there are many
>> patterns for which one can fine-tune a better, dedicated solution (e.g.
>> FBSched). On the other hand - if the focus is only on the avoidance of
>> context switches, then it is easy to apply the above solution with a
>> lock-less condition variable construct in single-threaded composites.
>> However, it requires making the "required data" conditions accessible to
>> the "scheduler".
>
> Although probably not aimed exactly at what you two want, Erlang uses
> its pattern matching syntax (and accompanying semantics, I suppose) to
> do something similar. The basic concept is that any function is made up
> of one or more bodies, with each body guarded by the pattern of its
> arguments. The first one in the specified order that matches the
> provided arguments is executed. This is a fairly common concept in
> functional programming, as I understand it, so nothing new here. See
> here for reference:
>
> http://www.erlang.org/doc/reference_manual/functions.html#id73940
>
> (Although perhaps relevant to what Piotr said above is that Erlang
> allows guard expressions as an addition to the pattern matching, and
> they *must* be side-effect free tests, which means user functions cannot
> be used because those cannot be guaranteed to be side-effect free -
> Erlang is non-pure in this regard.)
>
> In Erlang, a *very* common idiom that builds on this is emptying a
> process's mailbox. For those who don't know, Erlang supports extremely
> lightweight processes and advertises itself as a concurrency-oriented
> language. A typical process pattern is a recursive function that empties
> the process mailbox, one message at a time, but *not* necessarily in the
> arrival order. The messages in the mailbox are tested against a pattern,
> which can be pretty much anything, and the first message to match a
> pattern is processed. When no messages *that match the patterns* are
> available, the process sleeps until a new message arrives in its
> mailbox, at which point it tests all the messages again. (The size of a
> mailbox obviously has an effect on testing time, so usually you have a
> catch-all to catch unexpected messages.) Additionally, it is possible to
> have multiple functions like this, and shift between them depending on
> messages received, changing how your process responds to messages based
> on changing state.
>
> The effect of this is to cause a process to only perform some
> functionality when the data necessary for that functionality is
> available. Because pattern matching is built into the Erlang syntax,
> this idiom is very easy to specify.
>
> There are several good, simple examples on this page:
>
> http://www.erlang.org/course/concurrent_programming.html
>
Thanks for the interesting explanations!

> Just thought I'd throw that into the conversation for reference. It
> seems to me like one possible syntax for specifying "when this data is
> available, do that."

There is definitely room for such components in a modern robotic system! We
are also looking into Erlang for such purposes.

I see this functionality as a potential policy for the data processing in "Ports".

> Geoff

Herman

[software-toolchain] Urgent need for RTT/Composition primitive (

On Tue, Apr 3, 2012 at 07:47, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
>> Although probably not aimed exactly at what you two want, Erlang uses
>> its pattern matching syntax (and accompanying semantics, I suppose) to
>> do something similar.

Indeed, Erlang is very close here! I was trying to explain the "best
practice" on the implementation level, but the example with Erlang is much
clearer for explaining the required syntax and semantics. Thanks for sharing
it!

>> (Although perhaps relevant to what Piotr said above is that Erlang
>> allows guard expressions as an addition to the pattern matching, and
>> they *must* be side-effect free tests, which means user functions cannot
>> be used because those cannot be guaranteed to be side-effect free -
>> Erlang is non-pure in this regard.)

Just to complete the reference list - here are the test functions that are
allowed in guard sequences:
http://www.erlang.org/doc/reference_manual/expressions.html#id79005
(together with composites of the above).

As you can see, there is not much out there, and this is why I claimed that
it is not difficult to build a DSL around these expressions. Erlang's guard
expressions rely on a message-box semantics for the "Ports". The more
complicated the semantics/policy of the "Port", the more issues need to be
addressed in the "required data" DSL.

>> In Erlang, a *very* common idiom that builds on this is emptying a
>> process's mailbox. For those who don't know, Erlang supports extremely
>> lightweight processes and advertises itself as a concurrency-oriented
>> language. A typical process pattern is a recursive function that empties
>> the process mailbox, one message at a time, but *not* necessarily in the
>> arrival order. The messages in the mailbox are tested against a pattern,
>> which can be pretty much anything, and the first message to match a
>> pattern is processed. When no messages *that match the patterns* are
>> available, the process sleeps until a new message arrives in its
>> mailbox, at which point it tests all the messages again. (The size of a
>> mailbox obviously has an effect on testing time, so usually you have a
>> catch-all to catch unexpected messages.) Additionally, it is possible to
>> have multiple functions like this, and shift between them depending on
>> messages received, changing how your process responds to messages based
>> on changing state.
>>
>> The effect of this is to cause a process to only perform some
>> functionality when the data necessary for that functionality is
>> available. Because pattern matching is built into the Erlang syntax,
>> this idiom is very easy to specify.
>>
>> There are several good, simple examples on this page:
>>
>> http://www.erlang.org/course/concurrent_programming.html
>>
> Thanks for the interesting explanations!
>
>> Just thought I'd throw that into the conversation for reference. It
>> seems to me like one possible syntax for specifying "when this data is
>> available, do that."
>
> There is definitely room for such components in a modern robotic system! We
> are also looking into Erlang for such purposes.

Please note that Erlang also defines a (relative) timeout in addition to the
above expressions. This makes it possible to (almost) capture the semantics
of periodically triggered data processing.
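Erlang's `receive ... after Timeout` maps naturally onto a condition-variable wait with a relative timeout. A hypothetical C++ sketch (not RTT API) of a mailbox whose reader falls back to periodic work when no message arrives in time:

```cpp
#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

// Sketch of Erlang's "receive ... after Timeout": block until a message
// arrives or the relative timeout expires; the timeout branch is where a
// component would run its periodic processing.
class TimedMailbox {
public:
    void post(int m) {
        {
            std::lock_guard<std::mutex> lk(m_);
            box_.push_back(m);
        }
        cv_.notify_one();
    }

    // Returns the message, or std::nullopt on timeout (the "after" branch).
    std::optional<int> receive_for(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lk(m_);
        if (!cv_.wait_for(lk, timeout, [this] { return !box_.empty(); }))
            return std::nullopt;          // timed out: do the periodic work
        int m = box_.front();
        box_.pop_front();
        return m;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<int> box_;
};
```

With a fixed timeout and an otherwise idle mailbox, this degenerates to a plain periodic activity, which is why the Erlang construct "almost" captures periodic triggering.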

> I see this functionality as a potential policy for the data processing in "Ports".
>
>> Geoff
>
> Herman
> --
> Orocos-Dev mailing list
> Orocos-Dev [..] ...
> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev

[software-toolchain] Urgent need for RTT/Composition primitive (

On Apr 3, 2012, at 7:48 PM, Piotr Trojanek wrote:

> On Tue, Apr 3, 2012 at 07:47, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>>> Although probably not aimed exactly at what you two want, Erlang uses
>>> its pattern matching syntax (and accompanying semantics, I suppose) to
>>> do something similar.
>
> Indeed, Erlang is very close here! I was trying to explain the "best practice"
> on the implementation level, but the example with Erlang is much more clear
> for explaining the required syntax and semantics. Thanks for sharing it!

It's also a nice syntax/semantics to use. I have a reasonable amount of Erlang experience now, and I always find it a pleasure to program in. In my opinion, Erlang is doing something right.

Geoff

[software-toolchain] Urgent need for RTT/Composition primitive (

On Thu, 5 Apr 2012, Geoffrey Biggs wrote:

> On Apr 3, 2012, at 7:48 PM, Piotr Trojanek wrote:
>
>> On Tue, Apr 3, 2012 at 07:47, Herman Bruyninckx
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>> Although probably not aimed exactly at what you two want, Erlang uses
>>>> its pattern matching syntax (and accompanying semantics, I suppose) to
>>>> do something similar.
>>
>> Indeed, Erlang is very close here! I was trying to explain the "best practice"
>> on the implementation level, but the example with Erlang is much more clear
>> for explaining the required syntax and semantics. Thanks for sharing it!
>
> It's also a nice syntax/semantics to use. I have a reasonable amount of Erlang experience now, and I always find it a pleasure to program in. In my opinion, Erlang is doing something right.
>
It is. But only for "discrete" guards. Eventually, though, all robotics
guard conditions must be reduced to a discrete interpretation, I guess.

> Geoff

Herman

[software-toolchain] Urgent need for RTT/Composition primitive (

On Sat, Mar 31, 2012 at 11:28, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
>> Other mechanisms (i.e. connectors implementing interaction protocols) can be
>> defined for component-wide constraints.
>> The Data-Flow model of computation specifies that a component performs a
>> computation when all the input data are available.
>
> That is just one particular policy (imposed on us by the Simulink-etc
> legacy), but I generalize this: a composite can decide for itself (read:
> have a separate computational schedule for) when any of its ports is
> "triggered". The "scene graph" component is a major example that shows this
> need: it will be interacting with lots of "clients", and it does not make
> sense to let it wait until all these clients have provided new data.

I like to think about it not in terms of "interaction protocols" or
"policies", but "patterns". These patterns are just proven solutions to some
common problems related to data and control flow. (The component itself can
be thought of as a pattern, one which deals mostly with the problem of code
reusability.)

In general, there is no limit on the number of data and control flow
patterns like the above. E.g. one can think of a pattern of repeating some
activity in order to deal with a hardware failure [1].

If your goal is to deal only with the "Simulink-like" pattern, then the solution
is relatively easy. However, if you want to deal with _composition_, then
I think you also need to deal with how to define different "building block
patterns" and also "relationship patterns".

Together with a pattern, an efficient method for its implementation should
be provided. The FBSched component is an excellent example, but I think that
a more general approach is needed - a pattern language dedicated to data and
control flow in the robotics domain. I believe that explicit MDE is the
right tool for the job of working with this kind of language.

[1] Non-linear sequencing, E. Gat - Aerospace and Electronic Systems Magazine,
IEEE, 2009.

Urgent need for RTT/Composition primitive (aka "Warning: current

On Fri, 30 Mar 2012, Geoffrey Biggs wrote:

> On Mar 29, 2012, at 10:35 PM, Herman Bruyninckx wrote:
>> Easy: none of the frameworks even _has_ an _explicit_ component model :-)
>> (Except for OpenRTM, to some extent.)
>
> Yay us! (To some extent.)

:-)

Is there any accessible source of more concrete information about what
models exactly OpenRTM uses, and how they are formalised in computer-readable
form?

I am talking a lot about the BRICS Component Model, but I must admit that
there is not much "out there" yet as far as explicit formalization is
concerned. It turns out to be rather complex, not so much wrt what knowledge
to represent (or code to _execute_ models), but rather wrt what formal
language to use.

Herman

Urgent need for RTT/Composition primitive (aka "Warning: current

On 30/03/12 15:19, Herman Bruyninckx wrote:
> On Fri, 30 Mar 2012, Geoffrey Biggs wrote:
>
>> On Mar 29, 2012, at 10:35 PM, Herman Bruyninckx wrote:
>>> Easy: none of the frameworks even _has_ an _explicit_ component model
>>> :-)
>>> (Except for OpenRTM, to some extent.)
>>
>> Yay us! (To some extent.)
>
> :-)
>
> Is there any accessible source of more concrete information about what
> models exactly OpenRTM uses, and how they are formalised in computer-readable
> form?
>
> I am talking a lot about the BRICS Component Model, but I must admit that
> there is not much "out there" yet as far as explicit formalization is
> concerned. It turns out to be rather complex, not so much wrt what knowledge
> to represent (or code to _execute_ models), but rather wrt what formal
> language to use.

Apart from the RTC model, there is nothing in English other than the
research papers you no doubt already know about. This is a pretty severe
defect, in my opinion, because having a complete model of the
architecture (there is *much* more in there beyond the RTC model) has
benefits both for defining and creating tools, and, for me as a
researcher, for communicating ideas.

In Japanese, we do have a couple of additional models for system
specification. We are working on better models, most notably for
deployment (that one will be available in English). Because a lot of our
work is related to OMG processes, we use UML, and so we work with a UML
expert when creating these models.

I think there is actually quite a bit out there with respect to formal
models of components. Certainly I've seen a lot of different standards
and specifications with their own formal component models. The difficulty is
finding one that's *complete*; also, they all use their own dialects and so
lack compatibility. A universal way to formally describe component models of
varying types seems like a holy grail, in some respects.

Geoff

Urgent need for RTT/Composition primitive (aka "Warning: current

On Mon, 2 Apr 2012, Geoffrey Biggs wrote:

> On 30/03/12 15:19, Herman Bruyninckx wrote:
>> On Fri, 30 Mar 2012, Geoffrey Biggs wrote:
>>
>>> On Mar 29, 2012, at 10:35 PM, Herman Bruyninckx wrote:
>>>> Easy: none of the frameworks even _has_ an _explicit_ component model
>>>> :-)
>>>> (Except for OpenRTM, to some extent.)
>>>
>>> Yay us! (To some extent.)
>>
>> :-)
>>
>> Is there any accessible source of more concrete information about what
>> models exactly OpenRTM uses, and how they are formalised in computer-readable
>> form?
>>
>> I am talking a lot about the BRICS Component Model, but I must admit that
>> there is not much "out there" yet as far as explicit formalization is
>> concerned. It turns out to be rather complex, not so much wrt what knowledge
>> to represent (or code to _execute_ models), but rather wrt what formal
>> language to use.
>
> Apart from the RTC model, there is nothing in English other than the research
> papers you no doubt already know about. This is a pretty severe defect, in my
> opinion, because having a complete model of the architecture (there is *much*
> more in there beyond the RTC model) has benefits both for defining and
> creating tools, and, for me as a researcher, for communicating ideas.
>
> In Japanese, we do have a couple of additional models for system
> specification. We are working on better models, most notably for deployment
> (that one will be available in English). Because a lot of our work is related
> to OMG processes, we use UML, and so we work with a UML expert when creating
> these models.

How are you representing constraints on the models? Via OCL?
For example, constraining composition to only hierarchies? Or, the
constraints on the "promotion" of a port from the inside to the outside of
a component?

> I think there is actually quite a bit out there with respect to formal models
> of components. Certainly I've seen a lot of different standards and
> specifications with their own formal component models. The difficulty is
> finding one that's *complete*, and that they all use their own dialects and
> so lack compatibility. A universal way to formally describe component models
> of varying types seems like a holy grail, in some respects.

In many respects... Some form of standardization is necessary; the closest
ones are via the route you mentioned: OMG -> UML -> profiles. But this road
brings in huge modelling environments, hence they are difficult to use
outside of Eclipse.

> Geoff

Herman

Urgent need for RTT/Composition primitive (aka "Warning: current

On 02/04/12 13:47, Herman Bruyninckx wrote:
> On Mon, 2 Apr 2012, Geoffrey Biggs wrote:
>> Apart from the RTC model, there is nothing in English other than the
>> research papers you no doubt already know about. This is a pretty
>> severe defect, in my opinion, because having a complete model of the
>> architecture (there is *much* more in there beyond the RTC model) has
>> benefits both for defining and creating tools, and, for me as a
>> researcher, for communicating ideas.
>>
>> In Japanese, we do have a couple of additional models for system
>> specification. We are working on better models, most notably for
>> deployment (that one will be available in English). Because a lot of
>> our work is related to OMG processes, we use UML, and so we work with
>> a UML expert when creating these models.
>
> How are you representing constraints on the models? Via OCL?
> For example, constraining composition to only hierarchies? Or, the
> constraints on the "promotion" of a port from the inside to the outside of
> a component?

I don't think we have a solution for that yet.

>> I think there is actually quite a bit out there with respect to formal
>> models of components. Certainly I've seen a lot of different standards
>> and specifications with their own formal component models. The
>> difficulty is finding one that's *complete*, and that they all use
>> their own dialects and so lack compatibility. A universal way to
>> formally describe component models of varying types seems like a holy
>> grail, in some respects.
>
> In many respects... Some form of standardization is necessary; the closest
> ones are via the route you mentioned: OMG -> UML -> profiles. But this road
> brings in huge modelling environments, hence they are difficult to use
> outside of Eclipse.

I don't entirely agree. It's possible to represent a UML model using a
text-based representation, which would remove the need for a heavy
environment such as Eclipse. It's just that, as far as I know, there is
no standard text representation of UML (yet?). The other option, of
course, is model-to-model transforms into, for example, an XML
representation. That's what we typically use between tools.

Geoff

Urgent need for RTT/Composition primitive (aka "Warning: current

On Mon, Apr 2, 2012 at 08:11, Geoffrey Biggs
<geoffrey [dot] biggs [..] ....j> wrote:
> I don't entirely agree. It's possible to represent a UML model using a
> text-based representation, which would remove the need for a heavy
> environment such as Eclipse. It's just that, as far as I know, there is
> no standard text representation of UML (yet?).

There is OMG's HUTN (Human-Usable Textual Notation) and "anything that can be
modeled in MOF (which includes all of UML) can have an HUTN language" (sec.
2.1 of the standard).

I have never used HUTN for UML, but it is acceptable when auto-generated for
meta-models created by hand in Ecore.

Urgent need for RTT/Composition primitive (aka "Warning: current

On Mon, 2 Apr 2012, Geoffrey Biggs wrote:

[...]
>>> I think there is actually quite a bit out there with respect to formal
>>> models of components. Certainly I've seen a lot of different standards
>>> and specifications with their own formal component models. The
>>> difficulty is finding one that's *complete*, and that they all use
>>> their own dialects and so lack compatibility. A universal way to
>>> formally describe component models of varying types seems like a holy
>>> grail, in some respects.
>>
>> In many respects... Some for of standardization is necessary; the closest
>> ones are via the route you mentioned: OMG -> UML -> profiles. But this road
>> brings in huge model environments, hence they are difficult to use outside
>> of Eclipse.
>
> I don't entirely agree. It's possible to represent a UML model using a
> text-based representation, which would remove the need for a heavy
> environment such as Eclipse.

_The_ problem with the UML approach is the _meaning_ of the model
primitives, and the _magnitude_ of the set of primitives you have to take
along. In addition, UML is often semantically unclear, and profiles put
restrictions on _many_ primitives in UML.
Summary: UML was never meant to be a formal knowledge representation
language for computers, but it was (and still is) a tool for human
developers.

But I stand to be corrected!

> It's just that, as far as I know, there is no standard text
> representation of UML (yet?). The other option, of course, is
> model-to-model transforms into, for example, an XML representation.
> That's what we typically use between tools.

And what is the metametamodel for these model-to-model transformations?

> Geoff

Herman

Urgent need for RTT/Composition primitive (aka "Warning: current

On 02/04/12 15:23, Herman Bruyninckx wrote:
> On Mon, 2 Apr 2012, Geoffrey Biggs wrote:
>
> [...]
>>>> I think there is actually quite a bit out there with respect to formal
>>>> models of components. Certainly I've seen a lot of different standards
>>>> and specifications with their own formal component models. The
>>>> difficulty is finding one that's *complete*, and that they all use
>>>> their own dialects and so lack compatibility. A universal way to
>>>> formally describe component models of varying types seems like a holy
>>>> grail, in some respects.
>>>
>>> In many respects... Some for of standardization is necessary; the
>>> closest
>>> ones are via the route you mentioned: OMG -> UML -> profiles. But
>>> this road
>>> brings in huge model environments, hence they are difficult to use
>>> outside
>>> of Eclipse.
>>
>> I don't entirely agree. It's possible to represent a UML model using a
>> text-based representation, which would remove the need for a heavy
>> environment such as Eclipse.
>
> _The_ problem with the UML approach is the _meaning_ of the model
> primitives, and the _magnitude_ of the set of primitives you have to take
> along. In addition, UML is often semantically unclear, and profiles put
> restrictions on _many_ primitives in UML.
> Summary: UML was never meant to be a formal knowledge representation
> language for computers, but it was (and still is) a tool for human
> developers.

I see what you mean now. Yes, UML has always been meant as a tool for
aiding human designers. Executable UML is meant to be the formal version
of it, but I don't think it's made any realistic progress for years. For
myself, I find the lack of formalism in UML to cripple my thought
processes when I try to use it: without any formality, there's no tool
to tell me if I'm using UML correctly, which means I constantly worry
that I'm not, because I'm a programmer at heart and used to tools like
the compiler and lint telling me when I'm doing something illegal. I am
not aware of any UML tool that does this to any significant extent (I
don't count telling you when something is against the *standard*, I'm
talking about making nonsense designs that are not "physically" possible).

> But I stand to be corrected!
>
>> It's just that, as far as I know, there is no standard text
>> representation of UML (yet?). The other option, of course, is
>> model-to-model transforms into, for example, an XML representation.
>> That's what we typically use between tools.
>
> And what is the metametamodel for these model-to-model transformations?

Alas, we don't have one. I think we discussed this a few weeks ago. ;)

Geoff

Urgent need for RTT/Composition primitive (aka "Warning: current

On Mon, 2 Apr 2012, Geoffrey Biggs wrote:

> On 02/04/12 15:23, Herman Bruyninckx wrote:
>> On Mon, 2 Apr 2012, Geoffrey Biggs wrote:
>>
>> [...]
>>>>> I think there is actually quite a bit out there with respect to formal
>>>>> models of components. Certainly I've seen a lot of different standards
>>>>> and specifications with their own formal component models. The
>>>>> difficulty is finding one that's *complete*, and that they all use
>>>>> their own dialects and so lack compatibility. A universal way to
>>>>> formally describe component models of varying types seems like a holy
>>>>> grail, in some respects.
>>>>
>>>> In many respects... Some form of standardization is necessary; the
>>>> closest ones are via the route you mentioned: OMG -> UML -> profiles.
>>>> But this road brings in huge model environments, hence they are
>>>> difficult to use outside of Eclipse.
>>>
>>> I don't entirely agree. It's possible to represent a UML model using a
>>> text-based representation, which would remove the need for a heavy
>>> environment such as Eclipse.
>>
>> _The_ problem with the UML approach is about the _meaning_ of the
>> model primitives and the _magnitude_ of primitives you have to take
>> along. In addition, UML is often semantically unclear, and profiles put
>> restrictions on _many_ primitives in UML.
>> Summary: UML was never meant to be a formal knowledge representation
>> language for computers, but it was (and still is) a tool for human
>> developers.
>
> I see what you mean now. Yes, UML has always been meant as a tool for aiding
> human designers. Executable UML is meant to be the formal version of it, but
> I don't think it's made any realistic progress for years. For myself, I find
> the lack of formalism in UML to cripple my thought processes when I try to
> use it: without any formality, there's no tool to tell me if I'm using UML
> correctly, which means I constantly worry that I'm not, because I'm a
> programmer at heart and used to tools like the compiler and lint telling me
> when I'm doing something illegal. I am not aware of any UML tool that does
> this to any significant extent (I don't count telling you when something is
> against the *standard*, I'm talking about making nonsense designs that are
> not "physically" possible).

I fully agree, because your statements are just another way of saying that
UML is not a good knowledge representation format.

>> But I stand to be corrected!
>>
>>> It's just that, as far as I know, there is no standard text
>>> representation of UML (yet?). The other option, of course, is
>>> model-to-model transforms into, for example, an XML representation.
>>> That's what we typically use between tools.
>>
>> And what is the metametamodel for these model-to-model transformations?
>
> Alas, we don't have one. I think we discussed this a few weeks ago. ;)

Yes, and that's where I think the (M3-M2 levels of the) BCM will prove most useful...

Herman

Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Wed, Mar 28, 2012 at 09:18:47AM +0200, Herman Bruyninckx wrote:
>
> this is a message that I consider to be _strategic_ for the realtime fame
> of, both, Orocos/RTT and the BRICS Component Model. It's a rather condensed
> email, with the following summary:
>
> 1. Need for Composition
> 2. The problem with execution efficiency
> 3. Need for adding Computational model to Composition
> 4. Need for tooling
>
> I hope the Orocos and BRICS developer communities are strong and
> forward-looking enough to take action...
> I expect several follow-up messages to this "seed", in order to (i) refine
> its contents, and (ii) start sharing the development load.
>
> Best regards,
>
> Herman Bruyninckx
>
> ===============
> 1. Need for Composition
> In the "5Cs", Composition is singled out as the "coupling" aspect
> complementary to the "decoupling" aspects of Computation, Communication,
> Configuration and Coordination.
> In the BRICS Component Model (BCM), the different Cs come into play at
> different phases of the 5-phased development process (functional,
> component, composition/system, deployment, runtime); in the context of this
> message, I focus on the three phases "in the middle":
> - Component phase: developers make components, for maximal reuse and
> composability in later systems. Roughly speaking, the "art" here is to
> decouple the algorithms/computations inside the component from the access
> to the component's functionality (Computation, Communication,
> Configuration or Coordination) via Ports (and the "access policies" on
> them).
> - Composition phase: developers make a system, by composing components
> together, via interconnecting Ports, and specifying "buffering policies"
> on those Ports.
> - Deployment phase: composite components are being put into 'activity
> containers' (threads, processes,...) and connections between Ports are
> given communication middleware implementations.
> Although there is no strong or structured tooling support for these
> developments (yet) _and_ there is no explicit Composition primitive (in
> RTT, or BRIDE), the good developers in the community have the discipline to
> follow the outlined work flow to a large extent, resulting in designs that
> are very well ready for distributed deployment, and with very little
> coordination problems (deadlocks, data inconsistencies,...).
>
> One recent example is the new iTaSC implementation, using Orocos/RTT as
> component framework: <http://orocos.org/wiki/orocos/itasc-wiki>. It uses
> another standalone-ready toolkit, rFSM, for its Coordination state
> machines: <http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.
>
> So far so good, because the _decoupling_ aspects of complex component-based
> systems are very well satisfied.
>
> But the _composition_ aspect is tremendously overlooked, resulting in
> massive waste of computational efficiency. (I explain that below.) I
> consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
> selling point towards serious industrial uptake, and _the_ major
> competitive disadvantage with respect to commercial "one-tool-fits-all
> lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.
>
> 2. The problem with execution efficiency
> What is wrong exactly with respect to execution efficiency? The cause of
> the problem is that decoupling is taken to the extreme in the
> above-mentioned development "tradition", in that each component is deployed
> in its own activity (thread within a process, or even worse, different
> processes within the operating system). The obvious good result of this is
> robustness; the (not so obviously visible) bad results are that:
> - events and data are exchanged between components via our very robust
> Port-Connector-Port mechanisms, which implies a lot of buffering, and
> hence requiring several context switches before data is really being
> delivered from its provider to its consumer.
> - activities are triggered via Coordination and/or Port buffering events,
> which has two flaws:
> (i) activities should be triggered by a _scheduler_ (because events are
> semantically only there to trigger changes in _behaviour_, and not in
> _computation_!); result: too many events and consequently too much
> time lost in event handling which should not be there, _and_ lots of
> context switches.
> (ii) too many context switches to make the data flow robustly through our
> provider Ports, connectors and consumer Ports; result: delays of
> several time ticks.
> Conclusion: the majority of applications allow deploying all of their
> computations in one single thread, even without running the risk of data
> corruption, because there is a natural serialization of all computations in
> the application. Single threaded execution does away with _all_ of the
> above-mentioned computational wastes. But we have no good guidelines yet,
> let alone tooling, to support developers with the (not so trivial) task of
> efficiently serializing component computations. That's where the
> "Composition" phase of the development process comes in, together with the
> "Composition" primitive and its associated Computational models.
>
> 3. Need for adding Computational model to Composition
> The introduction of an explicit Composition phase into the development
> process, _and_ the introduction of the corresponding Composition primitive
> in BCM/RTT, will lead to the following two extra features which bring
> tremendous potential for computational efficiency:
>
> - Scope/closure/context/port promotion: a Composition (= composite
> component) is the right place to determine which data/events will only be
> used between the components within the composite, and which ones will have
> to be "promoted" to be accessible from the outside. The former are the
> ones with opportunities of gaining tremendous computational efficiency:
> a connection between two Ports within the composite can be replaced by a
> shared variable, which can be accessed by both components without delays
> and buffering. The same holds for events.
>
> - Computational model:
> Of course, this potential gain is only realisable when the execution of
> the computations in all components can be _scheduled_ as a _serialized_
> list of executions: "first Component A, then Component B, then Component C,
> and finally Component A again". Such natural serializations exist in
> _all_ robotics applications that I know of, and I have seen many.
> Finding the right serialization (i.e., "Computational model") is not
> always trivial, obviously. As is finding the right granularity of
> "computational codels". The good news is that experts exist for all
> specific applications to provide solutions.
> (Note: serialization of the computations in components is only _one_
> possible Computational model; there are others, but they are outside the
> scope of this message.)
>
> At deployment time, one has a bunch of Composite components available, for
> which the computational model has already been added and configured at the
> composite level (if needed), so that one should then add activities and
> communication middleware only _per composite component_, and not per
> individual component.
>
> 4. Need for tooling
> The above-mentioned workflow in the Composition phase is currently not at
> all supported by any tool. This is a major hole in the BRIDE/RTT
> frameworks. I envisage something in the direction of what Genom is doing,
> since that approach has the concept of a "codel", that is, the atomically
> 'deployable' piece of computations. Where 'deployment' means: to put into a
> computational schedule within a Composite. (The latter sentence is _not_
> Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)

Some complementary remarks from a discussion this morning:

We need to distinguish two types of composition:

1. Composition of Systems

- This requires aggregating components, connections, configuration
and their coordination into a composite that can be reused or
deployed. Rock does this at the model level. The challenging part
for this is how to manage/configure activities of the composed
components, which you do not really care about once you have
the composite.

2. Composition of computations

This is about composing functional blocks (represented as
functions essentially) by defining how they are wired together
(wiring meaning which function's results are "fed" as arguments to
which other function). The result of such a composition would be a
computational composite, that in contrast to the system
composition above, does not require any activity because its
run-time interface itself boils down to a "step" function that
takes the required in- and out-arguments.
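As a rough illustration of what such a computational composite boils down to (the types below are a hypothetical sketch, not an actual RTT or BCM API), serially wired function blocks collapse into a single step() function:

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch: a "function block" is essentially a function;
// wiring means feeding one block's result to the next as an argument.
struct FunctionBlock {
    std::function<double(double)> compute;  // the block's computation
};

// A computational composite: blocks wired in series. Unlike a system
// composition it needs no activity of its own; its run-time interface
// boils down to a single step() taking the in- and out-arguments.
struct ComputationalComposite {
    std::vector<FunctionBlock> blocks;

    double step(double input) {
        double value = input;
        for (auto& b : blocks)
            value = b.compute(value);  // serialized, in-thread, no copies
        return value;
    }
};
```

Deploying such a composite then amounts to triggering step() periodically from one activity, rather than giving each block its own.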

The generally most efficient way to deploy such a composite would
be to execute it within a single periodically triggered
TaskContext, hence entirely avoiding copying of data. However, to
be able to use this approach in practice requires a tool that
supports the modeling of such composites, their parameters, etc., and
then generation of this fat component from that model.

As we currently don't have such a tool*, Herman and I agreed
that the best option to implement this scheduling for RTT is to
use the (somewhat forgotten?) SlaveActivity to schedule execution
of computational function blocks (implemented as RTT TaskContexts)
within one single thread. This has the minor disadvantage of being
slightly less efficient than the previous solution, however for
larger data chunks this approach can also be further optimized to
pass around pointers instead of values.
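In the same spirit, here is a minimal sketch of the master/slave scheduling idea (illustrative C++ only; the names below are NOT the real RTT SlaveActivity API):

```cpp
#include <string>
#include <vector>

// Illustrative sketch only -- not the real RTT API. It mimics the spirit
// of SlaveActivity: slave components own no thread; a single master,
// triggered periodically, executes each slave's update hook in a fixed
// serialized order, so no context switches happen between them.
struct Slave {
    virtual void updateHook() = 0;
    virtual ~Slave() = default;
};

// A trivial slave that records its name into a shared trace, standing
// in for a real computational component.
struct TracingSlave : Slave {
    std::string name;
    std::vector<std::string>* trace;
    TracingSlave(std::string n, std::vector<std::string>* t)
        : name(std::move(n)), trace(t) {}
    void updateHook() override { trace->push_back(name); }
};

struct MasterActivity {
    std::vector<Slave*> schedule;   // the serialized execution order

    // Called once per period by the single thread driving the master.
    void step() {
        for (Slave* s : schedule)
            s->updateHook();        // in-thread call: no buffering
    }
};
```
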

Of course the modeling approach for computational-composition has
the large advantage of permitting the generation of code/components for
multiple targets.

Markus

[*] potential candidates we are considering are Scilab/Scicos,
OpenModelica, Genom3 or BRIDE.

Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
> this is a message that I consider to be _strategic_ for the realtime fame
> of, both, Orocos/RTT and the BRICS Component Model. It's a rather condensed
> email, with the following summary:
>
> 1. Need for Composition
> 2. The problem with execution efficiency
> 3. Need for adding Computational model to Composition
> 4. Need for tooling
>
> I hope the Orocos and BRICS developer communities are strong and
> forward-looking enough to take action...
> I expect several follow-up messages to this "seed", in order to (i) refine
> its contents, and (ii) start sharing the development load.
>
> Best regards,
>
> Herman Bruyninckx
>
> ===============
> 1. Need for Composition
> In the "5Cs", Composition is singled out as the "coupling" aspect
> complementary to the "decoupling" aspects of Computation, Communication,
> Configuration and Coordination.
> In the BRICS Component Model (BCM), the different Cs come into play at
> different phases of the 5-phased development process (functional,
> component, composition/system, deployment, runtime); in the context of this
> message, I focus on the three phases "in the middle":
> - Component phase: developers make components, for maximal reuse and
> composability in later systems. Roughly speaking, the "art" here is to
> decouple the algorithms/computations inside the component from the access
> to the component's functionality (Computation, Communication,
> Configuration or Coordination) via Ports (and the "access policies" on
> them).
> - Composition phase: developers make a system, by composing components
> together, via interconnecting Ports, and specifying "buffering policies"
> on those Ports.
> - Deployment phase: composite components are being put into 'activity
> containers' (threads, processes,...) and connections between Ports are
> given communication middleware implementations.
> Although there is no strong or structured tooling support for these
> developments (yet) _and_ there is no explicit Composition primitive (in
> RTT, or BRIDE), the good developers in the community have the discipline to
> follow the outlined work flow to a large extent, resulting in designs that
> are very well ready for distributed deployment, and with very little
> coordination problems (deadlocks, data inconsistencies,...).
>
> One recent example is the new iTaSC implementation, using Orocos/RTT as
> component framework:<http://orocos.org/wiki/orocos/itasc-wiki>. It uses
> another standalone-ready toolkit, rFSM, for its Coordination state
> machines:<http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.
>
> So far so good, because the _decoupling_ aspects of complex component-based
> systems are very well satisfied.
>
> But the _composition_ aspect is tremendously overlooked, resulting in
> massive waste of computational efficiency. (I explain that below.) I
> consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
> selling point towards serious industrial uptake, and _the_ major
> competitive disadvantage with respect to commercial "one-tool-fits-all
> lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.
>
> 2. The problem with execution efficiency
> What is wrong exactly with respect to execution efficiency? The cause of
> the problem is that decoupling is taken to the extreme in the
> above-mentioned development "tradition", in that each component is deployed
> in its own activity (thread within a process, or even worse, different
> processes within the operating system). The obvious good result of this is
> robustness; the (not so obviously visible) bad results are that:
> - events and data are exchanged between components via our very robust
> Port-Connector-Port mechanisms, which implies a lot of buffering, and
> hence requiring several context switches before data is really being
> delivered from its provider to its consumer.
> - activities are triggered via Coordination and/or Port buffering events,
> which has two flaws:
> (i) activities should be triggered by a _scheduler_ (because events are
> semantically only there to trigger changes in _behaviour_, and not in
> _computation_!); result: too many events and consequently too much
> time lost in event handling which should not be there, _and_ lots of
> context switches.
> (ii) too many context switches to make the data flow robustly through our
> provider Ports, connectors and consumer Ports; result: delays of
> several time ticks.
> Conclusion: the majority of applications allow deploying all of their
> computations in one single thread, even without running the risk of data
> corruption, because there is a natural serialization of all computations in
> the application. Single threaded execution does away with _all_ of the
> above-mentioned computational wastes. But we have no good guidelines yet,
> let alone tooling, to support developers with the (not so trivial) task of
> efficiently serializing component computations. That's where the
> "Composition" phase of the development process comes in, together with the
> "Composition" primitive and its associated Computational models.
(1) Scheduling and the need to break the loop:
Many components in robotics are used within periodically executed
feed-back loops. When these loops are closed, one has to "break" the
loop to establish the schedule (i.e. execute a part of the loop in the
next periodic cycle). I advocate that the composer _has to_ explicitly
specify where to break the loop. The schedule can then be automatically
determined from the partial ordering of components imposed by the
data-flow.
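That derivation can be sketched as a plain topological sort over the data-flow graph, assuming the composer's "break" is modelled by simply omitting the broken connection from the graph (the representation below is hypothetical):

```cpp
#include <queue>
#include <utility>
#include <vector>

// Illustrative: components are nodes 0..n-1; each edge (a, b) means
// component b consumes component a's output. A connection marked as
// "broken" by the composer is simply left out: its data crosses into
// the next periodic cycle instead. Kahn's algorithm then yields a
// serialized schedule consistent with the remaining partial order;
// an empty result signals that some loop was left unbroken.
std::vector<int> derive_schedule(int n,
        const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> adj(n);
    std::vector<int> indeg(n, 0);
    for (const auto& e : edges) {
        adj[e.first].push_back(e.second);
        ++indeg[e.second];
    }
    std::queue<int> ready;
    for (int i = 0; i < n; ++i)
        if (indeg[i] == 0) ready.push(i);
    std::vector<int> order;
    while (!ready.empty()) {
        int c = ready.front(); ready.pop();
        order.push_back(c);
        for (int next : adj[c])
            if (--indeg[next] == 0) ready.push(next);
    }
    if ((int)order.size() != n)
        order.clear();  // cycle detected: the loop was not broken
    return order;
}
```
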

(2) Execution efficiency and "composition" are independent from each other:
The explicit scheduling and corresponding lightweight communication
protocols are features independent of composition. Even more, resolving
the scheduling at the composition level can lead to inefficient
schedules (i.e. schedules that introduce more delays than necessary),
by not taking into account the ordering that is imposed by the
connections outside the composite.

The schedule is something that you can only determine at deployment,
when all components in a given single thread are known. It should not
be determined or fixed at the level of the "composition" or during the
composition phase.
>
> 3. Need for adding Computational model to Composition
> The introduction of an explicit Composition phase into the development
> process, _and_ the introduction of the corresponding Composition primitive
> in BCM/RTT, will lead to the following two extra features which bring
> tremendous potential for computational efficiency:
>
> - Scope/closure/context/port promotion: a Composition (= composite
> component) is the right place to determine which data/events will only be
> used between the components within the composite, and which ones will have
> to be "promoted" to be accessible from the outside.
this promotion should also involve renaming of the data/events, since
the correct name internal to the composition is probably not a good
name for someone outside the composition.
> The former are the
> ones with opportunities of gaining tremendous computational efficiency:
> a connection between two Ports within the composite can be replaced by a
> shared variable, which can be accessed by both components without delays
> and buffering. The same holds for events.
Again this advantage is not inherent to composition, but to more
explicit scheduling methods and appropriate communication protocols.
>
> - Computational model:
> Of course, this potential gain is only realisable when the execution of
> the computations in all components can be _scheduled_ as a _serialized_
> list of executions: "first Component A, then Component B, then Component C,
> and finally Component A again". Such natural serializations exist in
> _all_ robotics applications that I know of, and I have seen many.
If you "break the loop" at certain places.
Composition phase is the best time to indicate where to break the loop.
Deployment time is the best time to compute/specify the schedule
(computational model).
This computational model relates to all components on the given single
thread, not to each composite.
> Finding the right serialization (i.e., "Computational model") is not
> always trivial, obviously. As is finding the right granularity of
> "computational codels". The good news is that experts exist for all
> specific applications to provide solutions.
> (Note: serialization of the computations in components is only _one_
> possible Computational model; there are others, but they are outside the
> scope of this message.)
>
> At deployment time, one has a bunch of Composite components available, for
> which the computational model has already been added and configured at the
> composite level (if needed),
I do not agree ( see above).
> so that one should then add activities and
> communication middleware only _per composite component_, and not per
> individual component.
>
> 4. Need for tooling
> The above-mentioned workflow in the Composition phase is currently not at
> all supported by any tool. This is a major hole in the BRIDE/RTT
> frameworks. I envisage something in the direction of what Genom is doing,
> since that approach has the concept of a "codel", that is, the atomically
> 'deployable' piece of computations. Where 'deployment' means: to put into a
> computational schedule within a Composite. (The latter sentence is _not_
> Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)
>
>

Best regards,
Erwin.

Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Thu, 29 Mar 2012, Erwin Aertbelien wrote:

> On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
>> this is a message that I consider to be _strategic_ for the realtime fame
>> of, both, Orocos/RTT and the BRICS Component Model. It's a rather condensed
>> email, with the following summary:
>>
>> 1. Need for Composition
>> 2. The problem with execution efficiency
>> 3. Need for adding Computational model to Composition
>> 4. Need for tooling
>>
>> I hope the Orocos and BRICS developer communities are strong and
>> forward-looking enough to take action...
>> I expect several follow-up messages to this "seed", in order to (i) refine
>> its contents, and (ii) start sharing the development load.
>>
>> Best regards,
>>
>> Herman Bruyninckx
>>
>> ===============
>> 1. Need for Composition
>> In the "5Cs", Composition is singled out as the "coupling" aspect
>> complementary to the "decoupling" aspects of Computation, Communication,
>> Configuration and Coordination.
>> In the BRICS Component Model (BCM), the different Cs come into play at
>> different phases of the 5-phased development process (functional,
>> component, composition/system, deployment, runtime); in the context of this
>> message, I focus on the three phases "in the middle":
>> - Component phase: developers make components, for maximal reuse and
>> composability in later systems. Roughly speaking, the "art" here is to
>> decouple the algorithms/computations inside the component from the access
>> to the component's functionality (Computation, Communication,
>> Configuration or Coordination) via Ports (and the "access policies" on
>> them).
>> - Composition phase: developers make a system, by composing components
>> together, via interconnecting Ports, and specifying "buffering policies"
>> on those Ports.
>> - Deployment phase: composite components are being put into 'activity
>> containers' (threads, processes,...) and connections between Ports are
>> given communication middleware implementations.
>> Although there is no strong or structured tooling support for these
>> developments (yet) _and_ there is no explicit Composition primitive (in
>> RTT, or BRIDE), the good developers in the community have the discipline to
>> follow the outlined work flow to a large extent, resulting in designs that
>> are very well ready for distributed deployment, and with very little
>> coordination problems (deadlocks, data inconsistencies,...).
>>
>> One recent example is the new iTaSC implementation, using Orocos/RTT as
>> component framework:<http://orocos.org/wiki/orocos/itasc-wiki>. It uses
>> another standalone-ready toolkit, rFSM, for its Coordination state
>> machines:<http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.
>>
>> So far so good, because the _decoupling_ aspects of complex component-based
>> systems are very well satisfied.
>>
>> But the _composition_ aspect is tremendously overlooked, resulting in
>> massive waste of computational efficiency. (I explain that below.) I
>> consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
>> selling point towards serious industrial uptake, and _the_ major
>> competitive disadvantage with respect to commercial "one-tool-fits-all
>> lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.
>>
>> 2. The problem with execution efficiency
>> What is wrong exactly with respect to execution efficiency? The cause of
>> the problem is that decoupling is taken to the extreme in the
>> above-mentioned development "tradition", in that each component is deployed
>> in its own activity (thread within a process, or even worse, different
>> processes within the operating system). The obvious good result of this is
>> robustness; the (not so obviously visible) bad results are that:
>> - events and data are exchanged between components via our very robust
>> Port-Connector-Port mechanisms, which implies a lot of buffering, and
>> hence requiring several context switches before data is really being
>> delivered from its provider to its consumer.
>> - activities are triggered via Coordination and/or Port buffering events,
>> which has two flaws:
>> (i) activities should be triggered by a _scheduler_ (because events are
>> semantically only there to trigger changes in _behaviour_, and not in
>> _computation_!); result: too many events and consequently too much
>> time lost in event handling which should not be there, _and_ lots of
>> context switches.
>> (ii) too many context switches to make the data flow robustly through our
>> provider Ports, connectors and consumer Ports; result: delays of
>> several time ticks.
>> Conclusion: the majority of applications allow deploying all of their
>> computations in one single thread, even without running the risk of data
>> corruption, because there is a natural serialization of all computations in
>> the application. Single threaded execution does away with _all_ of the
>> above-mentioned computational wastes. But we have no good guidelines yet,
>> let alone tooling, to support developers with the (not so trivial) task of
>> efficiently serializing component computations. That's where the
>> "Composition" phase of the development process comes in, together with the
>> "Composition" primitive and its associated Computational models.
> (1) Scheduling and the need to break the loop:
> Many components in robotics are used within periodically executed
> feed-back loops.
> When these loops are closed, one has to "break" the loop to establish
> the schedule (i.e. execute a part of the loop in the next periodic
> cycle). I advocate that the composer _has to_ explicitly specify where
> to break the loop.

Oops, you are interpreting things a bit wrongly here... The schedule _is_
the representation of "where to break the loop"!
Periodic execution or not is not relevant in this discussion; what is
relevant is to provide a schedule for the different cases in which data can
arrive at each of the in-Ports of a computational component.

> The schedule can then be automatically determined from the partial
> ordering of components imposed by the data-flow.

Automatic support for the creation of computational schedules is part of
the (lack of) tooling that I mentioned.

> (2) Execution efficiency and "composition" are independent from each other:
> The explicit scheduling and corresponding lightweight communication
> protocols are independent features wrt composition. Even more, resolving
> the scheduling at the composition level can lead to inefficient schedules
> (i.e. schedules that introduce more delays than necessary), by not taking
> into account the ordering that is imposed by the connections outside the
> composite.

Argh... I am _not_ talking about the scheduling of the execution of
component activities, but about the order in which to call the different
"function blocks" inside one single computational component. The latter is
indeed fully decoupled from communication protocols; it is only connected
to the "inside" of the in-Ports over which a computational component
receives new data on which it has to perform its computations.
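A toy sketch of that idea (hypothetical names, not RTT code): the component keeps one serialized call list per in-Port, and its "meta function" runs the list for whichever in-Port received new data:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: the "computational model" of one component is
// the order in which its internal function blocks are called,
// depending on which in-Port just received new data.
struct ComputationalComponent {
    // one serialized call schedule per in-Port name
    std::map<std::string, std::vector<std::function<void()>>> scheduleFor;

    // The component's "meta function": run the configured serialized
    // list of internal computations for the in-Port that got new data.
    void onNewData(const std::string& port) {
        for (auto& fn : scheduleFor[port])
            fn();
    }
};
```
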

> The schedule is something that you can only determined at deployment,
> when all components in a given single thread are known. It should not
> be determined or fixed at the level of the "composition" or during the
> composition phase.

Again: I am _not_ discussing activity scheduling. The kind of scheduling
you are considering is, indeed, to be determined at deployment time. But at
"system composition" time, there _are_ no activities yet, so there is no
need to talk about the scheduling of those activities.

>> 3. Need for adding Computational model to Composition
>> The introduction of an explicit Composition phase into the development
>> process, _and_ the introduction of the corresponding Composition primitive
>> in BCM/RTT, will lead to the following two extra features which bring
>> tremendous potential for computational efficiency:
>>
>> - Scope/closure/context/port promotion: a Composition (= composite
>> component) is the right place to determine which data/events will only be
>> used between the components within the composite, and which ones will have
>> to be "promoted" to be accessible from the outside.

> this promotion should also involve renaming of the data/events, since the
> correct name internal to the composition is probably not a good name for
> someone outside the composition.

I fully agree.

>> The former are the
>> ones with opportunities of gaining tremendous computational efficiency:
>> a connection between two Ports within the composite can be replaced by a
>> shared variable, which can be accessed by both components without delays
>> and buffering. The same holds for events.

> Again this advantage is not inherent to composition but inherent to more
> explicit scheduling methods and appropriate communication protocols.

Same "Argh" remark as made earlier in this thread... :-)

>> - Computational model:
>> Of course, this potential gain is only realisable when the execution of
>> the computations in all components can be _scheduled_ as a _serialized_
>> list of executions: "first Component A, then Component B, then Component C,
>> and finally Component A again". Such natural serializations exist in
>> _all_ robotics applications that I know of, and I have seen many.

> If you "break the loop" at certain places.
> Composition phase is the best time to indicate where to break the loop.
> Deployment time is the best time to compute/specify the schedule
> (computational model).

You have a very different interpretation of what a "computational model" is
than what I explained. The computational model of a (data flow; functional)
computational Component is the order in which the functions in the
component must be called by the Component's "meta function".
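A minimal sketch of such a "meta function" (hypothetical, not RTT code): the computational model is nothing more than the serialized call order of the component's function blocks within one cycle:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical composite whose computational model is an explicit,
// serialized schedule of function blocks: "first A, then B, then C,
// and finally A again" is just the order of entries in `schedule`.
struct Composite {
    using Codel = std::function<void()>;
    std::vector<std::pair<std::string, Codel>> schedule;

    void add_step(std::string name, Codel codel) {
        schedule.emplace_back(std::move(name), std::move(codel));
    }

    // The "meta function": one cycle = run the schedule front to back,
    // in a single thread, with no events or context switches involved.
    void run_cycle() {
        for (auto& step : schedule) step.second();
    }
};
```

With steps appended in the order A, B, C, A, one `run_cycle()` executes exactly that serialization.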

> This computational model relates to all components on the given single
> thread, not to each composite.

>> Finding the right serialization (i.e., "Computational model") is not
>> always trivial, obviously. As is finding the right granularity of
>> "computational codels". The good news is that experts exist for all
>> specific applications to provide solutions.
>> (Note: serialization of the computations in components is only _one_
>> possible Computational model; there are others, but they are outside the
>> scope of this message.)
>>
>> At deployment time, one has a bunch of Composite components available, for
>> which the computational model has already been added and configured at the
>> composite level (if needed),
> I do not agree (see above).
>> so that one should then add activities and
>> communication middleware only _per composite component_, and not per
>> individual component.
>>
>> 4. Need for tooling
>> The above-mentioned workflow in the Composition phase is currently not at
>> all supported by any tool. This is a major hole in the BRIDE/RTT
>> frameworks. I envisage something in the direction of what Genom is doing,
>> since that approach has the concept of a "codel", that is, the atomically
>> 'deployable' piece of computations. Where 'deployment' means: to put into a
>> computational schedule within a Composite. (The latter sentence is _not_
>> Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)
>>
>>
>
> Best regards,
> Erwin.

Herman

[software-toolchain] Urgent need for RTT/Composition primitive (

Dear all,

I would like to contribute my two cents to the discussion on "The
problem with execution efficiency".

As far as I've understood the ongoing discussion, the problem
is formulated in terms of computational waste due to an excessive number
of threads, which originates from an obsessive tendency to map even simple
functionality onto coarse-grain components that interact according to the
data-flow architectural model.

If my interpretation of the problem is correct, one possible solution
consists of:
a) classifying concurrency at different levels of granularity, i.e.
fine, medium, and large grain as in [1];
b) mapping these levels of concurrency to three units of design,
respectively: sequential component, service component, and container
component;
c) using different architectural models and concurrency mechanisms for
component interaction (e.g. data flow, client-server).

[1] R. S. Chin and S. T. Chanson, "Distributed, object-based
programming systems," ACM Comput. Surv., vol. 23, no. 1, pp. 91-124,
1991.

The separation of sequential/service/container components can be
motivated in terms of different variability concerns:

- Sequential components encapsulate data structures and operations that
implement specific processing algorithms. They should conveniently be
designed to be middleware- and application-independent (focus on
Computation)

- Service components implement the logic and embed the dynamic
specification of robot control activities, such as closing the loop
between sensors and actuators for motion control. They are mostly
application-specific components, as they define the execution,
interaction, and coordination of robot activities (focus on Coordination)

- Container components provide the environment for the concurrent
threads and encapsulate the shared resources. They are to a great extent
middleware-specific and functionality independent (focus on Communication)

- A set of sequential, service, and container components all together
form a component assembly (focus on Configuration). N.B. for me
Composition is a kind of Configuration aspect (4Cs are enough)
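A rough sketch of how these three units of design could nest (hypothetical names, not code from [1] or [2]): a container owns the threads and shared resources, each service coordinates one activity, and a service executes its sequential components in order:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Fine grain: pure computation, no thread of its own (focus on Computation).
struct SequentialComponent {
    std::string algorithm;
};

// Medium grain: one robot control activity that runs its sequential
// components in order (focus on Coordination).
struct ServiceComponent {
    std::string activity;
    std::vector<SequentialComponent> steps;
};

// Large grain: provides the concurrent threads and the shared resources
// that the services run on (focus on Communication).
struct ContainerComponent {
    std::vector<ServiceComponent> services;  // e.g. one thread per service
    std::size_t thread_count() const { return services.size(); }
};
```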

For more details see [2].

[2] D. Brugali, A. Shakhimardanov, "Component-Based Robotic
Engineering (Part II): Systems and Models," IEEE Robotics and Automation
Magazine, March 2010.
http://www.best-of-robotics.org/pages/publications/UniBergamo_HBRS_Compo...

Best regards,
Davide

On 3/29/2012 6:05 AM, Herman Bruyninckx wrote:
> On Thu, 29 Mar 2012, Erwin Aertbelien wrote:
>
>> On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
>>> this is a message that I consider to be _strategic_ for the realtime fame
>>> of, both, Orocos/RTT and the BRICS Component Model. It's a rather condensed
>>> email, with the following summary:
>>>
>>> 1. Need for Composition
>>> 2. The problem with execution efficiency
>>> 3. Need for adding Computational model to Composition
>>> 4. Need for tooling
>>>
>>> I hope the Orocos and BRICS developer communities are strong and
>>> forward-looking enough to take action...
>>> I expect several follow-up messages to this "seed", in order to (i) refine
>>> its contents, and (ii) start sharing the development load.
>>>
>>> Best regards,
>>>
>>> Herman Bruyninckx
>>>
>>> ===============
>>> 1. Need for Composition
>>> In the "5Cs", Composition is singled out as the "coupling" aspect
>>> complementary to the "decoupling" aspects of Computation, Communication,
>>> Configuration and Coordination.
>>> In the BRICS Component Model (BCM), the different Cs come into play at
>>> different phases of the 5-phased development process (functional,
>>> component, composition/system, deployment, runtime); in the context of this
>>> message, I focus on the three phases "in the middle":
>>> - Component phase: developers make components, for maximal reuse and
>>> composability in later systems. Roughly speaking, the "art" here is to
>>> decouple the algorithms/computations inside the component from the access
>>> to the component's functionality (Computation, Communication,
>>> Configuration or Coordination) via Ports (and the "access policies" on
>>> them).
>>> - Composition phase: developers make a system, by composing components
>>> together, via interconnecting Ports, and specifying "buffering policies"
>>> on those Ports.
>>> - Deployment phase: composite components are being put into 'activity
>>> containers' (threads, processes,...) and connections between Ports are
>>> given communication middleware implementations.
>>> Although there is no strong or structured tooling support for these
>>> developments (yet) _and_ there is no explicit Composition primitive (in
>>> RTT, or BRIDE), the good developers in the community have the discipline to
>>> follow the outlined work flow to a large extent, resulting in designs that
>>> are very well ready for distributed deployment, and with very little
>>> coordination problems (deadlocks, data inconsistencies,...).
>>>
>>> One recent example is the new iTaSC implementation, using Orocos/RTT as
>>> component framework:<http://orocos.org/wiki/orocos/itasc-wiki>. It uses
>>> another standalone-ready toolkit, rFSM, for its Coordination state
>>> machines:<http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.
>>>
>>> So far so good, because the _decoupling_ aspects of complex component-based
>>> systems are very well satisfied.
>>>
>>> But the _composition_ aspect is tremendously overlooked, resulting in
>>> massive waste of computational efficiency. (I explain that below.) I
>>> consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
>>> selling point towards serious industrial uptake, and _the_ major
>>> competitive disadvantage with respect to commercial "one-tool-fits-all
>>> lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.
>>>
>>> 2. The problem with execution efficiency
>>> What is wrong exactly with respect to execution efficiency? The cause of
>>> the problem is that decoupling is taken to the extreme in the
>>> above-mentioned development "tradition", in that each component is deployed
>>> in its own activity (thread within a process, or even worse, different
>>> processes within the operating system). The obvious good result of this is
>>> robustness; the (not so obviously visible) bad results are that:
>>> - events and data are exchanged between components via our very robust
>>> Port-Connector-Port mechanisms, which implies a lot of buffering, and
>>> hence requiring several context switches before data is really being
>>> delivered from its provider to its consumer.
>>> - activities are triggered via Coordination and/or Port buffering events,
>>> which has two flaws:
>>> (i) activities should be triggered by a _scheduler_ (because events are
>>> semantically only there to trigger changes in _behaviour_, and not in
>>> _computation_!); result: too many events and consequently too much
>>> time lost in event handling which should not be there, _and_ lots of
>>> context switches.
>>> (ii) too many context switches to make the data flow robustly through our
>>> provider Ports, connectors and consumer Ports; result: delays of
>>> several time ticks.
>>> Conclusion: the majority of applications allow deploying all of their
>>> computations in a single thread, even without running the risk of data
>>> corruption, because there is a natural serialization of all computations in
>>> the application. Single threaded execution does away with _all_ of the
>>> above-mentioned computational wastes. But we have no good guidelines yet,
>>> let alone tooling, to support developers with the (not so trivial) task of
>>> efficiently serializing component computations. That's where the
>>> "Composition" phase of the development process comes in, together with the
>>> "Composition" primitive and its associated Computational models.
>> (1) Scheduling and the need to break the loop:
>> Many components in robotics are used within periodically executed
>> feed-back loops.
>> When these loops are closed, one has to "break" this loop to establish
>> the schedule
>> (i.e. execute a part of the loop in the next periodic cycle). I
>> advocate that the composer
>> _has to_ explicitly specify where to break the loop.
> Oops, you are interpreting things a bit wrongly here... The schedule _is_
> the representation of "where to break the loop"!
> Periodic execution or not is not relevant in this discussion; what is
> relevant is to provide a schedule for the different cases in which data can
> arrive at each of the in-Ports of a computational component.
>
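The point that the schedule itself is the representation of "where to break the loop" can be sketched as follows (hypothetical names): the broken edge is exactly the one place where a block reads the value produced in the *previous* cycle:

```cpp
// State carried across cycles; the single "broken" edge of the feedback
// loop is the one that feeds last cycle's output back to the controller.
struct LoopState {
    double plant_output = 0.0;  // produced this cycle
    double fed_back = 0.0;      // last cycle's value: the loop break
};

// Runs inside the serialized schedule: it uses the previous cycle's
// output, so exactly one cycle of delay is introduced, at a chosen place.
void controller_step(LoopState& s, double setpoint) {
    double error = setpoint - s.fed_back;
    s.plant_output = s.fed_back + 0.5 * error;  // placeholder first-order dynamics
}

// Last entry of the schedule: publish this cycle's value for the next one.
void end_of_cycle(LoopState& s) {
    s.fed_back = s.plant_output;
}
```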
>> The schedule can then be automatically determined from the partial
>> ordering of components imposed by the data-flow.
> Automatic support for the creation of computational schedules is part of
> the (lack of) tooling that I mentioned.
>
>> (2) Execution efficiency and "composition" are independent of each other:
>> The explicit scheduling and corresponding low-weight communication
>> protocols are independent features wrt composition. Even more, resolving
>> the scheduling at the composition level can lead to inefficient schedules
>> (i.e. schedules that introduce more delays than necessary), by not taking
>> into account ordering that is imposed by the connections outside the
>> composite.
> Argh... I am _not_ talking about the scheduling of the execution of
> component activities, but about the order in which to call the different
> "function blocks" inside one single computational component. The latter is
> indeed fully decoupled from communication protocols; it is only connected
> to the "inside" of the in-Ports over which a computational component
> receives new data on which it has to perform its computations.
>
>> The schedule is something that you can only determine at deployment,
>> when all components in a given single thread are known. It should not
>> be determined or fixed at the level of the "composition" or during the
>> composition phase.
> Again: I am _not_ discussing activity scheduling. The kind of scheduling
> you are considering is, indeed, to be determined at deployment time. But at
> "system composition" time, there _are_ no activities yet, so there is no
> need to talk about the scheduling of those activities.
>
>>> 3. Need for adding Computational model to Composition
>>> The introduction of an explicit Composition phase into the development
>>> process, _and_ the introduction of the corresponding Composition primitive
>>> in BCM/RTT, will lead to the following two extra features which bring
>>> tremendous potential for computational efficiency:
>>>
>>> - Scope/closure/context/port promotion: a Composition (= composite
>>> component) is the right place to determine which data/events will only be
>>> used between the components within the composite, and which ones will have
>>> to be "promoted" to be accessible from the outside.
>> this promotion should also involve renaming of the data/events, since the
>> correct name internal to the composition is probably not a good name for
>> someone outside the composition.
> I fully agree.
>
>>> The former are the
>>> ones with opportunities of gaining tremendous computational efficiency:
>>> a connection between two Ports within the composite can be replaced by a
>>> shared variable, which can be accessed by both components without delays
>>> and buffering. The same holds for events.
>> Again this advantage is not inherent to composition but inherent to more
>> explicit scheduling methods and appropriate communication protocols.
> Same "Argh" remark as made earlier in this thread... :-)
>
>>> - Computational model:
>>> Of course, this potential gain is only realisable when the execution of
>>> the computations in all components can be _scheduled_ as a _serialized_
>>> list of executions: "first Component A, then Component B, then Component C,
>>> and finally Component A again". Such natural serializations exist in
>>> _all_ robotics applications that I know of, and I have seen many.
>> If you "break the loop" at certain places.
>> Composition phase is the best time to indicate where to break the loop.
>> Deployment time is the best time to compute/specify the schedule
>> (computational model).
> You have a very different interpretation of what a "computational model" is
> than what I explained. The computational model of a (data flow; functional)
> computational Component is the order in which the functions in the
> component must be called by the Component's "meta function".
>
>> This computational model relates to all components on the given single
>> thread, not to each composite.
>>> Finding the right serialization (i.e., "Computational model") is not
>>> always trivial, obviously. As is finding the right granularity of
>>> "computational codels". The good news is that experts exist for all
>>> specific applications to provide solutions.
>>> (Note: serialization of the computations in components is only _one_
>>> possible Computational model; there are others, but they are outside the
>>> scope of this message.)
>>>
>>> At deployment time, one has a bunch of Composite components available, for
>>> which the computational model has already been added and configured at the
>>> composite level (if needed),
>> I do not agree (see above).
>>> so that one should then add activities and
>>> communication middleware only _per composite component_, and not per
>>> individual component.
>>>
>>> 4. Need for tooling
>>> The above-mentioned workflow in the Composition phase is currently not at
>>> all supported by any tool. This is a major hole in the BRIDE/RTT
>>> frameworks. I envisage something in the direction of what Genom is doing,
>>> since that approach has the concept of a "codel", that is, the atomically
>>> 'deployable' piece of computations. Where 'deployment' means: to put into a
>>> computational schedule within a Composite. (The latter sentence is _not_
>>> Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)
>>>
>>>
>> Best regards,
>> Erwin.
> Herman
> _______________________________________________
> software-toolchain mailing list
> software-toolchain [..] ...
> http://mailman.gps-stuttgart.de/mailman/listinfo/software-toolchain
>
>

Urgent need for RTT/Composition primitive (aka "Warning: current

On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
> Although there is no strong or structured tooling support for these
> developments (yet)
This is not true. Rock already provides the necessary modeling and
tooling for design-through-composition *and* for building higher-level
primitives on top.

> _and_ there is no explicit Composition primitive (in
> RTT, or BRIDE),
I personally don't think that it is something that should go in RTT. It
is a system design issue, not a component development issue *and*
design-by-composition requires handling *graphs*, and managing these
graphs at runtime can only be done if you have a system-level view, not
a "I'm a single C++ composition placed somewhere and have nothing to do
with the rest of the world" kind of creation.

In other words, I place composition closer to deployment and further
away from components.

But I do indeed agree that adding some computational models to (some)
compositions would make it possible to take deployment decisions such as
"run these things in a single thread".

Urgent need for RTT/Composition primitive (aka "Warning: current

On Wed, 28 Mar 2012, Sylvain Joyeux wrote:

> On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
>> Although there is no strong or structured tooling support for these
>> developments (yet)
> This is not true. Rock already provides the necessary modeling and tooling
> for design-through-composition *and* for building higher-level primitives on
> top.

Not for explicit _computational_ models, as far as I know.... Only for
component composition models.

>> _and_ there is no explicit Composition primitive (in
>> RTT, or BRIDE),
> I personally don't think that it is something that should go in RTT. It is a
> system design issue, not a component development issue *and*
> design-by-composition requires handling *graphs*, and managing these graphs
> at runtime can only be done if you have a system-level view, not a "I'm a
> single C++ composition placed somewhere and have nothing to do with the rest
> of the world" kind of creation.

I do not see what your criticism is based on. Composition is fundamental
in component-based design, indeed at the system level. I have _never_
claimed the opposite. The fact that composition is not an RTT primitive now
is a design bug, because I cannot find another name for _not_ supporting the
system level in a component-based framework.

> In other words, I place composition closer to deployment and further away
> from components.

I place it exactly in between, and please prove me constructively wrong,
instead of saying what we should _not_ do.

> But I do indeed agree that adding some computational models to (some)
> compositions would allow to do some deployment decisions such as "run these
> things in a single thread".

Urgent need for RTT/Composition primitive (aka "Warning: current

On 03/28/2012 09:16 PM, Herman Bruyninckx wrote:
> On Wed, 28 Mar 2012, Sylvain Joyeux wrote:
>
>> On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
>>> Although there is no strong or structured tooling support for these
>>> developments (yet)
>> This is not true. Rock already provides the necessary modeling and
>> tooling for design-through-composition *and* for building higher-level
>> primitives on top.
>
> Not for explicit _computational_ models, as far as I know.... Only for
> component composition models.

Full quote of the earlier mail
> - Composition phase: developers make a system, by composing components
> together, via interconnecting Ports, and specifying "buffering policies"
> on those Ports.
> - Deployment phase: composite components are being put into 'activity
> containers' (threads, processes,...) and connections between Ports are
> given communication middleware implementations.
> Although there is no strong or structured tooling support for these
> developments (yet) _and_ there is no explicit Composition primitive (in
> RTT, or BRIDE), the good developers in the community have the discipline to
> follow the outlined work flow to a large extent, resulting in designs that
> are very well ready for distributed deployment, and with very little
> coordination problems (deadlocks, data inconsistencies,...).

You define the composition *exactly* as Rock's modelling tools define it.

As you point out yourself, in a dataflow-oriented system, the component
composition model is the basis for any kind of computation you can do on
compositions: compositions *are* the binding together of components at the
dataflow level. The ability to "understand" the computation that is
being done there in order to, for instance, automatically decide how to
deploy the composition, is an add-on on top of that.

>>> _and_ there is no explicit Composition primitive (in
>>> RTT, or BRIDE),
>> I personally don't think that it is something that should go in RTT.
>> It is a system design issue, not a component development issue *and*
>> design-by-composition requires handling *graphs*, and managing these
>> graphs at runtime can only be done if you have a system-level view,
>> not a "I'm a single C++ composition placed somewhere and have nothing
>> to do with the rest of the world" kind of creation.
>
> I do not see what your criticism is based on. Composition is fundamental
> in component-based design, indeed at the system level. I have _never_
> claimed the opposite. The fact that composition is not an RTT primitive now
> is a design bug, because I cannot find another name for _not_ supporting the
> system level in a component-based framework.
My point of view is that RTT is a component framework, not a
component-based, do-everything framework. System concerns should be left
out of RTT itself (to keep it simple) and inside extensions such as
BRIDE or Rock's own system management layer.

Simply because managing a system is NOT the same thing as managing a
single component. Managing a system at runtime is the job of a system
management tool, not of a component-oriented one.

Urgent need for RTT/Composition primitive (aka "Warning: current

On Thu, 29 Mar 2012, Sylvain Joyeux wrote:

[...]
>>>> _and_ there is no explicit Composition primitive (in
>>>> RTT, or BRIDE),
>>> I personally don't think that it is something that should go in RTT.
>>> It is a system design issue, not a component development issue *and*
>>> design-by-composition requires handling *graphs*, and managing these
>>> graphs at runtime can only be done if you have a system-level view,
>>> not a "I'm a single C++ composition placed somewhere and have nothing
>>> to do with the rest of the world" kind of creation.
>>
>> I do not see what your criticism is based on. Composition is fundamental
>> in component-based design, indeed at the system level. I have _never_
>> claimed the opposite. The fact that composition is not an RTT primitive now
>> is a design bug, because I cannot find another name for _not_ supporting the
>> system level in a component-based framework.
> My point of view is that RTT is a component framework, not a component-based,
> do-everything framework.
> System concerns should be left out of RTT itself (to
> keep it simple) and inside extensions such as BRIDE or Rock's own system
> management layer.
> Simply because managing a system is NOT the same thing as managing a single
> component. Managing a system at runtime is the job of a system management
> tool, not of a component-oriented one.

My scope of "component model" (frameworks, tools, composition, ...) spans
the development phases from functional design, to component design, to
system composition, to deployment and to runtime. So, indeed, there are
fundamental differences in what you do with components; and the lesson I
have learned the hard way is that people abuse a component framework that
would support only one of these phases because they _need_ to cover all the
other phases too. And there is the concept of "coherence" of a framework,
that is, the best trade-off between feature richness and
simplicity; for me, that trade-off lies in supporting _all_ the mentioned
phases in one framework.

Your trade-off lies somewhere else, apparently.

Urgent need for RTT/Composition primitive (aka "Warning: current

On 03/29/2012 01:07 PM, Herman Bruyninckx wrote:
> My scope of "component model" (frameworks, tools, composition, ...) spans
> the development phases from functional design, to component design, to
> system composition, to deployment and to runtime.
So does mine.

> So, indeed, there are
> fundamental differences in what you do with components; and the lesson I
> have learned the hard way is that people abuse a component framework that
> would support only one of these phases because they _need_ to cover all the
> other phases too.
The *toolchain* is what should guide them into not doing that.

> And there is the concept of "coherence" of a framework,
> that means, what is the best trade-off between feature richness and
> simplicity; for me, that trade-off lies in supporting _all_ the mentioned
> phases in one framework.
>
> Your trade-off lies somewhere else, apparently.
It does, because I believe that different phases need different tools.

Obviously, it depends on the definition of "framework". In my case, Rock
is the "complete" framework, not RTT.

Urgent need for RTT/Composition primitive (aka "Warning: current

On Thu, 29 Mar 2012, Sylvain Joyeux wrote:

> On 03/29/2012 01:07 PM, Herman Bruyninckx wrote:
>> My scope of "component model" (frameworks, tools, composition, ...) spans
>> the development phases from functional design, to component design, to
>> system composition, to deployment and to runtime.
> So does mine.
>
>> So, indeed, there are
>> fundamental differences in what you do with components; and the lesson I
>> have learned the hard way is that people abuse a component framework that
>> would support only one of these phases because they _need_ to cover all the
>> other phases too.
> The *toolchain* is what should guide them into not doing that.

As with all of our previous "clashes", you consider code and tools to rule
everything, while I consider the models as the "masters" (and code and
tools as slaves). In other words, _I_ consider it essential to have (the
semantics of) component primitives and compositions being _modelled_ for
all these phases; after that, I am more than happy to have people like you
who can provide good tools (and code) behind some of these models and their
transformations.

>> And there is the concept of "coherence" of a framework,
>> that means, what is the best trade-off between feature richness and
>> simplicity; for me, that trade-off lies in supporting _all_ the mentioned
>> phases in one framework.
>>
>> Your trade-off lies somewhere else, apparently.
> It does, because I believe that different phases need different tools.

Sure, I am all for lean and mean tools, but as I mentioned before, they all
need consistent _models_ before they can seamlessly work together :-)

(Unless you expect people to look into the implementations of the tools;
which is common but hardly best practice in the robotics world, for tools
as well as for functionality.)

> Obviously, it depends on the definition of "framework". In my case, Rock is
> the "complete" framework, not RTT.

And none of them cover the scope of the "BRICS Component Model" :-) (Yet.)

Urgent need for RTT/Composition primitive (aka "Warning: current

> As with all of our previous "clashes", you consider code and tools to rule
> everything, while I consider the models as the "masters" (and code and
> tools as slaves). In other words, _I_ consider it essential to have (the
> semantics of) component primitives and compositions being _modelled_ for
> all these phases; after that, I am more than happy to have people like you
> who can provide good tools (and code) behind some of these models and their
> transformations.
You're wrong there. That's *why* I think that adding compositions to RTT
is a bad idea. Having composition MODELS and using them is what's
needed. NOT pushing to add the primitive to the C++ RTT implementation.

As you say: the master is the abstract model. However, you *have* to run
what your models spit out somehow, and that's where the toolchain kicks
in. RTT is part of that toolchain. It does not have to be a 1:1 mapping
of the model, as, you know, the model is the master, not the implementation.

> And none of them cover the scope of the "BRICS Component Model" :-) (Yet.)
I'm not sure what you mean by that sentence. But if what you mean is
that the BRICS component model is a superset of all that's currently
existing, you'll have to prove that to me. What I have seen so far in
BRICS-related stuff is a subset, not a superset, of what other
model-based approaches like PROTEUS and Rock can represent.

[software-toolchain] Urgent need for RTT/Composition primitive (

I don't know where in this thread to start picking in and still avoid
a me too/me not kind of polemic... so here we go.

On Thu, Mar 29, 2012 at 1:01 PM, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>
> > As with all of our previous "clashes", you consider code and tools to rule
> > everything, while I consider the models as the "masters" (and code and
> > tools as slaves). In other words, _I_ consider it essential to have (the
> > semantics of) component primitives and compositions being _modelled_ for
> > all these phases; after that, I am more than happy to have people like you
> > who can provide good tools (and code) behind some of these models and their
> > transformations.
> You're wrong there. That's *why* I think that adding compositions to RTT
> is a bad idea. Having composition MODELS and using them is what's
> needed. NOT pushing to add the primitive to the C++ RTT implementation.

I agree with the conclusion of this thread in the follow-up mails. The
main problem RTT users encounter is the lack of a graphical tool for
composition. Yes, graphical. Such a thing can only work well if a
model of a composition is available, and some tools to create and
inspect compositions. So we all agree on those fundamentals, I guess.
But in the end, it's about making Orocos applications scale up to hundreds
of components, and I know no other way to do this than graphically.
OpenRTM-AIST is the only modern framework I know that succeeds in
doing this (both having a model and a graphical tool). So while we're
talking about it, they are doing it. Unfortunately they aren't very
open, and I don't think it's only about language, it's also about how
they organize themselves.

How does Rock scale to this extent ?

>
> As you say: the master is the abstract model. However, you *have* to run
> what your models spit out somehow, and that's where the toolchain kicks
> in. RTT is part of that toolchain. It does not have to be a 1:1 mapping
> of the model, as, you know, the model is the master, not the implementation.

Ack.

Peter

[software-toolchain] Urgent need for RTT/Composition primitive (

On 03/29/2012 10:49 PM, Peter Soetens wrote:
> I don't know where in this thread to start picking in and still avoid
> a me too/me not kind of polemic... so here we go.
>
> On Thu, Mar 29, 2012 at 1:01 PM, Sylvain Joyeux<sylvain [dot] joyeux [..] ...> wrote:
>>
>>> As with all of our previous "clashes", you consider code and tools to rule
>>> everything, while I consider the models as the "masters" (and code and
>>> tools as slaves). In other words, _I_ consider it essential to have (the
>>> semantics of) component primitives and compositions being _modelled_ for
>>> all these phases; after that, I am more than happy to have people like you
>>> who can provide good tools (and code) behind some of these models and their
>>> transformations.
>> You're wrong there. That's *why* I think that adding compositions to RTT
>> is a bad idea. Having composition MODELS and using them is what's
>> needed. NOT pushing to add the primitive to the C++ RTT implementation.
>
> I agree to the conclusion of this thread in the follow-up mails. The
> main problem RTT users encounter is the lack of a graphical tool for
> composition. Yes, graphical. Such a thing can only work well if a
> model of a composition is available, and some tools to create and
> inspect compositions.
> So we all agree on those fundamentals, I guess.
> But in the end, it's about making Orocos applications scale up to hundreds
> of components, and I know no other way to do this than graphically.
Well, actually, I don't believe that "graphical" is the way to go when
you want to scale up to a hundred components *and* be dynamic (i.e. be
able to turn some subsystems on or off at runtime while keeping the rest
of the system working).

Rock has a different approach in this respect, where you do define your
subsystems separately (which means having ~10 components) and let the
algorithms make them work together.

We can inspect both models and instantiated networks graphically, as
well as graphically display the runtime trace. However, there are no
graphical design tools (yet).

I personally believe that in this domain, GUIs smoothen the learning
curve, but do not really help advanced applications.

[software-toolchain] Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Fri, Mar 30, 2012 at 10:09, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>> Having composition MODELS and using them is what's
>>> needed. NOT pushing to add the primitive to the C++ RTT implementation.
>>
>> I agree to the conclusion of this thread in the follow-up mails. The
>> main problem RTT users encounter is the lack of a graphical tool for
>> composition. Yes, graphical. Such a thing can only work well if a
>> model of a composition is available, and some tools to create and
>> inspect compositions.
>> So we all agree on those fundamentals, I guess.
>> But in the end, it's about making Orocos applications scale up to hundreds
>> of components, and I know no other way to do this than graphically.
> Well,  actually, I don't believe that "graphical" is the way to go when
> you want to scale up to a hundred components *and* be dynamic (i.e. be
> able to turn some subsystems on or off at runtime while keeping the rest
> of the system working).

As an advocate of explicit meta-modeling, I agree that what is needed in
the first place is an explicit model of both the components and their
composition.

The question of graphical vs. textual tools for working with models is
secondary. I think that even the same developer will prefer one over the other
depending on the scenario (e.g. textual for simple and graphical for advanced
tasks). An example here is AADL, which allows working with models in both
textual and graphical notations.

An explicit meta-model not only helps to separate the underlying concepts
from their respective notation; it also makes developing tools for working
with models much easier.

[software-toolchain] Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

Quoting Piotr Trojanek <piotr [dot] trojanek [..] ...>:

> On Fri, Mar 30, 2012 at 10:09, Sylvain Joyeux <sylvain [dot] joyeux [..] ...> wrote:
>>>> Having composition MODELS and using them is what's
>>>> needed. NOT pushing to add the primitive to the C++ RTT implementation.
>>>
>>> I agree to the conclusion of this thread in the follow-up mails. The
>>> main problem RTT users encounter is the lack of a graphical tool for
>>> composition. Yes, graphical. Such a thing can only work well if a
>>> model of a composition is available, and some tools to create and
>>> inspect compositions.
>>> So we all agree on those fundamentals, I guess.
>>> But in the end, it's about making Orocos applications scale up to hundreds
>>> of components, and I know no other way to do this than graphically.
>> Well,  actually, I don't believe that "graphical" is the way to go when
>> you want to scale up to a hundred components *and* be dynamic (i.e. be
>> able to turn some subsystems on or off at runtime while keeping the rest
>> of the system working).
>
> As an advocate of explicit meta-modeling, I agree that what is needed in
> the first place is an explicit model of both the components and their
> composition.
>
> The question of graphical vs. textual tools for working with models is
> secondary. I think that even the same developer will prefer one over the other
> depending on the scenario (e.g. textual for simple and graphical for advanced
> tasks). An example here is AADL, which allows working with models in both
> textual and graphical notations.

+1

Indeed. We had a very similar discussion two months ago on this ML.
Tools, Models, Languages, and Meta-Models are different, even though
they interact with each other.

>
> An explicit meta-model not only helps to separate the underlying concepts
> from their respective notation; it also makes developing tools for working
> with models much easier.
>
> --
> Piotr Trojanek
> --
> Orocos-Dev mailing list
> Orocos-Dev [..] ...
> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev
>
>

[software-toolchain] Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Fri, 30 Mar 2012, Sylvain Joyeux wrote:

> On 03/29/2012 10:49 PM, Peter Soetens wrote:
>> I don't know where in this thread to start picking in and still avoid
>> a me too/me not kind of polemic... so here we go.
>>
>> On Thu, Mar 29, 2012 at 1:01 PM, Sylvain Joyeux<sylvain [dot] joyeux [..] ...>
>> wrote:
>>>
>>>> As with all of our previous "clashes", you consider code and tools to
>>>> rule
>>>> everything, while I consider the models as the "masters" (and code and
>>>> tools as slaves). In other words, _I_ consider it essential to have (the
>>>> semantics of) component primitives and compositions being _modelled_ for
>>>> all these phases; after that, I am more than happy to have people like
>>>> you
>>>> who can provide good tools (and code) behind some of these models and
>>>> their
>>>> transformations.
>>> You're wrong there. That's *why* I think that adding compositions to RTT
>>> is a bad idea. Having composition MODELS and using them is what's
>>> needed. NOT pushing to add the primitive to the C++ RTT implementation.
>>
>> I agree to the conclusion of this thread in the follow-up mails. The
>> main problem RTT users encounter is the lack of a graphical tool for
>> composition. Yes, graphical. Such a thing can only work well if a
>> model of a composition is available, and some tools to create and
>> inspect compositions.
>> So we all agree on those fundamentals, I guess.
>> But in the end, it's about making Orocos applications scale up to hundreds
>> of components, and I know no other way to do this than graphically.
> Well, actually, I don't believe that "graphical" is the way to go when you
> want to scale up to a hundred components *and* be dynamic (i.e. be able to
> turn some subsystems on or off at runtime while keeping the rest of the
> system working).
>
> Rock has a different approach in this respect, where you do define your
> subsystems separately (which means having ~10 components) and let the
> algorithms make them work together.
>
> We can inspect both models and instantiated networks graphically, as well as
> graphically display the runtime trace. However, there are no graphical design
> tools (yet).
>
> I personally believe that in this domain, GUIs smoothen the learning curve,
> but do not really help advanced applications.

My gut feeling tells me you are right... Especially when the functionality
is needed at runtime, and _by the robot systems themselves_...

Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Thu, 29 Mar 2012, Sylvain Joyeux wrote:

>> As with all of our previous "clashes", you consider code and tools to rule
>> everything, while I consider the models as the "masters" (and code and
>> tools as slaves). In other words, _I_ consider it essential to have (the
>> semantics of) component primitives and compositions being _modelled_ for
>> all these phases; after that, I am more than happy to have people like you
>> who can provide good tools (and code) behind some of these models and their
>> transformations.
> You're wrong there. That's *why* I think that adding compositions to RTT is a
> bad idea. Having composition MODELS and using them is what's needed. NOT
> pushing to add the primitive to the C++ RTT implementation.

I stand corrected! Anyway, the model level is indeed what I was talking about
all the time in the context of the original post.

> As you say: the master is the abstract model. However, you *have* to run what
> your models spit out somehow, and that's where the toolchain kicks in. RTT is
> part of that toolchain. It does not have to be a 1:1 mapping of the model,
> as, you know, the model is the master, not the implementation.

I agree!

>> And none of them cover the scope of the "BRICS Component Model" :-) (Yet.)

> I'm not sure what you mean by that sentence. But if what you mean is that the
> BRICS component model is a superset of all that's currently existing, you'll
> have to prove that to me.

Easy: none of the frameworks even _has_ an _explicit_ component model :-)
(Except for OpenRTM, to some extent.)

> What I saw so far in BRICS-related stuff is a
> subset, not a superset, of what other model-based approaches like PROTEUS and
> rock can represent.

Well then, where are the Coordination, Communication, Configuration and
Computational models in the projects you mention...?

Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Mar 29, 2012, at 10:35 PM, Herman Bruyninckx wrote:
> Easy: none of the frameworks even _has_ an _explicit_ component model :-)
> (Except for OpenRTM, to some extent.)

Yay us! (To some extent.)

Application use case: KDL IK solver (Urgent need for RTT/Composition primitive)

On Wed, 28 Mar 2012, Herman Bruyninckx wrote:

> this is a message that I consider to be _strategic_ for the realtime fame
> of, both, Orocos/RTT and the BRICS Component Model.

In this message, I illustrate the reason to have the Composition primitive
in our toolbox explicitly, in the more concrete context of recent
discussions about new KDL solvers. I focus on only one aspect, the "data
flow" Computational model.

The running example is a "typical" IK solver, say one with a dynamics
model, Cartesian constraints at some "feature frames" on the kinematic
chain, with a (joint-space, of course) redundancy solution, and with a
joint limit avoidance strategy.

[...]
> 3. Need for adding Computational model to Composition
> The introduction of an explicit Composition phase into the development
> process, _and_ the introduction of the corresponding Composition primitive
> in BCM/RTT, will lead to the following two extra features which bring
> tremendous potential for computational efficiency:
>
> - Scope/closure/context/port promotion: a Composition (= composite
> component) is the right place to determine which data/events will only be
> used between the components within the composite, and which ones will have
> to be "promoted" to be accessible from the outside.

I have already advocated a couple of times why a "data flow" version of the
KDL solvers makes a lot more sense than the "OO-driven class hierarchies".
"Composition" is the core differentiator here:
- in the running example, there are several "computations" to be composed:
  - dynamics computations, with or without taking the (mechanical,
    electrical, pneumatic, hydraulic) actuator dynamics into account;
  - redundancy resolution;
  - joint limit avoidance.
All these sub-parts have their own inputs, outputs, configuration
variables, constraint violation limits, etc. In the OO way, you decide
which of those you put in the method call API, and which ones to hide in
the implementation. In Composition, you make a similar decision, via
"Promotion" of internal ports to the outside of the Composite component.
The advantage of the latter approach is that a new Composition and/or
promotion is very easy, requires very little code, and is very transparent
to people trying to understand the code; in the OO approach, each new
"composition" requires a new implementation of the whole interface,
possibly with some "promotion-dependent" changes in the parameter
signature. In other words, finding out the "diff" between several OO solvers
is a matter of inspecting the whole code, while the diff between two
Compositions is visible at the (configuration part of the) Composite level
(only), without one having to delve into the implementations.
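As a minimal sketch of what such "Promotion" could look like as a data
structure (all names here are invented for illustration; this is not the RTT
or BCM API): the composite records which internal ports are re-exported under
an external name, so the "diff" between two solver compositions is just this
table plus the connection list.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch, NOT actual RTT/BCM code: a Composite that "promotes"
// selected internal ports to its boundary. The difference between two solver
// variants is then visible in this table alone, not buried in implementations.
struct Port {
    std::string component;  // owning sub-component
    std::string name;       // port name inside that component
};

struct Composite {
    std::vector<Port> internal_ports;
    // promotion table: external (promoted) name -> index into internal_ports
    std::map<std::string, std::size_t> promoted;

    std::size_t add_port(const std::string& comp, const std::string& name) {
        internal_ports.push_back(Port{comp, name});
        return internal_ports.size() - 1;
    }
    // re-export an internal port, possibly under a new, outside-facing name
    void promote(std::size_t idx, const std::string& external_name) {
        promoted[external_name] = idx;
    }
    bool visible_outside(const std::string& external_name) const {
        return promoted.count(external_name) != 0;
    }
};
```

A second composition that promotes a different subset of ports differs from
this one only in its calls to `promote()`, which is exactly the "diff at the
Composite level" argued for above.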

> The former are the
> ones with opportunities of gaining tremendous computational efficiency:
> a connection between two Ports within the composite can be replaced by a
> shared variable, which can be accessed by both components without delays
> and buffering. The same holds for events.

This is another "diff" wrt the OO approach: I advocate the opposite of
"information hiding", _but_ only within the scope of the Composite: instead of
hiding all "computational state variables" behind the interface, the new
approach makes _all_ of them visible, but only to the "friends" in the
scope of the Composite.

So, the scope of a Composition allows all computational sub-components to
really share variables with extreme computational efficiency, without
risking data inconsistency, because a well-chosen scope also allows one to
use a naturally appropriate computational schedule (see below).

(Sidenote: "friends" is one of the many aspects of component-based programming
that have sneaked into OO programming, although they do not belong there;
"closure" is another very relevant one, since both of them together are what
you need for "composition"...)

In our running example, _all_ the variables that you have to allocate are
within the scope of the Composite that is chosen for one particular
"solver". _All_ functions that are used are pure computations, with an API
like this: "function(&in,&out,&config)". _No_ implicit/explicit "new" or
"delete" inside a function; no side effects; just computations.

> - Computational model:
> Of course, this potential gain is only realisable when the execution of
> the computations in all components can be _scheduled_ as a _serialized_
> list of executions: "first Component A, then Component B, then Component C,
> and finally Component A again". Such natural serializations exist in
> _all_ robotics applications that I know of, and I have seen many.

In the context of the running IK solver example, this is the natural
computational schedule:

- structural iteration: the structure of the kinematic chain also provides the
  natural serialisation of all computations. More concretely:
  - one initialisation computation for the whole chain;
  - traversing the chain, one node and/or edge at a time; at every
    edge/node one encounters, there are "hooks" to which other computations
    can be attached;
  - one finalisation computation for the whole chain.
- hooks for Cartesian constraint control: typically attached to "leaf
  nodes" of the kinematic chain.
- hooks for redundancy resolution:
  - at initialisation: setting "weights" for the "inertia" objective
    function that the solver is going to minimize; selection of which
    "hooks" to compute or not; ...
  - at each "joint" node: posture control torques/velocities/accelerations,
    depending on the redundancy policy; local constraint violation monitoring;
  - at finalisation: global constraint violation checking.
- hooks for joint limit avoidance: very similar to the redundancy
  resolution, but with other policies, configurations, selections, ...
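The schedule above can be sketched as one data structure (a hedged
illustration assuming std::function hooks; nothing here is existing RTT/KDL
API): an init step, a list of per-node hook slots filled in by the chosen
solver variant, and a finalisation step, all executed by one serialized
run() in a single thread.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hedged illustration (invented names): the "natural serialisation" of a
// chain traversal as init -> per-node hooks -> finalise, run in one thread.
struct ChainSchedule {
    std::function<void()> init;                     // whole-chain initialisation
    std::vector<std::function<void()>> node_hooks;  // one slot per node/edge
    std::function<void()> finalise;                 // whole-chain finalisation

    // the composite's "meta function": a single serialized pass
    void run() const {
        if (init) init();
        for (const auto& hook : node_hooks)
            if (hook) hook();
        if (finalise) finalise();
    }
};
```

Redundancy resolution or joint limit avoidance variants differ only in which
callables are put into the hook slots; the schedule itself, and hence the
data-consistency argument, stays the same.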

The initialisation and finalisation computations for the whole chain
(read: at the composite level) are useful for things like:
- logging;
- taking care of "memory" policies (which data to forget, etc.);
- computational performance benchmarking;
- computation of "external" observers/estimators that are configured in
("stored procedures") by other components in the larger system, but for
which it is computationally more optimal to let them take place where the
"kinematic/dynamic state" is freshly and directly available (instead of
sending this state over to the "observer component" and doing the same
computations there);
- computations of the "state machine coordinator(s)" connected to the
composite; the result could be that a _different_ computational
schedule is selected for the next invocation of the Composite.
This allows the fastest possible switch in behaviour;
- computation and selection of events connected to all the monitoring
computations that have taken place during the structural iteration, and
deciding which ones to make visible at the promoted ports.
- serving the "introspection" aspects of the composite component: there
might have been an event (triggered internally or externally) that
signals the need to 'reason' about the component's workings. For example,
in a humanoid robot, when the second foot has just reached the ground and
two arms are already in contact with a table, what should be the right
new behaviour to support?

The latter kinds of "composite-level" computations help _a lot_ in
preventing delays as well as "event avalanches" that are often generated
when different sub-components are triggering (the same or different) events
because of (the same or different) reasons, and/or because several
"parallel" coordinator state machines are running.

There are tons of possible combinations of functions to use in the
different types of hooks, but the overall computational schedule remains the
same. The differences do not require reimplementing the same API with
different contents (and a somewhat different method call signature), since it
is as simple as exposing the contents of the composite component via its
introspection interface; in that way, other components using this "IK solver"
can find out (online!) what the IK solver component is offering, and, even
better, what it _can_ offer after an (online!) reconfiguration.

In summary, the Composite approach opens up an order of magnitude more
flexibility, because of an order of magnitude more code reuse, and an order
of magnitude more computational efficiency.

The bad news: (i) we have no good tools yet, and (ii) we don't even have
the appropriate data structures available that the tools (or,
alternatively, the human developers) have to use. The latter are rather
trivial to construct: one composite of all computational state of all
"codels", one or more computational iteration schedules.

To be continued.

Herman

[software-toolchain] Urgent need for RTT/Composition primitive (aka "Warning: current iTaSC implementation is a realtime threat!")

On Fri, 30 Mar 2012, brugali wrote:

> Dear all,
>
> I would like to contribute my two cents to the discussion on "The problem with execution
> efficiency".
>
> As far as I've understood the ongoing discussion, I see that the problem is formulated in terms
> of computational waste due to excessive number of threads, which is originated by an obsessive
> attitude to map even simple functionality to coarse grain components which interact according to
> the data flow architectural model.

Indeed. The trade-off between (i) the robustness of decoupled components,
and (ii) the efficiency of highly coupled components. (Where "component"
means: a piece of software whose functionality one accesses through ports.)

> If my interpretation of the problem is correct, one possible solution consists in:
> a) classifying concurrency at different levels of granularity, i.e. fine, medium, and large
> grain as in [1]
> b) map these levels of concurrency to three units of design, respectively: sequential component,
> service component, and container component.
> 3) use different architectural models and concurrency mechanisms for component interaction (i.e.
> data flow, client-server).

Strange, the Italian "alphabet of counting": a, b, 3! :-)

But we add "4) allow the use of an application-specific schedule of
computations for which one _knows_ that all constraints are satisfied for
data integrity".

> [1]   R. S. Chin and S. T. Chanson, ‘‘Distributed, object-based programming systems,’’ ACM
> Comput. Surv., vol. 23, no. 1, pp. 91–124, 1991.
>
> The separation of sequential/service/container components can be motivated in terms of different
> variability concerns:
>
> - Sequential components encapsulate data structures and operations that implement specific
> processing algorithms. They should conveniently be designed to be middleware- and application
> independent (focus on Computation)

Agreed.

> - Service components implement the logic and embed the dynamic specification of robot control
> activities, such as closing the loop between sensors and actuators for motion control. They are
> mostly application-specific components, as they define the execution, interaction, and
> coordination of robot activities (focus on Coordination)

I would not call sensor-based motion control loops a form of
Coordination... Or rather, _all_ industrial use cases do it with pure
computations. Coordination comes in more and more, but only slowly. (And
the slowness is due to a gap in the training of developers, not due to a
lack of tools or software.)

> - Container components provide the environment for the concurrent threads and encapsulate the
> shared resources. They are to a great extent middleware-specific and functionality independent
> (focus on Communication)

Agreed.

> - A set of sequential, service, and container components all together
> form a component assembly (focus on Configuration). N.B. for me
> Composition is a kind of Configuration aspect (4Cs are enough)

I do not agree here :-) Or rather: separating Composition from
Configuration (because one is _not_ a kind of the other) was my major
reason to extend the original 4C paradigm.

> For more details see [2].
>
> [2] D. Brugali, A. Shakhimardanov, Component-Based Robotic Engineering (Part II): Systems and
> Models, IEEE Robotics and Automation Magazine, March 2010.
> http://www.best-of-robotics.org/pages/publications/UniBergamo_HBRS_Compo...
> azine_2010.pdf
>
> Best regards,
> Davide

Herman

>
> On 3/29/2012 6:05 AM, Herman Bruyninckx wrote:
>
> On Thu, 29 Mar 2012, Erwin Aertbelien wrote:
>
> On 03/28/2012 09:18 AM, Herman Bruyninckx wrote:
>
> this is a message that I consider to be _strategic_ for the realtime fame
> of, both, Orocos/RTT and the BRICS Component Model. It's a rather condensed
> email, with the following summary:
>
> 1. Need for Composition
> 2. The problem with execution efficiency
> 3. Need for adding Computational model to Composition
> 4. Need for tooling
>
> I hope the Orocos and BRICS developer communities are strong and
> forward-looking enough to take action...
> I expect several follow-up messages to this "seed", in order to (i) refine
> its contents, and (ii) start sharing the development load.
>
> Best regards,
>
> Herman Bruyninckx
>
> ===============
> 1. Need for Composition
> In the "5Cs", Composition is singled out as the "coupling" aspect
> complementary to the "decoupling" aspects of Computation, Communication,
> Configuration and Coordination.
> In the BRICS Component Model (BCM), the different Cs come into play at
> different phases of the 5-phased development process (functional,
> component, composition/system, deployment, runtime); in the context of this
> message, I focus on the three phases "in the middle":
> - Component phase: developers make components, for maximal reuse and
> composability in later systems. Roughly speaking, the "art" here is to
> decouple the algorithms/computations inside the component from the access
> to the component's functionality (Computation, Communication,
> Configuration or Coordination) via Ports (and the "access policies" on
> them).
> - Composition phase: developers make a system, by composing components
> together, via interconnecting Ports, and specifying "buffering policies"
> on those Ports.
> - Deployment phase: composite components are being put into 'activity
> containers' (threads, processes,...) and connections between Ports are
> given communication middleware implementations.
> Although there is no strong or structured tooling support for these
> developments (yet) _and_ there is no explicit Composition primitive (in
> RTT, or BRIDE), the good developers in the community have the discipline to
> follow the outlined work flow to a large extent, resulting in designs that
> are very well ready for distributed deployment, and with very little
> coordination problems (deadlocks, data inconsistencies,...).
>
> One recent example is the new iTaSC implementation, using Orocos/RTT as
> component framework:<http://orocos.org/wiki/orocos/itasc-wiki>. It uses
> another standalone-ready toolkit, rFSM, for its Coordination state
> machines:<http://people.mech.kuleuven.be/~mklotzbucher/rfsm/README.html>.
>
> So far so good, because the _decoupling_ aspects of complex component-based
> systems are very well satisfied.
>
> But the _composition_ aspect is tremendously overlooked, resulting in
> massive waste of computational efficiency. (I explain that below.) I
> consider this a STRATEGIC lack in both BCM and RTT, because it is _a_ major
> selling point towards serious industrial uptake, and _the_ major
> competitive disadvantage with respect to commercial "one-tool-fits-all
> lock-in" suppliers such as the MathWorks, National Instruments, or 20Sim.
>
> 2. The problem with execution efficiency
> What is wrong exactly with respect to execution efficiency? The cause of
> the problem is that decoupling is taken to the extreme in the
> above-mentioned development "tradition", in that each component is deployed
> in its own activity (thread within a process, or even worse, different
> processes within the operating system). The obvious good result of this is
> robustness; the (not so obviously visible) bad results are that:
> - events and data are exchanged between components via our very robust
> Port-Connector-Port mechanisms, which implies a lot of buffering, and
> hence requiring several context switches before data is really being
> delivered from its provider to its consumer.
> - activities are triggered via Coordination and/or Port buffering events,
> which has two flaws:
> (i) activities should be triggered by a _scheduler_ (because events are
> semantically only there to trigger changes in _behaviour_, and not in
> _computation_!); result: too many events and consequently too much
> time lost in event handling which should not be there, _and_ lots of
> context switches.
> (ii) too many context switches to make the data flow robustly through our
> provider Ports, connectors and consumer Ports; result: delays of
> several time ticks.
> Conclusion: the majority of applications allow all of their computations
> to be deployed in one single thread, even without running the risk of data
> corruption, because there is a natural serialization of all computations in
> the application. Single threaded execution does away with _all_ of the
> above-mentioned computational wastes. But we have no good guidelines yet,
> let alone tooling, to support developers with the (not so trivial) task of
> efficiently serializing component computations. That's where the
> "Composition" phase of the development process comes in, together with the
> "Composition" primitive and its associated Computational models.
>
> (1) Scheduling and the need to break the loop:
> Many components in robotics are used within periodically executed
> feed-back loops.
> When these loops are closed, one has to "break" this loop to establish
> the schedule
> (i.e. execute a part of the loop in the next periodic cycle). I
> advocate that the composer
> _has to_ explicitly specify where to break the loop.
>
> Oops, you are interpreting things a bit wrongly here... The schedule _is_
> the representation of "where to break the loop"!
> Periodic execution or not is not relevant in this discussion; what is
> relevant is to provide a schedule for the different cases in which data can
> arrive at each of the in-Ports of a computational component.
>
> The schedule can then be automatically determined from the partial
> ordering of components imposed by the data-flow.
>
> Automatic support for the creation of computational schedules is part of
> the (lack of) tooling that I mentioned.
>
> (2) Execution efficiency and "composition" are independent from each other:
> The explicit scheduling and corresponding low-weight communication
> protocols are independent features wrt composition. Even more, resolving
> the scheduling at the composition level can lead to inefficient schedules
> (i.e. schedules that introduce more delays than necessary), by not taking
> into account ordering that is imposed by the connections outside the
> composite.
>
> Argh... I am _not_ talking about the scheduling of the execution of
> component activities, but about the order in which to call the different
> "function blocks" inside one single computational component. The latter is
> indeed fully decoupled from communication protocols; it is only connected
> to the "inside" of the in-Ports over which a computational component
> receives new data on which it has to perform its computations.
>
> The schedule is something that you can only determine at deployment,
> when all components in a given single thread are known. It should not
> be determined or fixed at the level of the "composition" or during the
> composition phase.
>
> Again: I am _not_ discussing activity scheduling. The kind of scheduling
> you are considering is, indeed, to be determined at deployment time. But at
> "system composition" time, there _are_ no activities yet, so there is no
> need to talk about the scheduling of those activities.
>
> 3. Need for adding Computational model to Composition
> The introduction of an explicit Composition phase into the development
> process, _and_ the introduction of the corresponding Composition primitive
> in BCM/RTT, will lead to the following two extra features which bring
> tremendous potential for computational efficiency:
>
> - Scope/closure/context/port promotion: a Composition (= composite
> component) is the right place to determine which data/events will only be
> used between the components within the composite, and which ones will have
> to be "promoted" to be accessible from the outside.
>
> this promotion should also involve renaming of the data/events, since the
> correct name internal to the composition is probably not a good name for
> someone outside the composition.
>
> I fully agree.
>
> The former are the
> ones with opportunities of gaining tremendous computational efficiency:
> a connection between two Ports within the composite can be replaced by a
> shared variable, which can be accessed by both components without delays
> and buffering. The same holds for events.
>
> Again this advantage is not inherent to composition but inherent to more
> explicit scheduling methods and appropriate communication protocols.
>
> Same "Argh" remark as made earlier in this thread... :-)
>
> - Computational model:
> Of course, this potential gain is only realisable when the execution of
> the computations in all components can be _scheduled_ as a _serialized_
> list of executions: "first Component A, then Component B, then Component C,
> and finally Component A again". Such natural serializations exist in
> _all_ robotics applications that I know of, and I have seen many.
>
> If you "break the loop" at certain places.
> Composition phase is the best time to indicate where to break the loop.
> Deployment time is the best time to compute/specify the schedule
> (computational model).
>
> You have a very different interpretation of what a "computational model" is
> than what I explained. The computational model of a (data flow; functional)
> computational Component is the order in which the functions in the
> component must be called by the Component's "meta function".
>
> This computational model relates to all components on the given single
> thread, not to each composite.
>
> Finding the right serialization (i.e., "Computational model") is not
> always trivial, obviously. As is finding the right granularity of
> "computational codels". The good news is that experts exist for all
> specific applications to provide solutions.
> (Note: serialization of the computations in components is only _one_
> possible Computational model; there are others, but they are outside the
> scope of this message.)
>
> At deployment time, one has a bunch of Composite components available, for
> which the computational model has already been added and configured at the
> composite level (if needed),
>
> I do not agree (see above).
>
> so that one should then add activities and
> communication middleware only _per composite component_, and not per
> individual component.
>
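Herman's notion of a computational model, i.e. the order in which a component's internal functions are called by the component's "meta function", can be sketched as follows (plain Python; the function names and the trace mechanism are invented for illustration):

```python
def make_meta_function(schedule):
    """Return a 'meta function' that calls the component's internal
    functions in the serialized order fixed by the schedule."""
    def meta_function(state):
        for step in schedule:
            step(state)
        return state
    return meta_function


trace = []

# Illustrative "codels": atomically schedulable pieces of computation.
def run_a(state):
    trace.append("A")

def run_b(state):
    trace.append("B")

def run_c(state):
    trace.append("C")

# "first Component A, then Component B, then Component C,
#  and finally Component A again"
meta = make_meta_function([run_a, run_b, run_c, run_a])
meta({})
# trace is now ["A", "B", "C", "A"]
```

The schedule is plain data, so it can be provided per composite at composition time or computed per thread at deployment time; the sketch itself is neutral on that (disputed) question.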
> 4. Need for tooling
> The above-mentioned workflow in the Composition phase is currently not at
> all supported by any tool. This is a major hole in the BRIDE/RTT
> frameworks. I envisage something in the direction of what Genom is doing,
> since that approach has the concept of a "codel", that is, the atomically
> 'deployable' piece of computation. Where 'deployment' means: to put into a
> computational schedule within a Composite. (The latter sentence is _not_
> Genom-speak, but could/should become BCM/RTT/BRIDE-speak.)
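Tooling of the kind asked for here could, for example, derive a candidate serialization automatically once the composer has marked where to "break the loop": the internal connections then impose a partial order on the codels, and any topological sort of that order is a valid schedule. A minimal sketch, with invented component names (not Genom or BRIDE syntax):

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Data-flow inside a hypothetical composite: each entry maps a codel
# to the codels whose outputs it consumes. The feedback edge from
# controller back to sensor has already been "broken" by the composer,
# so the graph is acyclic.
dataflow = {
    "sensor": set(),              # no internal inputs
    "estimator": {"sensor"},      # reads the sensor's output
    "controller": {"estimator"},  # reads the estimate
    "actuator": {"controller"},   # reads the setpoint
}

# Any topological order respects the data-flow partial order; for this
# chain the order is unique.
schedule = list(TopologicalSorter(dataflow).static_order())
# schedule == ["sensor", "estimator", "controller", "actuator"]
```
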
>
>
> Best regards,
> Erwin.
>
> Herman
> _______________________________________________
> software-toolchain mailing list
> software-toolchain [..] ...
> http://mailman.gps-stuttgart.de/mailman/listinfo/software-toolchain
>
>
>

[software-toolchain] Urgent need for RTT/Composition primitive

On 3/30/2012 8:41 PM, Herman Bruyninckx wrote:
> On Fri, 30 Mar 2012, brugali wrote:
>
>> Dear all,
>>
>> I would like to contribute my two cents to the discussion on "The
>> problem with execution
>> efficiency".
>>
>> As far as I've understood the ongoing discussion, the problem is
>> formulated in terms of computational waste due to an excessive number
>> of threads, which originates from an obsessive tendency to map even
>> simple functionality onto coarse-grained components that interact
>> according to the data-flow architectural model.
>
> Indeed. The trade-off between (i) the robustness of decoupled components,
> and (ii) the efficiency of highly coupled components. (Where "component"
> means: a piece of software whose functionality one accesses through
> ports.)
I prefer to think in terms of well-defined (i.e. harmonized and clearly
separated from implementation) component interfaces.
>
>> If my interpretation of the problem is correct, one possible solution
>> consists in:
>> a) classifying concurrency at different levels of granularity, i.e.
>> fine, medium, and large
>> grain as in [1]
>> b) map these levels of concurrency to three units of design,
>> respectively: sequential component,
>> service component, and container component.
>> 3) use different architectural models and concurrency mechanisms for
>> component interaction (i.e.
>> data flow, client-server).
>
> Strange, the Italian "alphabet of counting": a, b, 3! :-)
>
nice interpretation of my typo! :-[ I see here your keen sensitivity to
alphabets (Cs, Ms, ...) :-)
> But we add "4) allow to use an application-specific schedule of
> computations for which one _knows_ that all constraints are satisfied for
> data integrity".
>
A global (i.e. application-wide) scheduler is one possible mechanism
that ensures data integrity.
Other mechanisms (e.g. connectors implementing interaction protocols)
can be defined for component-wide constraints.
The Data-Flow model of computation specifies that a component performs a
computation when all of its input data are available. In order to
guarantee data consistency in a concurrent system and to prevent race
conditions, input data might be tagged in such a way that the component
performs a computation only when the full set of "matched input data" is
available.
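The tagging idea can be sketched as a firing rule: the component fires only once a complete, consistently tagged set of inputs has arrived (plain Python; the port names and the tag scheme are invented for illustration):

```python
class MatchedInputComponent:
    """Fires its computation only when one value per input port,
    all carrying the same tag (e.g. a sample timestamp), is present."""
    def __init__(self, ports, compute):
        self.ports = ports       # names of the input ports
        self.pending = {}        # tag -> {port: value} partial sets
        self.compute = compute
        self.results = []

    def deliver(self, port, tag, value):
        matched = self.pending.setdefault(tag, {})
        matched[port] = value
        if len(matched) == len(self.ports):   # full matched set: fire
            self.results.append(self.compute(matched))
            del self.pending[tag]             # consume the set


def fuse(inputs):
    return inputs["left"] + inputs["right"]


comp = MatchedInputComponent(["left", "right"], fuse)
comp.deliver("left", tag=7, value=1)
comp.deliver("right", tag=8, value=10)  # different tag: no firing yet
comp.deliver("right", tag=7, value=2)   # completes tag 7, so it fires
# comp.results == [3]; tag 8 is still waiting for its "left" value
```
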

>> [1] R. S. Chin and S. T. Chanson, "Distributed, object-based
>> programming systems," ACM Comput. Surv., vol. 23, no. 1, pp. 91–124,
>> 1991.
>>
>> The separation of sequential/service/container components can be
>> motivated in terms of different
>> variability concerns:
>>
>> - Sequential components encapsulate data structures and operations
>> that implement specific
>> processing algorithms. They should conveniently be designed to be
>> middleware- and application
>> independent (focus on Computation)
>
> Agreed.
>
>> - Service components implement the logic and embed the dynamic
>> specification of robot control
>> activities, such as closing the loop between sensors and actuators
>> for motion control. They are
>> mostly application-specific components, as they define the execution,
>> interaction, and
>> coordination of robot activities (focus on Coordination)
>
> I would not call sensor-based motion control loops a form of
> Coordination... Or rather, _all_ industrial use cases do it with pure
> computations. Coordination comes in more and more, but only slowly. (And
> the slowness is due to a gap in the training of developers, not due to a
> lack of tools or software.)
You are right. The "focus" is not only on Coordination and clearly
motion control is not an example of coordination.
I stand corrected.
>
>> - Container components provide the environment for the concurrent
>> threads and encapsulate the
>> shared resources. They are to a great extent middleware-specific and
>> functionality independent
>> (focus on Communication)
>
> Agreed.
>
>> - A set of sequential, service, and container components all together
>> form a component assembly (focus on Configuration). N.B. for me
>> Composition is a kind of Configuration aspect (4Cs are enough)
>
> I do not agree here :-) Or rather: the reason to separate Composition
> from
> Configuration (because one is _not_ a kind of the other) was my major
> reason to extend the original 4C paradigm.
I know that you do not agree here, but this is a remnant of our
religious war on components ;-)

For me Composition is about how a component-based system is organized,
i.e. configured.
The arrangement of components and connectors in a flat or hierarchical
(i.e. composition) way is the system configuration. A specific component
in the system might be in charge of managing (i.e. dynamically
reconfiguring) the system configuration. Reconfiguration might consist
in adding/removing/replacing components to/from/in composites.
>
>> For more details see [2].
>>
>> [2] D. Brugali, A. Shakhimardanov, Component-Based Robotic
>> Engineering (Part II): Systems and
>> Models, IEEE Robotics and Automation Magazine, March 2010.
>> http://www.best-of-robotics.org/pages/publications/UniBergamo_HBRS_Compo...
>>
>> azine_2010.pdf
>>
>> Best regards,
>> Davide
>
> Herman

Best regards,
Davide