[ANN] FBSched - Function Block Scheduling Component

Hi,

Here's a simple, fast C++ component that permits scheduling "function
blocks" implemented as RTT TaskContexts in a well-defined order using
the SlaveActivity. The goal is to minimize scheduling overhead by
serializing everything in one thread.

The function block components must be configured with SlaveActivities
and will be triggered by the FBSched component when it is itself
triggered. This can be achieved either by sending events to a "trigger"
port or by configuring FBSched with a periodic activity.

Similar scheduling could be implemented by a scripting component;
however, this simple (yet common) case of just triggering a set of
function blocks in a certain order justifies a dedicated bare-metal
C++ one for ultimate efficiency :-)

Code is here:

https://github.com/kmarkus/fbsched

This is a small contribution whose goal is to illustrate how the
composition of computations differs from the composition of systems
(the latter is better done using FSMs!).

Credits go to Herman for suggesting this distinction.

In a future version I plan to extend this component to measure the
min/avg/max duration of each function block and of the composite, to
facilitate optimization and debugging.

Comments/Feedback welcome as usual!

Markus

[ANN] FBSched - Function Block Scheduling Component

2012/3/30 Markus Klotzbuecher <markus [dot] klotzbuecher [..] ...>

> Hi,
>
> Here's a simple, fast C++ component that permits scheduling "function
> blocks" implemented as RTT TaskContexts in a well defined order using
> the SlaveActivity. This is to minimize scheduling overhead by
> serializing everything in one thread.
>
> The function block components must be configured using SlaveActivities
> and will be triggered by the FBSched component when itself is
> triggered. This can be either achieved by sending events to a
> "trigger" port or by configuring it with a periodic activity.
>
> Similar scheduling could be implemented by a scripting component,
> however this simple (yet common) case of just triggering a set of
> function blocks in a certain order justifies a dedicated bare-metal
> c++ one for ultimate efficiency :-)
>
> Code is here:
>
> https://github.com/kmarkus/fbsched
>
> This is a small contribution with the goal to illustrate how to do
> composition of computations differs from the composition of systems
> (the latter is better done using FSMs!).
>
> Credits go to Herman for suggesting this distinction.
>
> In a next version I plan to extend this component to measure
> min/avg/max duration of each function block and of the composite to
> facilitate optimization/debugging.
>
> Comments/Feedback welcome as usual!
>
> Markus

Hi Markus,

Please find enclosed my local changes. Please review them and see if
they are of interest, and let me know how you would like contributions
to be formatted. They were generated with git format-patch origin/master.

I have some stats for my personal use. Before sending anything, tell me
if you have ideas about what you need.

My time reports do the following (see the sketch below):
- save locally 2 vectors containing the loop duration and the cycle
  period (I generally use something like 1k samples)
- provide tic()/tac() functions which populate the 2 vectors
- provide a getReport operation that computes min, max, average and
  stddev values and returns them as a preformatted string.

In my FBSched I call tic() at the beginning of the updateHook and tac()
at the end, so I only have a general report. I think it would be nice to
have a per-peer report. At my last company they were using a special
typekit to nicely print the scheduling as bar graphs.
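
For reference, here is a minimal sketch of such a helper (the class and
method names follow the description above; only the duration vector is
shown, the cycle-period vector would be handled the same way, and using
RTT::os::TimeService for the timestamps is just one possible choice):

#include <rtt/os/TimeService.hpp>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

class TimeReport
{
public:
    explicit TimeReport(std::size_t max_samples = 1000)
        : max_samples_(max_samples), start_(0)
    {
        durations_.reserve(max_samples_);
    }

    // call at the beginning of updateHook()
    void tic() { start_ = RTT::os::TimeService::Instance()->getTicks(); }

    // call at the end of updateHook()
    void tac()
    {
        double dt = RTT::os::TimeService::Instance()->secondsSince(start_);
        if (durations_.size() < max_samples_)
            durations_.push_back(dt);
    }

    // compute min/max/avg/stddev and return them as a formatted string
    std::string getReport() const
    {
        if (durations_.empty())
            return "no samples";
        double mn = durations_[0], mx = durations_[0], sum = 0.0;
        for (std::size_t i = 0; i < durations_.size(); ++i) {
            mn = std::min(mn, durations_[i]);
            mx = std::max(mx, durations_[i]);
            sum += durations_[i];
        }
        double avg = sum / durations_.size();
        double var = 0.0;
        for (std::size_t i = 0; i < durations_.size(); ++i)
            var += (durations_[i] - avg) * (durations_[i] - avg);
        std::ostringstream os;
        os << "min=" << mn << "s max=" << mx << "s avg=" << avg
           << "s stddev=" << std::sqrt(var / durations_.size()) << "s ("
           << durations_.size() << " samples)";
        return os.str();
    }

private:
    std::size_t max_samples_;
    RTT::os::TimeService::ticks start_;
    std::vector<double> durations_;
};

A per-peer report would then just be one TimeReport instance per slave,
with tic()/tac() wrapped around each individual trigger.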

I wonder if there is any existing framework for doing something similar.

[ANN] FBSched - Function Block Scheduling Component

On Sat, Apr 28, 2012 at 04:43:07PM +0200, Willy Lambert wrote:
>
> See enclose my local changes. Please have a review and see if it is of
> interest and how you would like contribution to be formated. It's generated
> with git format-patch origin/master/

Applied, thanks.

> I have some stats for my personnal use. Before sending anything, tell me if you
> have ideas about what you need.
>
> My time reports are doing the following :
> _ save locally 2 vectors containing loop duration and cycle period (I generally
> use something like 1k samples)
> _ provide tic() tac() functions which populate the 2 vectors
> _ provide a getReport operation that compute min, max, average, stddev values
> and return then as a preformatted string.
>
> In my Fbsched I tic() at the beginning of the updateHook and tac() at the end,
> so I only have a general report. I think it would be nice to have a per-peer
> report. In my last compagny they were using a special typekit to nicely print
> scheduling as bargraphs.

For me it would be essential to monitor the worst-case duration of
each of the individual triggers and of the whole loop. This should be
monitored indefinitely, to be able to execute long-running latency tests
(or even to generate events when configurable worst-case values are
exceeded). Having min and average values would be nice too.
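
For instance (just a sketch, all names invented): an accumulator that
keeps only running statistics, so it can run indefinitely without a
sample buffer, and that reports when a configurable worst-case bound is
exceeded:

#include <algorithm>
#include <cstddef>
#include <limits>

struct LatencyMonitor
{
    double wc_limit;  // configurable worst-case bound [s]
    double min_dur;
    double max_dur;
    double sum;
    std::size_t count;

    explicit LatencyMonitor(double worst_case_limit)
        : wc_limit(worst_case_limit),
          min_dur(std::numeric_limits<double>::max()),
          max_dur(0.0), sum(0.0), count(0) {}

    // Feed one measured duration; returns true if the configured
    // worst-case bound was exceeded, so the caller can e.g. raise an
    // event or log a warning.
    bool update(double duration)
    {
        min_dur = std::min(min_dur, duration);
        max_dur = std::max(max_dur, duration);
        sum += duration;
        ++count;
        return duration > wc_limit;
    }

    double average() const { return count ? sum / count : 0.0; }
};

One such monitor per function block plus one for the whole loop would
cover the per-trigger and composite worst cases.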

A good inspiration is the cyclictest utility from rt-tests[1].

Regarding the storage of recorded data, I wonder if it would make
sense to decouple the storage from the gathering and, for instance, have
the reporter dump the data to a file which can then be processed into a
report. With 1k samples and a 1 kHz trigger rate you only look back one
second.

What do you think?

> I wonder if there is any existing framework for doing something similar.

I'm not aware of anything. I have a set of simple functions I reuse
for the time accounting.

Markus

[1] http://git.kernel.org/?p=linux/kernel/git/clrkwllms/rt-tests.git

[ANN] FBSched - Function Block Scheduling Component

On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
> Hi,
>
> Here's a simple, fast C++ component that permits scheduling "function
> blocks" implemented as RTT TaskContexts in a well defined order using
> the SlaveActivity. This is to minimize scheduling overhead by
> serializing everything in one thread.
>
> The function block components must be configured using SlaveActivities
> and will be triggered by the FBSched component when itself is
> triggered. This can be either achieved by sending events to a
> "trigger" port or by configuring it with a periodic activity.
>
> Similar scheduling could be implemented by a scripting component,
> however this simple (yet common) case of just triggering a set of
> function blocks in a certain order justifies a dedicated bare-metal
> c++ one for ultimate efficiency :-)
>
> Code is here:
>
> https://github.com/kmarkus/fbsched
>
> This is a small contribution with the goal to illustrate how to do
> composition of computations differs from the composition of systems
> (the latter is better done using FSMs!).
>
> Credits go to Herman for suggesting this distinction.
And thank you for showing that there is fundamentally absolutely no
distinction ;-)

The "common" computation models (i.e. simulink-like) assume that you
have an acyclic graph of computations (you cycle using a delay), and
that each components generates one output on each of its outputs each
time its triggered. Basically, the ordering is a topological sort over
the connection graphs.
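
For illustration, a sketch of that ordering step (Kahn's algorithm over
an adjacency list; the graph representation and names are invented for
the example):

#include <cstddef>
#include <map>
#include <queue>
#include <stdexcept>
#include <string>
#include <vector>

// block name -> names of the blocks it feeds
typedef std::map<std::string, std::vector<std::string> > Graph;

std::vector<std::string> scheduleOrder(const Graph& g)
{
    // count incoming connections per block
    std::map<std::string, int> indegree;
    for (Graph::const_iterator it = g.begin(); it != g.end(); ++it) {
        indegree[it->first];  // make sure the source block is known
        for (std::size_t i = 0; i < it->second.size(); ++i)
            ++indegree[it->second[i]];
    }

    // start with the blocks that have no inputs
    std::queue<std::string> ready;
    for (std::map<std::string, int>::const_iterator it = indegree.begin();
         it != indegree.end(); ++it)
        if (it->second == 0)
            ready.push(it->first);

    std::vector<std::string> order;
    while (!ready.empty()) {
        std::string n = ready.front();
        ready.pop();
        order.push_back(n);
        Graph::const_iterator succ = g.find(n);
        if (succ == g.end())
            continue;
        for (std::size_t i = 0; i < succ->second.size(); ++i)
            if (--indegree[succ->second[i]] == 0)
                ready.push(succ->second[i]);
    }

    // a leftover block means the graph has a cycle: a delay is needed
    if (order.size() != indegree.size())
        throw std::runtime_error("cycle in connection graph: add a delay");
    return order;
}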

I'm annoyed at myself, right now, for not having the deployer integrated
in Rock, and for not being able to show how the *current* Rock
composition models perfectly allow using something like fbsched to run
compositions (as modelled in Rock) when it makes sense. The only issue is
to segregate the components into those that fit the Simulink-like
component model and those that don't, something that can be done by
tagging them with services (Srv::SimulinkComputationModel).

[ANN] FBSched - Function Block Scheduling Component

On Fri, Mar 30, 2012 at 04:49:21PM +0200, Sylvain Joyeux wrote:
> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
> >Hi,
> >
> >Here's a simple, fast C++ component that permits scheduling "function
> >blocks" implemented as RTT TaskContexts in a well defined order using
> >the SlaveActivity. This is to minimize scheduling overhead by
> >serializing everything in one thread.
> >
> >The function block components must be configured using SlaveActivities
> >and will be triggered by the FBSched component when itself is
> >triggered. This can be either achieved by sending events to a
> >"trigger" port or by configuring it with a periodic activity.
> >
> >Similar scheduling could be implemented by a scripting component,
> >however this simple (yet common) case of just triggering a set of
> >function blocks in a certain order justifies a dedicated bare-metal
> >c++ one for ultimate efficiency :-)
> >
> >Code is here:
> >
> >https://github.com/kmarkus/fbsched
> >
> >This is a small contribution with the goal to illustrate how to do
> >composition of computations differs from the composition of systems
> >(the latter is better done using FSMs!).
> >
> >Credits go to Herman for suggesting this distinction.
> And thank you for showing that there are fundamentally absolutely no
> distinction ;-)

Yes, in the end it's only machine code... Seriously, I think the main
purpose of the distinction is to help people make the right choice when
running their compositions.

> The "common" computation models (i.e. simulink-like) assume that you
> have an acyclic graph of computations (you cycle using a delay), and
> that each components generates one output on each of its outputs
> each time its triggered. Basically, the ordering is a topological
> sort over the connection graphs.
>
> I'm annoyed at myself, right now, for not having the deployer
> integrated in Rock, and show how the *current* Rock composition
> models perfectly allow to use something like fbsched to run
> compositions (as modelled in Rock) when it makes sense. The only

Yes, why not reuse the same tools? But again, making the distinction
explicit would help people make the right choice when doing so!

> issue is to segregate the components between those that fit the
> simulink-like component model and those that don't, something that
> can be done by tagging them with services
> (Srv::SimulinkComputationModel)

So you are distinguishing on a semantic level!

Markus

[ANN] FBSched - Function Block Scheduling Component

On 04/02/2012 10:44 AM, Markus Klotzbuecher wrote:
> On Fri, Mar 30, 2012 at 04:49:21PM +0200, Sylvain Joyeux wrote:
>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>> Hi,
>>>
>>> Here's a simple, fast C++ component that permits scheduling "function
>>> blocks" implemented as RTT TaskContexts in a well defined order using
>>> the SlaveActivity. This is to minimize scheduling overhead by
>>> serializing everything in one thread.
>>>
>>> The function block components must be configured using SlaveActivities
>>> and will be triggered by the FBSched component when itself is
>>> triggered. This can be either achieved by sending events to a
>>> "trigger" port or by configuring it with a periodic activity.
>>>
>>> Similar scheduling could be implemented by a scripting component,
>>> however this simple (yet common) case of just triggering a set of
>>> function blocks in a certain order justifies a dedicated bare-metal
>>> c++ one for ultimate efficiency :-)
>>>
>>> Code is here:
>>>
>>> https://github.com/kmarkus/fbsched
>>>
>>> This is a small contribution with the goal to illustrate how to do
>>> composition of computations differs from the composition of systems
>>> (the latter is better done using FSMs!).
>>>
>>> Credits go to Herman for suggesting this distinction.
>> And thank you for showing that there are fundamentally absolutely no
>> distinction ;-)
>
> Yes, in the end it's only machine code... Seriously, I think the main
> purpose of the distinction is to support people to make the right
> choice to run their compositions.
>
>> The "common" computation models (i.e. simulink-like) assume that you
>> have an acyclic graph of computations (you cycle using a delay), and
>> that each components generates one output on each of its outputs
>> each time its triggered. Basically, the ordering is a topological
>> sort over the connection graphs.
>>
>> I'm annoyed at myself, right now, for not having the deployer
>> integrated in Rock, and show how the *current* Rock composition
>> models perfectly allow to use something like fbsched to run
>> compositions (as modelled in Rock) when it makes sense. The only
>
> Yes, why not reuse the same tools? But again, making the distinction
> explicit would help people to make the right choice to do so!
>
>> issue is to segregate the components between those that fit the
>> simulink-like component model and those that don't, something that
>> can be done by tagging them with services
>> (Srv::SimulinkComputationModel)
>
> So you are distinguishing on a semantic level!
You obviously have to give more information to your system to support
such a workflow. That's what I do here: I'm telling it "here is a
component that fits computation model X, do what you want with this
information".

What I don't see is the need to completely separate, to say "there is
composition type A and composition type B and they are different".
They're not. What *is* different is the amount of information you have
available for certain components, and how much of that information is
usable by algorithms to either generate or verify the runtime
policies/deployments/configurations.

Mixing both freely is a few orders of magnitude more powerful. For
instance, let's assume that I have a "computation" model for components
A, B and C and not for D, and that I have compositions C1 and C2:

C1 is made of A, B and is therefore a "computation-composition"
C2 is made of C, D and is therefore a "normal composition"
(+1 for Herman: we need to find separate names here)

Now, I connect C1 to C2. I therefore have A>B>C>D.

Why couldn't I consider C when making the execution policy for this
composition of compositions?

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Sylvain Joyeux wrote:

> On 04/02/2012 10:44 AM, Markus Klotzbuecher wrote:
>> On Fri, Mar 30, 2012 at 04:49:21PM +0200, Sylvain Joyeux wrote:
>>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>>> Hi,
>>>>
>>>> Here's a simple, fast C++ component that permits scheduling "function
>>>> blocks" implemented as RTT TaskContexts in a well defined order using
>>>> the SlaveActivity. This is to minimize scheduling overhead by
>>>> serializing everything in one thread.
>>>>
>>>> The function block components must be configured using SlaveActivities
>>>> and will be triggered by the FBSched component when itself is
>>>> triggered. This can be either achieved by sending events to a
>>>> "trigger" port or by configuring it with a periodic activity.
>>>>
>>>> Similar scheduling could be implemented by a scripting component,
>>>> however this simple (yet common) case of just triggering a set of
>>>> function blocks in a certain order justifies a dedicated bare-metal
>>>> c++ one for ultimate efficiency :-)
>>>>
>>>> Code is here:
>>>>
>>>> https://github.com/kmarkus/fbsched
>>>>
>>>> This is a small contribution with the goal to illustrate how to do
>>>> composition of computations differs from the composition of systems
>>>> (the latter is better done using FSMs!).
>>>>
>>>> Credits go to Herman for suggesting this distinction.
>>> And thank you for showing that there are fundamentally absolutely no
>>> distinction ;-)
>>
>> Yes, in the end it's only machine code... Seriously, I think the main
>> purpose of the distinction is to support people to make the right
>> choice to run their compositions.
>>
>>> The "common" computation models (i.e. simulink-like) assume that you
>>> have an acyclic graph of computations (you cycle using a delay), and
>>> that each components generates one output on each of its outputs
>>> each time its triggered. Basically, the ordering is a topological
>>> sort over the connection graphs.
>>>
>>> I'm annoyed at myself, right now, for not having the deployer
>>> integrated in Rock, and show how the *current* Rock composition
>>> models perfectly allow to use something like fbsched to run
>>> compositions (as modelled in Rock) when it makes sense. The only
>>
>> Yes, why not reuse the same tools? But again, making the distinction
>> explicit would help people to make the right choice to do so!
>>
>>> issue is to segregate the components between those that fit the
>>> simulink-like component model and those that don't, something that
>>> can be done by tagging them with services
>>> (Srv::SimulinkComputationModel)
>>
>> So you are distinguishing on a semantic level!
> You obviously have to give more information to your system to support
> such a workflow. That's what I do here: I'm telling it "here is a
> component that fits computation model X, do what you want with this
> information"
>
> What I don't see is the need to completely separate: to say "there is
> composition type A and composition type B and they are different".
> They're not. What *is* different is the amount of information you have
> available for certain components, an how much of that information is
> usable by algorithms to either generate or verify the runtime
> policies/deployments/configurations.
>
> Mixing both freely is a few orders of magnitude more powerful. For
> instance, let's assume that I have a "computation" model for components
> A, B and C and not for D. I have compositions C1 and C2
>
> C1 is made of A, B and is therefore a "computation-composition"
> C2 is made of C, D and is therefore a "normal composition"
> (+1 for Herman: we need to find separate names here)
>
> Now, I connect C1 to C2. I therefore have A>B>C>D
>
> Why couldn't I consider C when making the execution policy for this
> composition of compositions ?

You could! I think _any_ component will come with a default composition
model, namely the one that is behind component-based system design _by
nature_: connecting compatible ports to each other.

It's only when you want to achieve _higher performance_ that you would
have to spend the effort of giving more information. And since there
_is_ a semantic difference between "containers" and "computations" (I
will keep on repeating this...!), it makes sense to introduce separate
composition primitives for both.


Herman


[ANN] FBSched - Function Block Scheduling Component

On Mon, Apr 02, 2012 at 11:27:19AM +0200, Sylvain Joyeux wrote:
> On 04/02/2012 10:44 AM, Markus Klotzbuecher wrote:
> >On Fri, Mar 30, 2012 at 04:49:21PM +0200, Sylvain Joyeux wrote:
> >>On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
> >>>Hi,
> >>>
> >>>Here's a simple, fast C++ component that permits scheduling "function
> >>>blocks" implemented as RTT TaskContexts in a well defined order using
> >>>the SlaveActivity. This is to minimize scheduling overhead by
> >>>serializing everything in one thread.
> >>>
> >>>The function block components must be configured using SlaveActivities
> >>>and will be triggered by the FBSched component when itself is
> >>>triggered. This can be either achieved by sending events to a
> >>>"trigger" port or by configuring it with a periodic activity.
> >>>
> >>>Similar scheduling could be implemented by a scripting component,
> >>>however this simple (yet common) case of just triggering a set of
> >>>function blocks in a certain order justifies a dedicated bare-metal
> >>>c++ one for ultimate efficiency :-)
> >>>
> >>>Code is here:
> >>>
> >>>https://github.com/kmarkus/fbsched
> >>>
> >>>This is a small contribution with the goal to illustrate how to do
> >>>composition of computations differs from the composition of systems
> >>>(the latter is better done using FSMs!).
> >>>
> >>>Credits go to Herman for suggesting this distinction.
> >>And thank you for showing that there are fundamentally absolutely no
> >>distinction ;-)
> >
> >Yes, in the end it's only machine code... Seriously, I think the main
> >purpose of the distinction is to support people to make the right
> >choice to run their compositions.
> >
> >>The "common" computation models (i.e. simulink-like) assume that you
> >>have an acyclic graph of computations (you cycle using a delay), and
> >>that each components generates one output on each of its outputs
> >>each time its triggered. Basically, the ordering is a topological
> >>sort over the connection graphs.
> >>
> >>I'm annoyed at myself, right now, for not having the deployer
> >>integrated in Rock, and show how the *current* Rock composition
> >>models perfectly allow to use something like fbsched to run
> >>compositions (as modelled in Rock) when it makes sense. The only
> >
> >Yes, why not reuse the same tools? But again, making the distinction
> >explicit would help people to make the right choice to do so!
> >
> >>issue is to segregate the components between those that fit the
> >>simulink-like component model and those that don't, something that
> >>can be done by tagging them with services
> >>(Srv::SimulinkComputationModel)
> >
> >So you are distinguishing on a semantic level!
> You obviously have to give more information to your system to
> support such a workflow. That's what I do here: I'm telling it "here
> is a component that fits computation model X, do what you want with
> this information"
>
> What I don't see is the need to completely separate: to say "there
> is composition type A and composition type B and they are
> different". They're not. What *is* different is the amount of
> information you have available for certain components, an how much
> of that information is usable by algorithms to either generate or
> verify the runtime policies/deployments/configurations.
>
> Mixing both freely is a few orders of magnitude more powerful. For
> instance, let's assume that I have a "computation" model for
> components A, B and C and not for D. I have compositions C1 and C2
>
> C1 is made of A, B and is therefore a "computation-composition"
> C2 is made of C, D and is therefore a "normal composition"
> (+1 for Herman: we need to find separate names here)
>
> Now, I connect C1 to C2. I therefore have A>B>C>D
>
> Why couldn't I consider C when making the execution policy for this
> composition of compositions ?

You can. But you seem to be missing that this is all about hard
real-time scheduling of computations, hence composing arbitrary D's
into your loop will foobar determinism. So if your models contain
sufficient information to detect and warn about this, all the better.

Markus

[ANN] FBSched - Function Block Scheduling Component

On Fri, 30 Mar 2012, Sylvain Joyeux wrote:

> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>> Hi,
>>
>> Here's a simple, fast C++ component that permits scheduling "function
>> blocks" implemented as RTT TaskContexts in a well defined order using
>> the SlaveActivity. This is to minimize scheduling overhead by
>> serializing everything in one thread.
>>
>> The function block components must be configured using SlaveActivities
>> and will be triggered by the FBSched component when itself is
>> triggered. This can be either achieved by sending events to a
>> "trigger" port or by configuring it with a periodic activity.
>>
>> Similar scheduling could be implemented by a scripting component,
>> however this simple (yet common) case of just triggering a set of
>> function blocks in a certain order justifies a dedicated bare-metal
>> c++ one for ultimate efficiency :-)
>>
>> Code is here:
>>
>> https://github.com/kmarkus/fbsched
>>
>> This is a small contribution with the goal to illustrate how to do
>> composition of computations differs from the composition of systems
>> (the latter is better done using FSMs!).
>>
>> Credits go to Herman for suggesting this distinction.

> And thank you for showing that there are fundamentally absolutely no
> distinction ;-)

You mean that _you_ don't see a distinction, I assume...? :-)

> The "common" computation models (i.e. simulink-like) assume that you
> have an acyclic graph of computations (you cycle using a delay), and
> that each components generates one output on each of its outputs each
> time its triggered. Basically, the ordering is a topological sort over
> the connection graphs.

You are mixing mechanism and policy, as most people do. I mean:
Simulink-like tools _choose_ one particular acyclic graph for you,
behind your back. And if one doesn't like the one chosen by the tool
(and it is not made explicit anywhere), one "fools" the system by adding
artificial delays. That is a good example of changing the _model_ to let
your _tool_ do what you wanted it to do with your original model in the
first place.

We are doing our best to avoid these "semantic bugs" in the models we use.

> I'm annoyed at myself, right now, for not having the deployer integrated
> in Rock, and show how the *current* Rock composition models perfectly
> allow to use something like fbsched to run compositions (as modelled in
> Rock) when it makes sense. The only issue is to segregate the components
> between those that fit the simulink-like component model and those that
> don't, something that can be done by tagging them with services
> (Srv::SimulinkComputationModel)

The tagging is not enough, because there _are_ semantic differences
between composing "Container" components (read: TaskContexts in RTT
speak) and "Computational" components (read: a lot _more_ than what
Simulink offers, such as, for example, the possibility to loop N times
over one "function block" in the computational composition because
another "function block" changed its computational behaviour).

[ANN] FBSched - Function Block Scheduling Component

On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
>> And thank you for showing that there are fundamentally absolutely no
>> distinction ;-)
>
> You mean that _you_ don't see a distinction, I assume...? :-)
Obviously ;-)

>
>> The "common" computation models (i.e. simulink-like) assume that you
>> have an acyclic graph of computations (you cycle using a delay), and
>> that each components generates one output on each of its outputs each
>> time its triggered. Basically, the ordering is a topological sort over
>> the connection graphs.
>
> You are mixing mechanism and policy. As most people do. I mean:
> simulink-like tools _choose_ one particular acyclic graph for you. Behind
> your back. And if one doesn't like the one that is chosen by the tool (and
> not being made explicit somewhere) you "fool" the system by adding
> artifical delays.
The delays are absolutely not artificial. They are required, and they
make the meaning of the computation much more explicit than a manual
scheduling order. All of this falls under synchronous computation, which
is an underlying model in Simulink and a very explicit one in
dataflow-oriented synchronous languages like Lustre (and SCADE).

Let me explain.

Let's assume you have a processing pipeline A => B => C (i.e. B uses the
output of A, C uses the output of B). Here, the order is obvious and you
should compute A, then B and then C.

Let's now assume that you have a cycle: B has both an input from A and
one from C. The dataflow is:

OUT > A
A > B
B > C
C > B
C > OUT

where OUT is the outside of the computation model.

In this scheme, B CANNOT use the data from A and the data from C *from
the same execution cycle*: to generate its output, B needs an input from
BOTH C and A, and to generate its output, C needs an output from B. I.e.
without delays, you would require the following dataflow to happen:

A(t) => B(t)
C(t) => B(t)
B(t) => C(t)

which breaks causality. You therefore have to make a choice: in
principle, either you delay A > B or you delay C > B.
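
As a small illustration of the second choice (a sketch with placeholder
computations): within each execution cycle B reads the value C produced
in the *previous* cycle, which restores causality:

#include <cstdio>

double A(double out_in)          { return out_in + 1.0; }
double B(double a, double c_old) { return a + 0.5 * c_old; }
double C(double b)               { return 0.9 * b; }

int main()
{
    double c_prev = 0.0;  // initial value of the delayed C > B connection

    for (int t = 0; t < 5; ++t) {
        double a = A(1.0);        // OUT > A
        double b = B(a, c_prev);  // A > B  and  C(t-1) > B (the delay)
        double c = C(b);          // B > C
        c_prev = c;               // store for the next cycle
        std::printf("t=%d out=%f\n", t, c);  // C > OUT
    }
    return 0;
}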

> That is a good example of changing the _model_ to let
> your _tool_ do what you wanted it to do with your original model in the
> first
> place.
In these tools, the output of the complete block is what is needed by
the outside world. You actually *need* to run each of your computation
nodes once per execution cycle to get that output. In other words, the
order of computation is irrelevant to the latency of your computation;
it is only required so that each of the nodes gets the inputs it needs.
As a consequence, *any* topological sort will give the same result *in
the frame of the Simulink-like computation model*.

> We are doing our best to avoid these "semantic bugs" in the models we use.
Oh. I'm all for allowing manual specifications to replace automatic
specifications. But I'm all for it because, unfortunately, models rarely
fit reality, i.e. to BREAK out of the models.

>> I'm annoyed at myself, right now, for not having the deployer integrated
>> in Rock, and show how the *current* Rock composition models perfectly
>> allow to use something like fbsched to run compositions (as modelled in
>> Rock) when it makes sense. The only issue is to segregate the components
>> between those that fit the simulink-like component model and those that
>> don't, something that can be done by tagging them with services
>> (Srv::SimulinkComputationModel)
>
> The tagging is not enough, because there _are_ semantic differences between
> composing "Container" components (read: taskcontext in RTT speak) and
> "Computational" components (read: a lot _more_ than what Simulink offers,
> such as, for example, the possibility to loop N times over one "function
> block" in the computational composition because another "function block"
> changed its computational behaviour).
I have the feeling that what you are talking about is recreating a
complete programming language. I'm really wondering if there is any
point to it.

The most advanced, "usable", computation models that are actually good
for something (i.e. do provide something more than "normal" programming
languages) that I know of are synchronous languages. Lustre has
basically the same expressiveness as Simulink (a bit more expressive,
but not much more). Esterel looks different, but cannot do much more
either. My point would be: if you go more complex than those, it is
highly likely that you would be better off just sticking to an actual
programming language.

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Sylvain Joyeux wrote:

> On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
[...]
>>> The "common" computation models (i.e. simulink-like) assume that you
>>> have an acyclic graph of computations (you cycle using a delay), and
>>> that each components generates one output on each of its outputs each
>>> time its triggered. Basically, the ordering is a topological sort over
>>> the connection graphs.
>>
>> You are mixing mechanism and policy. As most people do. I mean:
>> simulink-like tools _choose_ one particular acyclic graph for you. Behind
>> your back. And if one doesn't like the one that is chosen by the tool (and
>> not being made explicit somewhere) you "fool" the system by adding
>> artifical delays.
> The delays are absolutely not artificial. They are required, and make the
> meaning of the computation much more explicit than a manual scheduling order.

But they are not necessarily composable... I mean, bringing in a delay
in an inner loop might not make sense in all outer loops that the inner
loop can be composed with. For example: when composing six joint control
loops into one kinematic chain control loop, one needs just one delay
for the whole thing, not six. Easy for a human to take them out, not so
obvious for an automatic composition tool.

> All of this falls into synchronous computation, which is an underlying model
> in simulink and a very explicit one in dataflow-oriented synchronous
> languages like Lustre (and SCADE)
>
> Let me explain.
>
> Let's assume you have a processing pipeline A => B => C (i.e. B uses the
> output of A, C uses the output of B). Here, the order is obvious and you
> should compute A, then B and then C
>
> Let's now assume that you have a cycle: B has both an input from A and one
> from C. The dataflow is:
>
> OUT > A
> A > B
> B > C
> C > B
> C > OUT
>
> where OUT is the outside of the computation model.
>
> In this scheme, B CANNOT use the data from A and the data from C *from the
> same execution cycle: to generate its output, B needs an input from BOTH C
> and A, and to generate its output, C needs an output from B. I.e. without
> delays, you would require the following dataflow to happen:
>
> A(t) => B(t)
> C(t) => B(t)
> B(t) => C(t)
>
> Which breaks causality. You therefore have to make a choice: in principle,
> either you delay A > B or you delay C > B.

Thanks for making this clear. (It was clear to me, but maybe not to
others following this thread.) But allow me to repeat myself: the
meaning of "OUT" is not an absolute property of a "control loop", since
it can change when you use the loop in another composite, for example
when adding more inputs to the loop. While in _control_ loops there are
already "composition patterns" that experts in the domain follow rather
"obviously" without questioning them, the same does not hold in other
domains. The most challenging example I know is that of a shared "world
model", where the "causality" may have to change drastically when new
"clients" or "providers" are added to the system. (Or old ones are
removed.)

>> That is a good example of changing the _model_ to let
>> your _tool_ do what you wanted it to do with your original model in the
>> first
>> place.
> In these tools, the output of the complete block is what is needed by the
> outside world. You actually *need* to run each of your computation nodes once
> per execution cycle to get that output. In other words, the order of
> computation is irrelevant to the latency of your computation. It is only
> required so that each of the nodes get the inputs it needs. In other words,
> *any* topological sort will give the same result *in the frame of
> simulink-like computation model*.

Yes, but this "conclusion" does not generalize to other domains than
control. At least, that is my current working hypothesis.

>> We are doing our best to avoid these "semantic bugs" in the models we use.
> Oh. I'm all for allowing manual specifications to replace automatic
> specifications. But I'm all for it because, unfortunately, models rarely fit
> reality, i.e. to BREAK out of the models.

>>> I'm annoyed at myself, right now, for not having the deployer integrated
>>> in Rock, and show how the *current* Rock composition models perfectly
>>> allow to use something like fbsched to run compositions (as modelled in
>>> Rock) when it makes sense. The only issue is to segregate the components
>>> between those that fit the simulink-like component model and those that
>>> don't, something that can be done by tagging them with services
>>> (Srv::SimulinkComputationModel)
>>
>> The tagging is not enough, because there _are_ semantic differences between
>> composing "Container" components (read: taskcontext in RTT speak) and
>> "Computational" components (read: a lot _more_ than what Simulink offers,
>> such as, for example, the possibility to loop N times over one "function
>> block" in the computational composition because another "function block"
>> changed its computational behaviour).

> I have the feeling that what you are talking about is recreating a complete
> programming language. I'm really wondering if there is any point to it.

My first ambition is not about a programming language, but more about a
decent data structure and "iterators" on top of it.

> The most advanced, "usable", computation models that are actually good for
> something (i.e. do provide something more than "normal" programming
> languages) that I know of are synchronous languages. Lustre has basically the
> same expressiveness than simulink (a bit more expressive, but not much more).
> Esterel looks different, but cannot do much more either. But my point would
> be: if you go more complex than those, it is highly likely that you would
> better just stick to an actual programming language.

Yes, of course. But it is not about the semantic richness of the
programming language; it is about the semantics of computational
composition. Of course you can do it with generic programming languages,
but, equally, the synchronous languages are not enough in themselves,
because all of them make some very strong assumptions about what
"synchronous" means exactly, in order to be able to provide verification
and automatic generation of error-free solutions. They definitely have
their place for these things, and I also think that they are probably
the first place we have to look _if_ we want to suggest _programming
languages_. (Which is not yet my ambition, for the coming months.)

[ANN] FBSched - Function Block Scheduling Component

On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
> But the are not necessarily composable... I mean, bringing in a delay in an
> inner loop might not make sense in all outer loops that the inner loop can
> be composed with. For example: when composing six joint control loops in
> one kinematic chain control loop, one needs just one delay for the whole
> thing, and not six. Easy for a human to take them out, not so obvious for
> an automatic composition tool.
Not clear to me. Could you actually detail the example the same way I
did?

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Sylvain Joyeux wrote:

> On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
>> But the are not necessarily composable... I mean, bringing in a delay in an
>> inner loop might not make sense in all outer loops that the inner loop can
>> be composed with. For example: when composing six joint control loops in
>> one kinematic chain control loop, one needs just one delay for the whole
>> thing, and not six. Easy for a human to take them out, not so obvious for
>> an automatic composition tool.
> Not clear to me. Could you actually detail the example the same way than I
> did ?

Sure. (Although with somewhat fewer lines than what you provided...)

Take an existing control loop (e.g. A->B->C->D), and suppose you want to
compose it with a "disturbance observer" that fits in as a parallel loop
somewhere between "A->B" and "C->D". If you had "cut" the original loop
with an 'artificial delay' at "B->C", this could have a negative effect
on the performance of the composite control loop. Of course, "could",
because much depends on what exactly is happening inside the control
blocks.

Hope this helps to clarify my statements!

Herman

[ANN] FBSched - Function Block Scheduling Component

On 04/02/2012 10:09 AM, Herman Bruyninckx wrote:
> On Mon, 2 Apr 2012, Sylvain Joyeux wrote:
>
>> On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
>>> But the are not necessarily composable... I mean, bringing in a delay in an
>>> inner loop might not make sense in all outer loops that the inner loop can
>>> be composed with. For example: when composing six joint control loops in
>>> one kinematic chain control loop, one needs just one delay for the whole
>>> thing, and not six. Easy for a human to take them out, not so obvious for
>>> an automatic composition tool.
>> Not clear to me. Could you actually detail the example the same way than I
>> did ?
>
> Sure. (Although with some less lines than what you provided...)
>
> Take an existing control loop (A->B->C->D, e.g.), and you want to compose
> it with a "disturbance observer", that fits in as a parallel loop somewhere
> between "A->B" and "C->D". If you would have "cut" the original loop with
> an 'artificial delay' at "B->C", this could have a negative effect on the
> performance of the composite control loop. Of course, "could", because much
> depends on what is happening inside the control blocks exactly.

Instead of specifying artificial delays, you could specify "loop
breaking points" where the loop can be broken. Whether or not a delay
will really occur will then depend on the overall schedule that is
generated from the composite.

>
> Hope this helps to clarify my statements!
>
> Herman
>

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Erwin Aertbelien wrote:

> On 04/02/2012 10:09 AM, Herman Bruyninckx wrote:
>> On Mon, 2 Apr 2012, Sylvain Joyeux wrote:
>>
>>> On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
>>>> But the are not necessarily composable... I mean, bringing in a delay in
>>>> an
>>>> inner loop might not make sense in all outer loops that the inner loop
>>>> can
>>>> be composed with. For example: when composing six joint control loops in
>>>> one kinematic chain control loop, one needs just one delay for the whole
>>>> thing, and not six. Easy for a human to take them out, not so obvious for
>>>> an automatic composition tool.
>>> Not clear to me. Could you actually detail the example the same way than I
>>> did ?
>>
>> Sure. (Although with some less lines than what you provided...)
>>
>> Take an existing control loop (A->B->C->D, e.g.), and you want to compose
>> it with a "disturbance observer", that fits in as a parallel loop somewhere
>> between "A->B" and "C->D". If you would have "cut" the original loop with
>> an 'artificial delay' at "B->C", this could have a negative effect on the
>> performance of the composite control loop. Of course, "could", because much
>> depends on what is happening inside the control blocks exactly.
>
> Instead of specifying artificial delays, you could specify "loop breaking
> points" where the loop can be broken.

I am not at all sure that one can specify such points in the
loop-to-be-composed itself... I do think one can specify the opposite: to
indicate points where it should definitely _not_ be broken.

> Whether or not a delay will really occur, this will depend on the overall
> schedule that will be generated from the composite.

Yes, and "delays" are only one of the composition primitives that can be
used; multiplexers are an example of other often occuring composition
primitives.

>> Hope this helps to clarify my statements!

Herman

[ANN] FBSched - Function Block Scheduling Component

On 04/02/2012 10:09 AM, Herman Bruyninckx wrote:
> On Mon, 2 Apr 2012, Sylvain Joyeux wrote:
>
>> On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
>>> But the are not necessarily composable... I mean, bringing in a delay
>>> in an
>>> inner loop might not make sense in all outer loops that the inner
>>> loop can
>>> be composed with. For example: when composing six joint control loops in
>>> one kinematic chain control loop, one needs just one delay for the whole
>>> thing, and not six. Easy for a human to take them out, not so obvious
>>> for
>>> an automatic composition tool.
>> Not clear to me. Could you actually detail the example the same way
>> than I did ?
>
> Sure. (Although with some less lines than what you provided...)
>
> Take an existing control loop (A->B->C->D, e.g.), and you want to compose
> it with a "disturbance observer", that fits in as a parallel loop somewhere
> between "A->B" and "C->D". If you would have "cut" the original loop with
> an 'artificial delay' at "B->C", this could have a negative effect on the
> performance of the composite control loop. Of course, "could", because much
> depends on what is happening inside the control blocks exactly.
>
> Hope this helps to clarify my statements!

Maybe. Could you give more details about what you call "composability"?
In your example, you transform (i.e. change) the composition model (you
add and remove elements and links). My point of view on composability
was that you have the ability to use a composition as an element of
another composition (i.e. you would tell the system to use composition E
in place of element B).

In any case, regardless of whether you use delays or manual schedules,
you have to modify something to change the original loop to fit the new
one. You either modify the composition model (in Rock, you would
probably do that with specializations, i.e. tell the system "set a delay
at connection U>I if element B matches Y") or you modify the schedule.

Or, even better, you add additional semantics to the roles of the
elements in your composition to make the choice of where to put delays
sensible (with the ability to specify the delays manually *and* the
ability to override the schedule manually in the end).

That's already more or less what rock-roby does for connection policies:
you can leave them "open" and the system generates policies
automatically based on additional model information (dataflow
propagation information and requirements on inputs), but you can also
override them in the end.

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Sylvain Joyeux wrote:

> On 04/02/2012 10:09 AM, Herman Bruyninckx wrote:
>> On Mon, 2 Apr 2012, Sylvain Joyeux wrote:
>>
>>> On 04/02/2012 09:50 AM, Herman Bruyninckx wrote:
>>>> But the are not necessarily composable... I mean, bringing in a delay
>>>> in an
>>>> inner loop might not make sense in all outer loops that the inner
>>>> loop can
>>>> be composed with. For example: when composing six joint control loops in
>>>> one kinematic chain control loop, one needs just one delay for the whole
>>>> thing, and not six. Easy for a human to take them out, not so obvious
>>>> for
>>>> an automatic composition tool.
>>> Not clear to me. Could you actually detail the example the same way
>>> than I did ?
>>
>> Sure. (Although with some less lines than what you provided...)
>>
>> Take an existing control loop (A->B->C->D, e.g.), and you want to compose
>> it with a "disturbance observer", that fits in as a parallel loop somewhere
>> between "A->B" and "C->D". If you would have "cut" the original loop with
>> an 'artificial delay' at "B->C", this could have a negative effect on the
>> performance of the composite control loop. Of course, "could", because much
>> depends on what is happening inside the control blocks exactly.
>>
>> Hope this helps to clarify my statements!
>
> Maybe. Could you give more details about what you call "composability" ? In
> your example, you transform (i.e. change) the composition model (you add and
> remove elements and links).

No, I just add ("compose"). But removing "duplicates" during composition
would still fall within my definition of composition.

> My point of view about composability was that you
> have the ability to use a composition as an element of another composition
> (i.e. you would tell the system to use composition E in place of element B)

That is the composability that I see at the "container" level. (Pfff, we
are definitely confusing everyone with the overloaded use of all these
words...) The difference between composition of "container" composites
and "computational" composites is exactly the fact that, in the former,
the composition primitives are just adding or replacing, while in the
latter, "optimization" is possible by (i) removing "duplicates" (ports,
buffers, ...), and (ii) changing the computational schedule.

There is room for both, I guess. And I think you already support the
first very decently in the Rock tooling.

> In any case, regardless of the fact that you use delays or that you use
> manual schedules, you have something to modify to change the original loop to
> fit the new one. You either modify the composition model (in Rock, you would
> probably do that with specializations, i.e. tell the system "set delay at
> connection U>I if element B matches Y") or you modify the schedule.

Both, indeed.

> Or, even better, you add additional semantic to the roles of your elements in
> your composition to make the choice of where to put delays sensible (with the
> ability to specify the delays it manually *and* the ability to override the
> schedule manually in the end).

I don't like to see "delay" being used as a composition primitive, since it
only makes sense in a subset of the domains where computational composition
makes sense.

> That's already more or less what rock-roby does for connection policies: you
> either leave them "open" and the system generates policies automatically
> based on additional model information (dataflow propagation information and
> requirements on inputs), but you can override it in the end as well.


Herman

[ANN] FBSched - Function Block Scheduling Component

On 04/02/2012 10:34 AM, Herman Bruyninckx wrote:
>>> Hope this helps to clarify my statements!
>>
>> Maybe. Could you give more details about what you call "composability"
>> ? In your example, you transform (i.e. change) the composition model
>> (you add and remove elements and links).
>
> No, I just add ("compose"). But removing "duplicates" during composition
> would still fall within my definition of composition.
If you add an element between B and C, you are first removing a
connection (B > C) and then adding. But you have to remove first. This
ability to change the composition model made me feel funny during the
BRICS configuration tool presentation.

>> My point of view about composability was that you have the ability to
>> use a composition as an element of another composition (i.e. you would
>> tell the system to use composition E in place of element B)
>
> That is the composability that I see at the "container" level. (Pfff, we
> are definitely confusing everyone with the overloaden use of all these
> words...) The difference between composition of "container" composites and
> "computational" composites is exactly the fact that, in the former, the
> composition primitives are just adding or replacing, while in the latter,
> "optimization" is possible by (i) removing "duplicates" (ports,
> buffers,...), and (ii) changing the computational schedule.
>
> There is room for both, I guess. And I think you are supporting already the
> first very decently in the rock tooling.
I still don't see room for "separation" here. Optimizations are possible
as soon as you have enough information to perform them (i.e. either
because a human being tells you to do so or because you have algorithms
that can do it for you). I would even go as far as saying that the best
model is the one where you can mix both cases freely.

What you want, at the end, is an executable system configuration
(dataflow meeting constraints, properties set, hierarchy properly
constructed, policies set, deployments selected). Where the information
comes from is irrelevant, you just *need* to have this information.

Making richer models only allows you to provide high-level information
about components / execution nodes and to generate the low-level
information needed for execution automatically. The algorithms that
would do that would, obviously, do it only for the parts of the system
where it is possible. That's why I mentioned the dataflow thing: the
policies will be determined automatically for the parts of the system
where enough high-level information is available, and will require
manual input for the rest. But nowhere do you need to separate between
the two "types" of composition.

>> Or, even better, you add additional semantic to the roles of your
>> elements in your composition to make the choice of where to put delays
>> sensible (with the ability to specify the delays it manually *and* the
>> ability to override the schedule manually in the end).
>
> I don't like to see "delay" being used as a composition primitive, since it
> only makes sense in a subset of the domains where computational composition
> makes sense.
Delay is not a composition primitive, it is part of a certain
computation model. I can't really say whether that model is composable
or not (I honestly have no idea). What I can say, though, is that it
provides a very high-level representation of your computation, in a way
that allows the required execution policies to be generated
automatically.

In other words, *if* your computation fits a Simulink-like model, *then*
you can get a completely automated deployment done for you (i.e. there
is a transformation between the two models). Otherwise, you might have
to do it manually. But since the synchronous model is very successful at
representing quite a lot of things, discarding it right away seems ...
less than optimal.

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Sylvain Joyeux wrote:

> On 04/02/2012 10:34 AM, Herman Bruyninckx wrote:
>>>> Hope this helps to clarify my statements!
>>>
>>> Maybe. Could you give more details about what you call "composability"
>>> ? In your example, you transform (i.e. change) the composition model
>>> (you add and remove elements and links).
>>
>> No, I just add ("compose"). But removing "duplicates" during composition
>> would still fall within my definition of composition.
> If you add an element between B and C, you are first removing a connection
> (B > C) and then adding. But you have to remove first.

Sorry, my example was apparently not completely clear: I would not
_remove_ the computational link between B and C, but add a new
"parallel" computation; hence the existing links are extended. In
practice, this extension might only be possible by first removing the
old one and then adding the new one, yes. But this is an
implementation-level detail that is not relevant at the modelling level.
And also not at the computational composition level, I guess: letting
two "function blocks" use the same shared "input" data instead of only
one does not make any difference in a single-threaded context. The
"removal" operations are only necessary when "Ports" are being used
explicitly.

> This ability to change the composition model made me feel funny during
> the BRICS configuration tool presentation.

Where exactly did that "funny feeling" come from?

>>> My point of view about composability was that you have the ability to
>>> use a composition as an element of another composition (i.e. you would
>>> tell the system to use composition E in place of element B)
>>
>> That is the composability that I see at the "container" level. (Pfff, we
>> are definitely confusing everyone with the overloaden use of all these
>> words...) The difference between composition of "container" composites and
>> "computational" composites is exactly the fact that, in the former, the
>> composition primitives are just adding or replacing, while in the latter,
>> "optimization" is possible by (i) removing "duplicates" (ports,
>> buffers,...), and (ii) changing the computational schedule.
>>
>> There is room for both, I guess. And I think you are supporting already the
>> first very decently in the rock tooling.

> I still don't see room for "separation" here. Optimizations are possible as
> soon as you have enough information to perform them (i.e. either because a
> human being tells you to do so or because you have algorithms that can do it
> for you). I would even go as far as saying that the best model is the one
> where you can mix both cases freely.

That is "best" in terms of minimality of tool support. But not necessarily
best in terms of semantic clarity. I have seen already several cases where
the "tool level guys" make arguments that are implicitly driven by their
knowledge about how their tool can support the various semantically
different concepts. That's good, at the tool level, but not necessarily at
the modelling level. All depends on to what extent the semantics of the
various concepts are really different. And in my view, they _are_ different
for (i) container-level composition, and (ii) computation-level
composition.

> What you want, at the end, is an executable system configuration (dataflow
> meeting constraints, properties set, hierarchy properly constructed, policies
> set, deployments selected). Where the information comes from is irrelevant,
> you just *need* to have this information.

> Making richer models only allows to provide high-level information about
> components / execution nodes and generate the low-level information needed
> for execution automatically.

I only partially agree here:
- I agree that richer models can help to automate the generation.
- I do not agree that this is the _only_ reason to introduce richer models,
because one other major reason is to model semantically different things.

And since container composition and computational composition are relevant
at different phases of the "development process", they _are_ sufficiently
different to me to motivate the introduction of extra model primitives.
This is, of course, not an absolute truth that can be proven or disproven.

> The algorithms that would do that would,
> obviously, do it only for the parts of the system where it is possible.
> That's why I mentioned the dataflow thing: the policies will be determined
> automatically for the parts of the system where enough high-level information
> is available, and will require manual input for the rest. But, nowhere, do
> you need to separate between the two "types" of composition.

I think I do, for the above-mentioned semantic differentiation. You are
always referring to the _implementation_ aspects, which is fine, but a bit
too myopic in the "grand vision of things" :-)

>>> Or, even better, you add additional semantic to the roles of your
>>> elements in your composition to make the choice of where to put delays
>>> sensible (with the ability to specify the delays it manually *and* the
>>> ability to override the schedule manually in the end).
>>
>> I don't like to see "delay" being used as a composition primitive, since it
>> only makes sense in a subset of the domains where computational composition
>> makes sense.
> Delay is not a composition primitive, it is part of a certain computation
> model.

Yes it is, _but_ it is the 'default solution' in the "Simulink" world to
tell the tool behind the screens how to do the computational serialization.
And that's a _hack_, no more and no less!
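
For readers who have not met that trick: the "delay" is a unit-delay block
(z^-1) inserted in a feedback loop, so that the loop computes against the
_previous_ cycle's value and the per-cycle graph becomes acyclic. A toy,
tool-independent sketch of roughly what that boils down to:

  #include <cstdio>

  // Toy illustration only: the controller closes a loop over the plant,
  // and the "delay block" feeds back *last* cycle's plant output, so the
  // per-cycle computation order is controller -> plant -> delay update.
  int main() {
    double delayed_y   = 0.0;   // state of the z^-1 block
    double plant_state = 0.0;
    const double setpoint = 1.0;

    for (int k = 0; k < 5; ++k) {
      double u = 0.5 * (setpoint - delayed_y);   // controller block
      plant_state += 0.1 * u;                    // plant block
      double y = plant_state;                    // measured output
      std::printf("k=%d u=%.3f y=%.3f\n", k, u, y);
      delayed_y = y;                             // update the delay block
    }
    return 0;
  }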

> I can't really say if that model is composable or not (I have honestly
> no idea). What I can say, though, is that it provides a very high-level
> representation of your computation in a way that allows to automatically
> generate the required execution policies.

Yes, but the "delay primitive" can get in the way when you compose multiple
of those models!

> In other words, *if* your computation fits a simulink-like model, *then* you
> can get a completely automated deployment done for you (i.e. there is a
> transformation between the two models). Otherwise, you might have to do it
> manually. But since the synchronous model is very successful at representing
> quite a lot of things, discarding it right away seems ... less than optimal.

I am not at all _discarding_ it! I am just saying that it should not be
taken as _the_ model of choice that fits all use cases.

> Sylvain Joyeux (Dr.Ing.)

Herman

[ANN] FBSched - Function Block Scheduling Component

On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
> On Fri, 30 Mar 2012, Sylvain Joyeux wrote:
>
>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>> [...]
>>> Credits go to Herman for suggesting this distinction.
>> And thank you for showing that there are fundamentally absolutely no
>> distinction ;-)
> You mean that _you_ don't see a distinction, I assume...? :-)
>
>> The "common" computation models (i.e. simulink-like) assume that you
>> have an acyclic graph of computations (you cycle using a delay), and
>> that each components generates one output on each of its outputs each
>> time its triggered. Basically, the ordering is a topological sort over
>> the connection graphs.
> You are mixing mechanism and policy. As most people do. I mean:
> simulink-like tools _choose_ one particular acyclic graph for you. Behind
> your back. And if one doesn't like the one that is chosen by the tool (and
> not being made explicit somewhere) you "fool" the system by adding
> artifical delays. That is a good example of changing the _model_ to let your
> _tool_ do what you wanted it to do with your original model in the first
> place.

You are right, the tool should not hide these decisions! However, the
decision is made for a reason: to solve a causal conflict in computation.

How can one solve this problem for a component architecture? There are
two options (as far as I know):
1) Make the ordering explicit during deployment (see the sketch below).
This requires the designer to know the component architecture's context
and to make the execution order explicit.
2) Leave it up to the framework/OS scheduler to solve the causality.
This is a mistake (in my opinion), since the framework/OS does not know
the semantics of the component composition and thereby chooses a
particular ordering (maybe on the fly). This might give the same
results as with 1) (assuming a smart designer was at work), but could
also result in less predictable timing and/or outputs of the algorithm.
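
For option 1), the mechanism itself can be as simple as the following
(a deliberately framework-free sketch with made-up block names; with RTT
the blocks would be TaskContexts driven through SlaveActivities):

  #include <functional>
  #include <iostream>
  #include <string>
  #include <vector>

  // Option 1: the deployer lists the execution order explicitly.
  // Each "block" is just a named callable here (illustrative only).
  struct Block { std::string name; std::function<void()> update; };

  int main() {
    std::vector<Block> schedule = {
      {"sensor",     []{ std::cout << "read sensor\n"; }},
      {"controller", []{ std::cout << "compute control\n"; }},
      {"actuator",   []{ std::cout << "write actuator\n"; }},
    };

    // One trigger of the composite runs all blocks, in the given order,
    // in the caller's (single) thread.
    for (const Block& b : schedule)
      b.update();
  }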

To conclude: The FBSched should be added to Orocos permanently. :-)

> We are doing our best to avoid these "semantic bugs" in the models we use.
>
>> I'm annoyed at myself, right now, for not having the deployer integrated
>> in Rock, and show how the *current* Rock composition models perfectly
>> allow to use something like fbsched to run compositions (as modelled in
>> Rock) when it makes sense. The only issue is to segregate the components
>> between those that fit the simulink-like component model and those that
>> don't, something that can be done by tagging them with services
>> (Srv::SimulinkComputationModel)
> The tagging is not enough, because there _are_ semantic differences between
> composing "Container" components (read: taskcontext in RTT speak) and
> "Computational" components (read: a lot _more_ than what Simulink offers,
> such as, for example, the possibility to loop N times over one "function
> block" in the computational composition because another "function block"
> changed its computational behaviour).
>

[ANN] FBSched - Function Block Scheduling Component

On Sun, 1 Apr 2012, Robert Wilterdink wrote:

> On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
>> On Fri, 30 Mar 2012, Sylvain Joyeux wrote:
>>
>>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>>> [...]
>>>> Credits go to Herman for suggesting this distinction.
>>> And thank you for showing that there are fundamentally absolutely no
>>> distinction ;-)
>> You mean that _you_ don't see a distinction, I assume...? :-)
>>
>>> The "common" computation models (i.e. simulink-like) assume that you
>>> have an acyclic graph of computations (you cycle using a delay), and
>>> that each components generates one output on each of its outputs each
>>> time its triggered. Basically, the ordering is a topological sort over
>>> the connection graphs.
>> You are mixing mechanism and policy. As most people do. I mean:
>> simulink-like tools _choose_ one particular acyclic graph for you. Behind
>> your back. And if one doesn't like the one that is chosen by the tool (and
>> not being made explicit somewhere) you "fool" the system by adding
>> artifical delays. That is a good example of changing the _model_ to let
>> your
>> _tool_ do what you wanted it to do with your original model in the first
>> place.
>
> You are right, the tool should not hide these decisions! However, the
> decision is made for a reason: to solve a causal conflict in computation.
>
> How can one solve this problem for a component architecture? There are two
> options (as far as I know):
> 1) Make the ordering explicit during deployment.
> This requires the designer to know the component architecture's context and
> make the execution order explicit.
> 2) Leave it up to the framework/OS scheduler to solve the causality.
> This is a mistake (in my opinion), since the framework/OS does not know the
> semantics of the component composition and thereby choses a particular
> ordering (maybe, on the fly).

I agree.

> This might give the same results as with 1)
> (assuming a smart designer was at work), but could also result in less
> predictable timing and/or outputs of the algorithm.
>
> To conclude: The FBSched should be added to Orocos permanently. :-)

Well, strictly speaking, it _is_ already there :-) In the sense that
Markus did not add any extra features to RTT; he "just" wiped some dust
from a longstanding feature that was mostly forgotten, because there is no
good documentation for it. The SlaveActivity feature was introduced by
Peter a long time ago, when we already had these use cases in mind. But
since then, the bulk of the focus in the development (and the applications)
went into the support for robust distributed components and hard realtime
in-process IPC.

Herman

[ANN] FBSched - Function Block Scheduling Component

On 04/01/2012 07:34 PM, Herman Bruyninckx wrote:
> On Sun, 1 Apr 2012, Robert Wilterdink wrote:
>
>> On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
>>> On Fri, 30 Mar 2012, Sylvain Joyeux wrote:
>>>
>>>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>>>> [...]
>>>>> Credits go to Herman for suggesting this distinction.
>>>> And thank you for showing that there are fundamentally absolutely no
>>>> distinction ;-)
>>> You mean that _you_ don't see a distinction, I assume...? :-)
>>>
>>>> The "common" computation models (i.e. simulink-like) assume that you
>>>> have an acyclic graph of computations (you cycle using a delay), and
>>>> that each components generates one output on each of its outputs each
>>>> time its triggered. Basically, the ordering is a topological sort over
>>>> the connection graphs.
>>> You are mixing mechanism and policy. As most people do. I mean:
>>> simulink-like tools _choose_ one particular acyclic graph for you. Behind
>>> your back. And if one doesn't like the one that is chosen by the tool (and
>>> not being made explicit somewhere) you "fool" the system by adding
>>> artifical delays. That is a good example of changing the _model_ to let
>>> your
>>> _tool_ do what you wanted it to do with your original model in the first
>>> place.
>>
>> You are right, the tool should not hide these decisions! However, the
>> decision is made for a reason: to solve a causal conflict in computation.
>>
>> How can one solve this problem for a component architecture? There are two
>> options (as far as I know):
>> 1) Make the ordering explicit during deployment.
>> This requires the designer to know the component architecture's context and
>> make the execution order explicit.
>> 2) Leave it up to the framework/OS scheduler to solve the causality.
>> This is a mistake (in my opinion), since the framework/OS does not know the
>> semantics of the component composition and thereby choses a particular
>> ordering (maybe, on the fly).
>
> I agree.
>
>> This might give the same results as with 1)
>> (assuming a smart designer was at work), but could also result in less
>> predictable timing and/or outputs of the algorithm.
>>
There is a third option:
3) Specify the causality, and let a given policy in the framework
decide on the execution order. For example, this can be done by
specifying where to break the loop within each (possibly hierarchically
composed) composite.

With this last option, at or near _deployment time_, either a manual
policy can specify the order explicitly, or an automatic policy can
determine the order. The automatic policy would construct a DAG out of the
flattened connection information between the computational blocks,
taking into account the specifications on where to break the loop. From
this DAG, an execution schedule can be computed.

By explicitly specifying the points at which you break the loop, the
semantics of the computational block composition become clear.
Note: this is not the same as adding artificial delays.
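
As an untested, purely illustrative sketch of such an automatic policy (block
and connection names are made up): drop the connections that are marked as
loop breaks, and a topological sort of what remains is a valid execution
schedule; if a cycle is left over, the causality specification is incomplete:

  #include <iostream>
  #include <map>
  #include <queue>
  #include <set>
  #include <string>
  #include <utility>
  #include <vector>

  using Edge = std::pair<std::string, std::string>;  // from -> to

  // Build a schedule from the flattened connections, ignoring the
  // connections that the designer marked as "break the loop here".
  std::vector<std::string> make_schedule(const std::vector<std::string>& blocks,
                                         const std::vector<Edge>& connections,
                                         const std::set<Edge>& breaks) {
    std::map<std::string, std::vector<std::string>> succ;
    std::map<std::string, int> indegree;
    for (const auto& b : blocks) indegree[b] = 0;
    for (const auto& e : connections) {
      if (breaks.count(e)) continue;          // broken edge: not a constraint
      succ[e.first].push_back(e.second);
      ++indegree[e.second];
    }
    std::queue<std::string> ready;
    for (const auto& b : blocks)
      if (indegree[b] == 0) ready.push(b);
    std::vector<std::string> schedule;
    while (!ready.empty()) {
      std::string b = ready.front(); ready.pop();
      schedule.push_back(b);
      for (const auto& s : succ[b])
        if (--indegree[s] == 0) ready.push(s);
    }
    if (schedule.size() != blocks.size())
      schedule.clear();   // a cycle remains: the causality spec is incomplete
    return schedule;
  }

  int main() {
    std::vector<std::string> blocks = {"estimator", "controller", "plant_io"};
    std::vector<Edge> conns = {{"estimator", "controller"},
                               {"controller", "plant_io"},
                               {"plant_io", "estimator"}};    // feedback
    std::set<Edge> breaks = {{"plant_io", "estimator"}};       // break here
    for (const auto& b : make_schedule(blocks, conns, breaks))
      std::cout << b << "\n";   // estimator, controller, plant_io
  }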

>> To conclude: The FBSched should be added to Orocos permanently. :-)
FBSched is an easy way to specify an execution schedule of a list of
computational blocks. However, I do not yet consider it a composite:

(1) it only implicitly specifies its member components. A
computational block is specified as a part of the composite by adding it
to the schedule. The effect is that a hierarchical composition
always leads to a hierarchical schedule. The optimal schedule for a
hierarchical composite is, however, not always a hierarchical schedule.

(2) it does not yet deal with promotion of ports: you cannot connect
to a port of FBSched (besides the trigger), you can only connect
directly to ports of the computational blocks inside FBSched. So,
FBSched is quite different from its member computational components.
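
To illustrate what I mean by port promotion (a framework-free sketch with
hypothetical names): the composite exposes ports of its own and merely
forwards them to its member blocks, so that from the outside it looks like
any other computational block:

  #include <iostream>

  // Hypothetical member block with one input and one output "port".
  struct Filter {
    double in = 0.0, out = 0.0;
    void step() { out = 0.5 * in; }
  };

  // A composite that *promotes* the member's ports: external code connects
  // to composite.in / composite.out only, never to the member directly.
  struct FilterComposite {
    double in = 0.0, out = 0.0;
    void step() {
      member.in = in;     // promoted input  -> member input
      member.step();
      out = member.out;   // member output   -> promoted output
    }
  private:
    Filter member;
  };

  int main() {
    FilterComposite c;
    c.in = 4.0;
    c.step();
    std::cout << c.out << "\n";   // prints: 2
  }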

>
> Well, strictly speaking, it _is_ already there :-) In the sense that
> Markus did not add any extra features to RTT; he "just" wiped some dust
> from a longstanding feature that was mostly forgotten, because there is no
> good documentation for it. The SlaveActivity feature was introduced by
> Peter a long time ago, when we had already these use cases in mind. But
> since then, the bulk of the focus in the development (and the applications)
> went into the support for robust distributed components and hard realtime
> in-process IPC.
>
> Herman

[ANN] FBSched - Function Block Scheduling Component

On Mon, 2 Apr 2012, Erwin Aertbelien wrote:

>
>
> On 04/01/2012 07:34 PM, Herman Bruyninckx wrote:
>> On Sun, 1 Apr 2012, Robert Wilterdink wrote:
>>
>>> On 03/30/2012 08:13 PM, Herman Bruyninckx wrote:
>>>> On Fri, 30 Mar 2012, Sylvain Joyeux wrote:
>>>>
>>>>> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>>>>>> [...]
>>>>>> Credits go to Herman for suggesting this distinction.
>>>>> And thank you for showing that there are fundamentally absolutely no
>>>>> distinction ;-)
>>>> You mean that _you_ don't see a distinction, I assume...? :-)
>>>>
>>>>> The "common" computation models (i.e. simulink-like) assume that you
>>>>> have an acyclic graph of computations (you cycle using a delay), and
>>>>> that each components generates one output on each of its outputs each
>>>>> time its triggered. Basically, the ordering is a topological sort over
>>>>> the connection graphs.
>>>> You are mixing mechanism and policy. As most people do. I mean:
>>>> simulink-like tools _choose_ one particular acyclic graph for you. Behind
>>>> your back. And if one doesn't like the one that is chosen by the tool
>>>> (and
>>>> not being made explicit somewhere) you "fool" the system by adding
>>>> artifical delays. That is a good example of changing the _model_ to let
>>>> your
>>>> _tool_ do what you wanted it to do with your original model in the first
>>>> place.
>>>
>>> You are right, the tool should not hide these decisions! However, the
>>> decision is made for a reason: to solve a causal conflict in computation.
>>>
>>> How can one solve this problem for a component architecture? There are two
>>> options (as far as I know):
>>> 1) Make the ordering explicit during deployment.
>>> This requires the designer to know the component architecture's context
>>> and
>>> make the execution order explicit.
>>> 2) Leave it up to the framework/OS scheduler to solve the causality.
>>> This is a mistake (in my opinion), since the framework/OS does not know
>>> the
>>> semantics of the component composition and thereby choses a particular
>>> ordering (maybe, on the fly).
>>
>> I agree.
>>
>>> This might give the same results as with 1)
>>> (assuming a smart designer was at work), but could also result in less
>>> predictable timing and/or outputs of the algorithm.
>>>
> There is a third option:
> 3) Specifying the causality, and let a given policy in the framework
> decide on the execution order. For example, this can be done by specifying
> where to break the loop within each (possibly hierarchically composed)
> composite.

This is the way of using "declarative" specifications, which represent
"what" is desired, and not "how" it should be realised. This is _always_ a
good option to have, and in some domains (like what Robert mentioned) the
domain knowledge is already far enough advanced to allow this. Good!
But in other domains, looking for purely declarative specifications is
still some sort of holy grail :-) For example, in general probabilistic
graphical models, only a sub-optimal automation of the computational
causality determination can be achieved.

> With this last option, at or near _deployment time_, a manual policy can
> specify explicitly the order or an automatic policy can determine the order.
> The automatic policy would construct a DAG out of the flattened connection
> information between the computational blocks, taking into account the
> specifications on where to break the loop. From this DAG, an an execution
> schedule can be computed.

This is indeed the theory behind every kind of automatic causality
determination :-) For completeness: polytrees are also useful
representations <http://en.wikipedia.org/wiki/Polytree>, somewhere between
trees and DAGs.

> By explicitly specifying the points at which you break the loop, the
> semantics of the computational block composition become clear.
> Note: this is not the same as adding artificial delays.

It should not be! The "artificial delay" thing is a "dirty hack" to make a tree
out of a graph :-)

>>> To conclude: The FBSched should be added to Orocos permanently. :-)
> FBSched is an easy way to specify an execution schedule of a list of
> computational blocks. However, I do not yet consider it a composite:
>
> (1) it only implicitly specifies its member components.

Indeed. I also want to see an _explicit_ representation.

> A computational
> block is specified as a part of the composite by adding it to the schedule.
> The effect is that a hierarchical composition
> always leads to a hierarchical schedule. The optimal schedule for a
> hierarchical composite is however not always an hierarchical schedule.

In theory, you are probably right. In practice, it might be a very good
fall-back solution. We urgently need to make this discussion more concrete,
by identifying use cases.

> (2) it does not yet deal with promotion of ports: you cannot connect to a
> port of FBSched (besides the trigger), you can only connect directly to ports
> of the computational blocks inside FBSched. So, FBSched is quite different
> from its member computational components.

Herman

[ANN] FBSched - Function Block Scheduling Component

On 03/30/2012 04:49 PM, Sylvain Joyeux wrote:
> On 03/30/2012 02:59 PM, Markus Klotzbuecher wrote:
>> [...]
>> Credits go to Herman for suggesting this distinction.
> And thank you for showing that there are fundamentally absolutely no
> distinction ;-)
>
> The "common" computation models (i.e. simulink-like) assume that you
> have an acyclic graph of computations (you cycle using a delay), and
> that each components generates one output on each of its outputs each
> time its triggered. Basically, the ordering is a topological sort over
> the connection graphs.
>
> I'm annoyed at myself, right now, for not having the deployer integrated
> in Rock, and show how the *current* Rock composition models perfectly
> allow to use something like fbsched to run compositions (as modelled in
> Rock) when it makes sense. The only issue is to segregate the components
> between those that fit the simulink-like component model and those that
> don't, something that can be done by tagging them with services
> (Srv::SimulinkComputationModel)

Or have a subtype of Composition that would be FunctionalComposition

[ANN] FBSched - Function Block Scheduling Component

On Mar 30, 2012, at 08:59 , Markus Klotzbuecher wrote:

> [...]
> In a next version I plan to extend this component to measure
> min/avg/max duration of each function block and of the composite to
> facilitate optimization/debugging.
>
> Comments/Feedback welcome as usual!

Nice example, Markus.

We do the exact same thing, but use an RTT state machine in a coordinator component to do the ordering. We also provide the min/max/avg type of diagnostic data.

This paradigm crops up over and over and over again … the majority of our coordinators are exactly this; coordinating a sequence of computational components within a single thread of execution. It's definitely a type of composition. For our more complicated cases, the state machine provides more advanced control (a Lua script or similar would do the same), but many of our basic coordinators could use just this paradigm.

Cheers
S

[ANN] FBSched - Function Block Scheduling Component

On Fri, Mar 30, 2012 at 09:09:24AM -0400, S Roderick wrote:
> On Mar 30, 2012, at 08:59 , Markus Klotzbuecher wrote:
>
> > [...]
>
> Nice example, Markus.

thanks!

> We do the exact same thing, but use an RTT state machine in a
> coordinator component to do the ordering. We also provide the
> min/max/avg type of diagnostic data.

I think it's essential for understanding the RT behaviour...
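
Something along these lines is what I have in mind (a rough, framework-free
sketch, not the final interface): time each block's update and keep
min/avg/max, with std::chrono standing in here for whatever clock would be
used in the real-time case:

  #include <algorithm>
  #include <chrono>
  #include <cstdio>
  #include <limits>
  #include <thread>

  // Rough sketch of per-block duration statistics (min/avg/max).
  struct DurationStats {
    double min_us = std::numeric_limits<double>::max();
    double max_us = 0.0;
    double sum_us = 0.0;
    unsigned long count = 0;

    void sample(double us) {
      min_us = std::min(min_us, us);
      max_us = std::max(max_us, us);
      sum_us += us;
      ++count;
    }
    double avg() const { return count ? sum_us / count : 0.0; }
  };

  template <typename F>
  void timed_update(F&& block_update, DurationStats& stats) {
    auto t0 = std::chrono::steady_clock::now();
    block_update();                       // e.g. trigger one slave block
    auto t1 = std::chrono::steady_clock::now();
    stats.sample(std::chrono::duration<double, std::micro>(t1 - t0).count());
  }

  int main() {
    DurationStats stats;
    for (int i = 0; i < 100; ++i)
      timed_update([]{ std::this_thread::sleep_for(std::chrono::microseconds(50)); },
                   stats);
    std::printf("min=%.1fus avg=%.1fus max=%.1fus n=%lu\n",
                stats.min_us, stats.avg(), stats.max_us, stats.count);
  }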

> This paradigm crops up over and over and over again … the majority
> of our coordinators are exactly this; coordinating a sequence of
> computational components within a single thread of execution. It's
> definitely a type of composition. For our more complicated cases,
> the state machine provides more advanced control (a Lua script or
> similar would do the same), but many of our basic coordinators could
> use just this paradigm.

Agreed!

Markus

[ANN] FBSched - Function Block Scheduling Component

On Fri, 30 Mar 2012, S Roderick wrote:

> On Mar 30, 2012, at 08:59 , Markus Klotzbuecher wrote:
>
>> [...]
>
> Nice example, Markus.
>
> We do the exact same thing, but use an RTT state machine in a coordinator
> component to do the ordering.

Our work was to a large extent triggered exactly to avoid this semantic
mixing of "behaviour coordination" (via state machines) and "execution
ordering" (via a "scheduler"). The former is a (discrete!) _activity_, the
latter is a _data structure_ that is used by the execution activity
provided by the "os platform".

> We also provide the min/max/avg type of diagnostic data.
>
> This paradigm crops up over and over and over again … the majority of our
> coordinators are exactly this; coordinating a sequence of computational
> components within a single thread of execution. It's definitely a type of
> composition. For our more complicated cases, the state machine provides
> more advanced control (a Lua script or similar would do the same), but
> many of our basic coordinators could use just this paradigm.

My experience is exactly the same, but only for "centralised control" use
cases. It's only when you try to build systems-of-systems that the
"behaviour coordination" (in the form of pure event processors in a set of
state machines) proves its value. (Because it is clear that behaviour
coordination over a distributed system of more or less autonomous "agents"
cannot be done by a scheduler.)

> Cheers
> S

Herman

[ANN] FBSched - Function Block Scheduling Component

On Fri, Mar 30, 2012 at 2:59 PM, Markus Klotzbuecher <
markus [dot] klotzbuecher [..] ...> wrote:

> [...]

Now that's what I call a fast response!

...and as an unrelated note, I hadn't seen a goto statement in C++ code in
a looong time!

[ANN] FBSched - Function Block Scheduling Component

On Fri, Mar 30, 2012 at 03:11:39PM +0200, Adolfo Rodríguez Tsouroukdissian wrote:
>
>
> On Fri, Mar 30, 2012 at 2:59 PM, Markus Klotzbuecher <
> markus [dot] klotzbuecher [..] ...> wrote:
>
> [...]
>
> Now that's what I call a fast response!.
>
> ...and as an unrelated note, I hadn't seen a goto statement in C++ code in a
> looong time!.

Hehe, if you want to see some more, go take a look at ocl/lua/rtt.cpp
:-)

Markus

[ANN] FBSched - Function Block Scheduling Component

On Mar 30, 2012, at 09:11 , Adolfo Rodríguez Tsouroukdissian wrote:

>
>
> On Fri, Mar 30, 2012 at 2:59 PM, Markus Klotzbuecher <markus [dot] klotzbuecher [..] ...> wrote:
> [...]
>
> Now that's what I call a fast response!.
>
> ...and as an unrelated note, I hadn't seen a goto statement in C++ code in a looong time!.

Poor man's real-time exceptions … we use them frequently. :-)
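
For anyone wondering what that looks like, it is roughly this pattern
(illustrative code only, made-up names): every failure branches to a single
cleanup label instead of throwing, which keeps the failure path
allocation-free and predictable:

  #include <cstdio>

  // Illustrative "poor man's real-time exception" pattern: every failure
  // branches to one cleanup label; no throw, no unwinding, fully predictable.
  struct Device { bool open = false; };

  bool acquire(Device& d)  { d.open = true;  return true; }
  void release(Device& d)  { d.open = false; }
  bool configure(Device&)  { return true;  }
  bool start(Device&)      { return false; /* pretend this step fails */ }

  int run_cycle() {
    Device dev;
    int ret = -1;

    if (!acquire(dev))    goto out;
    if (!configure(dev))  goto out_release;
    if (!start(dev))      goto out_release;

    ret = 0;              // everything succeeded

  out_release:
    release(dev);
  out:
    return ret;
  }

  int main() { std::printf("run_cycle: %d\n", run_cycle()); }
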
S

[ANN] FBSched - Function Block Scheduling Component

On Fri, Mar 30, 2012 at 09:16:29AM -0400, S Roderick wrote:
> On Mar 30, 2012, at 09:11 , Adolfo Rodríguez Tsouroukdissian wrote:
>
>
>
>
> On Fri, Mar 30, 2012 at 2:59 PM, Markus Klotzbuecher <
> markus [dot] klotzbuecher [..] ...> wrote:
>
> [...]
>
> Now that's what I call a fast response!.
>
> ...and as an unrelated note, I hadn't seen a goto statement in C++ code in
> a looong time!.
>
>
> Poor man's real-time exceptions … we use them frequently. :-)

s/poor/smart/ ;-)

Markus

Orocos history page translated in Finnish

FYI.

Michael Sirola (from Tampere, Finland) has made a Finnish translation of
the Orocos history page: <http://www.designcontest.com/show/orocos-content-fi>.

World domination has started! :-)

Thanks Michael!

Herman

[ANN] FBSched - Function Block Scheduling Component

2012/3/30 S Roderick <kiwi [dot] net [..] ...>:
> On Mar 30, 2012, at 09:11 , Adolfo Rodríguez Tsouroukdissian wrote:
>
>
>
> On Fri, Mar 30, 2012 at 2:59 PM, Markus Klotzbuecher
> <markus [dot] klotzbuecher [..] ...> wrote:
>>
>> [...]
>> Code is here:
>>
>> https://github.com/kmarkus/fbsched

I will have exactly this need for my apps in a couple of days :D, so I
think I'll take this.
Can't wait to have the timing stats ^^

>> [...]
>
>
> Now that's what I call a fast response!.
>
> ...and as an unrelated note, I hadn't seen a goto statement in C++ code in a
> looong time!.
>
>
> Poor man's real-time exceptions … we use them frequently. :-)
> S
>
yep !
