Draft Next Generation State Machines

Dear List,

Here is a summary of the feedback from the last discussion, including
my ideas for the next generation of OROCOS state machines. Please
understand this less as a proposal and more as a summary of work in
progress.

User visible changes
--------------------

The main goal is to be more in line with UML2 state machines. Below is
a summary of the (potential) changes which will be most visible to a
user of state machines (note: some of these changes mean things will
be done differently, not necessarily better. The benefit is that using
state machines will be more straightforward for anyone familiar with
UML).

* there will be only one mode that reacts to all kinds of events:
changing conditions, timers, etc. If a certain event is not needed,
it is simply not used in the state machine description.

* Generally, any event will be able to interrupt a running "run"
program if it triggers a transition and the event is not deferred.

* the run program will be executed only once; when it completes, a
CompletionEvent will be generated which can be used to trigger a
subsequent transition.

* TimeEvents: allow a transition to be triggered at/after a certain
amount of time.

* the handle program will go away. Instead of having a program that
gets executed if no transition triggers, UML defines a default
transition triggered by an AnyReceiveEvent.

* preconditions will go away. Choices with guard conditions can be
used to reduce the number of guarded outgoing transitions from states.

* change execution order of programs when transitioning, see here [1]

* Self transitions will cause exit/enter to be run.

* Support for composite states: this allows a state to contain a
state machine, possibly with history states (which allow execution to
continue in a composite state at the point where it was interrupted
by a higher-level transition)

* Support for AND states / orthogonal regions: two independent state
machines can be executed concurrently.

* Support for junctions and choices: static and dynamic branches

* deferred triggers

* debugging infrastructure: it should be possible to
start/stop/pause/reset/getState/step, and maybe to trace, break on a
state, etc. Debugging features could allow a GUI such as [2] to
connect to the component containing the state machine for remote
debugging ... :-)

* QoS / Non functional properties

A state machine should be able to support dealing with
non-functional properties:

For instance, a state machine should be able to generate (or support
the generation of) data that can be used as quality measures. This
could for example be the jitter of TimeEvents, the number of
transitions in a certain period, the mean number of supersteps, etc.

One step further, an OROCOS state machine description could allow
MARTE-style non-functional property annotations. Such annotations
could then be used to make assertions or impose
constraints. Hypothetical pseudocode example:

"if mean number of events in StateMachine queue > X -> generate EventY"

* Syntax: I think the current state machine syntax is well defined
and, apart from the changes above, should remain the same.
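To make the run-once / CompletionEvent semantics above concrete, here is a small illustrative Python sketch. This is not Orocos code; all class, method, and event names are made up for illustration:

```python
# Illustrative sketch (not Orocos code): a state's run program executes
# exactly once; on completion the core emits a CompletionEvent, which
# can trigger a subsequent transition.

class State:
    def __init__(self, name, run=None, transitions=None):
        self.name = name
        self.run = run                        # callable executed once on entry
        self.transitions = transitions or {}  # event name -> target state name

class Machine:
    def __init__(self, states, initial):
        self.states = {s.name: s for s in states}
        self.current = self.states[initial]
        self.trace = []
        self._enter(self.current)

    def _enter(self, state):
        self.trace.append(state.name)
        if state.run:
            state.run()
        # run program finished -> the core generates a CompletionEvent
        self.dispatch("CompletionEvent")

    def dispatch(self, event):
        target = self.current.transitions.get(event)
        if target is not None:
            self.current = self.states[target]
            self._enter(self.current)

m = Machine(
    states=[
        State("Init", run=lambda: None,
              transitions={"CompletionEvent": "Running"}),
        State("Running"),
    ],
    initial="Init",
)
```

A self transition on "CompletionEvent" would reproduce the current periodically restarted run program, as discussed further down in the thread.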

Implementation details
----------------------

Here are some thoughts regarding the implementation:

* How to modularize the state machine implementation: Herman suggested
a binary interface with an object and a component level. This means
(if I understand correctly) separating a generic real-time state
machine implementation (the synchronous parts) from the OROCOS/real
world specific component-level details (the asynchronous parts). I
agree; for instance, this will allow the state machine to be reused
in other places.

At object level the following will be dealt with:

the raw state machine functionality (the "mechanics"):
- creating
- stepping
- querying the current states
- pausing
- ...

At component level the following will be dealt with:
- handling of TimeEvents / dealing with time
- handling of ChangeEvents
- handling of non-functional properties / QoS
- specification of the algorithm for dequeuing events

So to some extent the component level functionality could be seen
as plug-ins for the semantic variation points not implemented at
the object level.
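As an illustration of this object/component split, a minimal Python sketch. The names here are purely hypothetical; the real interface is still to be designed:

```python
# Hypothetical sketch of the object/component split: the object level
# owns only the synchronous "mechanics"; component-level concerns
# (event queuing and the dequeuing policy) are layered on from outside.

class CoreStateMachine:
    """Object level: create, step, query, pause -- no notion of time or I/O."""
    def __init__(self, transitions, initial):
        self.transitions = transitions      # (state, event) -> next state
        self.state = initial
        self.paused = False

    def step(self, event):
        if self.paused:
            return self.state
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

class ComponentWrapper:
    """Component level: the event queue and dequeuing algorithm live here."""
    def __init__(self, core):
        self.core = core
        self.queue = []

    def post(self, event):
        self.queue.append(event)

    def process(self):
        # FIFO dequeuing as one possible plug-in policy
        while self.queue:
            self.core.step(self.queue.pop(0))

sm = ComponentWrapper(CoreStateMachine({("Idle", "go"): "Busy"}, "Idle"))
sm.post("go")
sm.process()
```

TimeEvents and ChangeEvents would likewise be generated at the wrapper level and fed to the core as plain events.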

* Parsing: the obvious way would be to modify the boost::spirit based
StateMachine parser. The less obvious but possibly more
maintainable and extensible way would be to use ANTLR to generate a
C/C++ parser. This would be a nice model-based approach and could
facilitate future syntax extensions or the addition of parsers for
"foreign" state machine formats. I'm not sure, though, how deeply the
parsing of state machines and of OROCOS program scripts is tied
together, and therefore whether this is feasible.

Open Issues
-----------

* Who will implement all of this? ;-)

* Best way to implement ChangeEvents - polling vs synchronous
evaluation

* QoS/NFP needs more investigation

* Best practices need to be found

That's about it... Comments are of course welcome!

Best regards
Markus

[1] http://people.mech.kuleuven.be/~s0202242/#x1-110002.5.1
[2] http://unimod.sourceforge.net/intro.html

Draft Next Generation State Machines

Hi Markus,

On Fri, Oct 10, 2008 at 1:07 PM, Markus Klotzbücher
<markus [dot] klotzbuecher [..] ...> wrote:
> Here is a summary of the feedback of the last discussion incl. my
> ideas for the next generation OROCOS state machines. Please understand
> this less as a proposal but more as summary of work in progress.
>
> User visible changes
> --------------------
>
> The main goal is to be more in-line with UML2 state machines. Below is
> a summary of the (potential) changes which will be most visible to a
> user of state machines (note: some of these changes result in things
> done differently, not necessarily better. The benefit is that using
> state machines will be more straightforward for anyone familiar with
> UML).
>
> * there will be only one mode that reacts to all kinds of events,
> changing conditions, timers.... If a certain event is not needed,
> it is simply not used in the state machine description.

What do you mean by this last sentence? It seems rather trivial to
me that events which are not relevant for the SM are not used in the
SM description (and isn't this currently also the case?)

> * Generally, any event will be able to interrupt a running "run"
> program if it triggers a transition and the event is not deferred.

Can you elaborate on this with a piece of example code? As Peter
explained in a previous mail, one should make a distinction between
sync/async-preempt/non-preempt etc. at the logical state machine
level versus at the TaskContext level. Therefore, I think it's also
important to explain which level you are talking about, and what the
implications (at the execution level) are for Periodic/NonPeriodic
TaskContexts.

> * the run program will be executed only once, and when it completes a
> CompletionEvent will be generated which can be used to trigger a
> subsequent transition.

This means that a while loop in the run statement will be necessary if
you want to obtain behaviour similar to before?
(And will this while loop execute
* continuously in the case of a NonPeriodic TaskContext
* periodically (if the loop completes) in the case of a Periodic TaskContext
until an event comes in, as described in the previous bullet?)

> * TimeEvents: allow for the possibility to trigger a transition
> at/after a certain amount of time.
>
> * the handle will go away. Instead of having a program that gets
> executed if no transition triggers, UML defines a default
> transition triggered by an AnyReceiveEvent.

So will it be the execution logic that fires this event?

> * preconditions will go away. Choices with guard conditions can be
> used to reduce the number of outgoing guards from states.
>
> * change execution order of programs when transitioning, see here [1]
>
> * Self transitions will cause exit/enter to be run.
>
> * Support of composite states: this allows a state to contain a state
> machine, possibly history states (allow to continue execution in a
> composite state at the point it was interrupted by a higher level
> transition)
>
> * Support of AND states / orthogonal regions: two independent state
> machines can be executed concurrently.

How should/would this be implemented at the thread level?

> * Support of junctions and choices: static and dynamic branches
>
> * deferred triggers

Can you explain this in somewhat more detail (or refer to a previous
mail if I already forgot that :-))?

> * debugging infrastructure: it should be able to
> start/stop/pause/reset/getState/step/, maybe (trace, break on a
> state, ...). Debugging features could allow a GUI such as [2] to
> connect to the component with the state machine for remote
> debugging ... :-)
>
> * QoS / Non functional properties
>
> A state machine should be able to support dealing with
> non-functional properties:
>
> For instance a state machine should be able to generate (or support
> generation) of data that can be used as quality measures. This
> could for example be the jitter of TimeEvents, the amount of
> transitions in a certain period, mean number of supersteps, etc.
>
> One step further, an OROCOS state machine description could allow
> MARTE-style non-functional property annotations. Such annotations
> then could be used to make assertions or impose
> constraints. Hypothetical pseudocode example:
>
> "if mean number of events in StateMachine queue > X -> generate EventY"
>
> * Syntax: I think the current state machine syntax is well defined
> and besides the above changes should remain the same.
>
>
> Implementation details
> ----------------------
>
> Here are some thoughts regarding the implementation:
>
> * How modularize the state machine implementation: Herman suggested a
> binary interface: object and component level. This means (if I
> understand correctly) separating a generic real-time state machine
> implementation (the synchronous parts) from the OROCOS/real world
> specific component level details (the asynchronous parts). I agree,
> for instance this will allow the state machine to be reused in
> other places.

Not sure I understand this completely, but this somehow corresponds to
my first remark above about expressing behaviour at the SM level and
at the TaskContext level, right?

> At object level the following will be dealt with:
>
> the raw state machine functionality (the "mechanics"):
> - creating
> - stepping
> - querying the current states
> - pausing
> - ...
>
> At component level the following will be dealt with
> - handling of TimeEvents / dealing with time
> - handling of ChangeEvents
> - handling of non-functional properties / QoS
> - specification of the algorithm for dequeuing events
>
> So to some extent the component level functionality could be seen
> as plug-ins for the semantic variation points not implemented at
> the object level.

Talking about semantic variation points (SVPs): can you describe
*which* SVPs of the UML spec the next-generation Orocos state machines
as described above fill in, and *how* you fill them in?

> * Parsing: the obvious way would be to modify the boost:spirit based
> StateMachine parser. The less obvious but possibly more
> maintainable and extensible way could be to use ANTLR to generate a
> C/C++ parser. This would be a nice Model Based approach and could
> facilitate future syntax extensions or the addition of parsers for
> "foreign" state machine formats. I'm not sure though how deeply
> parsing of state machines and OROCOS program scripts is tied
> together and if this is feasible therefore.
>
>
> Open Issues
> -----------
>
> * Who will implement all of this? ;-)

Is that a rhetorical question? :-)))

> * Best way to implement ChangeEvents - polling vs synchronous
> evaluation
>
> * QoS/NFP needs more investigation
>
> * Best practices need to be found
>
> That's about it... Comment are of course welcome!

Thanks for your efforts!

Klaas

> [1] http://people.mech.kuleuven.be/~s0202242/#x1-110002.5.1
> [3] http://unimod.sourceforge.net/intro.html

Draft Next Generation State Machines

Hi Klaas,

On Mon, Oct 13, 2008 at 10:26:48AM +0200, Klaas Gadeyne wrote:
> On Fri, Oct 10, 2008 at 1:07 PM, Markus Klotzbücher
> <markus [dot] klotzbuecher [..] ...> wrote:
> > Here is a summary of the feedback of the last discussion incl. my
> > ideas for the next generation OROCOS state machines. Please understand
> > this less as a proposal but more as summary of work in progress.
> >
> > User visible changes
> > --------------------
> >
> > The main goal is to be more in-line with UML2 state machines. Below is
> > a summary of the (potential) changes which will be most visible to a
> > user of state machines (note: some of these changes result in things
> > done differently, not necessarily better. The benefit is that using
> > state machines will be more straightforward for anyone familiar with
> > UML).
> >
> > * there will be only one mode that reacts to all kinds of events,
> > changing conditions, timers.... If a certain event is not needed,
> > it is simply not used in the state machine description.
>
> What do you mean with this last sentence? It seems rather trivial to
> me that events which are not relevant for the SM are not used in the
> SM description (and this is currently also the case?)

I meant to point out two things that will change, namely that a) an
event will have a broader meaning for state machines (e.g. there will
be event types which are not OROCOS events), and b) that there will be
no means to select only a subset of events that are allowed to trigger
transitions (whereas the current reactive mode allows only OROCOS
Events to trigger transitions). I agree that the last sentence is not
very helpful.

> > * Generally, any event will be able to interrupt a running "run"
> > program if it triggers a transition and the event is not deferred.
>
> Can you elaborate on this using a piece of example code. As Peter
> explained in a previous mail, one should make a distinction between
> syn/asyn-preempt/non-preempt-etc on a logical state machine level
> versus on the taskContext level. Therefor, I think it's also
> important to explain which level you are talking about, and what the
> implications (on a execution level) are for Periodic/NonPeriodic
> TaskContexts.

Here I'm strictly talking about the logical state machine
level. Consider the following snippet:

StateMachine SM1 {
    state waitForResponse {
        transitions {
            if TimeEventAfter(500) // 500 ms
                select TimeOut
            if AnyReceiveEvent()
                select ResponseReceived
        }
        run {
            while true {
                if ResponseCondition == true
                    break;
            }
        }
    }

    state TimeOut {
        // do something
    }

    state ResponseReceived {
        // continue work...
        ...
    }
}

Use case: timed out waiting for a condition.

The state machine will loop in the run() program until either the
TimeEvent "after 500 ms" triggers the transition to TimeOut, or the
AnyReceiveEvent (which catches any event not specified in other
transitions) is triggered by a CompletionEvent, which in turn is
generated automatically when the run program finishes because the
polled condition became true.

This sounds nice, but as you suggested the real question of course is
how this maps to OROCOS. See below.
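The intended dispatch behaviour of the snippet above can be sketched as follows. This is illustrative Python under assumed semantics, not Orocos code, and the event names are made up:

```python
# Illustrative simulation of the two ways waitForResponse can be left:
# a TimeEvent fired by the execution environment, or a CompletionEvent
# generated when the run program's polled condition becomes true
# (the latter is caught here by the AnyReceiveEvent fallback).

def wait_for_response(events):
    """events: event names as they arrive, in order."""
    for ev in events:
        if ev == "TimeEventAfter500":
            return "TimeOut"
        if ev == "CompletionEvent":          # matched by AnyReceiveEvent()
            return "ResponseReceived"
    return "waitForResponse"                 # still waiting

wait_for_response(["TimeEventAfter500"])     # timed out
wait_for_response(["CompletionEvent"])       # condition became true
```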

> > * the run program will be executed only once, and when it completes a
> > CompletionEvent will be generated which can be used to trigger a
> > subsequent transition.
>
> This means that a while loop in the run statement will be necessary if
> you want to obtain similar behaviour as before?

Yes.

> (and this while loop will execute
> * continuously in the case of a NonPeriodic taskContext

I think this would be the right thing to do for a NonPeriodic
TaskContext.

> * periodically (if the loop completes) in the case of a Periodic TaskContext
> until an event comes in as described in the previous bullet?

This seems right too, but it would result in very different behavior
of the SM depending on the Activity used. I think that would be
bad. For the sake of consistency it might be better to just let it run
until it finishes (or is interrupted, as in the previous bullet). The
user is then responsible for designing the run program so that it
won't impact other TCs driven by the same PeriodicActivity (e.g. by
keeping it short; a best practice?). Behavior similar to the current
implementation (run being restarted periodically) could be obtained by
defining a self transition triggered by a CompletionEvent.

> > * TimeEvents: allow for the possibility to trigger a transition
> > at/after a certain amount of time.
> >
> > * the handle will go away. Instead of having a program that gets
> > executed if no transition triggers, UML defines a default
> > transition triggered by an AnyReceiveEvent.
>
> So will it be the execution logic that fires this event?

Yes, although this "StateMachine event" does not (necessarily)
correspond to an OROCOS event. It might never be visible outside the
state machine core.

> > * preconditions will go away. Choices with guard conditions can be
> > used to reduce the number of outgoing guards from states.
> >
> > * change execution order of programs when transitioning, see here [1]
> >
> > * Self transitions will cause exit/enter to be run.
> >
> > * Support of composite states: this allows a state to contain a state
> > machine, possibly history states (allow to continue execution in a
> > composite state at the point it was interrupted by a higher level
> > transition)
> >
> > * Support of AND states / orthogonal regions: two independent state
> > machines can be executed concurrently.
>
> How [should this be implemented]/[would you implement this] on a thread level?

I think the most elegant solution would be to implement this without
complicating the state machine core with such concurrency issues. Note
that (according to UML2) run-to-completion steps are applied to the
entire state machine, not to individual orthogonal regions. This means
that the only parts of two or more orthogonal composite states that
can really run concurrently are their respective run programs. With
deep history states and TimeEvents I think it should be quite easy to
transform (by pure syntax transformation) two or more orthogonal
regions into a new state machine which, after a certain amount of time
x, transitions via TimeEvents (reschedules) to the history state of
the next composite state and thereby switches to the next run
program. After the last state has executed its run program for its
"time slice", it transitions back to the history state of the first,
and so on, until all states have finished.

This would allow the state machine core to be kept clean of such
"green state machine threads".
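A minimal sketch of this time-sliced transformation, using Python generators as stand-ins for resumable run programs (illustrative only; each yield plays the role of a deep-history resume point):

```python
# Sketch of the proposed syntactic transformation (assumed behaviour):
# the run programs of two orthogonal regions are executed round-robin,
# each resuming from its "history" point after every time slice.

def run_regions(regions, slice_steps=1):
    """regions: list of generators; each yield marks a resumable point."""
    order = []
    active = list(regions)
    while active:
        for gen in list(active):
            for _ in range(slice_steps):     # one "time slice" per region
                try:
                    order.append(next(gen))
                except StopIteration:        # region's run program finished
                    active.remove(gen)
                    break
    return order

def region(name, steps):
    for i in range(steps):
        yield f"{name}{i}"

trace = run_regions([region("A", 2), region("B", 2)])
# the two regions' steps come out interleaved
```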

> > * Support of junctions and choices: static and dynamic branches
> >
> > * deferred triggers
>
> Can you explain this somewhat more (or refer to a previous mail if I
> already forgot that :-))?

Sure! This mechanism allows events to be specified which are not
allowed to trigger a transition from the state that defers
them. Instead they simply remain in the event queue until a state is
reached that does not defer them.
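A sketch of this deferral mechanism (assumed semantics, hypothetical names; whether the deferred events live in the main queue or a separate one is left open, as Klaas asks below in the thread):

```python
# Sketch of event deferral: an event deferred by the current state does
# not trigger a transition; it stays queued until a state that does not
# defer it is entered, at which point it is re-examined.

class DeferringMachine:
    def __init__(self, transitions, deferred, initial):
        self.transitions = transitions       # (state, event) -> next state
        self.deferred = deferred             # state -> set of deferred events
        self.state = initial
        self.queue = []

    def dispatch(self, event):
        self.queue.append(event)
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for ev in list(self.queue):
                if ev in self.deferred.get(self.state, set()):
                    continue                 # deferred: stays in the queue
                self.queue.remove(ev)
                target = self.transitions.get((self.state, ev))
                if target:
                    self.state = target
                progress = True
                break

sm = DeferringMachine(
    transitions={("A", "go"): "B", ("B", "late"): "C"},
    deferred={"A": {"late"}},
    initial="A",
)
sm.dispatch("late")   # deferred in A, stays queued
sm.dispatch("go")     # A -> B; the queued "late" then fires: B -> C
```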

...

> > Implementation details
> > ----------------------
> >
> > Here are some thoughts regarding the implementation:
> >
> > * How modularize the state machine implementation: Herman suggested a
> > binary interface: object and component level. This means (if I
> > understand correctly) separating a generic real-time state machine
> > implementation (the synchronous parts) from the OROCOS/real world
> > specific component level details (the asynchronous parts). I agree,
> > for instance this will allow the state machine to be reused in
> > other places.
>
> Not sure I understand this completely, but this somehow corresponds to
> my first remark above about expressing behaviour on a SM and on a
> TaskContext level, right?

Yes exactly.

> > At object level the following will be dealt with:
> >
> > the raw state machine functionality (the "mechanics"):
> > - creating
> > - stepping
> > - querying the current states
> > - pausing
> > - ...
> >
> > At component level the following will be dealt with
> > - handling of TimeEvents / dealing with time
> > - handling of ChangeEvents
> > - handling of non-functional properties / QoS
> > - specification of the algorithm for dequeuing events
> >
> > So to some extent the component level functionality could be seen
> > as plug-ins for the semantic variation points not implemented at
> > the object level.
>
> Talking about semantic variation points (SVP): Can you describe
> *which* SVP of the UML spec you fill in with the orocos state machines
> new generation as you describe them above, and *how* you fill them in.

Well, actually I didn't (at least not intentionally) describe any SVPs
above, as I'm still investigating this topic. That said, I think that
for most of them (here [1] is a full list) it should be fairly easy to
come up with a reasonable default. The exception is of course the
ChangeEvent, which is tricky. I'm agreeing more and more with Peter
that it will be almost impossible to find an efficient
implementation. In fact, Peter's suggestion could become a first "best
practice": we choose a very simple and low-overhead implementation
(for example, evaluate ChangeConditions once after "entry" has
finished and once before "exit" is called). Any condition that
requires more sophisticated evaluation should be moved out and tied to
an event that triggers the state machine from outside.

If it turns out that there are recurring patterns in external
ChangeEvent generation, such implementations could likewise be moved
back in again.
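A sketch of this entry/exit polling policy (assumed semantics, hypothetical names):

```python
# Sketch of the low-overhead ChangeEvent policy described above: change
# conditions are evaluated only once after "entry" has run and once
# before "exit" runs, instead of being continuously polled.

class ChangePollingState:
    def __init__(self, conditions):
        self.conditions = conditions         # name -> zero-arg predicate
        self.fired = []                      # change events that triggered

    def _evaluate(self):
        for name, cond in self.conditions.items():
            if cond():
                self.fired.append(name)

    def entry(self):
        # ... the entry program would run here ...
        self._evaluate()                     # poll once after entry

    def exit(self):
        self._evaluate()                     # poll once before exit
        # ... the exit program would run here ...

flag = {"set": False}
s = ChangePollingState({"flagSet": lambda: flag["set"]})
s.entry()                                    # condition false: nothing fires
flag["set"] = True                           # changes while in the state
s.exit()                                     # detected only at exit
```

Anything that must react faster than this would, per the best practice above, be generated externally and delivered as an ordinary event.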

> > * Parsing: the obvious way would be to modify the boost:spirit based
> > StateMachine parser. The less obvious but possibly more
> > maintainable and extensible way could be to use ANTLR to generate a
> > C/C++ parser. This would be a nice Model Based approach and could
> > facilitate future syntax extensions or the addition of parsers for
> > "foreign" state machine formats. I'm not sure though how deeply
> > parsing of state machines and OROCOS program scripts is tied
> > together and if this is feasible therefore.
> >
> >
> > Open Issues
> > -----------
> >
> > * Who will implement all of this? ;-)
>
> Is that a rethorical question? :-)))

Yes, absolutely :-)))

> > * Best way to implement ChangeEvents - polling vs synchronous
> > evaluation
> >
> > * QoS/NFP needs more investigation
> >
> > * Best practices need to be found
> >
> > That's about it... Comment are of course welcome!
>
> Thanks for your efforts!

Thanks for your comments!

Best regards
Markus

[1] http://people.mech.kuleuven.be/~s0202242/#x1-200003

--
Orocos-Dev mailing list
Orocos-Dev [..] ...
http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

Draft Next Generation State Machines

On Mon, Oct 13, 2008 at 7:35 PM, Markus Klotzbücher
<markus [dot] klotzbuecher [..] ...> wrote:
> On Mon, Oct 13, 2008 at 10:26:48AM +0200, Klaas Gadeyne wrote:
>> On Fri, Oct 10, 2008 at 1:07 PM, Markus Klotzbücher
>> <markus [dot] klotzbuecher [..] ...> wrote:
>> > Here is a summary of the feedback of the last discussion incl. my
>> > ideas for the next generation OROCOS state machines. Please understand
>> > this less as a proposal but more as summary of work in progress.
[...]
>> > * there will be only one mode that reacts to all kinds of events,
>> > changing conditions, timers.... If a certain event is not needed,
>> > it is simply not used in the state machine description.
>>
>> What do you mean with this last sentence? It seems rather trivial to
>> me that events which are not relevant for the SM are not used in the
>> SM description (and this is currently also the case?)
>
> I meant to point out two things that will change, namely that a) an
> event will have a broader meaning for state machines (e.g. there will
> be event types which are not OROCOS events)

I see (I think). Could you give a list of events which would not be
orocos events?

> and b) that there will be
> no means to select only a subset of events that are allowed to trigger
> transitions (such as reactive mode allows only OROCOS Events to
> trigger transitions). I agree that the last sentence is not very
> helpful.

OK

>> > * Generally, any event will be able to interrupt a running "run"
>> > program if it triggers a transition and the event is not deferred.
>>
>> Can you elaborate on this using a piece of example code. As Peter
>> explained in a previous mail, one should make a distinction between
>> syn/asyn-preempt/non-preempt-etc on a logical state machine level
>> versus on the taskContext level. Therefor, I think it's also
>> important to explain which level you are talking about, and what the
>> implications (on a execution level) are for Periodic/NonPeriodic
>> TaskContexts.
>
> Here I'm strictly talking about the logical state machine
> level. Consider the following snippet:
>
> StateMachine SM1 {
> state waitForResponse {
> transitions {
> if TimeEventAfter(500) // 500 ms
> select TimeOut
> if AnyReceiveEvent()
> select ResponseReceived
> }
> run {
> while true {
> if ResponseCondition == true
> break;
> }
> }
> }
>
> state TimeOut {
> // do something
> }
>
> state ResponseReceived {
> // continue work...
> ...
> }
> }
>
> Use case: timed out waiting for a condition.
>
> The state machine will loop in the run() program until either the
> TimeEvent "after 500ms" triggers the transition to TimeOut or the
> AnyReceiveEvent (which catches any Event not specified in other
> transitions) is triggered by a CompletionEvent which in turn is
> automatically generated if the run program finishes because the
> condition polled became true.
>
> This sounds nice, but as you suggested the real question of course is
> how this maps to OROCOS. See below.
>
>> > * the run program will be executed only once, and when it completes a
>> > CompletionEvent will be generated which can be used to trigger a
>> > subsequent transition.
>>
>> This means that a while loop in the run statement will be necessary if
>> you want to obtain similar behaviour as before?
>
> Yes.
>
>> (and this while loop will execute
>> * continuously in the case of a NonPeriodic taskContext
>
> I think this would be the right thing to do for a NonPeriodic
> TaskContext.
>
>> * periodically (if the loop completes) in the case of a Periodic TaskContext
>> until an event comes in as described in the previous bullet?
>
> This seems right too, but would result in very different behavior of
> the SM dependent of the Activity used. I think that would be bad. For
> the sake of consistency it might be better to just let it run until it
> finishes (or is interrupted as in the previous bullet). The user is
> then responsible for designing the run program so that it won't impact
> other TC driven by the same PeriodicActivity (e.g. keeping it short,
> best practice?). Similar behavior as with the current implementation
> (run being restarted periodically) could be obtained by defining a
> self transition triggered by a CompletionEvent.

AFAIS there are two approaches to this problem, and Orocos is
currently somewhat caught in the middle. The "real-time active object"
approach (mostly implemented in UML-based frameworks, such as Rhapsody
and friends) only considers objects connected to what Orocos calls
NonPeriodicActivities. If you want periodic behaviour, you have to
implement it yourself (e.g. by creating an SM with the necessary
TimeEvents).
The other approach (Simulink and friends) is basically designed for
periodic activities only (though they have some ---forgive me the
word--- workarounds for aperiodic stuff). For instance, using a
TimeEvent in Stateflow like

after(5s) will result in

if (counter++ > 500)

in the generated code if the basic period of the generated code is 100 Hz.

Orocos somewhat implements both, which obviously complicates stuff.

[personal note, haven't really thought hard about this, please skip if
this seems very vague/unclear]
I somewhat favour the first approach since I consider it more general,
although I realise the importance of having COTS periodic components.
Therefore it seems to me that inheritance in FSMs is also something
important to consider -> having a PeriodicTaskContext be a
NonPeriodicTaskContext with a "default" periodic state machine seems
to be a solution (that requires quite some refactoring on the Orocos
side; remember the Orocos 2.0 roadmap discussions).
[/]
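The Stateflow-style counter mapping can be sketched like this (illustrative Python; the 500 follows from 5 s at a 100 Hz base period, as in Klaas's example):

```python
# Sketch of the Stateflow-style mapping of a time event onto periodic
# execution: after(5s) with a 100 Hz base period becomes a tick counter
# compared against 500.

def ticks_for(seconds, base_hz):
    return int(seconds * base_hz)

class CountingTimeEvent:
    def __init__(self, seconds, base_hz):
        self.limit = ticks_for(seconds, base_hz)
        self.counter = 0

    def tick(self):
        """Called once per period; returns True once the event fires."""
        self.counter += 1
        return self.counter > self.limit

ev = CountingTimeEvent(5, 100)   # after(5s) at 100 Hz -> 500 ticks
fired_at = 0
while not ev.tick():
    fired_at += 1                # counts the periods before firing
```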

[...]

>> > * deferred triggers
>>
>> Can you explain this somewhat more (or refer to a previous mail if I
>> already forgot that :-))?
>
> Sure! This mechanism allows to specify events which are not allowed to
> trigger a transition from the state that specified it. Instead they
> will simply remain in the event queue until a state is reached that
> does not defer it.

Is this a separate "defer queue", or are the deferred events put at
the end of the current queue?

Thx,

Klaas

Draft Next Generation State Machines

Hi Klaas,

On Tue, Oct 14, 2008 at 10:46:03AM +0200, Klaas Gadeyne wrote:
> On Mon, Oct 13, 2008 at 7:35 PM, Markus Klotzbücher
> <markus [dot] klotzbuecher [..] ...> wrote:
> > On Mon, Oct 13, 2008 at 10:26:48AM +0200, Klaas Gadeyne wrote:
> >> On Fri, Oct 10, 2008 at 1:07 PM, Markus Klotzbücher
> >> <markus [dot] klotzbuecher [..] ...> wrote:
> >> > Here is a summary of the feedback of the last discussion incl. my
> >> > ideas for the next generation OROCOS state machines. Please understand
> >> > this less as a proposal but more as summary of work in progress.
> [...]
> >> > * there will be only one mode that reacts to all kinds of events,
> >> > changing conditions, timers.... If a certain event is not needed,
> >> > it is simply not used in the state machine description.
> >>
> >> What do you mean with this last sentence? It seems rather trivial to
> >> me that events which are not relevant for the SM are not used in the
> >> SM description (and this is currently also the case?)
> >
> > I meant to point out two things that will change, namely that a) an
> > event will have a broader meaning for state machines (e.g. there will
> > be event types which are not OROCOS events)
>
> I see (I think). Could you give a list of events which would not be
> orocos events?

The following three:

CompletionEvent: generated by the SM core when a "run" program
finishes.

AnyReceiveEvent: not really an event at all; more like a regular
expression that matches any event which is not
used in other outgoing transitions of a state.

TimeEvent: I think this could be both: either a non-OROCOS event
handled at the component-level state machine logic, using
the RTT::Timer class Peter mentioned, or an OROCOS Event
generated by a Timer component somewhere else.
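A sketch of how AnyReceiveEvent could act as a fallback match (assumed semantics, hypothetical names):

```python
# Sketch of AnyReceiveEvent matching: it behaves like a fallback
# pattern, catching any event not named by the state's other outgoing
# transitions, so explicit triggers always win.

ANY = "AnyReceiveEvent"

def select_transition(transitions, event):
    """transitions: trigger -> target state name."""
    if event in transitions:
        return transitions[event]            # explicit trigger wins
    return transitions.get(ANY)              # fallback catches the rest

# the waitForResponse example from earlier in the thread:
t = {"TimeEventAfter500": "TimeOut", ANY: "ResponseReceived"}
select_transition(t, "TimeEventAfter500")    # explicit match
select_transition(t, "CompletionEvent")      # caught by the fallback
```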

[...]

> >> > * the run program will be executed only once, and when it completes a
> >> > CompletionEvent will be generated which can be used to trigger a
> >> > subsequent transition.
> >>
> >> This means that a while loop in the run statement will be necessary if
> >> you want to obtain similar behaviour as before?
> >
> > Yes.
> >
> >> (and this while loop will execute
> >> * continuously in the case of a NonPeriodic taskContext
> >
> > I think this would be the right thing to do for a NonPeriodic
> > TaskContext.
> >
> >> * periodically (if the loop completes) in the case of a Periodic TaskContext
> >> until an event comes in as described in the previous bullet?
> >
> > This seems right too, but would result in very different behavior of
> > the SM depending on the Activity used. I think that would be bad. For
> > the sake of consistency it might be better to just let it run until it
> > finishes (or is interrupted as in the previous bullet). The user is
> > then responsible for designing the run program so that it won't impact
> > other TCs driven by the same PeriodicActivity (e.g. keeping it short;
> > best practice?). Similar behavior as with the current implementation
> > (run being restarted periodically) could be obtained by defining a
> > self transition triggered by a CompletionEvent.
>
> AFAIS there are 2 approaches to this problem, and orocos is currently
> somewhat caught in the middle. The "real-time active object" approach
> (mostly implemented in UML-based frameworks, such as Rhapsody and
> friends) only considers objects connected to what orocos calls
> NonPeriodicActivities. If you want periodic behaviour, you have to
> implement it yourself (e.g. by creating a SM with the necessary
> TimeEvents).

But I think a real-time active object differs from a
NonPeriodicActivity in that it acts as a dispatcher for creating and
managing concurrent threads, whereas a NonPeriodicActivity executes a
RunnableInterface in a single thread, right?

But I get your point: if you want something periodic you have to
create it yourself in this model.
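
To make the CompletionEvent self transition from the quoted discussion
concrete, a toy sketch (plain Python, not the real SM engine; the event loop
and names are my own):

```python
# Sketch: a one-state machine whose "run" program completes immediately.
# The resulting CompletionEvent triggers a self transition, which (per the
# new UML-style semantics) re-runs exit/entry and then run again - giving
# back the old "run restarted every cycle" behaviour.

def simulate(cycles):
    trace = []
    transitions = {("active", "CompletionEvent"): "active"}  # self transition
    state = "active"
    queue = ["CompletionEvent"]  # pretend the initial run already completed
    for _ in range(cycles):
        ev = queue.pop(0)
        target = transitions.get((state, ev))
        if target is not None:
            trace += ["exit", "entry"]       # self transitions run exit/enter
            trace.append("run")              # run executes once...
            queue.append("CompletionEvent")  # ...and completion re-arms it
            state = target
    return trace
```

So `simulate(2)` yields exit/entry/run twice over: the SM keeps cycling as
long as nothing else preempts the CompletionEvent.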

> The other approaches (Simulink and friends) are basically designed for
> periodic activities only (though they have some ---forgive me the
> word--- workarounds for aperiodic stuff). For instance, using a
> timeEvent in StateFlow like after(5s) will result in
>
>     (if counter++ > 500)
>
> in the generated code if the basic period of the generated code is 100Hz.

Interesting, I didn't know this...
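
For reference, the counter translation boils down to something like this
(illustrative Python mirroring the generated-code pattern Klaas describes;
the class and names are my own):

```python
# Sketch: a time event compiled down to a tick counter when the generated
# code runs at a fixed base rate. At 100 Hz, after(5s) becomes a threshold
# of 500 ticks, as in the quoted (if counter++ > 500).

BASE_RATE_HZ = 100

def ticks_for(seconds, rate_hz=BASE_RATE_HZ):
    return int(seconds * rate_hz)

class AfterTimer:
    """Counter standing in for after(T): fires once T*rate ticks elapsed."""
    def __init__(self, seconds):
        self.threshold = ticks_for(seconds)
        self.counter = 0

    def tick(self):
        self.counter += 1
        return self.counter > self.threshold  # mirrors (counter++ > 500)
```

So the "timer" only advances when the periodic code actually runs, which is
exactly why this scheme does not carry over to aperiodic activities.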

> Orocos somewhat implements both, which obviously complicates stuff.
>
> [personal note, haven't really thought hard about this, please skip if
> this seems very vague/unclear]
> I somewhat favour the first approach since I consider it more general,
> although I realise the importance of having COTS periodic components.
> Therefore it seems to me inheritance in FSMs is also something important
> to consider -> Having PeriodicTaskContext being a
> NonPeriodicTaskContext with a "default" periodic statemachine seems to
> be a solution (that requires quite some refactoring on the orocos

I like this idea, although I think adding UML state machine
inheritance (incl. syntax) is quite tricky. In a first step one could
simply provide a way to override the default SM with a customized
version.

But am I right that this would change the behavior of periodic
TaskContexts in that each would always be run in a single thread, and
serialization would not be possible anymore? What was the rationale
behind the serialization of TCs of same priority and periodicity?

> side, remember roadmap orocos 2.0 discussions.
> [/]

Maybe such 2.0 ideas should be collected in some place like the wiki?

> [...]
>
> >> > * deferred triggers
> >>
> >> Can you explain this somewhat more (or refer to a previous mail if I
> >> already forgot that :-))?
> >
> > Sure! This mechanism allows one to specify events which are not allowed
> > to trigger a transition from the state that specified them. Instead they
> > will simply remain in the event queue until a state is reached that
> > does not defer them.
>
> Is this a separate "defer-queue", or are they put at the end of the
> current queue?

With respect to UML this is unspecified. I think a single queue would
be sufficient, as it is a semantic variation point *how* events are
taken from the queue. A state machine's dequeueEvent() function
needs to do some prioritization anyway (a CompletionEvent for example
has highest priority), so it might as well skip the currently
deferred events.

Best regards
Markus
--
Orocos-Dev mailing list
Orocos-Dev [..] ...
http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

Draft Next Generation State Machines

On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
<markus [dot] klotzbuecher [..] ...> wrote:
[...]
>> I see (I think). Could you give a list of events which would not be
>> orocos events?
>
[...]
>
> TimeEvent: I think this could be both: either a non-OROCOS event
> handled at the component-level state machine logic by using
> the RTT::Timer class Peter mentioned, or an OROCOS Event
> generated by a Timer Component from somewhere else.

Maybe RTT::Timer could even generate orocos events without being a
TaskContext (in orocos 2.0, that is ;-)

[...]
>> AFAIS there are 2 approaches to this problem, and orocos is currently
>> somewhat caught in the middle. The "real-time active object" approach
>> (mostly implemented in UML-based frameworks, such as Rhapsody and
>> friends) only considers objects connected to what orocos call
>> NonPeriodicActivities. If you want periodic behaviour, you have to
>> implement it yourself (e.g. by creating a SM with the necessary
>> TimeEvents).
>
> But I think a real-time active object differs from a
> NonPeriodicActivity in that it acts as a dispatcher for creating and
> managing concurrent threads, whereas a NonPeriodicActivity executes a
> RunnableInterface in a single thread, right?

In my view, a real-time active object rather corresponds to a
"TaskContext connected to a NonPeriodicActivity", its
RunnableInterface is already implemented by the TaskContext. You
might be right that a RT active object is not necessarily connected to
a _single_ thread though!

> But I get your point: if you want something periodic you have to
> create it yourself in this model.

Exactly!

>> Orocos somewhat implements both, which obviously complicates stuff.
>>
>> [personal note, haven't really thought hard about this, please skip if
>> this seems very vague/unclear]
>> I somewhat favour the first approach since I consider it more general,
>> although I realise the importance of having COTS periodic components.
>> Therefor it seems to me inheritage in FSMs is also something important
>> to consider -> Having PeriodicTaskContext being a
>> NonPeriodicTaskContext with a "default" periodic statemachine seems to
>> be a solution (that requires quite some refactoring on the orocos
>
> I like this idea, although I think adding UML state machine
> inheritance (incl. syntax) is quite tricky. In a first step one could
> simply provide a way to overwrite the default SM with a customized
> version.
>
> But am I right that this would change the behavior of periodic
> TaskContexts in that each would always be run in a single thread, and
> serialization would not be possible anymore?

Hmm, maybe it still might be possible, but I can't see how at this
time (at least not without adding lots of jitter!)

> What was the rationale behind the serialization of TCs of same priority and periodicity?

I didn't implement this, but off the top of my head [so what follows
is probably plain nonsense :-)] I guess its origins are from the time
that orocos TaskContexts were still evolving from mere "objects" to
real "components". In the beginning, orocos components were typically
- smaller in terms of granularity
- somewhat "controller" oriented
E.g. at that time we had some sensor component, estimation component,
algorithm component and actuator component, and the order in which
they ran could be determined by running them in a single thread.
If you look at it from the point of view that the behaviour of a
"component" should be self-contained, maybe this makes less sense.
*However*, certainly from an embedded and real-time point of view, the
concept of activities grouped in threads still has its value I think
(lower memory footprint, no locking or lockless stuff necessary if you
exchange data between components in the same thread).

>> side, remember roadmap orocos 2.0 discussions.
>> [/]
>
> Maybe such 2.0 ideas should be collected in some place like the wiki?

You're right (basically this won't get done due to lack of time :-(

Klaas

Draft Next Generation State Machines

On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
> On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
> > But am I right that this would change the behavior of periodic
> > TaskContexts in that each would always be run in a single thread, and
> > serialization would not be possible anymore?
>
> Hmm, maybe it still might be possible, but I can't see how at this
> time (at least not without adding lots of jitter!)
>
> > What was the rationale behind the serialization of TCs of same priority
> > and periodicity?

It's an optional default :-). The idea was that *if* two threads have the
same priority and the same period, you might as well execute their
functions in the same thread; otherwise, they would only be competing for
the same resources every time. The disadvantage of this method is that we
need to be prepared to line up periodic activities even if this is not the
case in our application => code bloat and larger latencies. Fortunately,
you are free to write your own PeriodicActivity which inherits from
RTT::OS::PeriodicThread and RTT::ActivityInterface and which does not have
this overhead. You can then use SlaveActivity to line up tasks anyway if
required.
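
To make the "line up in one thread" idea concrete, a toy sketch (plain
Python, not the actual SlaveActivity/PeriodicActivity classes, which differ
in detail):

```python
# Sketch: a single master loop steps its slaves in a fixed order each
# period, the way serialized TaskContexts share one thread. Execution
# order between them is then fixed, so no locking is needed for data
# they exchange within a cycle.

class Slave:
    """Stands in for a component stepped by a master (cf. SlaveActivity)."""
    def __init__(self, name, log):
        self.name, self.log = name, log

    def step(self):
        self.log.append(self.name)  # one control cycle of this component

def run_master(slaves, cycles):
    """One 'thread': every period, each slave runs exactly once, in order."""
    for _ in range(cycles):
        for s in slaves:
            s.step()

log = []
run_master([Slave("sensor", log), Slave("controller", log),
            Slave("actuator", log)], 2)
```

The fixed sensor -> controller -> actuator ordering per cycle is exactly the
property the serialized default gives you for free.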

> >> side, remember roadmap orocos 2.0 discussions.
> >> [/]
> >
> > Maybe such 2.0 ideas should be collected in some place like the wiki?
>
> You're right (basically this won't get done due to lack of time :-(

You mean lack of time for *you*?

Peter

[OT] Draft Next Generation State Machines

On Tue, Oct 21, 2008 at 8:50 AM, Peter Soetens
<peter [dot] soetens [..] ...> wrote:
> On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
>> > What was the rationale behind the serialization of TCs of same priority
>> > and periodicity?
>
> It's an optional default :-). The idea was that *if* two threads have the same
> priority and the same period, you might as well execute their functions in
> the same thread, otherwise, they would only be competing for the same
> resources every time. The disadvantage of this method is that we need to be
> prepared to line up periodic activities even if this is not the case in our
> application => code bloat and larger latencies. Fortunately, you are free to
> write your own PeriodicActivity which inherits from RTT::OS::PeriodicThread
> and RTT::ActivityInterface and which does not have this overhead. You can
> then use SlaveActivity to line up tasks anyway if required.
>
>> >> side, remember roadmap orocos 2.0 discussions.
>> >> [/]
>> >
>> > Maybe such 2.0 ideas should be collected in some place like the wiki?
>>
>> You're right (basically this won't get done due to lack of time :-(
>
> You mean lack of time for *you* ?

Yes, of course.
Contrary to others on this ML, I wouldn't dare to speak for someone else ;-]

Klaas

Draft Next Generation State Machines

On Oct 21, 2008, at 02:50, Peter Soetens wrote:

> On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
>> On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
>>> But am I right that this would change the behavior of periodic
>>> TaskContexts in that each would always be run in a single thread, and
>>> serialization would not be possible anymore?
>>
>> Hmm, maybe it still might be possible, but I can't see how at this
>> time (at least not without adding lots of jitter!)
>>
>>> What was the rationale behind the serialization of TCs of same
>>> priority and periodicity?
>
> It's an optional default :-). The idea was that *if* two threads have
> the same priority and the same period, you might as well execute their
> functions in the same thread, otherwise, they would only be competing
> for the same resources every time. The disadvantage of this method is
> that we need to be prepared to line up periodic activities even if this
> is not the case in our application => code bloat and larger latencies.
> Fortunately, you are free to write your own PeriodicActivity which
> inherits from RTT::OS::PeriodicThread and RTT::ActivityInterface and
> which does not have this overhead. You can then use SlaveActivity to
> line up tasks anyway if required.

We have used this explicitly to help alleviate/avoid certain startup
transients. We could have gotten around them without this explicit
synchronization, but it would have been more involved. Just FYI ...
S

Draft Next Generation State Machines

On Tue, 21 Oct 2008, S Roderick wrote:

> On Oct 21, 2008, at 02:50, Peter Soetens wrote:
>
>> On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
>>> On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
>>>> But am I right that this would change the behavior of periodic
>>>> TaskContexts in that each would always be run in a single thread, and
>>>> serialization would not be possible anymore?
>>>
>>> Hmm, maybe it still might be possible, but I can't see how at this
>>> time (at least not without adding lots of jitter!)
>>>
>>>> What was the rationale behind the serialization of TCs of same
>>>> priority and periodicity?
>>
>> It's an optional default :-). The idea was that *if* two threads have
>> the same priority and the same period, you might as well execute their
>> functions in the same thread, otherwise, they would only be competing
>> for the same resources every time. The disadvantage of this method is
>> that we need to be prepared to line up periodic activities even if this
>> is not the case in our application => code bloat and larger latencies.
>> Fortunately, you are free to write your own PeriodicActivity which
>> inherits from RTT::OS::PeriodicThread and RTT::ActivityInterface and
>> which does not have this overhead. You can then use SlaveActivity to
>> line up tasks anyway if required.
>
> We have used this explicitly to help alleviate/avoid certain startup
> transients. We could have gotten around them without this explicit
> synchronization, but it would have been more involved. Just FYI ...

The better solution would have been to use state machine logic in the
deployment and in the components involved to realize the desired
synchronization. Semantically simple Petri Net primitives ("fork",
"join") have been designed for this, but they are mathematically
equivalent to rather simple state machines.
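
For example, a Petri Net "join" (wait until both A and B have occurred) can
be written down as a four-state machine (toy Python sketch, names are
illustrative):

```python
# Sketch: a "join" over two events A and B as a plain state machine.
# The machine tracks which events have already arrived; it reaches
# "joined" only once both have been seen, in either order.

JOIN_STATES = {
    ("waiting", "A"): "got_A",
    ("waiting", "B"): "got_B",
    ("got_A", "B"): "joined",
    ("got_B", "A"): "joined",
}

def run_join(events):
    state = "waiting"
    for ev in events:
        # Unknown (state, event) pairs - e.g. a repeated A - are ignored.
        state = JOIN_STATES.get((state, ev), state)
    return state
```

This is the mathematical equivalence Herman refers to: the Petri Net
primitive just enumerates the subsets of arrived tokens as states.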

Herman


Draft Next Generation State Machines

On Tue, 21 Oct 2008, Peter Soetens wrote:

> On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
>> On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
>>> But am I right that this would change the behavior of periodic
>>> TaskContexts in that each would always be run in a single thread, and
>>> serialization would not be possible anymore?
>>
>> Hmm, maybe it still might be possible, but I can't see how at this
>> time (at least not without adding lots of jitter!)
>>
>>> What was the rationale behind the serialization of TCs of same priority
>>> and periodicity?
>
> It's an optional default :-). The idea was that *if two threads have the same
> priority and the same period, you might as well execute their functions in
> the same thread, otherwise, they would only be competing for the same
> resources every time. The disadvantage of this method is that we need to be
> prepared to line up periodic activities even if this is not the case in our
> application => code bloat and larger latencies.

Where would the larger latencies come from? In my opinion, no OS scheduler is
faster than an explicit serialization... And one should only serialize when
one _knows_ that the chosen serialization fits the purposes of the
application.

The "code bloat" would only be in the deployment support, wouldn't it?

And explicit serialization is a good feature to allow for deeply embedded
systems that have high needs for minimal runtime code, and whose
environment's requirements are constant, so hard optimization is useful.

Herman


Draft Next Generation State Machines

On Tue, Oct 21, 2008 at 9:28 AM, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
> On Tue, 21 Oct 2008, Peter Soetens wrote:
>
>> On Tuesday 14 October 2008 17:54:18 Klaas Gadeyne wrote:
>>> On Tue, Oct 14, 2008 at 3:19 PM, Markus Klotzbücher
>>>> But am I right that this would change the behavior of periodic
>>>> TaskContexts in that each would always be run in a single thread, and
>>>> serialization would not be possible anymore?
>>>
>>> Hmm, maybe it still might be possible, but I can't see how at this
>>> time (at least not without adding lots of jitter!)
>>>
>>>> What was the rationale behind the serialization of TCs of same priority
>>>> and periodicity?
>>
>> It's an optional default :-). The idea was that *if* two threads have the
>> same priority and the same period, you might as well execute their
>> functions in the same thread, otherwise, they would only be competing for
>> the same resources every time. The disadvantage of this method is that we
>> need to be prepared to line up periodic activities even if this is not the
>> case in our application => code bloat and larger latencies.
>
> Where would the larger latencies come from? In my opinion, no OS scheduler
> is faster than an explicit serialization... And one should only serialize
> when one _knows_ that the chosen serialization fits the purposes of the
> application.

I think this depends on what your application goal is.
Let's say you have two periodic tasks which should be scheduled every
second, on the second (so the absolute time is more important than the
relative time). In case of explicit serialization, one task will
always have larger latencies than in the case where you use 2 threads
with a random scheduler.

I have to admit though that
1/ I never encountered such a use case so far
2/ if you are using SCHED_FIFO or SCHED_RR you will get the exact same
result (+ some extra latency due to the context switch)

> The "code bloat" would only be in the deployment support, isn't it?

Maybe I don't understand what you mean by this, but AFAIS every TC
connected to an activity now suffers from this "code bloat". Then
again, "suffers from code bloat" is maybe too strong a phrase in this
case.

> And explicit serialization is a good feature to allow for deeply embedded
> systems that have high needs for minimal runtime code, and whose
> environment's requirements are constant, so hard optimization is useful.

Indeed, as stated above, I see both pros and cons for both scenarios
(so maybe we should make it configurable). That said, "we" refers to
"a friend of mine" ;-)

Klaas