Realtime logging

Hello,

At TU/e we've constructed several controllers, each containing several components. Every component is driven by a trigger port.

So the sequence goes as follows:

ReadEncoders -> CalculateErrors -> Gain -> WriteOutput

The first component has an update frequency of 1 kHz.

Now, in order to identify our hardware, we want to log the data sent over these channels without missing samples. We hope to get a text file where every line represents the data sent over each channel at that millisecond:

TimeStamp  ReadEncoders  CalculateErrors  Gain   WriteOutput
0.0000     0.000         0.000            0.000  0.000
0.0010     0.300         0.100            0.300  3.000
0.0020     0.400         0.104            0.390  3.330
etc.

However, as I understand it, the current reporter has hardcoded buffers, so what is the best way of doing this? It is also important that every line represents sequential data.

Thanks in advance!

Tim

logging

On Jul 13, 2012, at 17:42 , Dustin Gooding wrote:

> On 07/13/2012 12:31 PM, Stephen Roderick wrote:
>> On Jul 13, 2012, at 11:21 , Dustin Gooding wrote:
>>
>>> On 07/11/2012 04:20 PM, Peter Soetens wrote:
>>>> Hi Dustin,
>>>>
>>>> On Fri, Jul 6, 2012 at 9:23 PM, Dustin Gooding
>>>> <dustin [dot] r [dot] gooding [..] ...> wrote:
>>>>> On 07/06/2012 02:32 PM, Stephen Roderick wrote:
>>>>>
>>>>> Fundamentally you need to set the log4cpp category factory before log4cpp is
>>>>> used, and from then on it will automatically create OCL::Category objects.
>>>>> So if you can do that first, and ensure you have setup the OCL logging
>>>>> service, _before_ the library is used, I think you might end up with what
>>>>> you want. The whole point of the mods we made to log4cpp was to ensure it
>>>>> only created OCL::Category logger objects instead of the standard
>>>>> log4cpp::Category objects. But it's been a couple of years since we made
>>>>> those mods ... if you use a deployer (or copy the setup code for
>>>>> rtalloc/log4cpp to your app) you might just get away with it. Get your
>>>>> deployment/app running, and then trigger the logCategories() method in the
>>>>> OCL::LoggingService component. Examine the output and see whether you have
>>>>> any log4cpp::Category objects in your category hierarchy.
>>>>>
>>>>> HTH
>>>>> S
>>>>>
>>>>> I'm attempting to get OCL::Logging up and running using the examples
>>>>> described on
>>>>> http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo...
>>>>> I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example
>>>>> in 3.4. Unfortunately, I am getting an error:
>>>>>
>>>>> /**
>>>>> dgooding@bacon:~$ rttlua -i setup_logging.lua
>>>>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>>>>> 0.073 [ ERROR
>>>>> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category
>>>>> 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is
>>>>> 'N7log4cpp8CategoryE'
>>>>> 0.074 [ ERROR
>>>>> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable
>>>>> to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
>>>>> **/
>>>>>
>>>>> This looks suspiciously like the log4cpp factory hasn't been changed. Or
>>>>> that a category has been setup _before_ you change the factory.
>>>>>
>>>>> I'm not sure what's wrong. I checked the TestComponent for how it's
>>>>> creating the Category and it's as follows:
>>>>>
>>>>> TestComponent.hpp
>>>>> /**
>>>>> class Component : public RTT::TaskContext
>>>>> {
>>>>> public:
>>>>> Component(std::string name);
>>>>> virtual ~Component();
>>>>>
>>>>> protected:
>>>>> virtual bool startHook();
>>>>> virtual void updateHook();
>>>>>
>>>>> /// Name of our category
>>>>> std::string categoryName;
>>>>> /// Our logging category
>>>>> OCL::logging::Category* logger; <---- this looks right to me
>>>>>
>>>>> Good
>>>>>
>>>>> };
>>>>>
>>>>> **/
>>>>>
>>>>> TestComponent.cpp
>>>>> /**
>>>>> Component::Component(std::string name) :
>>>>> RTT::TaskContext(name),
>>>>> categoryName(parentCategory + std::string(".") + name),
>>>>> logger(dynamic_cast<OCL::logging::Category*>(
>>>>> &log4cpp::Category::getInstance(categoryName))) <---- so does this
>>>>>
>>>>> Good.
>>>>>
>>>>> {
>>>>> }
>>>>>
>>>>> bool Component::startHook()
>>>>> {
>>>>> bool ok = (0 != logger);
>>>>> if (!ok)
>>>>> {
>>>>> log(Error) << "Unable to find existing OCL category '"
>>>>> << categoryName << "'" << endlog();
>>>>> }
>>>>>
>>>>> return ok;
>>>>> }
>>>>>
>>>>> **/
>>>>>
>>>>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
>>>>>
>>>>> The code above looks like what we use, though ours is based on OCL v1.
>>>>> Fundamentally I think this part of the codebase is virtually unchanged
>>>>> between v1 and v2.
>>>>>
>>>>> Ideas?
>>>>>
>>>>> Dig into the initialization sequence, and make sure that the category you
>>>>> are trying to use is created by the OCL logging service, and that TLSF and
>>>>> the OCL::Logging are set up as done in the deployer (I think that the v2
>>>>> sequence is the same as the v1 version we use). This approach does work, but
>>>>> getting the sequence right is the first obstacle.
>>>>>
>>>>> HTH
>>>>> S
>>>>>
>>>>>
>>>>> I found a thread that addresses this issue. My Google-fu was poor earlier,
>>>>> sorry.
>>>>> http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221
>>>>> mentions that a call of
>>>>> _log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_
>>>>> is required before doing anything else. In fact, I see mention of this call
>>>>> in the ocl/logging/test/testlogging.cpp main()..... but how do I make this
>>>>> call from RTTLua? As that thread suggests, should that be done from RTTLua,
>>>>> or by modifying the Deployer or LoggingService?
>>>> rttlua does not yet support OCL::Logging, it misses these few lines
>>>> tlsf+logging code which the deployers do have. Do not confuse the lua
>>>> tlsf code with the RTT tlsf code, they are not the same tlsf pool !!
>>>> That's why nothing about tlsf was printed as well.
>>>>
>>>> In the end, they are trivial to add to LuaComponent.cpp, and since
>>>> it's OCL's 'rttlua', I think they should be there if OCL is configured
>>>> to support the logging.
>>>>
>>>> I have added an untested patch which sets the cmake logic and adds
>>>> some code to LuaComponent.cpp. Since I didn't even compile this, there
>>>> will be issues, but I expect that we should be at 95%...
>>>>
>>>> Peter
>>> Found the first issue (sorry for the delay). The OCL::memorySize type is defined in deployer-funcs.hpp. The only things that include that header are the various deployers... Should LuaComponent.cpp include deployer-funcs.hpp? Seems odd to me for a component to need a deployer's header. Would a more appropriate mechanism be to put the OCL::memorySize type (and others like it) in a different header (say memoryTypes.hpp) that both the deployers and LuaComponent can depend on? I'm happy to do it, but I'm not sure if that's the preferred approach.
>>>
>>> -dustin
>> No, that would be unnecessary coupling. The memory size type is only there to work with validation with boost program_options, which is only useful to deployers (I'm presuming here that whatever program you run to get Lua doesn't need it also). We should change the internals to accept size_t (or ssize_t) and keep the memory size type only in the deployers.
>>
>> My 2c
>> S
>
> I modified Peter's patch to get rid of the use of OCL::memorySize and directly assign "size_t memSize=ORO_DEFAULT_RTALLOC_SIZE".
>
> But now I get an issue where init_memory_pool() (and other functions declared in rtt/os/tlsf/tlsf.h) are undeclared, according to the compiler.
>
> /**
> [ 87%] Building CXX object lua/CMakeFiles/rttlua.dir/LuaComponent.cpp.o
> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp: In function ‘int ORO_main_impl(int, char**)’:
> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:278:54: error: ‘init_memory_pool’ was not declared in this scope
> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:322:56: error: ‘get_max_size’ was not declared in this scope
> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:323:63: error: ‘get_used_size’ was not declared in this scope
> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:327:34: error: ‘destroy_memory_pool’ was not declared in this scope
> **/
>
> I think the issue is that there's a mixup between which header/source are being compiled and linked... <ocl/lua/tlsf.h> and <rtt/os/tlsf/tlsf.h>. The rtt one declares init_memory_pool, and the lua one doesn't. RTTLua's CMakeLists.txt is non-specific as to which tlsf it's building/linking, but I think it's the lua one... which might explain the declaration error.
>
> LuaComponent.cpp includes <rtt/os/tlsf/tlsf.h>. Changing that to "tlsf.h" and changing init_memory_pool to rtl_init_memory_pool (which is declared in the lua tlsf.h) doesn't help. "rtl_init_memory_pool" is not declared either. ("extern" only matters at link time, right? extern declarations are still declarations, as far as the compiler is concerned, right?)
>
> I've confirmed (as best I can) that the right compiler directives are being set by CMake, such that the right preprocessor branches are being taken (e.g., OS_RT_MALLOC, ORO_BUILD_LOGGING).
>
> Any ideas?
>
> -dustin
>

Check out Peter's patch again. These lines need to be first IIRC, otherwise some functions aren't declared in TLSF. This was a deliberate choice we made, but I don't recall why ... :-(

#include <rtt/rtt-config.h>
#ifdef OS_RT_MALLOC
// need access to all TLSF functions embedded in RTT
#define ORO_MEMORY_POOL
#include <rtt/os/tlsf/tlsf.h>
#endif

logging

On 07/13/2012 05:00 PM, Stephen Roderick wrote:
> Check out Peter's patch again. These lines need to be first IIRC, otherwise some functions aren't declared in TLSF. This was a deliberate choice we made, but I don't recall why ... :-(
>
> #include <rtt/rtt-config.h>
> #ifdef OS_RT_MALLOC
> // need access to all TLSF functions embedded in RTT
> #define ORO_MEMORY_POOL
> #include <rtt/os/tlsf/tlsf.h>
> #endif
>

Yeah, those lines are there in LuaComponent.cpp, before any mention of
init_memory_pool.

However, moving those lines up closer to the top of LuaComponent.cpp
(just after the first #ifndef OCL_COMPONENT_ONLY) seems to make
things a bit happier: rosmake completes without issue.

I'll let you know how things go from here.

-dustin

logging

On 07/13/2012 05:13 PM, Dustin Gooding wrote:
> Yeah, those lines are there in LuaComponent.cpp, before any mention of
> init_memory_pool.
>
> However, moving those lines up closer to the top of LuaComponent.cpp
> (just after the first #ifndef OCL_COMPONENT_ONLY) seems to make
> things a bit happier. rosmake completes without issue.
>
> I'll let you know how things go from here.
>
> -dustin
>
Well, some good news, some bad.

RTTLua TLSF seems to be good, but I'm still not getting the OCL log
output from TestComponent. I'm sure it's a configuration issue; I just
don't know what's wrong. AFAICT, the config I'm using is identical (or
at least as close as it can be) to the "good.xml" deployment that worked
earlier.

Here's how I'm deploying with RTTLua, and the output.
/**
dgooding@bacon:~/rtlogging$ rttlua -i setup_logging.lua
Real-time memory: 517904 bytes free of 524288 allocated.
OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
> 2.062 [ ERROR
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT
ERROR TestComponent 0
2.063 [
Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
RTT WARNING TestComponent 0
4.062 [ ERROR
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT
ERROR TestComponent 1
4.062 [
Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
RTT WARNING TestComponent 1
6.062 [ ERROR
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT
ERROR TestComponent 2
6.062 [
Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
RTT WARNING TestComponent 2
test:stop()
>
TLSF bytes allocated=524288 overhead=6384 max-used=7632
currently-used=6384 still-allocated=0
**/

What's weird (I think) is that I'm not getting any errors or warnings
about a bad config. It's just not logging with OCL.

setup_logging.lua - http://pastebin.com/zTV3KtkP
logging_properties.cpf - http://pastebin.com/TJKEH4hn
appender_properties.cpf - http://pastebin.com/1kut2Rs1
log4cpp.conf - http://pastebin.com/9gUb3Xvw
TestComponent.cpp - http://pastebin.com/APWkyXGE

-dustin

logging

On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:

>>>>>>>>
>>>>>>>> Good.
>>>>>>>>
>>>>>>>> {
>>>>>>>> }
>>>>>>>>
>>>>>>>> bool Component::startHook()
>>>>>>>> {
>>>>>>>> bool ok = (0 != logger);
>>>>>>>> if (!ok)
>>>>>>>> {
>>>>>>>> log(Error) << "Unable to find existing OCL category '"
>>>>>>>> << categoryName << "'" << endlog();
>>>>>>>> }
>>>>>>>>
>>>>>>>> return ok;
>>>>>>>> }
>>>>>>>>
>>>>>>>> **/
>>>>>>>>
>>>>>>>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
>>>>>>>>
>>>>>>>> The code above looks like what we use, though ours is based on OCL v1.
>>>>>>>> Fundamentally I think this part of the codebase is virtually unchanged
>>>>>>>> between v1 and v2.
>>>>>>>>
>>>>>>>> Ideas?
>>>>>>>>
>>>>>>>> Dig into the initialization sequence, and make sure that the category you
>>>>>>>> are trying to use is created by the OCL logging service, and that TLSF and
>>>>>>>> the OCL::Logging are set up as done in the deployer (I think that the v2
>>>>>>>> sequence is the same as the v1 version we use). This approach does work, but
>>>>>>>> getting the sequence right is the first obstacle.
>>>>>>>>
>>>>>>>> HTH
>>>>>>>> S
>>>>>>>>
>>>>>>>>
>>>>>>>> I found a thread that addresses this issue. My Google-Foo was poor earlier,
>>>>>>>> sorry.
>>>>>>>> http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221
>>>>>>>> mentions that a call of
>>>>>>>> _log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_
>>>>>>>> is required before doing anything else. In fact, I see mention of this call
>>>>>>>> in the ocl/logging/test/testlogging.cpp main()..... but how do I make this
>>>>>>>> call from RTTLua? As that thread suggests, should that be done from RTTLua,
>>>>>>>> or by modifying the Deployer or LoggingService?
>>>>>>> rttlua does not yet support OCL::Logging, it misses these few lines
>>>>>>> tlsf+logging code which the deployers do have. Do not confuse the lua
>>>>>>> tlsf code with the RTT tlsf code, they are not the same tlsf pool !!
>>>>>>> That's why nothing about tlsf was printed as well.
>>>>>>>
>>>>>>> In the end, they are 'trivial' to add to LuaComponent.cpp, and since
>>>>>>> it's OCL's 'rttlua', I think they should be there if OCL is configured
>>>>>>> to support the logging.
>>>>>>>
>>>>>>> I have added an untested patch which sets the cmake logic and adds
>>>>>>> some code to LuaComponent.cpp. Since I didn't even compile this, there
>>>>>>> will be issues, but I expect that we should be at 95%...
>>>>>>>
>>>>>>> Peter
>>>>>> Found the first issue (sorry for the delay). The OCL::memorySize type is defined in deployer-funcs.hpp. The only things that include that header are the various deployers... Should LuaComponent.cpp include deployer-funcs.hpp? Seems odd to me for a component to need a deployer's header. Would a more appropriate mechanism be to put the OCL::memorySize type (and others like it) in a different header (say memoryTypes.hpp) that both the deployers and LuaComponent can depend on? I'm happy to do it, but I'm not sure if that's the preferred approach.
>>>>>>
>>>>>> -dustin
>>>>> No, that would be unnecessary coupling. The memory size type is only there to work with validation with boost program_options, which is only useful to deployers (I'm presuming here that whatever program you run to get Lua doesn't need it also). We should change the internals to accept size_t (or ssize_t) and keep the memory size type only in the deployers.
>>>>>
>>>>> My 2c
>>>>> S
>>>> I modified Peter's patch to get rid of the use of OCL::memorySize and directly assign "size_t memSize=ORO_DEFAULT_RTALLOC_SIZE".
>>>>
>>>> But, now I get an issue where init_memory_pool() (and other methods declared in rtt/os/tlsf/tlsf.h) are undeclared (according to the compiler).
>>>>
>>>> /**
>>>> [ 87%] Building CXX object lua/CMakeFiles/rttlua.dir/LuaComponent.cpp.o
>>>> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp: In function ‘int ORO_main_impl(int, char**)’:
>>>> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:278:54: error: ‘init_memory_pool’ was not declared in this scope
>>>> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:322:56: error: ‘get_max_size’ was not declared in this scope
>>>> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:323:63: error: ‘get_used_size’ was not declared in this scope
>>>> /opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:327:34: error: ‘destroy_memory_pool’ was not declared in this scope
>>>> **/
>>>>
>>>> I think the issue is that there's a mixup between which header/source are being compiled and linked... <ocl/lua/tlsf.h> and <rtt/os/tlsf/tlsf.h>. The rtt one declares init_memory_pool, and the lua one doesn't. RTTLua's CMakeLists.txt is non-specific as to which tlsf it's building/linking, but I think it's the lua one... which might explain the declaration error.
>>>>
>>>> LuaComponent.cpp includes <rtt/os/tlsf/tlsf.h>. Changing that to "tlsf.h" and changing init_memory_pool to rtl_init_memory_pool (which is declared in the lua tlsf.h) doesn't help. "rtl_init_memory_pool" is not declared either. ("extern" only matters at link time, right? extern declarations are still declarations, as far as the compiler is concerned, right?)
>>>>
>>>> I've confirmed (as best I can) that the right compiler directives are being set by CMake, such that the right preprocessor branches are being taken (e.g., OS_RT_MALLOC, ORO_BUILD_LOGGING).
>>>>
>>>> Any ideas?
>>>>
>>>> -dustin
>>>>
>>> Check out Peter's patch again. These lines need to be first IIRC, otherwise some functions aren't declared in TLSF. This was a deliberate choice we made, but I don't recall why ... :-(
>>>
>>> #include <rtt/rtt-config.h>
>>> #ifdef OS_RT_MALLOC
>>> // need access to all TLSF functions embedded in RTT
>>> #define ORO_MEMORY_POOL
>>> #include <rtt/os/tlsf/tlsf.h>
>>> #endif
>>>
>> Yeah, those lines are there in LuaComponent.cpp, before any mention of
>> init_memory_pool.
>>
>> However, moving those lines up closer to the top of LuaComponent.cpp
>> (just after the first #ifndef OCL_COMPONENT_ONLY) seems to make
>> things a bit happier. rosmake completes without issue.
>>
>> I'll let you know how things go from here.
>>
>> -dustin
>> --
>> Orocos-Users mailing list
>> Orocos-Users [..] ...
>> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-users
>>
> Well, some good news, some bad.
>
> RTTLua TLSF seems to be good. But I'm still not getting the OCL log output from TestComponent. I'm sure it's a configuration issue, I just don't know what's wrong. AFAICT, the config I'm using is identical (or at least as much as it can be) to the "good.xml" deployment that worked earlier.
>
> Here's how I'm deploying with RTTLua, and the output.
> /**
> dgooding@bacon:~/rtlogging$ rttlua -i setup_logging.lua
> Real-time memory: 517904 bytes free of 524288 allocated.
> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
> > 2.062 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT ERROR TestComponent 0
> 2.063 [ Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT WARNING TestComponent 0
> 4.062 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT ERROR TestComponent 1
> 4.062 [ Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT WARNING TestComponent 1
> 6.062 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT ERROR TestComponent 2
> 6.062 [ Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] RTT WARNING TestComponent 2
> test:stop()
> >
> TLSF bytes allocated=524288 overhead=6384 max-used=7632 currently-used=6384 still-allocated=0
> **/
>
> What's weird (I think) is that I'm not getting any errors or warnings about a bad config. It's just not logging with OCL.
>
> setup_logging.lua - http://pastebin.com/zTV3KtkP
> logging_properties.cpf - http://pastebin.com/TJKEH4hn
> appender_properties.cpf - http://pastebin.com/1kut2Rs1
> log4cpp.conf - http://pastebin.com/9gUb3Xvw
> TestComponent.cpp - http://pastebin.com/APWkyXGE
>
> -dustin

What happens if you create the test component last, after the logging service? It all _looks_ correct to me. :-(

What is the output of logCategories() in the LoggingService?
S

logging

On 07/17/2012 05:11 AM, S Roderick wrote:
> On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:
>
>> <snip>
>
> What happens if you create the test component last, after the logging
> service? It all _looks_ correct to me. :-(
>
> What is the output of logCategories() in the LoggingService?
> S

Moving TestComponent last has no visible effect.

The output of logCategories() was set to INFO, which was below my log
level last time. I lowered the level and this is the output:
/**
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Number categories = 7
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category '', level=INFO, typeid='PN7log4cpp8CategoryE', type really is
'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org', level=NOTSET, typeid='PN7log4cpp8CategoryE', type really
is 'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org.orocos', level=NOTSET, typeid='PN7log4cpp8CategoryE', type
really is 'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org.orocos.ocl', level=NOTSET, typeid='PN7log4cpp8CategoryE',
type really is 'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org.orocos.ocl.logging', level=NOTSET,
typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org.orocos.ocl.logging.tests', level=NOTSET,
typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
0.906 [ Info
][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
Category 'org.orocos.ocl.logging.tests.TestComponent', level=NOTSET,
typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
**/

I'm guessing it's OK for TestComponent's level to be NOTSET, as it will
inherit from the base level of INFO? I'm not even sure why it's set
that way, as logging_properties.cpf is pretty explicit about
org.orocos.ocl.logging.test.TestComponent having levels info and error.

(For what it's worth, I added a second appender (AppenderB,
OCL::logging::OstreamAppender) to see if the problem was with the
appenders rather than the logging service, but that didn't help either.)

Another question: The standard deployer has a mechanism for loading a
log4cpp.properties file (for setting RTT logging and direct log4cpp
logging levels and appenders). How do I do the same in RTTLua, so that
I can provide a set of properties to my underlying library? Right now,
I've hardcoded my library to look for a prop file in the current
directory, but that's unsustainable.

-dustin

logging

On 07/17/2012 11:00 AM, Dustin Gooding wrote:
>
> <snip>

Son of a....

The problem I was having was an improper deployment and invalid properties.

My two problems:
1) The appender and LoggingService must be peers, but in the opposite
direction. The LoggingService component looks for the AppenderA peer
during configureHook, so the line should be
depl:addPeer("LoggingService", "AppenderA"). When "AppenderA" comes
first, LoggingService doesn't see AppenderA and fails its
configureHook. That's what I get for copy-pasting from a web site.
2) The logging_properties.cpf file was malformed, creating property bags
the LoggingService doesn't check. Refactoring to make the "Levels" and
"Appenders" property bags first class, and removing the "Peers" and
"Properties" bags, made the LoggingService component correctly
configured. That's what I get for copy-pasting from an XML deployment
file.
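
In deployment-script form, fix (1) might look like the following rttlua fragment. This is a sketch only: the component names are the ones used in this thread, while the component types and the surrounding calls are assumptions based on the wiki example being discussed.

```lua
local depl = rtt.getTC():getPeer("Deployer")

-- Component types below are assumed from the wiki example.
-- Creation order is not what matters; the peer direction is.
depl:loadComponent("AppenderA", "OCL::logging::FileAppender")
depl:loadComponent("LoggingService", "OCL::logging::LoggingService")

-- LoggingService must be the first argument: its configureHook() looks
-- up "AppenderA" among its own peers, so the relation has to point from
-- the service to the appender, not the other way around.
depl:addPeer("LoggingService", "AppenderA")

depl:getPeer("LoggingService"):configure()
```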

So there you go. OCL::Logging from an RTTLua deployment is working.
What's next... Do you want me to provide an updated patch akin to
Peter's, along with the example RTTLua deployment and CPFs that work (to
add to ocl/logging/tests)?

I think the only remaining issue I have (for now) is related to RTTLua
logging with rtt.log(). I've found it can't really be configured
using log4cpp-style properties. Has anyone done any work on getting
RTTLua to use OCL::Logging or log4cpp directly?

-dustin

logging

On Jul 17, 2012, at 15:25 , Dustin Gooding wrote:

> On 07/17/2012 11:00 AM, Dustin Gooding wrote:
>>
>> On 07/17/2012 05:11 AM, S Roderick wrote:
>>> On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:
>>>
>>>> On 07/13/2012 05:13 PM, Dustin Gooding wrote:
>>>>> On 07/13/2012 05:00 PM, Stephen Roderick wrote:

<snip>

>
> Son of a....
>
> The problem I was having was an improper deployment and invalid properties.

Ugh, sorry to hear that.

>
> My two problems:
> 1) The appender and LoggingService must be peers, but in the opposite direction. The LoggingService component looks for the AppenderA peer during configureHook, so the line should be depl:addPeer("LoggingService", "AppenderA"). When "AppenderA" comes first, LoggingService doesn't see AppenderA and fails its configureHook. That's what I get for copy-pasting from a web site.
> 2) The logging_properties.cpf file was malformed, creating property bags the LoggingService doesn't check. Refactoring to make the "Levels" and "Appenders" property bags first class, and removing the "Peers" and "Properties" bags, made the LoggingService component correctly configured. That's what I get for copy-pasting from an XML deployment file.

So where are the problem items from? Is it the logging example page on the Orocos wiki?

> So there you go. OCL::Logging from an RTTLua deployment is working. What's next... Do you want me to provide an updated patch akin to Peter's, along with the example RTTLua deployment and CPFs that work (to add to ocl/logging/tests)?

I'd say YES! We don't want anyone else having to go through such joys of "discovery" as you've just had to.

> I think the only remaining issue I have (for now) is related to RTTLua logging with rtt.log(). I've found it can't really be configured using log4cpp-style properties. Has anyone done any work on getting RTTLua to use OCL::Logging or log4cpp directly?

It has been an open question for some time - how to improve RTT's native logging. We actually have patched RTT to use OCL::Logging/log4cpp directly. It is a little tricky to deal with due to the startup semantics (when does logging start, when does it get configured, what services are available at that time, etc.), but we've been using it successfully for a while now. It needs a more consistent look, though.
S

logging

On 07/18/2012 05:14 AM, S Roderick wrote:
> On Jul 17, 2012, at 15:25 , Dustin Gooding wrote:
>
>> On 07/17/2012 11:00 AM, Dustin Gooding wrote:
>>> On 07/17/2012 05:11 AM, S Roderick wrote:
>>>> On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:
>>>>
>>>>> On 07/13/2012 05:13 PM, Dustin Gooding wrote:
>>>>>> On 07/13/2012 05:00 PM, Stephen Roderick wrote:
> <snip>

>
>> Son of a....
>>
>> The problem I was having was an improper deployment and invalid properties.
> Ugh, sorry to hear that.
>
>> My two problems:
>> 1) The appender and LoggingService must be peers, but in the opposite direction. The LoggingService component looks for the AppenderA peer during configureHook, so the line should be depl:addPeer("LoggingService", "AppenderA"). When "AppenderA" comes first, LoggingService doesn't see AppenderA and fails its configureHook. That's what I get for copy-pasting from a web site.
>> 2) The logging_properties.cpf file was malformed, creating property bags the LoggingService doesn't check. Refactoring to make the "Levels" and "Appenders" property bags first class, and removing the "Peers" and "Properties" bags, made the LoggingService component correctly configured. That's what I get for copy-pasting from an XML deployment file.
> So where are the problem items from? Is it the logging example page on the Orocos wiki?
Yes. For the first problem, the RTTLua deployment on the "Using
real-time logging" wiki page has the addPeer() calls in the wrong
order. For the second, even though the provided RTTLua deployment
scripts have the option to generate (blank) CPF files, there's no
explicit working example provided. So, when I copy-pasted from the
provided XML deployment to get appropriate values, I screwed up and
c-p'd too much. I claim full responsibility for the second problem,
but the first wasn't apparent without digging into the LoggingService
code.
>
>> So there you go. OCL::Logging from an RTTLua deployment is working. What's next... Do you want me to provide an updated patch akin to Peter's, along with the example RTTLua deployment and CPFs that work (to add to ocl/logging/tests)?
> I'd say YES! We don't want anyone else having to go through such joys of "discovery" as you've just had to.
I'll start working on that now.
>
>> I think the only remaining issue I have (for now) is related to RTTLua logging with rtt.log(). I've found it can't really be configured using log4cpp-style properties. Has anyone done any work on getting RTTLua to use OCL::Logging or log4cpp directly?
> It has been an open question for some time - how to improve RTT's native logging. We actually have patched RTT to use OCL::Logging/log4cpp directly. It is a little tricky to deal with due to the startup semantics (when does logging start, when does it get configured, what services are available at that time, etc.), but we've been using it successfully for a while now. It needs a more consistent look, though.
> S
>
>
>

For fun, and because we abuse the interactive nature of RTTLua (just ask
Peter), I created a C++ MessageLogger component that has nine
operations (logDebug(), logInfo(), etc.). All it does is take
strings and generate OCL::Logging log entries. This allows us to create
RTTLua-sourced log messages outside of the RTT::Logger system without
changing RTT. So our interactive Lua is logged to the same places and
configured with the same CPFs as our components and our library code.
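
Given the design described above, an interactive session could then log like this. The operation names follow the post; the peer name and call style are assumptions about how the component would be deployed and invoked from rttlua.

```lua
-- Hypothetical interactive rttlua use of the MessageLogger component
-- described above: each operation takes a plain string and emits an
-- OCL::Logging entry through the component's own category.
local ml = rtt.getTC():getPeer("MessageLogger")
ml:logInfo("operator started calibration run")
ml:logDebug("loaded joint limits from CPF")
```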

Also, of note, our library code that uses log4cpp is (almost) seamlessly
integrated with OCL::Logging. You can easily see that the library
code's categories are added as OCL::Categories, and you can configure
the library's verbosity using the same CPF files that adjust the
components' verbosity. The one weird thing that keeps it from being
completely seamless is that we still have to give the library a
log4cpp.properties file that, at a minimum, defines a category for the
root of our log namespace ("gov", or "gov.nasa") and the associated
appenders. Without those, the library is silent.
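
A minimal log4cpp.properties of the kind described, defining just the namespace root and an attached appender, might look like this. Sketch only: the category name follows the post, the appender and layout choices are assumptions, and the keys follow log4cpp's PropertyConfigurator conventions.

```
# Without at least a category at the root of the namespace and an
# attached appender, the library stays silent.
log4cpp.category.gov.nasa=INFO, libAppender
log4cpp.appender.libAppender=ConsoleAppender
log4cpp.appender.libAppender.layout=BasicLayout
```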

And that brings back the question of providing a log4cpp.properties file
to RTTLua, akin to the way the standard deployers do. And *that* brings
up the question of configuring the LoggingService component directly
from a log4cpp.properties file instead of the assortment of CPF files.

Anyway, new patch coming in a few.

-dustin

logging

On 07/18/2012 08:42 AM, Dustin Gooding wrote:
> On 07/18/2012 05:14 AM, S Roderick wrote:
>> On Jul 17, 2012, at 15:25 , Dustin Gooding wrote:
>>
>>> On 07/17/2012 11:00 AM, Dustin Gooding wrote:
>>>> On 07/17/2012 05:11 AM, S Roderick wrote:
>>>>> On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:
>>>>>
>>>>>> On 07/13/2012 05:13 PM, Dustin Gooding wrote:
>>>>>>> On 07/13/2012 05:00 PM, Stephen Roderick wrote:
>> <sni

>>
>>> Son of a....
>>>
>>> The problem I was having was an improper deployment and invalid properties.
>> Ugh, sorry to hear that.
>>
>>> My two problems:
>>> 1) The appender and LoggingService must be peers, but in the opposite direction. The LoggingService component looks for the AppenderA peer during configureHook, so the line should be depl:addPeer("LoggingService", "AppenderA"). When "AppenderA" comes first, LoggingService doesn't see AppenderA and fails its configureHook. That's what I get for copy-pasting from a web site.
>>> 2) The logging_properties.cpf file was malformed, creating property bags the LoggingService doesn't check. Refactoring to make the "Levels" and "Appenders" property bags first class, and removing the "Peers" and "Properties" bags, made the LoggingService component correctly configured. That's what I get for copy-pasting from an XML deployment file.
>> So where are the problem items from? Is it the logging example page on the Orocos wiki?
> Yes. For the first problem, the RTTLua deployment on the "Using
> real-time logging" wiki page has the wrong-order addPeer() calls. For
> the second, even though the the provided RTTLua deployment scripts have
> the option to generate (blank) CPF files, there's no explicit working
> example provided. So, when I copy-pasted from the provided XML
> deployment to get appropriate values, I screwed up and c-p'd too much.
> I claim full responsibility for the second problem, but the first wasn't
> apparent without digging into the LoggingService code.
>>> So there you go. OCL::Logging from an RTTLua deployment is working. What's next... Do you want me to provide an updated patch akin to Peter's, along with the example RTTLua deployment and CPFs that work (to add to ocl/logging/tests)?
>> I'd say YES! We don't want anyone else having to go through such joy's of "discovery" as you've just had to.
> I'll start working on that now.
>>> I think the only remaining issue I have (for now) is related to RTTLua logging with rtt.log(). I've found that can't really be configured using log4cpp-style properties. Has anyone done any work trying to get RTTLua to use OCL::Logging or log4cpp directly?
>> It has been an open question for some time - how to improve RTT's native logging. We actually have patched RTT to use OCL::Logging/log4cpp directly. It is a little tricky to deal with due to the startup semantics (when does logging start, when does it get configured, what services are available at that time, etc), but we've been using it sucessfully for a while now. It needs a more consistent look at it though.
>> S
>>
>>
>>
> For fun, and because we abuse the interactive nature of RTTLua (just ask
> Peter), I created a C++ MessageLogger component that has nine
> operations (logDebug(), logInfo(), etc.). All it does is take
> strings and generate OCL::Logging log entries. This allows us to create
> RTTLua-sourced log messages outside of the RTT::Logger system without
> changing RTT. So our interactive Lua is logged to the same places and
> configured with the same CPFs as our components and our library code.
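The actual MessageLogger component isn't shown in the thread; a minimal sketch of the idea in plain C++ might look like this, with a callback sink standing in for the OCL::Logging category (all names here are illustrative, not the real component's API):

```cpp
#include <functional>
#include <string>
#include <utility>

// Sketch of a MessageLogger facade: one operation per severity, each just
// forwarding a string to an underlying logging sink. The real component
// forwards to OCL::Logging; here the sink is a plain callback so the
// pattern is visible without any Orocos headers.
class MessageLogger {
public:
    using Sink = std::function<void(const std::string& level,
                                    const std::string& msg)>;
    explicit MessageLogger(Sink sink) : sink_(std::move(sink)) {}

    // Nine severity operations, matching log4cpp's priority names.
    void logEmerg (const std::string& m) { sink_("EMERG",  m); }
    void logFatal (const std::string& m) { sink_("FATAL",  m); }
    void logAlert (const std::string& m) { sink_("ALERT",  m); }
    void logCrit  (const std::string& m) { sink_("CRIT",   m); }
    void logError (const std::string& m) { sink_("ERROR",  m); }
    void logWarn  (const std::string& m) { sink_("WARN",   m); }
    void logNotice(const std::string& m) { sink_("NOTICE", m); }
    void logInfo  (const std::string& m) { sink_("INFO",   m); }
    void logDebug (const std::string& m) { sink_("DEBUG",  m); }

private:
    Sink sink_;
};
```

In an RTT deployment each of these methods would be exposed as an operation, so interactive Lua can call e.g. `logInfo("...")` on the peer.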
>
> Also, of note, our library code that uses log4cpp is (almost) seamlessly
> integrated with OCL::Logging. You can easily see that the library
> code's categories are added as OCL::Categories, and you can configure
> the library's verbosity using the same CPF files that adjust the
> component's verbosity. The one weird thing that keeps it from being
> completely seamless is that we still have to give the library a
> log4cpp.properties file that, at a minimum, defines a category for the
> root of our log namespace ("gov", or "gov.nasa") and the associated
> appenders. Without those, the library is silent.
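Such a minimal log4cpp.properties file might look roughly like this (a sketch in log4cpp's standard property syntax; the "gov.nasa" category comes from the message above, while the appender name and pattern are illustrative):

```
# Define the root of the library's category namespace and one appender;
# without at least this much, the library stays silent.
log4cpp.category.gov.nasa=INFO, A1
log4cpp.appender.A1=ConsoleAppender
log4cpp.appender.A1.layout=PatternLayout
log4cpp.appender.A1.layout.ConversionPattern=%d [%p] %c - %m%n
```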
>
> And that brings back the question of providing a log4cpp.properties file
> to RTTLua, akin to the way the standard deployers do. And *that* brings
> up the question of configuring the LoggingService component directly
> from a log4cpp.properties file instead of the assortment of CPF files.
>
> Anyway, new patch coming in a few.
>
> -dustin
I hope this is good enough. As far as I can tell, it works.

Thanks for all the help.

What will the turnaround be on getting this patch included in the
official repo? I'm trying to decide whether I should go apply this
patch manually around here, or just wait.

-dustin

logging

On 07/17/2012 11:00 AM, Dustin Gooding wrote:
>
> On 07/17/2012 05:11 AM, S Roderick wrote:
>> On Jul 16, 2012, at 11:51 , Dustin Gooding wrote:
>>
>>> On 07/13/2012 05:13 PM, Dustin Gooding wrote:
>>>> On 07/13/2012 05:00 PM, Stephen Roderick wrote:
>>>>> <snip>

>>>> Yeah, those lines are there in LuaComponent.cpp, before any mention of
>>>> init_memory_pool.
>>>>
>>>> However, moving those lines up closer to the top of LuaComponent.cpp
>>>> (just after the first call to #ifndef OCL_COMPONENT_ONLY) seems to make
>>>> things a bit happier. rosmake completes without issue.
>>>>
>>>> I'll let you know how things go from here.
>>>>
>>>> -dustin
>>>> --
>>>> Orocos-Users mailing list
>>>> Orocos-Users [..] ...
>>>> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-users
>>>>
>>> Well, some good news, some bad.
>>>
>>> RTTLua TLSF seems to be good. But I'm still not getting the OCL log
>>> output from TestComponent. I'm sure it's a configuration issue, I
> just don't know what's wrong. AFAICT, the config I'm using is
>>> identical (or at least as much as it can be) to the "good.xml"
>>> deployment that worked earlier.
>>>
>>> Here's how I'm deploying with RTTLua, and the output.
>>> /**
>>> dgooding@bacon:~/rtlogging$ rttlua -i setup_logging.lua
>>> Real-time memory: 517904 bytes free of 524288 allocated.
>>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>>> > 2.062 [ ERROR
>>> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT ERROR TestComponent 0
>>> 2.063 [
>>> Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT WARNING TestComponent 0
>>> 4.062 [ ERROR
>>> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT ERROR TestComponent 1
>>> 4.062 [
>>> Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT WARNING TestComponent 1
>>> 6.062 [ ERROR
>>> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT ERROR TestComponent 2
>>> 6.062 [
>>> Warning][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
>>> RTT WARNING TestComponent 2
>>> test:stop()
>>> >
>>> TLSF bytes allocated=524288 overhead=6384 max-used=7632
>>> currently-used=6384 still-allocated=0
>>> **/
>>>
>>> What's weird (I think), is that I'm not getting any errors or
>>> warnings about a bad config. It's just not logging with OCL.
>>>
>>> setup_logging.lua - http://pastebin.com/zTV3KtkP
>>> logging_properties.cpf - http://pastebin.com/TJKEH4hn
>>> appender_properties.cpf - http://pastebin.com/1kut2Rs1
>>> log4cpp.conf - http://pastebin.com/9gUb3Xvw
>>> TestComponent.cpp - http://pastebin.com/APWkyXGE
>>>
>>> -dustin
>>
>> What happens if you create the test component last, after the logging
>> service? It all _looks_ correct to me. :-(
>>
>> What is the output of logCategories() in the LoggingService?
>> S
>
> Moving TestComponent last has no visible effect.
>
> The output of logCategories() was set to INFO, which was below my log
> level last time. I lowered the level and this is the output:
> /**
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Number categories = 7
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category '', level=INFO, typeid='PN7log4cpp8CategoryE', type really is
> 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org', level=NOTSET, typeid='PN7log4cpp8CategoryE', type
> really is 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org.orocos', level=NOTSET, typeid='PN7log4cpp8CategoryE',
> type really is 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org.orocos.ocl', level=NOTSET,
> typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org.orocos.ocl.logging', level=NOTSET,
> typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org.orocos.ocl.logging.tests', level=NOTSET,
> typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
> 0.906 [ Info
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()]
> Category 'org.orocos.ocl.logging.tests.TestComponent', level=NOTSET,
> typeid='PN7log4cpp8CategoryE', type really is 'OCL::Category'
> **/
>
> I'm guessing it's OK for TestComponent's level to be NOTSET, as it
> will inherit from the base level of INFO? I'm not even sure why
> that's set that way, as logging_properties.cpf is pretty explicit
> about org.orocos.ocl.logging.tests.TestComponent having levels info
> and error.
>
> (For what it's worth, I added a second appender (AppenderB,
> OCL::logging::OstreamAppender) to see if maybe it was the appenders
> problem, not the logging service, but that didn't help either.)
>
> Another question: The standard deployer has a mechanism for loading a
> log4cpp.properties file (for setting RTT logging and direct log4cpp
> logging levels and appenders). How do I do the same in RTTLua, so
> that I can provide a set of properties to my underlying library?
> Right now, I've hardcoded my library to look for a prop file in the
> current directory, but that's unsustainable.
>
> -dustin

And a follow up question, sorry: I noticed that RTT::Logger (which
RTTLua uses for logging in rtt.log()) has a different set of log levels
than does OCL::Logging and log4cpp. Namely, it's missing Notice and
Alert. If we wanted to have a single, consistent log properties file
that sets levels and appenders, it would be nice if all the various
supported logging mechanisms used the same levels. Would adding those
levels to RTT (without modifying existing levels) be an appropriate
patch to submit? Or would the better approach be to get RTTLua to use
OCL::Logging?

-dustin
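The mismatch Dustin describes can be made concrete with a mapping table. This is a hedged sketch in plain C++ (level names written as strings so the snippet needs no RTT or log4cpp headers; the chosen collapses are one reasonable option, not an established convention):

```cpp
#include <string>

// log4cpp knows EMERG, FATAL, ALERT, CRIT, ERROR, WARN, NOTICE, INFO,
// DEBUG; RTT::Logger has no slot for NOTICE or ALERT, so any mapping has
// to collapse them into a neighbouring level.
std::string rttLevelFor(const std::string& log4cppPriority) {
    if (log4cppPriority == "EMERG" || log4cppPriority == "FATAL")
        return "Fatal";
    if (log4cppPriority == "ALERT" || log4cppPriority == "CRIT")
        return "Critical";   // ALERT has no RTT equivalent
    if (log4cppPriority == "ERROR")
        return "Error";
    if (log4cppPriority == "WARN")
        return "Warning";
    if (log4cppPriority == "NOTICE" || log4cppPriority == "INFO")
        return "Info";       // NOTICE has no RTT equivalent
    return "Debug";
}
```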

logging

On Jul 6, 2012, at 16:23 , Dustin Gooding wrote:

> On 07/06/2012 02:32 PM, Stephen Roderick wrote:
>>>> Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.
>>>>
>>>> HTH
>>>> S
>>>>
>>> I'm attempting to get OCL::Logging up and running using the examples described on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo... I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example in 3.4. Unfortunately, I am getting an error:
>>>
>>> /**
>>> dgooding@bacon:~$ rttlua -i setup_logging.lua
>>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>>> 0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
>>> 0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
>>> **/
>> This looks suspiciously like the log4cpp factory hasn't been changed. Or that a category has been setup _before_ you change the factory.
>>
>>> I'm not sure what's wrong. I checked the TestComponent for how it's creating the Category and it's as follows:
>>>
>>> TestComponent.hpp
>>> /**
>>> class Component : public RTT::TaskContext
>>> {
>>> public:
>>> Component(std::string name);
>>> virtual ~Component();
>>>
>>> protected:
>>> virtual bool startHook();
>>> virtual void updateHook();
>>>
>>> /// Name of our category
>>> std::string categoryName;
>>> /// Our logging category
>>> OCL::logging::Category* logger; <---- this looks right to me
>> Good
>>
>>> };
>>>
>>> **/
>>>
>>> TestComponent.cpp
>>> /**
>>> Component::Component(std::string name) :
>>> RTT::TaskContext(name),
>>> categoryName(parentCategory + std::string(".") + name),
>>> logger(dynamic_cast<OCL::logging::Category*>(
>>> &log4cpp::Category::getInstance(categoryName))) <---- so does this
>> Good.
>>
>>> {
>>> }
>>>
>>> bool Component::startHook()
>>> {
>>> bool ok = (0 != logger);
>>> if (!ok)
>>> {
>>> log(Error) << "Unable to find existing OCL category '"
>>> << categoryName << "'" << endlog();
>>> }
>>>
>>> return ok;
>>> }
>>>
>>> **/
>>>
>>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
>> The code above looks like what we use, though ours is based on OCL v1. Fundamentally I think this part of the codebase is virtually unchanged between v1 and v2.
>>
>>> Ideas?
>> Dig into the initialization sequence, and make sure that the category you are trying to use is created by the OCL logging service, and that TLSF and the OCL::Logging are set up as done in the deployer (I think that the v2 sequence is the same as the v1 version we use). This approach does work, but getting the sequence right is the first obstacle.
>>
>> HTH
>> S
>
> I found a thread that addresses this issue. My Google-Foo was poor earlier, sorry. http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221 mentions that a call of _log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_ is required before doing anything else. In fact, I see mention of this call in the ocl/logging/test/testlogging.cpp main()..... but how do I make this call from RTTLua? As that thread suggests, should that be done from RTTLua, or by modifying the Deployer or LoggingService?
>
> -dustin
>>
>> The deployer already has this (or should have). It has to occur _before_ even the LoggingService is instantiated. And you have to initialize TLSF (the real-time memory pool) _before_ setting the factory (again, the deployer has this).
>> S
>
> Hmmm...
>
> When I run deployer by itself, I see that TLSF is initialized.
>
> /**
> dgooding@bacon:~$ deployer
> Real-time memory: 517904 bytes free of 524288 allocated.
> Switched to : Deployer
> ...
> Deployer [S]> quit
> TLSF bytes allocated=524288 overhead=6384 max-used=6384 currently-used=6384 still-allocated=0
> **/
>
> RTTLua doesn't have this output, but I assume it's being squashed by RTTLua and TLSF is still being initialized.
>
> And because of the build flags that were set for OCL (BUILD_RTALLOC and BUILD_LOGGING), both TLSF and the category factory should be configured correctly in Deployer.cpp (lines 124 and 138).
>
> Using one of the provided xml deployment scripts in ocl/logging/tests, I get positive results, though.
>
> /**
> dgooding@bacon:/opt/orocos/orocos_toolchain/ocl/logging/tests/data$ deployer -s good.xml
> Real-time memory: 517904 bytes free of 524288 allocated.
> Switched to : Deployer
> ...
> Deployer [S]> 0.573 [ ERROR ][Logger] RTT ERROR TestComponent 0
> 0.573 [ Warning][Logger] RTT WARNING TestComponent 0
> 0.573 [ ERROR ][Logger] RTT ERROR TestComponent2 1
> 2012-07-06 15:05:56,659 [140381892986624] ERROR org.orocos.ocl.logging.tests.TestComponent2 - ERROR TestComponent2 1
> 0.574 [ Warning][Logger] RTT WARNING TestComponent2 1
> 1.073 [ ERROR ][Logger] RTT ERROR TestComponent 2
> 1.073 [ Warning][Logger] RTT WARNING TestComponent 2
> 1.073 [ ERROR ][Logger] RTT ERROR TestComponent2 3
> 1.073 [ Warning][Logger] RTT WARNING TestComponent2 3
> 2012-07-06 15:05:57,159 [140381892986624] ERROR org.orocos.ocl.logging.tests.TestComponent2 - ERROR TestComponent2 3
> quit
> TLSF bytes allocated=524288 overhead=6384 max-used=7856 currently-used=6384 still-allocated=0
> **/

That's good to see!

> So, it still stands... how do I use OCL::Logging when using RTTLua as the deployment mechanism? What am I doing wrong?

And now I hand you over to the wonderful Markus who will solve all your problems ... ;-)

Unfortunately I've no idea with RTTLua, but for it to work it has to have the same sequence as the deployer. Hopefully Markus or someone else has already taken care of this.
S

Realtime logging

2012/1/24 <t [dot] t [dot] g [dot] clephas [..] ...>:
> Hello,
>
> At the TU/e we've constructed several controllers each containing several
> components.
> Every component acts on a triggerport.
>
> So the sequence goes as follows:
>
> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>
> The first component has an update frequency of 1khz.
>
>
> Now in order to identify our hardware we want to log the data send over these
> channels without missing samples.
> We hope to get a text file were every line represents the date send at that
> milisecond over each channel:
>
>
> TimeStamp ReadEncoders CalculateErrors Gain  WriteOutput
> 0.0000    0.000        0.000           0.000 0.000
> 0.0010    0.300        0.100           0.300 3.000
> 0.0020    0.400        0.104           0.390 3.330
> etc.
>
>
> However the current reporter has hardcoded buffers as I understood, so what
> is the best way of doing this?
> It is also important that every line represents sequential data.

At 1kHz the main limitation will probably be the IO, with which the
Reporter buffer can't help you. A quick solution could be to write
only in e.g. sets of 100 samples, by storing the data inside the
program and logging at a lower frequency.

Steven

>
> Thanks in advance!
>
> Tim
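Steven's batching suggestion can be sketched as follows (a hedged illustration in plain C++, not Orocos code; a hard real-time version would preallocate its line buffers and hand the flush to a lower-priority thread instead of formatting with ostringstream inline):

```cpp
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Sample at 1 kHz into an in-memory buffer and flush to the file in
// batches of e.g. 100 lines, so disk I/O happens at 10 Hz instead of
// once per control cycle.
class BatchedLogger {
public:
    BatchedLogger(std::ostream& out, std::size_t batchSize)
        : out_(out), batchSize_(batchSize) {
        buffer_.reserve(batchSize);
    }

    // Called every cycle: formats one line, writes nothing to the
    // stream until a full batch has accumulated.
    void log(double timestamp, const std::vector<double>& values) {
        std::ostringstream line;
        line << timestamp;
        for (double v : values) line << ' ' << v;
        buffer_.push_back(line.str());
        if (buffer_.size() >= batchSize_) flush();
    }

    // Writes the whole batch in one go and empties the buffer.
    void flush() {
        for (const std::string& l : buffer_) out_ << l << '\n';
        buffer_.clear();
    }

private:
    std::ostream& out_;
    std::size_t batchSize_;
    std::vector<std::string> buffer_;
};
```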

Realtime logging

On Jan 25, 2012, at 03:11 , Steven Bellens wrote:

> 2012/1/24 <t [dot] t [dot] g [dot] clephas [..] ...>:
>> Hello,
>>
>> At the TU/e we've constructed several controllers each containing several
>> components.
>> Every component acts on a triggerport.
>>
>> So the sequence goes as follows:
>>
>> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>>
>> The first component has an update frequency of 1khz.
>>
>>
>> Now in order to identify our hardware we want to log the data send over these
>> channels without missing samples.
>> We hope to get a text file were every line represents the date send at that
>> milisecond over each channel:
>>
>>
>> TimeStamp ReadEncoders CalculateErrors Gain WriteOutput
>> 0.0000 0.000 0.000 0.000 0.000
>> 0.0010 0.300 0.100 0.300 3.000
>> 0.0020 0.400 0.104 0.390 3.330
>> etc.
>>
>>
>> However the current reporter has hardcoded buffers as I understood, so what
>> is the best way of doing this?
>> It is also important that every line represents sequential data.
>
>
> At 1kHz the main limitation will probably be the IO, with which the
> Reporter buffer can't help you. A quick solution could be to write
> only in e.g. sets of 100 samples, by storing the data inside the
> program and logging at a lower frequency.

Disagree (on a desktop-class computer, at least). We log thousands of bytes per cycle, at 500 Hz, and have no trouble whatsoever with the I/O. The O/S filesystem drivers are efficient at dealing with this. On an embedded system with a slow disk, then yes, this might become an issue. But given the output above, they're talking maybe 50 bytes per cycle, or 50 kB/s, which is much less than 1% of the bandwidth of a modern HDD.
S

Realtime logging

On Tue, Jan 24, 2012 at 8:00 PM, <t [dot] t [dot] g [dot] clephas [..] ...> wrote:

> Hello,
>
> At the TU/e we've constructed several controllers each containing several
> components.
> Every component acts on a triggerport.
>
> So the sequence goes as follows:
>
> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>
> The first component has an update frequency of 1khz.
>
>
> Now in order to identify our hardware we want to log the data send over
> these
> channels without missing samples.
> We hope to get a text file were every line represents the date send at that
> milisecond over each channel:
>
>
> TimeStamp ReadEncoders CalculateErrors Gain WriteOutput
> 0.0000 0.000 0.000 0.000 0.000
> 0.0010 0.300 0.100 0.300 3.000
> 0.0020 0.400 0.104 0.390 3.330
> etc.
>
>
> However the current reporter has hardcoded buffers as I understood, so what
> is the best way of doing this?
> It is also important that every line represents sequential data.
>

If you make the reporter non-periodic, it will try to log each sample as
you expect and use the buffers for, well, buffering when it doesn't get
enough time.
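In an XML deployment, making the reporter non-periodic would look roughly like this (a hedged sketch; element names follow the OCL DeploymentComponent conventions, a Period of 0 means a non-periodic activity, and the component name is illustrative):

```xml
<struct name="Reporter" type="OCL::FileReporting">
  <!-- Period 0: the reporter is triggered by arriving data rather
       than running on a fixed clock. -->
  <struct name="Activity" type="Activity">
    <simple name="Period" type="double"><value>0</value></simple>
    <simple name="Priority" type="short"><value>0</value></simple>
    <simple name="Scheduler" type="string"><value>ORO_SCHED_OTHER</value></simple>
  </struct>
  <simple name="AutoConf" type="boolean"><value>1</value></simple>
  <simple name="AutoStart" type="boolean"><value>0</value></simple>
</struct>
```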

Logging to a text format is very inefficient, and you can only log a certain
number of columns before the I/O or your thread can't keep up.

The reporter was made to support different marshalling formats such that
you could optimize this (for example, only write to a file at the end or
write to a binary format). The NetCDF reporter uses a far more efficient
format; I haven't tested it in RTT 2.x, but it does build fine (tm) :)

Peter

logging

Hello all,

We're at a point in our development where we've got a bit of working
code and are taking a step back to make sure our insight into that code
is good. Basically getting benchmarks, logging, static/dynamic
analysis, etc. all squared away. For logging, specifically, we've got
some questions.

I understand that there are two different logging mechanisms in Orocos:
RTT::Logger and OCL::Logging.

We've been using RTT::Logger in our components, as that's what they use
by default. A move to OCL::Logging seems appropriate, as it supports
real-time execution. This shouldn't be too complicated, we think.

But for our own code, like libraries that our components will use, we've
been using direct log4cpp calls. (At first, we were linking against a
system-install of log4cpp, but after successfully linking against
Orocos' brand of log4cpp within the orocos_toolchain stack, we stuck
with that.)

Now there's discussion of moving to something a little fresher/faster,
specifically Pantheios. (The major reason for this was that we found
memory problems with log4cpp in valgrind, and few if any with
Pantheios.) However, I'm worried about having different logging
infrastructures in our libraries and components, and Pantheios doesn't
support log4j-style configuration files. I'm also concerned that the
valgrind results may be because of a mis-configuration on our part, not
with log4cpp itself.

So my questions are:
1) Does the Orocos-branded log4cpp have valgrind memory problems, in
your experience?
2) Does it matter if a component and its library have different logging
mechanisms, other than it requires the developers to know two different
log syntaxes and potentially have two different destinations for log
messages?
3) In section 3.5 on
http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo...,
it mentions having components using OCL::Logging and GUI code using
log4cpp, with the same config file. While this seems appropriate for a
GUI, what about a real-time component using a library? If that library
is logging directly with log4cpp, real-time performance is no longer
guaranteed... How do we get around that?

logging

On Jul 5, 2012, at 15:59 , Dustin Gooding wrote:

> Hello all,
>
> We're at a point in our development where we've got a bit of working code and are taking a step back to make sure our insight into that code is good. Basically getting benchmarks, logging, static/dynamic analysis, etc. all squared away. For logging, specifically, we've got some questions.
>
> I understand that there's two different logging mechanisms in Orocos: RTT::Logger and OCL::Logging.
>
> We've been using RTT::Logger in our components, as that's what they use by default. A move to OCL::Logging seems appropriate, as it supports real-time execution. This shouldn't be too complicated, we think.
>
> But for our own code, like libraries that our components will use, we've been using direct log4cpp calls. (At first, we were linking against a system-install of log4cpp, but after successfully linking against Orocos' brand of log4cpp within the orocos_toolchain stack, we stuck with that.)
>
> Now there's discussion of moving to something a little fresher/faster, specifically Pantheios. (The major reason for this was that we found memory problems with log4cpp in valgrind, and few if any with Pantheios.) However, I'm worried about having different logging infrastructures in our libraries and components, and Pantheios doesn't support log4j-style configuration files. I'm also concerned that the valgrind results may be because of a mis-configuration on our part, not with log4cpp itself.

Haven't seen this project before. It looks interesting and definitely worth a look.

> So my questions are:
> 1) Does the Orocos-branded log4cpp have valgrind memory problems, in your experience?

Not in our experience, no.

> 2) Does it matter if a component and its library have different logging mechanisms, other than it requires the developers to know two different log syntaxes and potentially have two different destinations for log messages?

No, except for your real-time question below ...

> 3) In section 3.5 on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo..., it mentions having components using OCL::Logging and GUI code using log4cpp, with the same config file. While this seems appropriate for a GUI, what about a real-time component using a library? If that library is logging directly with log4cpp, real-time performance is no longer guaranteed... How do we get around that?

You _must_ log through an OCL::Category logger to achieve real-time performance. This is layered on top of log4cpp, but modifies the internal path to appropriately remain in real-time between the category and appender objects. You can use OCL::Logging in the GUI too, if you want. It's just that you don't typically have the OCL::Logging back end services running there, and the additional complexity to remain real-time isn't warranted. The code looks exactly the same in the components as in the GUI, except in the component it's with an OCL::Category object and in the GUI it's with a log4cpp::Category object.

Yes, if the library that the component uses is using log4cpp, it will not log in real-time. The library _must_ use an OCL::Category logger. Can you modify the library?

Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.

HTH
S
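Why the factory swap must happen before any category exists can be shown with a stripped-down model of log4cpp's hierarchy (all names here are illustrative, not the real log4cpp/OCL API): a category's concrete type is fixed at first creation, so a category created before the swap stays the base type and a later dynamic_cast to the derived type returns null, which is exactly the "is not an OCL category" symptom.

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-ins for log4cpp::Category and OCL::logging::Category.
struct Category {
    virtual ~Category() = default;
};
struct RtCategory : Category {};

// Stand-in for log4cpp's HierarchyMaintainer with a settable factory.
class Hierarchy {
public:
    using Factory = Category* (*)();
    static void setFactory(Factory f) { factory_ = f; }

    // First lookup of a name creates the object with whatever factory
    // is installed at that moment; later lookups return the same object.
    static Category& getInstance(const std::string& name) {
        auto& slot = instances_[name];
        if (!slot) slot.reset(factory_ ? factory_() : new Category);
        return *slot;
    }

private:
    static inline Factory factory_ = nullptr;
    static inline std::map<std::string, std::unique_ptr<Category>> instances_;
};

Category* makeRt() { return new RtCategory; }
```

A category fetched before `setFactory(makeRt)` fails the `dynamic_cast<RtCategory*>` check; one fetched afterwards passes.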

logging

On 07/06/2012 05:56 AM, S Roderick wrote:
> On Jul 5, 2012, at 15:59 , Dustin Gooding wrote:
>
>> Hello all,
>>
>> We're at a point in our development where we've got a bit of working
>> code and are taking a step back to make sure our insight into that
>> code is good. Basically getting benchmarks, logging, static/dynamic
>> analysis, etc. all squared away. For logging, specifically, we've
>> got some questions.
>>
>> I understand that there's two different logging mechanisms in Orocos:
>> RTT::Logger and OCL::Logging.
>>
>> We've been using RTT::Logger in our components, as that's what they
>> use by default. A move to OCL::Logging seems appropriate, as it
>> supports real-time execution. This shouldn't be too complicated, we
>> think.
>>
>> But for our own code, like libraries that our components will use,
>> we've been using direct log4cpp calls. (At first, we were linking
>> against a system-install of log4cpp, but after successfully linking
>> against Orocos' brand of log4cpp within the orocos_toolchain stack,
>> we stuck with that.)
>>
>> Now there's discussion of moving to something a little
>> fresher/faster, specifically Pantheios. (The major reason for this
>> was that we found memory problems with log4cpp in valgrind, and few
>> if any with Pantheios.) However, I'm worried about having different
>> logging infrastructures in our libraries and components, and
>> Pantheios doesn't support log4j-style configuration files. I'm also
>> concerned that the valgrind results may be because of a
>> mis-configuration on our part, not with log4cpp itself.
>
> Haven't seen this project before. It looks interesting and definitely
> worth a look.

Yes, particularly interesting are the benchmark results:
http://www.pantheios.org/performance.html The question is, though, can
it be made to run in real-time... And if so, is it worth the trouble of
swapping out log4cpp for it?

>
>> So my questions are:
>> 1) Does the Orocos-branded log4cpp have valgrind memory problems, in
>> your experience?
>
> Not in our experience, no.

The issue we're having is with the PropertyConfigurator::configure() call:

==17486== 65 bytes in 2 blocks are possibly lost in loss record 8 of 8
==17486== at 0x4C2B1C7: operator new(unsigned long) (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==17486== by 0x514CA88: std::string::_Rep::_S_create(unsigned long,
unsigned long, std::allocator<char> const&) (in
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==17486== by 0x514E2B4: char* std::string::_S_construct<char*>(char*,
char*, std::allocator<char> const&, std::forward_iterator_tag) (in
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==17486== by 0x514E414: std::basic_string<char,
std::char_traits const&, unsigned long, unsigned long) (in
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==17486== by 0x514E441: std::string::substr(unsigned long, unsigned
long) const (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
==17486== by 0x5E3393B: unsigned int
log4cpp::StringUtil::split<std::back_insert_iterator std::allocator >(std::back_insert_iterator<std::list std::allocator int) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
==17486== by 0x5E30D5D:
log4cpp::PropertyConfiguratorImpl::instantiateAllAppenders() (in
/opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
==17486== by 0x5E30AD1:
log4cpp::PropertyConfiguratorImpl::doConfigure(std::istream&) (in
/opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
==17486== by 0x5E309C0:
log4cpp::PropertyConfiguratorImpl::doConfigure(std::string const&) (in
/opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
==17486== by 0x5E30693:
log4cpp::PropertyConfigurator::configure(std::string const&) (in
/opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
==17486== by 0x5BA795D: RCS::Logger::Logger() (Logger.cpp:21)
==17486== by 0x5BA75C5: _GLOBAL__sub_I_Logger.cpp (Logger.cpp:53)

>
>> 2) Does it matter if a component and its library have different
>> logging mechanisms, other than it requires the developers to know two
>> different log syntaxes and potentially have two different
>> destinations for log messages?
>
> No, except for your real-time question below ...
>
>> 3) In section 3.5 on
>> http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo...,
>> it mentions having components using OCL::Logging and GUI code using
>> log4cpp, with the same config file. While this seems appropriate for
>> a GUI, what about a real-time component using a library? If that
>> library is logging directly with log4cpp, real-time performance is no
>> longer guaranteed... How do we get around that?
>
> You _must_ log through an OCL::Category logger to achieve real-time
> performance. This is layered on top of log4cpp, but modifies the
> internal path to appropriately remain in real-time between the
> category and appender objects. You can use OCL::Logging in the GUI
> too, if you want. It's just that you don't typically have the
> OCL::Logging back end services running there, and the additional
> complexity to remain real-time isn't warranted. The code looks exactly
> the same in the components as in the GUI, except in the component it's
> with an OCL::Category object and in the GUI it's with a
> log4cpp::Category object.
>
> Yes, if the library that the component uses is using log4cpp, it will
> not log in real-time. The library _must_ use an OCL::Category logger.
> Can you modify the library?

We'd obviously prefer not to modify the library. However, this library
is at the core of our I/O system, so it's pretty critical to get right.

>
> Fundamentally you need to set the log4cpp category factory before
> log4cpp is used, and from then on it will automatically create
> OCL::Category objects. So if you can do that first, and ensure you
> have setup the OCL logging service, _before_ the library is used, I
> think you might end up with what you want. The whole point of the
> mod's we made to log4cpp were to ensure it only created OCL::Category
> logger objects instead of the standard log4cpp::Category objects. But
> it's been a couple of years since we did those mod's ... if you use a
> deployer (or copy the setup code for rtalloc/log4cpp to your app) you
> might just get away with it. Get your deployment/app running, and then
> trigger the logCategories() method in the OCL::LoggingService
> component. Examine the output and see whether you have any
> log4cpp::Category objects in your category hierarchy.
>
> HTH
> S
>

I'm attempting to get OCL::Logging up and running using the examples
described on
http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo...
I've recompiled OCL with BUILD_TESTS and am using the Lua deployment
example in 3.4. Unfortunately, I am getting an error:

/**
dgooding@bacon:~$ rttlua -i setup_logging.lua
OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
**/

I'm not sure what's wrong. I checked the TestComponent for how it's
creating the Category and it's as follows:

TestComponent.hpp
/**
class Component : public RTT::TaskContext
{
public:
    Component(std::string name);
    virtual ~Component();

protected:
    virtual bool startHook();
    virtual void updateHook();

    /// Name of our category
    std::string categoryName;
    /// Our logging category
    OCL::logging::Category* logger;    <---- this looks right to me
};
**/

TestComponent.cpp
/**
Component::Component(std::string name) :
    RTT::TaskContext(name),
    categoryName(parentCategory + std::string(".") + name),
    logger(dynamic_cast<OCL::logging::Category*>(
        &log4cpp::Category::getInstance(categoryName)))    <---- so does this
{
}

bool Component::startHook()
{
    bool ok = (0 != logger);
    if (!ok)
    {
        log(Error) << "Unable to find existing OCL category '"
                   << categoryName << "'" << endlog();
    }

    return ok;
}
**/

OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c

Ideas?

-dustin

logging

On Jul 6, 2012, at 15:02 , Dustin Gooding wrote:

> On 07/06/2012 05:56 AM, S Roderick wrote:
>> On Jul 5, 2012, at 15:59 , Dustin Gooding wrote:
>>
>>> Hello all,
>>>
>>> We're at a point in our development where we've got a bit of working code and are taking a step back to make sure our insight into that code is good. Basically getting benchmarks, logging, static/dynamic analysis, etc. all squared away. For logging, specifically, we've got some questions.
>>>
>>> I understand that there's two different logging mechanisms in Orocos: RTT::Logger and OCL::Logging.
>>>
>>> We've been using RTT::Logger in our components, as that's what they use by default. A move to OCL::Logging seems appropriate, as it supports real-time execution. This shouldn't be too complicated, we think.
>>>
>>> But for our own code, like libraries that our components will use, we've been using direct log4cpp calls. (At first, we were linking against a system-install of log4cpp, but after successfully linking against Orocos' brand of log4cpp within the orocos_toolchain stack, we stuck with that.)
>>>
>>> Now there's discussion of moving to something a little fresher/faster, specifically Pantheios. (The major reason for this was that we found memory problems with log4cpp in valgrind, and few if any with Pantheios.) However, I'm worried about having different logging infrastructures in our libraries and components, and Pantheios doesn't support log4j-style configuration files. I'm also concerned that the valgrind results may be because of a mis-configuration on our part, not with log4cpp itself.
>>
>> Haven't seen this project before. It looks interesting and definitely worth a look.
>
> Yes, particularly interesting are the benchmark results: http://www.pantheios.org/performance.html The question is, though, can it be made to run in real-time... And if so, is it worth the trouble of swapping it out for log4cpp?

Those benchmarks look suspiciously good though, don't they? ;-)

>>> So my questions are:
>>> 1) Does the Orocos-branded log4cpp have valgrind memory problems, in your experience?
>>
>> Not in our experience, no.
>
> The issue we're having is with the PropertyConfigurator::configure() call:
>
> ==17486== 65 bytes in 2 blocks are possibly lost in loss record 8 of 8
> ==17486== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==17486== by 0x514CA88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
> ==17486== by 0x514E2B4: char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
> ==17486== by 0x514E414: std::basic_string<char, std::char_traits >
> ==17486== by 0x514E441: std::string::substr(unsigned long, unsigned long) const (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
> ==17486== by 0x5E3393B: unsigned int log4cpp::StringUtil::split<std::back_insert_iterator >
> ==17486== by 0x5E30D5D: log4cpp::PropertyConfiguratorImpl::instantiateAllAppenders() (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
> ==17486== by 0x5E30AD1: log4cpp::PropertyConfiguratorImpl::doConfigure(std::istream&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
> ==17486== by 0x5E309C0: log4cpp::PropertyConfiguratorImpl::doConfigure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
> ==17486== by 0x5E30693: log4cpp::PropertyConfigurator::configure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
> ==17486== by 0x5BA795D: RCS::Logger::Logger() (Logger.cpp:21)
> ==17486== by 0x5BA75C5: _GLOBAL__sub_I_Logger.cpp (Logger.cpp:53)

Hmmm ... if this is a one off then could you live with it? You'd have to go trace the log4cpp code, which isn't terrible (and the codebase is pretty small). We don't even use the property configurator in our deployments - but I think we use them in the GUI. You have to configure the OCL::Logging side of things in the deployer.

>>> 2) Does it matter if a component and its library have different logging mechanisms, other than it requires the developers to know two different log syntaxes and potentially have two different destinations for log messages?
>>
>> No, except for your real-time question below ...
>>
>>> 3) In section 3.5 on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo..., it mentions having components using OCL::Logging and GUI code using log4cpp, with the same config file. While this seems appropriate for a GUI, what about a real-time component using a library? If that library is logging directly with log4cpp, real-time performance is no longer guaranteed... How do we get around that?
>>
>> You _must_ log through an OCL::Category logger to achieve real-time performance. This is layered on top of log4cpp, but modifies the internal path to appropriately remain in real-time between the category and appender objects. You can use OCL::Logging in the GUI too, if you want. It's just that you don't typically have the OCL::Logging back end services running there, and the additional complexity to remain real-time isn't warranted. The code looks exactly the same in the components as in the GUI, except in the component it's with an OCL::Category object and in the GUI it's with a log4cpp::Category object.
>>
>> Yes, if the library that the component uses is using log4cpp, it will not log in real-time. The library _must_ use an OCL::Category logger. Can you modify the library?
>
> We'd obviously prefer not to modify the library. However, this library is at the core of our I/O system, so it's pretty critical to get right.

Understood.

>
>>
>> Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.
>>
>> HTH
>> S
>>
>
> I'm attempting to get OCL::Logging up and running using the examples described on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo... I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example in 3.4. Unfortunately, I am getting an error:
>
> /**
> dgooding@bacon:~$ rttlua -i setup_logging.lua
> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
> 0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
> 0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
> **/

This looks suspiciously like the log4cpp factory hasn't been changed. Or that a category has been setup _before_ you change the factory.

> I'm not sure what's wrong. I checked the TestComponent for how it's creating the Category and it's as follows:
>
> TestComponent.hpp
> /**
> class Component : public RTT::TaskContext
> {
> public:
> Component(std::string name);
> virtual ~Component();
>
> protected:
> virtual bool startHook();
> virtual void updateHook();
>
> /// Name of our category
> std::string categoryName;
> /// Our logging category
> OCL::logging::Category* logger; <---- this looks right to me

Good

> };
>
> **/
>
> TestComponent.cpp
> /**
> Component::Component(std::string name) :
> RTT::TaskContext(name),
> categoryName(parentCategory + std::string(".") + name),
> logger(dynamic_cast<OCL::logging::Category*>(
> &log4cpp::Category::getInstance(categoryName))) <---- so does this

Good.

> {
> }
>
> bool Component::startHook()
> {
> bool ok = (0 != logger);
> if (!ok)
> {
> log(Error) << "Unable to find existing OCL category '"
> << categoryName << "'" << endlog();
> }
>
> return ok;
> }
>
> **/
>
> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c

The code above looks like what we use, though ours is based on OCL v1. Fundamentally I think this part of the codebase is virtually unchanged between v1 and v2.

> Ideas?

Dig into the initialization sequence, and make sure that the category you are trying to use is created by the OCL logging service, and that TLSF and the OCL::Logging are set up as done in the deployer (I think that the v2 sequence is the same as the v1 version we use). This approach does work, but getting the sequence right is the first obstacle.

HTH
S

logging

On 07/06/2012 02:13 PM, Stephen Roderick wrote:
>>>> So my questions are:
>>>> 1) Does the Orocos-branded log4cpp have valgrind memory problems, in your experience?
>>> Not in our experience, no.
>> The issue we're having is with the PropertyConfigurator::configure() call:
>>
>> ==17486== 65 bytes in 2 blocks are possibly lost in loss record 8 of 8
>> ==17486== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
>> ==17486== by 0x514CA88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>> ==17486== by 0x514E2B4: char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>> ==17486== by 0x514E414: std::basic_string<char, std::char_traits >
>> ==17486== by 0x514E441: std::string::substr(unsigned long, unsigned long) const (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>> ==17486== by 0x5E3393B: unsigned int log4cpp::StringUtil::split<std::back_insert_iterator >
>> ==17486== by 0x5E30D5D: log4cpp::PropertyConfiguratorImpl::instantiateAllAppenders() (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>> ==17486== by 0x5E30AD1: log4cpp::PropertyConfiguratorImpl::doConfigure(std::istream&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>> ==17486== by 0x5E309C0: log4cpp::PropertyConfiguratorImpl::doConfigure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>> ==17486== by 0x5E30693: log4cpp::PropertyConfigurator::configure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>> ==17486== by 0x5BA795D: RCS::Logger::Logger() (Logger.cpp:21)
>> ==17486== by 0x5BA75C5: _GLOBAL__sub_I_Logger.cpp (Logger.cpp:53)
> Hmmm ... if this is a one off then could you live with it? You'd have to go trace the log4cpp code, which isn't terrible (and the codebase is pretty small). We don't even use the property configurator in our deployments - but I think we use them in the GUI. You have to configure the OCL::Logging side of things in the deployer.

Given that this issue is only really occurring during initialization,
and not with the actual log calls, it's very likely something we can
live with. And if we modify things to use OCL::Logging instead of
log4cpp directly, this problem might just disappear.
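If the one-time loss at initialization is acceptable, a valgrind suppression is a common way to keep the report clean in the meantime. A sketch matching the trace above (the suppression name is arbitrary, and the mangled frames are abbreviated with wildcards; adjust to what your valgrind actually reports):

```
{
   log4cpp-propertyconfigurator-startup
   Memcheck:Leak
   fun:_Znwm
   ...
   fun:_ZN7log4cpp20PropertyConfigurator9configure*
}
```

Pass it with --suppressions=log4cpp.supp so only new, unrelated losses show up in later runs.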

>>> Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.
>>>
>>> HTH
>>> S
>>>
>> I'm attempting to get OCL::Logging up and running using the examples described on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo... I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example in 3.4. Unfortunately, I am getting an error:
>>
>> /**
>> dgooding@bacon:~$ rttlua -i setup_logging.lua
>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>> 0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
>> 0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
>> **/
> This looks suspiciously like the log4cpp factory hasn't been changed. Or that a category has been setup _before_ you change the factory.
>
>> I'm not sure what's wrong. I checked the TestComponent for how it's creating the Category and it's as follows:
>>
>> TestComponent.hpp
>> /**
>> class Component : public RTT::TaskContext
>> {
>> public:
>> Component(std::string name);
>> virtual ~Component();
>>
>> protected:
>> virtual bool startHook();
>> virtual void updateHook();
>>
>> /// Name of our category
>> std::string categoryName;
>> /// Our logging category
>> OCL::logging::Category* logger; <---- this looks right to me
> Good
>
>> };
>>
>> **/
>>
>> TestComponent.cpp
>> /**
>> Component::Component(std::string name) :
>> RTT::TaskContext(name),
>> categoryName(parentCategory + std::string(".") + name),
>> logger(dynamic_cast<OCL::logging::Category*>(
>> &log4cpp::Category::getInstance(categoryName))) <---- so does this
> Good.
>
>> {
>> }
>>
>> bool Component::startHook()
>> {
>> bool ok = (0 != logger);
>> if (!ok)
>> {
>> log(Error) << "Unable to find existing OCL category '"
>> << categoryName << "'" << endlog();
>> }
>>
>> return ok;
>> }
>>
>> **/
>>
>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
> The code above looks like what we use, though ours is based on OCL v1. Fundamentally I think this part of the codebase is virtually unchanged between v1 and v2.
>
>> Ideas?
> Dig into the initialization sequence, and make sure that the category you are trying to use is created by the OCL logging service, and that TLSF and the OCL::Logging are set up as done in the deployer (I think that the v2 sequence is the same as the v1 version we use). This approach does work, but getting the sequence right is the first obstacle.
>
> HTH
> S

I found a thread that addresses this issue. My Google-fu was poor
earlier, sorry.
http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221
mentions that a call of
_log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_
is required before doing anything else. In fact, I see mention of this
call in the ocl/logging/test/testlogging.cpp main()... but how do I
make this call from RTTLua? As that thread suggests, should that be
done from RTTLua, or by modifying the Deployer or LoggingService?

-dustin

logging

On Jul 6, 2012, at 15:27 , Dustin Gooding wrote:

> On 07/06/2012 02:13 PM, Stephen Roderick wrote:
>>>>> So my questions are:
>>>>> 1) Does the Orocos-branded log4cpp have valgrind memory problems, in your experience?
>>>> Not in our experience, no.
>>> The issue we're having is with the PropertyConfigurator::configure() call:
>>>
>>> ==17486== 65 bytes in 2 blocks are possibly lost in loss record 8 of 8
>>> ==17486== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
>>> ==17486== by 0x514CA88: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>>> ==17486== by 0x514E2B4: char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>>> ==17486== by 0x514E414: std::basic_string<char, std::char_traits >
>>> ==17486== by 0x514E441: std::string::substr(unsigned long, unsigned long) const (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16)
>>> ==17486== by 0x5E3393B: unsigned int log4cpp::StringUtil::split<std::back_insert_iterator >
>>> ==17486== by 0x5E30D5D: log4cpp::PropertyConfiguratorImpl::instantiateAllAppenders() (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>>> ==17486== by 0x5E30AD1: log4cpp::PropertyConfiguratorImpl::doConfigure(std::istream&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>>> ==17486== by 0x5E309C0: log4cpp::PropertyConfiguratorImpl::doConfigure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>>> ==17486== by 0x5E30693: log4cpp::PropertyConfigurator::configure(std::string const&) (in /opt/orocos/orocos_toolchain/install/lib/liblog4cpp.so.6.0.0)
>>> ==17486== by 0x5BA795D: RCS::Logger::Logger() (Logger.cpp:21)
>>> ==17486== by 0x5BA75C5: _GLOBAL__sub_I_Logger.cpp (Logger.cpp:53)
>> Hmmm ... if this is a one off then could you live with it? You'd have to go trace the log4cpp code, which isn't terrible (and the codebase is pretty small). We don't even use the property configurator in our deployments - but I think we use them in the GUI. You have to configure the OCL::Logging side of things in the deployer.
>
> Given that this issue is only really occurring during initialization, and not with the actual log calls, it's very likely something we can live with. And if we modify things to use OCL::Logging instead of log4cpp directly, this problem might just disappear.
>
>>>> Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.
>>>>
>>>> HTH
>>>> S
>>>>
>>> I'm attempting to get OCL::Logging up and running using the examples described on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo... I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example in 3.4. Unfortunately, I am getting an error:
>>>
>>> /**
>>> dgooding@bacon:~$ rttlua -i setup_logging.lua
>>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>>> 0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
>>> 0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
>>> **/
>> This looks suspiciously like the log4cpp factory hasn't been changed. Or that a category has been setup _before_ you change the factory.
>>
>>> I'm not sure what's wrong. I checked the TestComponent for how it's creating the Category and it's as follows:
>>>
>>> TestComponent.hpp
>>> /**
>>> class Component : public RTT::TaskContext
>>> {
>>> public:
>>> Component(std::string name);
>>> virtual ~Component();
>>>
>>> protected:
>>> virtual bool startHook();
>>> virtual void updateHook();
>>>
>>> /// Name of our category
>>> std::string categoryName;
>>> /// Our logging category
>>> OCL::logging::Category* logger; <---- this looks right to me
>> Good
>>
>>> };
>>>
>>> **/
>>>
>>> TestComponent.cpp
>>> /**
>>> Component::Component(std::string name) :
>>> RTT::TaskContext(name),
>>> categoryName(parentCategory + std::string(".") + name),
>>> logger(dynamic_cast<OCL::logging::Category*>(
>>> &log4cpp::Category::getInstance(categoryName))) <---- so does this
>> Good.
>>
>>> {
>>> }
>>>
>>> bool Component::startHook()
>>> {
>>> bool ok = (0 != logger);
>>> if (!ok)
>>> {
>>> log(Error) << "Unable to find existing OCL category '"
>>> << categoryName << "'" << endlog();
>>> }
>>>
>>> return ok;
>>> }
>>>
>>> **/
>>>
>>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
>> The code above looks like what we use, though ours is based on OCL v1. Fundamentally I think this part of the codebase is virtually unchanged between v1 and v2.
>>
>>> Ideas?
>> Dig into the initialization sequence, and make sure that the category you are trying to use is created by the OCL logging service, and that TLSF and the OCL::Logging are set up as done in the deployer (I think that the v2 sequence is the same as the v1 version we use). This approach does work, but getting the sequence right is the first obstacle.
>>
>> HTH
>> S
>
> I found a thread that addresses this issue. My Google-fu was poor earlier, sorry. http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221 mentions that a call of _log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_ is required before doing anything else. In fact, I see mention of this call in the ocl/logging/test/testlogging.cpp main()... but how do I make this call from RTTLua? As that thread suggests, should that be done from RTTLua, or by modifying the Deployer or LoggingService?
>
> -dustin

The deployer already has this (or should have). It has to occur _before_ even the LoggingService is instantiated. And you have to initializeTLSF (real-time memory pool) _before_ setting the factory (again, the deployer has this).
S
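Condensed, the ordering constraint Stephen describes looks like this in pseudocode (step names paraphrase the deployer's setup; they are not literal API calls):

```
initialize_TLSF_pool()                 -- real-time memory pool first
set_log4cpp_category_factory(OCL)      -- before ANY category is created
instantiate OCL::LoggingService        -- creates/configures OCL categories
load components and libraries          -- their getInstance() calls now
                                       -- return OCL::Category objects
```

Getting any component or library loaded before step 2 leaves plain log4cpp::Category objects in the hierarchy, which is what the "is not an OCL category" error reports.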

logging

On 07/06/2012 02:32 PM, Stephen Roderick wrote:
>>> Fundamentally you need to set the log4cpp category factory before log4cpp is used, and from then on it will automatically create OCL::Category objects. So if you can do that first, and ensure you have setup the OCL logging service, _before_ the library is used, I think you might end up with what you want. The whole point of the mod's we made to log4cpp were to ensure it only created OCL::Category logger objects instead of the standard log4cpp::Category objects. But it's been a couple of years since we did those mod's ... if you use a deployer (or copy the setup code for rtalloc/log4cpp to your app) you might just get away with it. Get your deployment/app running, and then trigger the logCategories() method in the OCL::LoggingService component. Examine the output and see whether you have any log4cpp::Category objects in your category hierarchy.
>>>
>>> HTH
>>> S
>>>
>> I'm attempting to get OCL::Logging up and running using the examples described on http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-logging I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example in 3.4. Unfortunately, I am getting an error:
>>
>> /**
>> dgooding@bacon:~$ rttlua -i setup_logging.lua
>> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
>> 0.073 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is 'N7log4cpp8CategoryE'
>> 0.074 [ ERROR ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
>> **/
> This looks suspiciously like the log4cpp factory hasn't been changed. Or that a category has been setup _before_ you change the factory.
>
>> I'm not sure what's wrong. I checked the TestComponent for how it's creating the Category and it's as follows:
>>
>> TestComponent.hpp
>> /**
>> class Component : public RTT::TaskContext
>> {
>> public:
>> Component(std::string name);
>> virtual ~Component();
>>
>> protected:
>> virtual bool startHook();
>> virtual void updateHook();
>>
>> /// Name of our category
>> std::string categoryName;
>> /// Our logging category
>> OCL::logging::Category* logger; <---- this looks right to me
> Good
>
>> };
>>
>> **/
>>
>> TestComponent.cpp
>> /**
>> Component::Component(std::string name) :
>> RTT::TaskContext(name),
>> categoryName(parentCategory + std::string(".") + name),
>> logger(dynamic_cast<OCL::logging::Category*>(
>> &log4cpp::Category::getInstance(categoryName))) <---- so does this
> Good.
>
>> {
>> }
>>
>> bool Component::startHook()
>> {
>> bool ok = (0 != logger);
>> if (!ok)
>> {
>> log(Error) << "Unable to find existing OCL category '"
>> << categoryName << "'" << endlog();
>> }
>>
>> return ok;
>> }
>>
>> **/
>>
>> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
> The code above looks like what we use, though ours is based on OCL v1. Fundamentally I think this part of the codebase is virtually unchanged between v1 and v2.
>
>> Ideas?
> Dig into the initialization sequence, and make sure that the category you are trying to use is created by the OCL logging service, and that TLSF and the OCL::Logging are set up as done in the deployer (I think that the v2 sequence is the same as the v1 version we use). This approach does work, but getting the sequence right is the first obstacle.
>
> HTH
> S

I found a thread that addresses this issue. My Google-fu was poor
earlier, sorry.
http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221
mentions that a call of
_log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_
is required before doing anything else. In fact, I see mention of this
call in the ocl/logging/test/testlogging.cpp main()... but how do I
make this call from RTTLua? As that thread suggests, should that be
done from RTTLua, or by modifying the Deployer or LoggingService?

-dustin
>
> The deployer already has this (or should have). It has to occur
> _before_ even the LoggingService is instantiated. And you have to
> initializeTLSF (real-time memory pool) _before_ setting the factory
> (again, the deployer has this).
> S

Hmmm...

When I run deployer by itself, I see that TLSF is initialized.

/**
dgooding@bacon:~$ deployer
Real-time memory: 517904 bytes free of 524288 allocated.
Switched to : Deployer
...
Deployer [S]> quit
TLSF bytes allocated=524288 overhead=6384 max-used=6384
currently-used=6384 still-allocated=0
**/

RTTLua doesn't have this output, but I assume it's being squashed by
RTTLua and TLSF is still being initialized.

And because of the build flags that were set for OCL (BUILD_RTALLOC and
BUILD_LOGGING), both TLSF and the category factory should be configured
correctly in Deployer.cpp (lines 124 and 138).

Using one of the provided xml deployment scripts in ocl/logging/tests, I
get positive results, though.

/**
dgooding@bacon:/opt/orocos/orocos_toolchain/ocl/logging/tests/data$
deployer -s good.xml
Real-time memory: 517904 bytes free of 524288 allocated.
Switched to : Deployer
...
Deployer [S]> 0.573 [ ERROR ][Logger] RTT ERROR TestComponent 0
0.573 [ Warning][Logger] RTT WARNING TestComponent 0
0.573 [ ERROR ][Logger] RTT ERROR TestComponent2 1
2012-07-06 15:05:56,659 [140381892986624] ERROR
org.orocos.ocl.logging.tests.TestComponent2 - ERROR TestComponent2 1
0.574 [ Warning][Logger] RTT WARNING TestComponent2 1
1.073 [ ERROR ][Logger] RTT ERROR TestComponent 2
1.073 [ Warning][Logger] RTT WARNING TestComponent 2
1.073 [ ERROR ][Logger] RTT ERROR TestComponent2 3
1.073 [ Warning][Logger] RTT WARNING TestComponent2 3
2012-07-06 15:05:57,159 [140381892986624] ERROR
org.orocos.ocl.logging.tests.TestComponent2 - ERROR TestComponent2 3
quit
TLSF bytes allocated=524288 overhead=6384 max-used=7856
currently-used=6384 still-allocated=0
**/

So, it still stands... how do I use OCL::Logging when using RTTLua as
the deployment mechanism? What am I doing wrong?

-dustin

logging

Hi Dustin,

On Fri, Jul 6, 2012 at 9:23 PM, Dustin Gooding
<dustin [dot] r [dot] gooding [..] ...> wrote:
> On 07/06/2012 02:32 PM, Stephen Roderick wrote:
>
> Fundamentally you need to set the log4cpp category factory before log4cpp is
> used, and from then on it will automatically create OCL::Category objects.
> So if you can do that first, and ensure you have setup the OCL logging
> service, _before_ the library is used, I think you might end up with what
> you want. The whole point of the mod's we made to log4cpp were to ensure it
> only created OCL::Category logger objects instead of the standard
> log4cpp::Category objects. But it's been a couple of years since we did
> those mod's ... if you use a deployer (or copy the setup code for
> rtalloc/log4cpp to your app) you might just get away with it. Get your
> deployment/app running, and then trigger the logCategories() method in the
> OCL::LoggingService component. Examine the output and see whether you have
> any log4cpp::Category objects in your category hierarchy.
>
> HTH
> S
>
> I'm attempting to get OCL::Logging up and running using the examples
> described on
> http://www.orocos.org/wiki/rtt/examples-and-tutorials/using-real-time-lo...
> I've recompiled OCL with BUILD_TESTS and am using the Lua deployment example
> in 3.4. Unfortunately, I am getting an error:
>
> /**
> dgooding@bacon:~$ rttlua -i setup_logging.lua
> OROCOS RTTLua 1.0-beta5 / Lua 5.1.4 (gnulinux)
> 0.073 [ ERROR
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Category
> 'org.orocos.ocl.logging.tests.TestComponent' is not an OCL category: type is
> 'N7log4cpp8CategoryE'
> 0.074 [ ERROR
> ][/opt/orocos/orocos_toolchain/install/bin/rttlua-gnulinux::main()] Unable
> to find existing OCL category 'org.orocos.ocl.logging.tests.TestComponent'
> **/
>
> This looks suspiciously like the log4cpp factory hasn't been changed. Or
> that a category has been setup _before_ you change the factory.
>
> I'm not sure what's wrong. I checked the TestComponent for how it's
> creating the Category and it's as follows:
>
> TestComponent.hpp
> /**
> class Component : public RTT::TaskContext
> {
> public:
> Component(std::string name);
> virtual ~Component();
>
> protected:
> virtual bool startHook();
> virtual void updateHook();
>
> /// Name of our category
> std::string categoryName;
> /// Our logging category
> OCL::logging::Category* logger; <---- this looks right to me
>
> Good
>
> };
>
> **/
>
> TestComponent.cpp
> /**
> Component::Component(std::string name) :
> RTT::TaskContext(name),
> categoryName(parentCategory + std::string(".") + name),
> logger(dynamic_cast<OCL::logging::Category*>(
> &log4cpp::Category::getInstance(categoryName))) <---- so does this
>
> Good.
>
> {
> }
>
> bool Component::startHook()
> {
> bool ok = (0 != logger);
> if (!ok)
> {
> log(Error) << "Unable to find existing OCL category '"
> << categoryName << "'" << endlog();
> }
>
> return ok;
> }
>
> **/
>
> OCL - toolchain-2.5 branch - commit 8c39ee9690373a50849e5ae4c96e1c9852314b7c
>
> The code above looks like what we use, though ours is based on OCL v1.
> Fundamentally I think this part of the codebase is virtually unchanged
> between v1 and v2.
>
> Ideas?
>
> Dig into the initialization sequence, and make sure that the category you
> are trying to use is created by the OCL logging service, and that TLSF and
> the OCL::Logging are set up as done in the deployer (I think that the v2
> sequence is the same as the v1 version we use). This approach does work, but
> getting the sequence right is the first obstacle.
>
> HTH
> S
>
>
> I found a thread that addresses this issue. My Google-Foo was poor earlier,
> sorry.
> http://permalink.gmane.org/gmane.science.robotics.orocos.devel/11221
> mentions that a call of
> _log4cpp::HierarchyMaintainer::set_category_factory(OCL::logging::Category::createOCLCategory);_
> is required before doing anything else. In fact, I see mention of this call
> in the ocl/logging/test/testlogging.cpp main()..... but how do I make this
> call from RTTLua? As that thread suggests, should that be done from RTTLua,
> or by modifying the Deployer or LoggingService?

rttlua does not yet support OCL::Logging; it is missing the few lines of
TLSF+logging setup code which the deployers do have. Do not confuse the Lua
TLSF code with the RTT TLSF code: they are not the same TLSF pool !!
That's also why nothing about TLSF was printed.

In the end, they are 'trivial' to add to LuaComponent.cpp, and since
it's OCL's 'rttlua', I think they should be there whenever OCL is
configured to support logging.

I have added an untested patch which sets up the CMake logic and adds
some code to LuaComponent.cpp. Since I didn't even compile this, there
will be issues, but I expect we should be at 95%...

Peter

logging

On 07/11/2012 04:20 PM, Peter Soetens wrote:
> Hi Dustin,
>
> On Fri, Jul 6, 2012 at 9:23 PM, Dustin Gooding
> <dustin [dot] r [dot] gooding [..] ...> wrote:
>> [snip]
> rttlua does not yet support OCL::Logging, it misses these few lines
> tlsf+logging code which the deployers do have. Do not confuse the lua
> tlsf code with the RTT tlsf code, they are not the same tlsf pool !!
> That's why nothing about tlsf was printed as well.
>
> In the end, they are 'trivially' to add to LuaComponent.cpp, and since
> it's OCL's 'rttlua', I think they should be there if OCL is configured
> to support the logging.
>
> I have added an untested patch which sets the cmake logic and adds
> some code to LuaComponent.cpp. Since I didn't even compile this, there
> will be issues, but I expect that we should be at 95%...
>
> Peter

Found the first issue (sorry for the delay). The OCL::memorySize type
is defined in deployer-funcs.hpp. The only things that include that
header are the various deployers... Should LuaComponent.cpp include
deployer-funcs.hpp? Seems odd to me for a component to need a
deployer's header. Would a more appropriate mechanism be to put the
OCL::memorySize type (and others like it) in a different header (say
memoryTypes.hpp) that both the deployers and LuaComponent can depend
on? I'm happy to do it, but I'm not sure if that's the preferred approach.

-dustin

logging

On Jul 13, 2012, at 11:21 , Dustin Gooding wrote:

> [snip]
>
> Found the first issue (sorry for the delay). The OCL::memorySize type is defined in deployer-funcs.hpp. The only things that include that header are the various deployers... Should LuaComponent.cpp include deployer-funcs.hpp? Seems odd to me for a component to need a deployer's header. Would a more appropriate mechanism be to put the OCL::memorySize type (and others like it) in a different header (say memoryTypes.hpp) that both the deployers and LuaComponent can depend on? I'm happy to do it, but I'm not sure if that's the preferred approach.
>
> -dustin

No, that would be unnecessary coupling. The memory size type is only there to support validation with boost program_options, which is only useful to the deployers (I'm presuming here that whatever program you run to get Lua doesn't need it as well). We should change the internals to accept size_t (or ssize_t) and keep the memory size type only in the deployers.

My 2c
S

logging

On 07/13/2012 12:31 PM, Stephen Roderick wrote:
> On Jul 13, 2012, at 11:21 , Dustin Gooding wrote:
>
>> [snip]
>> Found the first issue (sorry for the delay). The OCL::memorySize type is defined in deployer-funcs.hpp. The only things that include that header are the various deployers... Should LuaComponent.cpp include deployer-funcs.hpp? Seems odd to me for a component to need a deployer's header. Would a more appropriate mechanism be to put the OCL::memorySize type (and others like it) in a different header (say memoryTypes.hpp) that both the deployers and LuaComponent can depend on? I'm happy to do it, but I'm not sure if that's the preferred approach.
>>
>> -dustin
> No, that would be unnecessary coupling. The memory size type is only there to work with validation with boost program_options, which is only useful to deployers (I'm presuming here that whatever program you run to get Lua doesn't need it also). We should change the internals to accept size_t (or ssize_t) and keep the memory size type only in the deployers.
>
> My 2c
> S

I modified Peter's patch to get rid of the use of OCL::memorySize and
directly assign "size_t memSize = ORO_DEFAULT_RTALLOC_SIZE".

But now I get an issue where init_memory_pool() (and the other functions
declared in rtt/os/tlsf/tlsf.h) are undeclared, according to the compiler.

/**
[ 87%] Building CXX object lua/CMakeFiles/rttlua.dir/LuaComponent.cpp.o
/opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp: In function ‘int
ORO_main_impl(int, char**)’:
/opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:278:54: error:
‘init_memory_pool’ was not declared in this scope
/opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:322:56: error:
‘get_max_size’ was not declared in this scope
/opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:323:63: error:
‘get_used_size’ was not declared in this scope
/opt/orocos/orocos_toolchain/ocl/lua/LuaComponent.cpp:327:34: error:
‘destroy_memory_pool’ was not declared in this scope
**/

I think the issue is that there's a mix-up between which tlsf
header/source are being compiled and linked: <ocl/lua/tlsf.h> versus
<rtt/os/tlsf/tlsf.h>. The RTT one declares init_memory_pool, and the Lua
one doesn't. RTTLua's CMakeLists.txt is non-specific as to which tlsf
it's building/linking, but I think it's the Lua one... which might
explain the declaration error.

LuaComponent.cpp includes <rtt/os/tlsf/tlsf.h>. Changing that to
"tlsf.h" and changing init_memory_pool to rtl_init_memory_pool (which is
declared in the lua tlsf.h) doesn't help. "rtl_init_memory_pool" is not
declared either. ("extern" only matters at link time, right? extern
declarations are still declarations, as far as the compiler is
concerned, right?)

I've confirmed (as best I can) that the right compiler directives are
being set by CMake, such that the right preprocessor branches are being
taken (e.g., OS_RT_MALLOC, ORO_BUILD_LOGGING).

Any ideas?

-dustin

Realtime logging

[list CC'd]

On Jan 25, 2012, at 02:07 , Clephas, T.T.G. wrote:

>>> Hello,
>>>
>>> At the TU/e we've constructed several controllers each containing several
>>> components.
>>> Every component acts on a triggerport.
>>>
>>> So the sequence goes as follows:
>>>
>>> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>>>
>>> The first component has an update frequency of 1khz.
>>>
>>>
>>> Now in order to identify our hardware we want to log the data sent over these
>>> channels without missing samples.
>>> We hope to get a text file where every line represents the data sent at that
>>> millisecond over each channel:
>>>
>>>
>>> TimeStamp ReadEncoders CalculateErrors Gain WriteOutput
>>> 0.0000 0.000 0.000 0.000 0.000
>>> 0.0010 0.300 0.100 0.300 3.000
>>> 0.0020 0.400 0.104 0.390 3.330
>>> etc.
>>>
>>>
>>> However the current reporter has hardcoded buffers as I understood, so what
>>> is the best way of doing this?
>>> It is also important that every line represents sequential data.
>>>
>>> Thanks in advance!
>>
>> We do exactly this, at similar frequencies. Our approach has a bit of boilerplate code, but it works well. AFAIK you can't use OCL::Reporting for this, as it has all kinds of limitations.
>>
>> 1) Create a component that samples the data of interest, and then logs it through the OCL Logging framework (ie real-time logging)
>> 2) Coordinate calling this component _after_ all your data has been computed on a cycle (at that point, your component samples everything, and then logs it)
>> 3) Connect a File/Socket/etc Appender to your logging category, and use that to store/transport your data.
>>
>> You'll have to make sure that your TLSF memory pool is of sufficient size, and that your logging buffers can absorb the difference in production rates of the real-time code, and the consumption rate of your (likely non-real-time) appending code. For desktop systems with oodles of RAM, this is no problem. For embedded units, this can take quite some tweaking.
> It's a desktop, plenty of RAM available.
>>
>> The boilerplate stuff is all in 1) for us, as it knows about the samples and data types of interest. I'm sure more enterprising souls can generalize this solution …
>>
>> HTH
>> S
>
> Could you give me link to that code? That way I can get the idea of the implementation and how the data is written.
> Doesn't matter if it's boilerplate code, I might find a way to make it more general.

Sorry, no can do. But perhaps a better outline will help ...

Given component A with ports 1 and 2, and component B with port 3.

Reporter component
// NB custom per application; here Rock's generation facilities might help you

startHook()
    reportHeader()

updateHook()
    sample()
    reportData()

sample()
    // store, in class members, the data from ports A.1, A.2, B.3

reportHeader()
    // log your column headers

reportData()
    // log your sample'd data

Executive/Supervisor/Master component
// just a state machine

state RUNNING {
    entry {
        do componentA.start();
        do componentB.start();
        do reporter.start();
    }
    run {
        do componentA.update();
        do componentB.update();
        do reporter.update();
    }
    exit {
        do componentA.stop();
        do componentB.stop();
        do reporter.stop();
    }
}

Deploy
- the Executive component as a periodic activity at 1 kHz
- componentA, componentB, and the Reporter as slaves of the Executive
- a logging appender component (see OCL logging or the RTT v2 equivalent) as a periodic activity (probably also at the 1 kHz rate)

Size your TLSF buffer appropriately (we use 20 MB). Turn off SBRK and MMAP in RTT's TLSF configuration.
We use a smaller logging buffer size of 200 (the default in OCL v1 is 1000), so you'll be fine either way. See a previous post for the interaction of this value with the TLSF buffer size.

One of our smaller applications logs 100 kb/second to disk with this approach, with image storage on top of that. We also have a different state for each type of motion we can do, with different controllers being started, logged, and stopped, in each state.

YMMV
S

Realtime logging

If you are a bit brave and are using typegen to generate your typekits,
you may want to give a try to rock's logger

http://rock-robotics.org/package_directory/packages/tools/tools_logger/i...

and

http://rock-robotics.org/documentation/data_analysis/reading_logfiles.html

It logs data in binary, with timestamping, and is *very* efficient. It
has never been tested in an OCL-based workflow, though. So, as I said,
you would have to be brave ...

Realtime logging

On Wed, 25 Jan 2012, Sylvain Joyeux wrote:

> If you are a bit brave and are using typegen to generate your typekits,
> you may want to give a try to rock's logger
>
>
> http://rock-robotics.org/package_directory/packages/tools/tools_logger/i...
>
> and
>
>
> http://rock-robotics.org/documentation/data_analysis/reading_logfiles.html
>
> It logs data in binary, with timestamping, and is *very* efficient. It
> has never been tested in an OCL-based workflow, though. So, as I said,
> you would have to be brave ...

I hope someone is! Because this thread contains a _very important_ use
case for which the "best practice" cannot yet be identified, let alone
perfectly supported. So it makes sense to keep on sharing "practices" with
each other, in the hope that a "best" one will emerge sooner or later...

> Sylvain Joyeux (Dr.Ing.)

Herman


Realtime logging

On Jan 25, 2012, at 03:32 , Herman Bruyninckx wrote:

> On Wed, 25 Jan 2012, Sylvain Joyeux wrote:
>
>> [snip]
>
> I hope someone is! Because this thread contains a _very important_ use
> case, for which the "best practice" can not yet be identified, let alone
> perfectly supported. So, it makes sense to keep on sharing "practices" with
> each other, in the hope that a "best" one will emerge sooner or later…

Agreed. I'd be _really_ interested in anyone's experience doing this kind of real-time logging with netCDF … anyone …?
S

Realtime logging

On Wed, 25 Jan 2012, S Roderick wrote:

>
> On Jan 25, 2012, at 03:32 , Herman Bruyninckx wrote:
>
>> On Wed, 25 Jan 2012, Sylvain Joyeux wrote:
>>
>>> [snip]
>>
>> I hope someone is! Because this thread contains a _very important_ use
>> case, for which the "best practice" can not yet be identified, let alone
>> perfectly supported. So, it makes sense to keep on sharing "practices" with
>> each other, in the hope that a "best" one will emerge sooner or later…
>
> Agreed. I'd be _really_ interested in anyone's experience doing this kind of real-time logging with netCDF … anyone …?

Me too... But these are (at least) two complementary questions:
- how can we best work with the binary netCDF data structures, _and_ their
semantic model ("IDL")?
- which tooling and serialization libraries are good enough for use in
realtime/RTT context?

> S

Herman

Realtime logging

On 01/25/2012 11:37 AM, Herman Bruyninckx wrote:
> On Wed, 25 Jan 2012, S Roderick wrote:
>
>>
>> On Jan 25, 2012, at 03:32 , Herman Bruyninckx wrote:
>>
>>> On Wed, 25 Jan 2012, Sylvain Joyeux wrote:
>>>
>>>> [snip]
>>>
>>> I hope someone is! Because this thread contains a _very important_ use
>>> case, for which the "best practice" can not yet be identified, let alone
>>> perfectly supported. So, it makes sense to keep on sharing "practices" with
>>> each other, in the hope that a "best" one will emerge sooner or later?
>>
>> Agreed. I'd be _really_ interested in anyone's experience doing this kind of real-time logging with netCDF ... anyone?

+1. We've done several projects with RTT v1.x and have ended up with
home-cooked logging solutions every time. It would be really, really useful
to have some "standard" recipes for data logging. This could perhaps be
done via a wiki page describing common data logging needs (I suspect there
won't be very many) and the possible solutions to each need.
Alternatively, the methods that are sometimes mentioned on this mailing
list (sometimes as part of a larger thread) could be put up on the wiki
for handier reference.

Towards this end, I am willing to create a wiki page describing the
data logging scenario I use most frequently, and how I got it to work.
Others could add their own experiences and/or criticize the approaches
taken.

Would this be worthwhile?

We are currently looking at upgrading our system from RTT 1.x to 2.x, and
revamping our data logging may be a good thing as well.

/Sagar

Realtime logging

On Jan 25, 2012, at 05:37 , Herman Bruyninckx wrote:

> On Wed, 25 Jan 2012, S Roderick wrote:
>
>>
>> On Jan 25, 2012, at 03:32 , Herman Bruyninckx wrote:
>>
>>> On Wed, 25 Jan 2012, Sylvain Joyeux wrote:
>>>
>>>> [...]
>>>
>>> I hope someone is! Because this thread contains a _very important_ use
>>> case, for which the "best practice" can not yet be identified, let alone
>>> perfectly supported. So, it makes sense to keep on sharing "practices" with
>>> each other, in the hope that a "best" one will emerge sooner or later?
>>
>> Agreed. I'd be _really_ interested in anyone's experience doing this kind of real-time logging with netCDF ... anyone?
>
> Me too... But these are (at least) two complementary questions:
> - how can we best work with the binary netCDF data structures, _and_ their
> semantic model ("IDL")?
> - which tooling and serialization libraries are good enough for use in
> realtime/RTT context?

Agreed. And for us (and many other users I suspect) the OCL v1 real-time logging (or similar in RTT v2) to text data files works. It requires some boilerplate code, which is a pain, but it's simple, works well, and is easily understood.
S

Realtime logging

On Jan 24, 2012, at 17:28 , Peter Soetens wrote:

> On Tue, Jan 24, 2012 at 8:00 PM, <t [dot] t [dot] g [dot] clephas [..] ...> wrote:
> Hello,
>
> At the TU/e we've constructed several controllers each containing several
> components.
> Every component acts on a triggerport.
>
> So the sequence goes as follows:
>
> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>
> The first component has an update frequency of 1 kHz.
>
>
> Now, in order to identify our hardware, we want to log the data sent over these
> channels without missing samples.
> We hope to get a text file where every line represents the data sent at that
> millisecond over each channel:
>
>
> TimeStamp ReadEncoders CalculateErrors Gain WriteOutput
> 0.0000 0.000 0.000 0.000 0.000
> 0.0010 0.300 0.100 0.300 3.000
> 0.0020 0.400 0.104 0.390 3.330
> etc.
>
>
> However the current reporter has hardcoded buffers as I understood, so what
> is the best way of doing this?
> It is also important that every line represents sequential data.
>
> If you make the reporter non-periodic, it will try to log each sample as you expect and use the buffers for, well, buffering when it doesn't get enough time.
>
> Logging to text format is very inefficient and you can only log a certain number of columns before the IO or your thread can't finish it.

Agreed, but it works and it's easy. And that's often good enough …

> The reporter was made to support different marshalling formats so that you could optimize this (for example, only write to a file at the end, or write to a binary format). The NetCDF reporter uses a far more efficient format; I haven't tested it in RTT 2.x, but it does build fine (tm) :)

I can only speak for the v1 OCL reporting, but that implementation was not up to this task. I'm not sure what improvements have been made in this regard in v2.
S

Realtime logging

On Jan 24, 2012, at 14:00 , t [dot] t [dot] g [dot] clephas [..] ... wrote:

> Hello,
>
> At the TU/e we've constructed several controllers each containing several
> components.
> Every component acts on a triggerport.
>
> So the sequence goes as follows:
>
> ReadEncoders -> CalculateErrors -> Gain -> WriteOutput
>
> The first component has an update frequency of 1 kHz.
>
>
> Now, in order to identify our hardware, we want to log the data sent over these
> channels without missing samples.
> We hope to get a text file where every line represents the data sent at that
> millisecond over each channel:
>
>
> TimeStamp ReadEncoders CalculateErrors Gain WriteOutput
> 0.0000 0.000 0.000 0.000 0.000
> 0.0010 0.300 0.100 0.300 3.000
> 0.0020 0.400 0.104 0.390 3.330
> etc.
>
>
> However the current reporter has hardcoded buffers as I understood, so what
> is the best way of doing this?
> It is also important that every line represents sequential data.
>
> Thanks in advance!

We do exactly this, at similar frequencies. Our approach has a bit of boilerplate code, but it works well. AFAIK you can't use OCL::Reporting for this, as it has all kinds of limitations.

1) Create a component that samples the data of interest, and then logs it through the OCL Logging framework (i.e. real-time logging)
2) Coordinate calling this component _after_ all your data has been computed on a cycle (at that point, your component samples everything, and then logs it)
3) Connect a File/Socket/etc Appender to your logging category, and use that to store/transport your data.

You'll have to make sure that your TLSF memory pool is of sufficient size, and that your logging buffers can absorb the difference between the production rate of the real-time code and the consumption rate of your (likely non-real-time) appending code. For desktop systems with oodles of RAM, this is no problem. For embedded units, this can take quite some tweaking.

The boilerplate stuff is all in 1) for us, as it knows about the samples and data types of interest. I'm sure that more enterprising souls can generalize this solution …

HTH
S