
FSTP-1 Open Call - DEVELOPMENT of the benchmarking framework
Deadline: Oct 31, 2018  
CALL EXPIRED


Introduction

Goals of EUROBENCH

The EUROBENCH project aims to create the first unified benchmarking framework for robotic systems in Europe. This framework will allow companies and/or researchers to test the performance of their robots at any stage of development. The project mainly focuses on bipedal machines, i.e. exoskeletons, prosthetics and humanoids, but aims to be extensible to other robotic technologies. To this aim, EUROBENCH will develop:

● Two Testing Facilities, one for wearable robots (located in Spain) - including exoskeletons and prostheses - and the other for humanoid robots (located in Italy), to allow companies and/or researchers to perform standardized tests on advanced robotic prototypes in a unique location, saving resources and time.

● A unified Benchmarking Software, to easily design and run the tests and automatically obtain relevant scores on robot performance. This software will also allow users to perform the tests in any laboratory setting and compare the results with similar systems in the state of the art.

To realize these goals, the EUROBENCH Consortium will count on the collaboration of external entities, a.k.a. Third Parties, offering them financial support for developing and validating specific sub-components of the Facilities and the Software. This Cascade Funding action, called “FINANCIAL SUPPORT TO THIRD PARTIES” (FSTP) will be organized in two competitive Open Calls:

● 1st Open Call (FSTP-1), also titled “DEVELOPMENT of the benchmarking framework”. This Call is looking for Third Parties interested in designing and developing testbeds, algorithms and datasets to allow benchmarking the performance of bipedal platforms. This call will be open from July 15, 2018 until October 31, 2018.

● 2nd Open Call (FSTP-2), also titled “VALIDATION of the benchmarking framework”. This Call will offer Third Parties the opportunity to use the benchmarking facilities and/or software, at zero cost, to test and improve their own robotic/control systems. This call will be open from June 1, 2020 until August 31, 2020.

This document provides specific instructions for participating in the FSTP-1 Open Call.

Instructions for applying to the FSTP-2 Open Call will be provided later in the project.

 

 

1 Preparing the proposal

1.1 General information

Applicants have two different options to participate in the FSTP-1 Open Call (see Figure 1.1):
OPTION 1. Developing a benchmarking solution for one specific benchmarking scenario (see Table 1 and Section 2.3 for further details).
OPTION 2. Developing a benchmarking solution that can be used across several scenarios (see Table 2 and Section 2.3 for further details).

To apply for OPTION 1, you should:

● Focus on one benchmarking scenario, preferably among those listed in Table 1. If strongly motivated, applicants may also cover more than one scenario (e.g. datasets spanning different motor skills), or propose a new scenario not included in Table 1 (e.g. a complex and domain-specific condition).

● For the selected scenario, design and develop one or more (ideally all) of the following outcomes:

● A testbed, namely “a platform for conducting replicable and repeatable testing experiments”. The testbed may include:

  ○ Structures and/or devices that physically reproduce the environmental conditions (terrains, perturbations) typical of the selected scenario.

  ○ Actuators needed to induce dynamic changes in the structures (e.g. changing dimensions) or to generate specific perturbations (e.g. pushes).

  ○ Any special sensor specifically developed to be used with the proposed testbed. Please consider that:

    1. Traditional sensors (e.g. a standard motion capture system) do not need to be delivered with the testbed, because they are already owned by the EUROBENCH Consortium (see Table 3 for a detailed list of the EUROBENCH equipment).

    2. All sensors should be accompanied by pre-processing algorithms that transform raw data into data that are compliant in content and format with the EUROBENCH framework (see Sections 2.1.2 and 2.1.3 for details).

● A set of software routines, including algorithms and protocols able to calculate specific performance scores from experiments performed on the testbed. These algorithms will be integrated in the EUROBENCH Software.

● Experimental datasets, including data obtained from real experiments on bipedal systems (humans and/or robots). These datasets will be integrated in the EUROBENCH Database.

To apply for OPTION 2, you should:

● Propose a benchmarking device, composed of sensors and/or actuators, that can be applied across different benchmarking scenarios. Table 2 shows a list of relevant devices, but applicants are free to propose any other relevant solution.

Figure 1.1. Decisional process to apply for the EUROBENCH FSTP-1 Open Call.

(PICTURE NOT AVAILABLE)

 

IMPORTANT: All the outcomes developed by the applicants (hardware and/or software) will be integrated in the EUROBENCH framework for their evaluation during the execution of the 2nd call and for their use after the project ends. The EUROBENCH Consortium will study and define potential agreements with Third Parties whose outcomes have been successfully validated, to ensure further exploitation and sustainability as part of the EUROBENCH facilities and software. If a testbed addresses both fields (i.e. Wearable Robots & Humanoids), two prototypes should be developed, to allow their integration in both EUROBENCH Facilities.

Table 1. Non-exhaustive list of benchmarking scenarios relevant for OPTION 1. The required outcomes are specified with an “X”. The applicability to the Wearable Robots (WR) and/or Humanoid fields is indicated with the “◾” symbol. The specific requirements of each of these scenarios are detailed in Section 2.2.

(TABLE NOT AVAILABLE)

 

Table 2. Non-exhaustive list of possible scenario-generic equipment needed in the EUROBENCH Facility, and relevant for OPTION 2. The applicability to the WR and/or Humanoid fields is indicated with the “◾” symbol. Specific requirements are detailed in Section 2.3.

(TABLE NOT AVAILABLE)

 

Table 3. Sensors already included in the EUROBENCH Facilities. If these sensors are compatible with those used by the applicants, they do not need to be delivered with the testbed (contact the EUROBENCH team for technical details).

(TABLE NOT AVAILABLE)

 

1.2 Applicants eligibility

The following eligibility rules apply to all EUROBENCH Open Calls:

● Participants can apply individually or as part of a consortium.

● Consortia can include partners from the same country, as well as partners from different countries.

● Applicants must be previously registered in the Participant Register of the Participant Portal and have a 9-digit Participant Identification Code (PIC).

● Applicants cannot request any funding for activities that are already funded by other grants (principle of no double funding).

● Applicants can participate in more than one proposal, subject to the budget limitations established in Section 1.3.

● Countries eligible for funding are specified in Section A of the H2020 Work Programme.

 

1.3 Budget - Funding and Financial eligibility

Each sub-project (defined as a funded proposal to be implemented) will receive the funding on a lump-sum scheme, as defined by the EU Commission pilot 2018-2020. Each proposal should include a detailed work plan and a cost estimate. For the definition of this work plan, participants should take into account the following phases and durations of sub-projects:

  • Phase 1: Development. The maximum duration of this phase will be 12 months. Participants will develop the components (test beds, software routines and/or datasets) during this phase.

  • Phase 2: Integration. Participants will have 6 months to integrate their outcomes into the EUROBENCH Software and/or Facilities.

These phases will be separated by a 2-month reporting period (including reporting and evaluation), as described in Section 7.

The estimated costs of the third parties to develop the defined work plan should be reasonable and comply with the principle of sound financial management, in particular regarding economy and efficiency.

    All proposals should comply with the following budgetary limits:

● Each proposal can request a maximum contribution of 300k€, which can cover up to 100% of the total budget.

● Each participant can receive a maximum contribution of 100k€.

● Participants can submit more than one proposal. However, the total required contribution of the same legal entity (i.e. identified by the same PIC number) should not exceed 100k€ across all submitted proposals (e.g. it is not allowed to request 60k€ in one proposal and 50k€ in another, because the sum is 110k€). This limitation avoids the situation in which one applicant participates in two winning proposals and one of them must be rejected for budgetary limits.

● If you are planning to participate in the 2nd FSTP Open Call (opening in June 2020), please do not request 100k€ now: if you are funded, this will impede your future participation (the 100k€ limit applies to all EUROBENCH Open Calls as a whole).

    General criteria for budget definition are:

● Consumables costs include materials for the structure, mechatronic components, and sensors needed to build the testbed. Applicants should carefully adjust the required contribution to the number and complexity of the testbed(s) proposed. This aspect will be seriously considered during the evaluation process, since no negotiation phase will be admitted after proposal selection. We expect that consumables costs for one testbed prototype will not exceed 75k€. This limit can exceptionally be exceeded, if strongly motivated. No consumables costs can be allocated to software routines or datasets, since these outcomes usually result from human effort. Exceptions can be made, if motivated.

Important: Please consider that hardware that will be an operating part of the prototype should be purchased as a consumable. E.g. if you need to buy a commercial treadmill as part of your prototype, it can be considered a consumable.

● Personnel costs will also depend on the complexity of the testbed, algorithms and datasets developed. As a general estimate, given that the duration of a sub-project is 18 months (12 for development and 6 for integration in the EUROBENCH framework), we expect a total personnel effort per sub-project of between 12 and 24 person-months (PM). According to this estimate, we expect total personnel costs per sub-project between 60 and 120k€. This figure may change, if appropriately motivated.

● Travel costs: a trip should be planned to deliver each testbed to the corresponding Facility and make it operational.

 

1.4 Templates

To prepare your proposal, please use the template available at http://eurobench2020.eu/ftsp-open-calls/fstp-1/. Submission guidelines are provided in Section 3 of this document.

1.5 Communication and FAQ

Good communication between Participants and the EUROBENCH Consortium will be crucial for arriving at a proposal that matches the project goals and priorities. There will be several opportunities to get in touch with the EUROBENCH Consortium and receive direct feedback on your idea/proposal draft (see Figure 1.2). In particular, we strongly recommend that participants submit the Declaration of Interest form as soon as possible (at http://eurobench2020.eu/ftsp-open-calls/). Also check the EUROBENCH FAQ section (http://eurobench2020.eu/ftsp-open-calls/fstp-1/faqs/) for continuously updated questions and answers.

 

 

2 Expected Outcomes

This section covers the aspects that have been considered of highest priority by the EUROBENCH Team. Nevertheless, the information provided here should be taken as general advice. Applicants are free to propose alternative solutions. If you are proposing something that differs considerably from this scheme, please provide a valid motivation.

2.1 Common requirements

The following requirements apply in general to all benchmarking scenarios. Please look at each scenario (Section 2.2) to check for more specific indications.

2.1.1 Testbeds

A testbed should include all those structures, actuators and sensors needed to conduct replicable and repeatable testing experiments and provide relevant kinematic, kinetic and/or physiological data of the bipedal system. The following requirements should apply, provided that they do not compromise the correctness, efficiency and effectiveness of the outcome proposed:

● HW interface: All surfaces that can come into contact with the bipedal system should be made of non-ferromagnetic boards (e.g. plywood) of 30 mm thickness, with a grid of holes of 10 mm diameter spaced 150 mm from each other (see Figure 2.1) and 75 mm from the outer border. Why? This standardized interface allows any kind of terrain/obstacle to be added on top of it, enabling a great variety of combinations of test environments.

Figure 2.1. Standardized board to be used for all surfaces that may enter in contact with the bipedal system. Board can vary in dimensions, whereas holes should have fixed diameter and distance.
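As an illustration of the hole-grid specification above, the following Python sketch (not part of the call text) computes hole centres for a board of arbitrary size. The `hole_positions` helper is hypothetical; only the 150 mm pitch and 75 mm border margin come from the specification.

```python
# Sketch: layout of the standardized hole grid. Spec values: 150 mm pitch,
# 75 mm margin from the outer border; board dimensions are free parameters.

def hole_positions(width_mm: float, length_mm: float,
                   pitch_mm: float = 150.0, margin_mm: float = 75.0):
    """Return (x, y) centres of the 10 mm holes for a board of the given size."""
    xs, x = [], margin_mm
    while x <= width_mm - margin_mm + 1e-9:   # keep holes 75 mm off the border
        xs.append(x)
        x += pitch_mm
    ys, y = [], margin_mm
    while y <= length_mm - margin_mm + 1e-9:
        ys.append(y)
        y += pitch_mm
    return [(x, y) for x in xs for y in ys]

# A 600 x 1200 mm board yields 4 columns x 8 rows = 32 holes
print(len(hole_positions(600, 1200)))  # → 32
```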

● SW interface: If the testbed includes sensors, these should be accompanied by algorithms for pre-processing the raw data of the specific sensing device (e.g. a c3d file from a motion capture system with all marker positions, IMU accelerations, ground reaction forces, etc.), providing as output a set of ‘sensor-agnostic’ pre-processed data (e.g. joint angles). See the next Section and Figure 2.2 for further details.

● ROS-compatible: All sensors and actuators should (if possible) provide ROS package(s) ready for communicating with and commanding the device.

● Safety: All testbeds that include a surface should also include bilateral handrails, for safety reasons. The height of the handrails should be adjustable, and the handrails should be connected to the board using the grid of holes, so that their position can be varied.

● Life-like: Testing devices should replicate everyday-life conditions as closely as possible.

● Innovation: Applicants should demonstrate that the proposed device is beyond the state of the art in one or more of the following aspects:

  ○ New functions. The device should provide functionalities that are not available in existing commercial devices, at the hardware (e.g. a new kind of perturbation dynamics) or software (e.g. data processing/representation) level.

  ○ Replicability. The device should be easily reproducible by other groups (i.e. using off-the-shelf components, 3D printable, or replicable with other materials, e.g. wood). In this case, testbeds should be described in a document that enables their reproduction.

  ○ Low cost. The device should have a considerably lower cost with respect to similar devices on the market.

● Portability: If possible, the testbed should be easily movable, requiring the effort of 1-2 persons.

● Flexibility: Testbeds able to cover many configurations will be given priority over fixed testbeds.

● Simulated version: It is desirable to have a model of the testbed defined in URDF (Unified Robot Description Format), together with the plugins required to simulate it in the Gazebo 3D simulator.
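For orientation only, a minimal URDF model of a testbed element might look like the sketch below. The link name, box dimensions (apart from the 30 mm board thickness from the HW-interface requirement) and overall structure are illustrative assumptions, not EUROBENCH requirements; a real testbed model would add joints, actuators and Gazebo plugin tags.

```python
# Hypothetical minimal URDF for a single testbed board, held as a string so
# it can be written to a .urdf file or checked for well-formedness.
MINIMAL_TESTBED_URDF = """\
<robot name="testbed_board">
  <link name="board">
    <visual>
      <geometry><box size="1.2 0.6 0.03"/></geometry>  <!-- 30 mm thick board -->
    </visual>
    <collision>
      <geometry><box size="1.2 0.6 0.03"/></geometry>
    </collision>
  </link>
</robot>
"""

import xml.etree.ElementTree as ET
root = ET.fromstring(MINIMAL_TESTBED_URDF)   # sanity-check: valid XML
print(root.tag, root.attrib["name"])         # → robot testbed_board
```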

2.1.2 Software routines

The software routines will be part of the EUROBENCH Benchmarking Software and should automatically compute performance scores on the data recorded during a benchmarking experiment (and uploaded by previous users).

● Input: data obtained from the pre-processing algorithms of each sensor included in the testbed (e.g. joint angles, body limb poses, center of mass trajectory, heel-strike events, etc.). Such data should be independent of the measurement device, meaning that the same pre-processed data can be obtained from different sensors (e.g. the same joint angle can be obtained from either an optical or an IMU-based motion capture system). The input should also include relevant measurements from external devices (e.g. inclination of the ground, position of an external obstacle, velocity of a perturbation device, etc.).

● Output: post-processed data, i.e. quantitative scores on one or more performance indicators (see Figure 2.2 for an example list).
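A toy Python sketch of this input/output shape is given below. The function name, the data layout and the range-of-motion indicator are hypothetical examples, not prescribed by EUROBENCH.

```python
# Illustrative only: a benchmarking routine that takes sensor-agnostic
# pre-processed data (a timestamped knee-angle series) and returns one
# quantitative performance score (range of motion).

def range_of_motion(samples):
    """samples: list of (timestamp_s, angle_deg) tuples -> ROM in degrees."""
    angles = [angle for _, angle in samples]
    return max(angles) - min(angles)

# One hypothetical gait cycle of knee-angle data
gait = [(0.00, 5.0), (0.25, 40.0), (0.50, 62.0), (0.75, 20.0), (1.00, 6.0)]
print(range_of_motion(gait))  # → 57.0
```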

 

Required features:

1. Access to the source code should be granted, to enable the EUROBENCH Consortium to make any posterior code adjustments.

2. The source code should not rely on plugins/modules whose licensing restricts the use of the code. In particular, the code should only rely on plugins/modules whose use is free of charge.

3. If the code's use is not free of charge, an offer for free-of-charge use should be provided to the EUROBENCH Consortium, at least during the project lifetime. Possibilities of agreements for further use should be mentioned.

4. The benchmarking algorithm should be launched through a script or executable, without particular additional interaction from the user.

5. The dependencies of the source code should be explicitly listed.

6. In the case of Matlab code, participants should demonstrate the possibility to generate a standalone application, or to run the code using Matlab alternatives such as Octave.

7. The benchmarking algorithm should be provided with some reference input data and the related output data, to enable detecting any output deviation due to possible code changes.

8. The benchmarking algorithm should be provided with a “User's manual” describing (i) the steps for installing and running the software and (ii) the mathematics behind it, preferably with references to relevant scientific literature demonstrating its soundness.

9. The software should be associated with a standard license defining the permitted types of use.

Desirable features:

1. Version control: the use of version control is strongly suggested, to ease tracking of potential code evolution during the integration. Having the code already under version control is thus a plus.

2. Unit-testing-like mechanisms: in relation to the delivery of reference input-output data, having them embedded in a unit-testing mechanism would ease the potential evolution of the code during integration.

3. Continuous-delivery-like mechanisms: the procedure to go from the core source code to a standalone script or executable should be documented. Tools enabling the automation of this process would be a plus.

4. The preferred operating system is Linux, but Windows can be used if participants are reluctant to use Linux.

5. Even though the EUROBENCH Consortium only requires access to the source code, open-source development would be preferable, to share this knowledge with the community. We can accept code that is not open-source, but we would appreciate applicants explaining the reasons why.
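As a sketch of the suggested unit-testing mechanism, the reference input/output pair from the required features could be embedded as follows. Python's standard `unittest` is used here; pytest or any equivalent would do, and the routine and data are invented placeholders.

```python
# Sketch: reference input/output data embedded in a unit test, so any code
# change that alters the output is caught automatically during integration.
import unittest

def step_cadence(heel_strikes):
    """Hypothetical routine: cadence in steps/minute from heel-strike times (s)."""
    return 60.0 * (len(heel_strikes) - 1) / (heel_strikes[-1] - heel_strikes[0])

class ReferenceDataTest(unittest.TestCase):
    REFERENCE_INPUT = [0.0, 0.5, 1.0, 1.5, 2.0]   # heel-strike timestamps (s)
    REFERENCE_OUTPUT = 120.0                       # expected cadence

    def test_matches_reference(self):
        self.assertAlmostEqual(step_cadence(self.REFERENCE_INPUT),
                               self.REFERENCE_OUTPUT)

if __name__ == "__main__":
    unittest.main(exit=False)
```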

Table 4. Non-exhaustive list of examples of relevant performance indicators, variables to be measured, and suggested protocols. Please consider that this list is just an example. Applicants are completely free to propose any other relevant indicator.

(TABLE NOT AVAILABLE)

 

2.1.3 Experimental datasets

The experimental datasets provided by the applicants are expected to populate the EUROBENCH database, and should include all relevant data obtained by testing experiments together with the output of software routines.
The exact structure of the EUROBENCH database and datasets is still to be agreed upon by the Consortium. Applicants should commit to adjusting their data structure during the integration phase, to fit the EUROBENCH constraints.

Required features:
Any experimental dataset should be provided with:

 

 

● Experimental meta-data:

  ○ Description of the experimental task (number of trials, magnitude and frequency of the external disturbance, indications given to the user/operator, ...).

  ○ Description of the sensors used (type of sensor, manufacturer).

  ○ Description of the positioning of the sensors on the bipedal system and on the testbed.

  ○ All information required for understanding and processing raw and pre-processed data.

  ○ Description of variations from one recorded experiment to another.

● Anthropomorphic data:

  ○ All relevant anthropomorphic information for each bipedal system (robot and/or human) involved.

● Raw data (see Figure 2.2):

  ○ Recorded signals from all sensors. There is no restriction on the file format, although standard structures are encouraged (and should be described).

  ○ Description of the file content, to help the user open and use these files.

● Pre-processed data (see Figure 2.2):

  ○ The pre-processed data should be stored in a consistent ASCII (i.e. human-readable) format. The concrete file structure will be adjusted during the integration phase with the Consortium.

  ○ Data should be timestamped.

  ○ The stored data should use standard measurement units.

  ○ In the case of biomechanical measurements, a reference to standard kinematic models should be provided.

● Post-processed data (see Figure 2.2):

  ○ If the applicants also propose a software routine, they should provide the resulting scores obtained by applying each software routine to the pre-processed data.

● Confidentiality:

  ○ Access to these datasets will be provided to the community through the EUROBENCH ecosystem. For this reason, applicants should take confidentiality matters into consideration (in particular for data obtained from experiments on human users).

● Data recorded from external devices (e.g. ground force measurement or pushing devices) should be synchronized with all other recorded data.

● The attachment of captured videos, for all or some illustrative experiments, is a plus. Naturally, user consent should be provided accordingly.

● A reference to the pre-processing algorithms used to obtain the pre-processed data should be provided.
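To illustrate the “consistent ASCII, timestamped, standard units” requirement for pre-processed data, here is a sketch of one possible column layout, written with Python's csv module. The column names and values are purely illustrative; the concrete file structure will be agreed during the integration phase.

```python
# Sketch: pre-processed data as timestamped, human-readable ASCII with a
# header row naming the unit of each column (seconds, radians).
import csv
import io

rows = [
    # timestamp [s], hip angle [rad], knee angle [rad]
    (0.00, 0.12, 0.05),
    (0.01, 0.13, 0.07),
    (0.02, 0.15, 0.10),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp_s", "hip_angle_rad", "knee_angle_rad"])
writer.writerows(rows)
ascii_payload = buf.getvalue()

assert ascii_payload.isascii()            # human-readable, ASCII-only
print(ascii_payload.splitlines()[0])      # → timestamp_s,hip_angle_rad,knee_angle_rad
```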

Desirable features:

1. Access to the code used for generating the pre-processed data from the raw data is encouraged. Agreement on open access to this code (under a proper license) for the community is also encouraged. If this is not accepted, it should at least be discussed.

IMPORTANT: These data are intended to be openly available. Therefore, prior to development, you should accept that these data cannot be confidential, and ensure, during data generation, that no sensitive information is contained in them (in compliance with the latest GDPR policies).

 

(TRUNCATED)

 


