
IntellIoT - First Open Call
Deadline: Nov 1, 2021  


1 GENERAL OVERVIEW
1.1 Overview and summary of the Open Call

IntellIoT, a pan-European research and innovation project funded by the European Union as part of the Horizon 2020 programme ICT-56-2020 “Next Generation Internet of Things”, is organizing two Open Calls with the aim of involving startups and SMEs to contribute new and innovative technologies, applications and devices to the IntellIoT framework and its use cases.

The Open Calls will be used to gain feedback from the participants on the developed framework and technologies, as well as to explore novel business models applied by the Open Call winners. In the case of Open Call 1, this feedback will be considered for the evolution of the framework in its 2nd release. The Open Call 2 results will feed into the demonstration of the 2nd release of the IntellIoT framework and will aim to build a sustainable ecosystem beyond the project.

Applicants are invited to submit a short outline of their technology and business proposition, highlighting how they may integrate with the IntellIoT framework. The submissions will be evaluated by independent and external experts based on clearly outlined criteria, resulting in the selection of the four best companies per Open Call.

The four selected companies each gain access to the IntellIoT project within a 6-month pilot to work with the IntellIoT partners on the framework and use cases. In addition, the selected organizations will receive up to 150,000 Euro for their efforts in accordance with the selection criterion on economic fairness and as necessary to achieve the objectives of the action.

IntellIoT partner Startup Colors UG is responsible for the coordination of the Open Calls.

1.2 Introduction to IntellIoT

The overarching objective of IntellIoT is to develop a reference architecture and framework that enables IoT environments for (semi-)autonomous IoT applications, endowed with intelligence that evolves with the human-in-the-loop. The framework builds on an efficient and reliable IoT/edge (computation) and network (communication) infrastructure that dynamically adapts to changes in the environment and provides built-in, assured security, privacy and trust. This reference architecture and framework will be applied in the heterogeneous use cases encompassed in the project, covering smart environments in agriculture, healthcare and manufacturing.

The IntellIoT project will mainly focus on three research aspects and associated next generation IoT capability pillars: collaborative intelligent systems (IoT), human interaction with these intelligent systems, and the trustworthiness and security of all these activities. The resulting three pillars are depicted in Figure 1 and shortly described below.

Figure 1: The three IntellIoT pillars

1)  Collaborative IoT: Various semi-autonomous entities (e.g., tractors, robots, healthcare devices) will need to cooperate in order to execute multiple IoT applications. These entities will have to be self-aware, and each will have a different amount of knowledge about the task at hand and the environment in which it is located. Unfortunately, it is not always possible to provide all the necessary knowledge to the entities, especially in changing environments. To keep their knowledge up to date, the entities need to extend it by applying learning technologies based on Artificial Intelligence and Machine Learning. New knowledge can be acquired either by interacting with the environment (via sensors) or by interacting with the other entities in the environment. By exchanging information via a reliable and secure communication network, the entities in the environment will collaborate with each other to update their own knowledge and fulfil their assigned tasks.

2)  Human-in-the-Loop: The human within the system will continue to play a crucial role in the whole process. The aim is not to remove the human from the system, but to use their experience and knowledge to overcome unknown situations, where the system does not (yet) have the knowledge to handle the situation and collaboration with the other entities in the field also does not provide the required information. The interaction with the human (be it the machine operator, the farmer, the physician or any other person) will enable the intelligent system to expand its knowledge about the environment or the application through machine learning technologies, and to use the experience of the human operator to learn new features or information about the overall process. Humans will therefore remain a vital element of the system and will interact with the IoT elements in the system to overcome its current limitations.


3) Trustworthiness: Security, privacy and, ultimately, trust are considered indispensable preconditions for the reliability and wider acceptability of distributed, collaborative IoT systems and applications. The trust of the human (e.g., a patient or farmer) in the system is key, as the system's (autonomous) decisions need to be trusted, and the end-users’ data need to be handled with utmost care, by providing appropriate levels of security and privacy safeguards. In this context, and in addition to well-understood security and privacy best practices, IntellIoT will adopt advanced security intelligence to protect unsupervised device-to-device interactions, based on self-adaptable, security-related operations. Furthermore, the overall trust will be fortified by continuous monitoring, real-time assurance assessment, and primitives enabling transparency of performed actions. Distributed ledger technologies (DLT) and smart contracts will be made accessible to IoT devices and other actors in the use cases to provide transparency of performed actions, create trustworthy supply chains and build trust between parties.


1.2.1 THE INTELLIOT FRAMEWORK

The IntellIoT architecture is organised around the three pillars at the heart of the project’s concept; i.e., Collaborative IoT, Human-in-the-loop, and Trustworthiness. Figure 2 provides a high level, simplified view of the IntellIoT framework and its building blocks.

Figure 2: High-level, simplified view of the IntellIoT framework and its building blocks

In more detail, at the heart of the Collaborative IoT (system-wide AI) components is the Hypermedia Multi-Agent System (HyperMAS) Infrastructure: a multi-agent system that reacts to incoming end-user goal specifications and manages (including discovery and search facilities) the available artifacts and agents (defined in W3C WoT Thing Descriptions or W3C WoT Thing Description Templates), along with the available procedural knowledge (i.e., agent plans). Moreover, Federated Learning is leveraged to prevent degradation of deployed models, address edge cases, and implement data privacy and security.
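For illustration, the sketch below shows how a field device might be exposed to the HyperMAS as a W3C WoT Thing Description, here written as a Python dict. The device name, URLs and affordances are invented for this example; a real deployment would follow the full W3C WoT Thing Description specification.

```python
# Minimal sketch of a W3C WoT Thing Description for a hypothetical field
# device. All identifiers and URLs are illustrative assumptions.
tractor_td = {
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "title": "AutonomousTractor-01",
    "securityDefinitions": {"basic_sc": {"scheme": "basic", "in": "header"}},
    "security": ["basic_sc"],
    "properties": {
        "position": {  # current GPS position, read-only
            "type": "object",
            "readOnly": True,
            "forms": [{"href": "https://tractor-01.example/properties/position"}],
        },
    },
    "actions": {
        "moveToWaypoint": {  # instruct the device to drive to a waypoint
            "input": {
                "type": "object",
                "properties": {"lat": {"type": "number"}, "lon": {"type": "number"}},
            },
            "forms": [{"href": "https://tractor-01.example/actions/moveToWaypoint"}],
        },
    },
}
```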

In terms of the Human-in-the-Loop pillar, key elements include the Goal Specification Front End, which enables the end user (e.g., farmer, customer, doctor) to specify a goal (e.g., "plant wheat in field 5") and sends it to the back end of the Web-based IDE. The back end maps user goals to the input for the Hypermedia MAS, and enables the systems engineer to monitor the system and to specify the agent organization and the procedural knowledge of individual agents. IntellIoT features additional innovative Human-Machine Interface (HMI) capabilities, leveraging Virtual Reality (VR) and Augmented Reality (AR) technologies (based on Oculus Quest 2 and HoloLens 2, respectively – the latter augmented by a Stylus Pen) to provide a user-friendly, feedback-rich and tactile way of interacting with and managing the underlying intelligent infrastructure.
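As a rough illustration of the goal-to-input mapping performed by the back end, the following Python sketch parses a free-text goal such as "plant wheat in field 5" into a structured specification. The grammar and field names are assumptions made for this example, not the project's actual interface.

```python
import re

# Hypothetical goal grammar: "<task> <crop> in field <number>".
GOAL_PATTERN = re.compile(r"(?P<task>\w+)\s+(?P<crop>\w+)\s+in\s+field\s+(?P<field>\d+)")

def parse_goal(text: str) -> dict:
    """Map a free-text end-user goal onto a structured goal specification."""
    match = GOAL_PATTERN.fullmatch(text.strip().lower())
    if match is None:
        raise ValueError(f"unrecognized goal: {text!r}")
    return {"task": match["task"], "crop": match["crop"], "field": int(match["field"])}

print(parse_goal("plant wheat in field 5"))
# -> {'task': 'plant', 'crop': 'wheat', 'field': 5}
```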

Aiming to provide a solution with trustworthiness by design, a set of trust enablers is designed, developed, and integrated within IntellIoT. A Security Assurance platform acts as the trustworthiness hub within the framework, providing continuous, real-time assessment of the security and privacy posture of the underlying IoT deployment and of IntellIoT itself. Moreover, novel, resource-aware Distributed Ledger Technologies (DLTs), based on HyperLedger Fabric (HLF), are developed and integrated, and are leveraged to provide auditability, reliability, and accountability in all critical operations (transactions) within IntellIoT’s deployed application domains. Furthermore, the trust enablers include advanced (trust-based) Intrusion Detection Systems, augmented by Moving Target Defence mechanisms based on agents that administer the IoT network at runtime and change the system’s configuration both proactively and reactively when attacks are detected.
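The transparency property that such a ledger provides can be pictured with the following conceptual Python sketch: every recorded action is chained to the hash of the previous entry, so any later tampering is detectable. This is a simplified illustration of the idea only, not HyperLedger Fabric code.

```python
import hashlib
import json
import time

# Conceptual illustration of a tamper-evident, hash-chained audit log.
# IntellIoT's actual DLT components are based on HyperLedger Fabric.
class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str) -> dict:
        entry = {"actor": actor, "action": action,
                 "ts": time.time(), "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any modified entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "ts", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("tractor-01", "started mission 42")
log.record("operator-7", "took over control")
print(log.verify())  # -> True
```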

The above three pillars are supported, and their capabilities enabled, by IntellIoT’s dynamically managed network and compute infrastructure. In more detail, IntellIoT features computation resource management and edge management and orchestration capabilities, along with network choreography and management and xApp management and control features. The underlying network infrastructure supporting tactile communication is built upon 5G (5G MEC, 5G vRAN, 5G D2D, private 5G core) and deterministic networking (Time-Sensitive Networking, TSN) technologies.

Moreover, use case-specific features are developed to support the application of IntellIoT in the project’s three heterogeneous use cases, such as UC-specific AI models and Edge Applications. More details on the use cases are provided in the subsection that follows.

1.2.2 USE CASE DESCRIPTIONS

1.2.2.1 USE CASE 1 – AGRICULTURE

Within the agricultural domain, the industry has already successfully implemented “smart farming” features, which focus on the detection of the crop’s needs and problems, e.g., fertilizer and water application and crop spraying according to the needs of individual plants, rather than treating large areas in the same manner. These features have already introduced a high level of automation and have saved millions of tons of fertilizer, pesticides, insecticides, etc. The missing link for optimizing farming activities (e.g., ploughing, spraying, harvesting) is autonomous operation, which promises to optimize resource use, increase efficiency, improve the safety and security of autonomous vehicles in farming, and significantly reduce costs.

Nowadays, farmers drive agricultural vehicles for many hours a day, resulting in fatigue and, ultimately, in potentially deadly accidents. The aim is to remove the farmer from the cabin and have the agricultural vehicles drive autonomously over the farming fields, performing their tasks (e.g., harvesting) by themselves and using the available data to optimize their behaviour.

The scope of the agriculture use case is to investigate future autonomous features of farming vehicles, such as autonomous driving, decision making, and interaction and reliable communication with other entities (e.g., vehicles, drones, sensors) in the field. The interaction between the different entities in the field will create an intelligent IoT environment, where the entities securely interact with each other and use the exchanged information to update their own internal knowledge of the environment. Such knowledge is utilised to enhance decision-making capabilities and communication. Although the future aim is to remove the farmer from the cabin, humans should still play a big role in the control or supervision of the overall system. In this context, the aim of this use case is to incorporate the human-in-the-loop in the intelligent IoT environment of a semi-autonomous agricultural vehicle collaborating with other devices (e.g., drones, sensors, other tractors), while improving safety, reliability and security. Human intervention is needed in uncertain situations (e.g., animals on the path, dust or other particles, obstacles) and is especially valuable in the initial deployments of smart farming.

To validate the above, the objective of the agriculture use case is to deploy and demonstrate a prototype of a self-driving tractor in an intelligent IoT environment by equipping a fully electrified tractor with new technologies, such as cameras, communication, machine learning, interaction capabilities, and the handling of unreliable predictions (by the tractor’s AI model). These will be augmented by a set of innovative security enablers, aiming to provide a trustworthy-by-design environment for all involved stakeholders.

The high-level concept of the agriculture use case is depicted in Figure 3. The use case will cover multiple facets of a smart agriculture deployment, where a tractor drives over a farming field. The tractor will be equipped with sensors and computing resources to perform the mission assigned to it. The computing resources integrated into the tractor will enable it to perform computation tasks locally, thus acting as an edge device. Besides the tractor, there will be other edge devices in this use case, such as a drone or an infrastructure edge. Tasks that cannot be performed on the vehicle will be offloaded to an available cloud infrastructure.

The tractor is given a mission by the human operator: the operator specifies a goal that is passed to the Hypermedia Multi-Agent System (HyperMAS), which has been configured (with respect to its organization and procedural knowledge) by a human engineer using a Web-based development environment. The system then plans how this goal can be achieved and instructs the tractor by assigning tasks to it. The central aspects of these tasks are waypoints in the field to which the tractor should move, together with information about what action it should perform at these waypoints and in between (e.g., harvesting). Additionally, new or updated functionalities (e.g., object recognition, navigation, hazard detection) may be uploaded to the tractor before it starts performing its assigned tasks. These functionalities can be made available to an infrastructure by technology providers (e.g., tractor OEMs, system integrators), from where vehicle owners can deploy them to specific vehicles.

While the vehicle is driving over the field, it observes the environment and uses the gathered information to update its internal knowledge of the environment. Other entities (e.g., other vehicles, drones, sensors) can also be present in the field, and a network is created between these entities to exchange information and update their internal knowledge. The connected entities will use the information to collectively train their own models (i.e., local AI) and identify unknown obstacles in a faster and more robust manner. In the particular case of drones, which can be used in the use case to increase the field of view of tractors, there is currently no partner in the consortium with real drones, so the interaction and information exchange will be performed using a simulated drone.
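A hypothetical sketch of such a task structure, with an ordered list of waypoints and the action to perform at each, could look as follows; all names are illustrative assumptions, not the project's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative task structure that the HyperMAS might assign to the tractor.
class FieldAction(Enum):
    DRIVE = "drive"
    PLOUGH = "plough"
    HARVEST = "harvest"

@dataclass
class Waypoint:
    lat: float
    lon: float
    action: FieldAction = FieldAction.DRIVE  # action at (or towards) the point

@dataclass
class TractorTask:
    mission_id: str
    waypoints: list[Waypoint] = field(default_factory=list)

task = TractorTask("mission-42", [
    Waypoint(48.137, 11.575, FieldAction.DRIVE),
    Waypoint(48.138, 11.577, FieldAction.HARVEST),
])
```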

In the situation where an unknown obstacle is detected and the vehicle does not know how to traverse it, it will first try to collect complementary information or knowledge from the other entities via the Infrastructure Assisted Knowledge Management (IAKM) component. If the required or compatible information cannot be found, the vehicle will stop and request help from the human operator. Utilizing a 5G NR connection, data from the tractor’s sensors is sent to the VR glasses of the human operator. Part of the sent data is a video feed which provides a view of the vehicle's situation. If the provided feed does not contain enough information to find a solution, the human operator can also access other entities in the field (other tractors, drones) to view the situation from different angles.

The human operator can apply direct or indirect control strategies. With direct control, the human operator directly interacts with the vehicle (i.e., moving the vehicle forwards or backwards) using VR controllers. This requires a reliable, high-speed and low-latency connection that enables real-time interaction between the operator and the vehicle. With indirect control, a feasible trajectory around the obstacle has to be defined; once this is done, control is given back to the vehicle. In this case, the human operator supervises the vehicle through the video feed to ensure that the newly defined trajectory is executed correctly. The indirect strategy has more relaxed latency and timing requirements for the communication.

Based on the information coming from the human operator (be it direct or indirect control), the vehicle will refine its own local AI models, continuously learning how to overcome such obstacles in the future, in addition to sharing the learned information (in an AI model) with other vehicles. The latter will be achieved by announcing the availability of the updated AI model tailored to the specific environment (using adapted semantics) via the IAKM. To this end, the IAKM will provide publish/subscribe mechanisms to exchange knowledge (in the form of AI models) or to seek environments required to learn or apply knowledge. Structured semantics will be used to describe various parameters related to the model, to the environment, or to the validity and applicability of the shared knowledge.
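The publish/subscribe mechanism described for the IAKM can be sketched as follows; the topic scheme and metadata fields are assumptions for illustration, not the component's actual interface.

```python
from collections import defaultdict
from typing import Callable

# Minimal publish/subscribe sketch: vehicles announce updated AI models under
# semantic topics; peers subscribed to a matching environment are notified.
class KnowledgeBroker:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, model_meta: dict) -> None:
        for callback in self._subs[topic]:
            callback(model_meta)

broker = KnowledgeBroker()
broker.subscribe("obstacle-model/orchard",
                 lambda meta: print("new model available:", meta["uri"]))
# A tractor announces a refined obstacle model for orchard environments:
broker.publish("obstacle-model/orchard",
               {"uri": "https://iakm.example/models/obstacle-v7",
                "environment": "orchard", "trained_on": "human takeover"})
```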

Distributed Ledger Technologies (DLTs) will further ensure that data and information are transparently recorded and immutable, covering both modes of operation (with or without human intervention). In addition, smart contracts enable timely exchange and payment processes between stakeholders, triggered by data changes appearing in the ledger.

Furthermore, security concepts will be applied both to allow access only to authorized devices and to mitigate any intrusions into the network. If, while the vehicle is performing a mission, a malicious entity (e.g., a drone that has been infiltrated by an attacker) tries to harm that mission, the security assurance mechanisms are activated. The vehicle or its peers must identify the malicious entity and notify the cloud infrastructure. The infrastructure will then take measures to isolate the malicious entity, while making sure that the vehicle, as well as any other legitimate entities, continues functioning. To pre-emptively protect the network, periodic actions will be taken to dynamically reconfigure it, thus rendering obsolete any knowledge an attacker might have gathered.

Figure 3: Agriculture Use Case

1.2.2.2 USE CASE 2 – HEALTHCARE

Chronic diseases are a significant social and financial burden and the main cause of death worldwide. There is a need for more effective strategies for patient management to improve patient outcomes and reduce healthcare costs. Remote patient monitoring through an increasingly large range of validated IoT devices and wearables, combined with the implementation of AI technologies, has the potential to address the stringent needs of this large group of patients. Novel technologies can empower patients to become active partners in the treatment of their disease and contribute to effective strategies for secondary and tertiary prevention.

The scope of the healthcare use case is to investigate collaborative semi-autonomous systems with the human-in-the-loop that leverage artificial intelligence, wearable devices and sensors, and communication technologies to provide more accurate information to clinicians about the health status of their patients, while enabling patients to carry out normal activities in their home environment with limited disruption related to the management of their chronic disease. We will also investigate the use of these technologies to implement strategies for recovery and prevention at home, such as support for an improved diet and guidance within safe physical exercise programs. The intelligent IoT environment incorporates devices, sensors and algorithms that interact using novel communication technologies to provide continuous active support, personalized to the needs of the individuals, providing specific interventions and recommendations, and fulfilling relevant information needs.

Human intervention is needed in two cases: (a) when an AI algorithm detects a potential health emergency, predicts a deterioration that requires the intervention of the clinician, receives a patient request that involves reaching out to a clinical expert, or when the defined workflow requires the involvement of the clinician; and (b) when the system encounters an exception that it does not know how to address, or in case of technology failure.
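These two escalation conditions can be pictured as a simple decision function, sketched below; the event fields are illustrative assumptions, not the project's actual data model.

```python
# Hypothetical sketch of the escalation logic described above.
def needs_clinician(event: dict) -> bool:
    # (a) clinically triggered escalations
    if event.get("emergency_detected") or event.get("deterioration_predicted"):
        return True
    if event.get("patient_requested_expert") or event.get("workflow_requires_clinician"):
        return True
    # (b) unhandled exceptions or technology failure
    return bool(event.get("unhandled_exception") or event.get("device_failure"))

print(needs_clinician({"deterioration_predicted": True}))  # -> True
print(needs_clinician({"heart_rate_logged": True}))        # -> False
```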

The main objectives of the healthcare use case are to design, develop and evaluate a platform combining novel IoT, communication and 5G technologies with an artificial intelligence framework and models enabling semi-autonomous collaborative patient support, with clinical oversight. We will apply and evaluate this platform to facilitate the guidance of heart failure patients and empower them in the management of their own disease, and to assess the effectiveness of a range of technology-assisted interventions.

The healthcare use case will explore the implementation of an intelligent IoT environment to provide efficient AI-supported interventions to patients, and effective interaction with their clinicians, in the context of home care management. This use case will focus on the needs of heart failure patients and develop a system enabling them to take a key role in improving their health and guiding them through the management plans provided by the clinical experts. We aim to demonstrate that such a remote and continuous support system can provide effective recommendations and guidance, empower patients to reach better outcomes, and reduce costs, while never compromising on safety. To enable adoption by healthcare organizations, the solution needs to use the increased information (from sensors, devices, etc.) and the AI models to deliver effective and safe semi-autonomous interventions, without overwhelming the healthcare professional with large amounts of additional (and non-actionable) data.

Figure 4 depicts key components, actors, and interactions of the proposed intelligent semi-autonomous system. IoT devices (wearables, sensors) will collect data that will be used by the AI infrastructure and models to provide recommendations and drive interventions, and to extract accurate information on the health status of the patients monitored in their home environment. The AI-assisted system will guide the patients through their daily activities and through their care plan, with clinical expert oversight. Patients will be equipped with wearable devices measuring relevant data that is transferred to a personal IoT device (e.g., smart watch or smart phone) via step (1). The AI application will analyse the collected data in step (2) to identify the need for interventions or recommendations, according to the initial AI models and the care plans and goals previously defined by the clinicians responsible for treating the patients. When the need for an intervention is detected, either a recommendation is sent to the patient via step (3a) (with all the information sent to the patient also shared with the clinician for review), or the case is escalated to involve the clinician directly (e.g., when potential safety risks are detected), leading to the human-in-the-loop intervention depicted by step (3b). The solution will implement personalization approaches, tailoring interventions to the clinical needs and preferences of the individual patients. The goal is to both improve outcomes and increase adherence.

Figure 4: Healthcare Use Case

We will also test our federated/distributed learning framework in the scenario of distributed collaborative hospital networks, as shown in step (9). Additionally, the system may implement a model for monitoring and diagnosing technical issues with the constrained devices, as depicted by step (4).

In the planned solution, when an escalation takes place and the clinical expert in the loop is notified, the clinician may decide to contact the patient as shown in step (5a), respond to the personal device as shown by step (5b), send a notification to another specialist, or raise an alarm via step (5c). The feedback or recommendation provided by the clinician is persisted in the dataset. The local dataset is also used to validate and re-train the AI model locally on a personal IoT device, as shown in step (6), in order to increase personalization, potentially improve performance, and avoid performance degradation. Model updates are then contributed to the aggregated model at the coordinator deployed at the central infrastructure (e.g., of a hospital), as shown by step (7), to enable its continuous improvement. This distributed AI framework will implement federated and active learning. Model updates are communicated to all personal IoT devices (e.g., using 5G Cellular IoT or D2D communications) through distribution of the aggregated model via step (8). All the involved communications and interactions need to be covered by state-of-the-art security and privacy provisions, catering for the intricacies of the privacy-sensitive user data. Digital consent management to drive the interactions of the system (patients, clinicians, devices) can be managed, e.g., via smart contracts.
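As an illustration of the aggregation in steps (6) to (8), the sketch below shows federated averaging: only model weights, never raw patient data, leave a device, and the coordinator weights each contribution by the size of the local dataset. This is a conceptual sketch, not the project's actual federated learning framework.

```python
# Federated averaging sketch: aggregate per-device weight vectors,
# weighted by the number of local training samples on each device.
def federated_average(updates: list[tuple[list[float], int]]) -> list[float]:
    """updates: (weights, n_samples) per device; returns aggregated weights."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two personal IoT devices contribute updates trained on 120 and 80 samples:
aggregated = federated_average([([0.2, -1.0], 120), ([0.4, -0.8], 80)])
print(aggregated)  # -> [0.28, -0.92]
```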

1.2.2.3 USE CASE 3 – MANUFACTURING

Industry 4.0 is seen as the most disruptive change emerging in the manufacturing industry. Shrinking lot sizes, with orders coming directly from the customer and being manufactured with little or no human intervention, are one of the main focus areas. The aim of this use case is to enable flexible and individualized (down to lot-size one) production, which is widely recognized as a crucial feature of Industry 4.0 for the manufacturing plant of the future. Taking such a demand even further, this use case considers a shared manufacturing plant with multiple customers utilizing manufacturing-as-a-service. Machines in the shared manufacturing plant are provided by multiple machine vendors and operators, which offers production flexibility and potential for novel and disruptive business models.

The intelligent IoT environment in this use case derives a production plan from product data received from a customer, selects machines for the planned production steps, and plans optimized transport paths for workpieces. Smart contracts are concluded between customers, machine operators and the plant operator, where at least the latter two are represented by digital agents. Transport is done by robots and automated guided vehicles (AGVs), guided by built-in AI. Whenever built-in computing resources are insufficient, computation tasks are automatically offloaded to the edge cloud. Whenever the AI is not sufficiently confident about a production step or workpiece handling, the intelligent IoT environment will involve a human-in-the-loop to take over control remotely. Using AR technologies, the human-in-the-loop assists the AI, which is concurrently trained with these new inputs. The IoT/edge and networking infrastructure (within the machines and robots and to the operator) will enable tactile, reliable, secure and safe operation.

The approach of this use case is summarized in Figure 5 and entails the following components, actors and interactions: Instead of ordering a standard product, a customer (tenant) provides a specification for a product, i.e., a production goal, e.g., a CAD drawing and CAM data, see step (1). A great variety of products can be built depending on the machines available in the shared manufacturing plant. Using additive manufacturing in addition to conventional machines (e.g., for drilling, milling, or welding), almost arbitrary products can be made. In a small-scale, but fully featured demonstrator of this use case, the customer provides text or an image to be engraved or lasered onto a wood slice.

In step (2), based on a given specification of the Hypermedia Multi-Agent System, where a human engineer defines the system's organization and procedural knowledge using a Web-based development environment that is aware of the currently available artifacts (3), the system orchestrates suitable artifacts representing available machines to manufacture the specified product. If the system cannot find a solution for the production goal, it can request support from a human plant operator or customer (3). In the proposed demonstrator, a wood slice (as raw material) is selected, and the AI decides how to place it in which machine(s) to engrave and/or laser text or an image onto it. Human help might be needed, e.g., if the image is too large and must be cut or resized.

Next, a robot or AGV is tasked to transport the workpiece (4) to the next production step. As a machine may be operated by the plant owner or a third-party operator, contractual arrangements are set up using a distributed ledger. Further, comprehensive security mechanisms are applied to ensure privacy and security of customer data. When a robot interacts with a machine, e.g., moves a part in the working area of a machine, a safe peer-to-peer communication relation between the two can be set up ad hoc to protect against collisions. For example, a 5G connection is set up that supports wireless TSN and ultra-reliable low latency, not only in uplink or downlink, but also in sidelink (D2D).

In step (5), a local AI on board the robot decides how the robot picks a workpiece and places it in the next machine. If the confidence level of the local AI is low and it cannot pick and place the workpiece safely, a connection to a human is established (6). Utilizing AR, the human can virtually grab the workpiece to support the robot. A tactile communication link is established for this interaction, under consideration of security and privacy. Cameras will generate a sufficiently accurate reconstruction of the surroundings and of the robot itself, allowing full control and visual information about the parameters of every joint. Grabbing and haptic feedback will be realized with the Holo-Stylus developed by HOLO. If support from a remote operator is needed, tactile communication may not be possible over a long-distance internet connection. Hence, the operator can control a virtual robot (overlaying the scanned model with the CAD model), rendered in the local edge, with delayed movement of the real robot. From the human handling of the workpiece, the local AI re-trains itself using the human feedback as target (7) and federates the learned parameters to other robots (8).
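The confidence-gated human-in-the-loop pattern of steps (5) to (8) can be sketched as follows; the threshold value and the stub interfaces are illustrative assumptions.

```python
import random

CONFIDENCE_THRESHOLD = 0.85  # illustrative assumption

class LocalAI:
    """Stub standing in for the robot's on-board grasp model."""
    def propose_grasp(self, workpiece: str):
        return {"pose": "top-down"}, random.random()  # (grasp, confidence)
    def add_training_sample(self, workpiece: str, grasp: dict) -> None: ...
    def federate_update(self) -> None: ...

class RemoteOperator:
    """Stub standing in for the AR/Holo-Stylus operator interface."""
    def guide_grasp(self, workpiece: str) -> dict:
        return {"pose": "side", "corrected_by": "human"}

def handle_workpiece(ai: LocalAI, operator: RemoteOperator, workpiece: str) -> dict:
    grasp, confidence = ai.propose_grasp(workpiece)
    if confidence >= CONFIDENCE_THRESHOLD:
        return grasp                              # act autonomously, step (5)
    grasp = operator.guide_grasp(workpiece)       # human takeover via AR, step (6)
    ai.add_training_sample(workpiece, grasp)      # local re-training, step (7)
    ai.federate_update()                          # federate parameters, step (8)
    return grasp

print(handle_workpiece(LocalAI(), RemoteOperator(), "wood-slice-17"))
```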


Figure 5: Manufacturing Use Case

1.3 What types of contributions are we looking for?

With the Open Calls, IntellIoT aims to widen the range of new and innovative IoT applications and devices within the framework and the specific IoT environments of the use cases. To ensure consistent development of sustainable solutions throughout the use case implementations and upcoming activities, clear integration points with the IntellIoT framework need to be described in the Open Call proposals and realized during their execution. Moreover, it is intended to further develop the already specified use cases by providing additional tools, services, applications or devices. To ensure a valuable addition to all use cases as well as to the general framework, one proposal per use case and one proposal addressing the general framework will be selected.

The following list of possible contributions shall give Open Call applicants an understanding of what could be contributed through their Open Call proposals. The described contributions do not preclude the submission of alternative suggestions. In fact, additional ideas from entities interested in joining the Open Call are encouraged and welcome.


1.3.1 EXEMPLARY CONTRIBUTIONS FOR THE GENERAL FRAMEWORK

To deliver the proposed framework, as sketched in subsection 1.2.1 above, the consortium (which includes large enterprises, SMEs/start-ups and academic institutions) brings together key expertise and heterogeneous state-of-the-art technologies for the next generation IoT. These include 5G communications, resource-aware edge computing, distributed AI, autonomous self-aware systems, DLTs, security & privacy for IoT, tactile AR/VR HMI, and vertical-specific know-how in the areas of agriculture, manufacturing, and healthcare.

Nevertheless, to further enhance the capabilities of the platform, and also to showcase the suitability of the core IntellIoT framework as the basis for an ecosystem that can be built on top of it beyond the project, we are seeking additional technological building blocks that expand the capabilities of the core framework and can be applied horizontally across the project’s use cases. Therefore, besides contributions that target one specific use case, we welcome contributions that are more general and can be applied across use case environments. Examples of such contributions are:

●  Digital Twin tooling: Software that allows creating a digital copy of a physical object to enable simulations, advanced design, and planning. Such a digital twin could make sense, for example, for the tractor in use case 1 or the robot arm in use case 3.

●  Edge and 5G Infrastructure: IntellIoT has its own Edge orchestrator, which handles the placement of Edge Apps on different Edge devices. Open Call participants can bring project-relevant edge applications/tasks to be handled by the Edge orchestrator. A prerequisite is that those apps are dockerized and run in an x86 environment (a minimal Edge App skeleton is sketched after this list).

    For 5G infrastructure, Open Call participants can bring software and hardware that can set up a prototype 5G private network. IntellIoT will have its own 5G infrastructure setup; however, to test interoperability with other implementations, a further 5G network testbed could be meaningfully included. Furthermore, more MEC applications (xApps) can be added for advanced 5G MEC functionalities.

●  Blockchain-based marketplace: Software that implements a user interface and an underlying marketplace exchange to support a service business based on IntellIoT’s blockchain components (e.g., for 3rd-party agriculture or manufacturing machinery vendors).

●  Devices/tools to support human-machine interaction: Hardware (e.g., HMI devices such as AR gloves) and software that enable intuitive remote machine or vehicle interaction, for example to control a robot arm in the manufacturing use case or the tractor in the agriculture use case. The solution should consider cooperation with the AR/VR solutions used in the project. This could also involve the realization of simulation environments to train and validate the remote control of vehicles or machines.

    A Web-based IDE as a tool for programming hypermedia-based multi-agent systems also falls into this category. Together with WoT Thing Descriptions for constrained devices, interoperability between the heterogeneous devices in the project can be demonstrated.

●  Data Analytics platform: Possibly a web interface with log-in functionality, to be used by the human-in-the-loop, together with data processing and data engineering pipelines that clean and prepare the data collected by the sensors and make it available in the analytics platform, passing through the System Data Repository as appropriate.

●  Advanced sensing solutions: Advanced sensing solutions (e.g., sonar, radar or LIDAR technology, Wireless Sensor Networks – WSNs) could be used horizontally across the project’s use cases: e.g., in Use Case 1 to enable Precision Agriculture scenarios (e.g., deploying WSNs to monitor soil parameters), in Use Case 2 to enable Ambient Assisted Living (AAL) scenarios which consider the ambient environment of the patients, and in Use Case 3 to support additional logistics/supply chain scenarios (e.g., monitoring the supply of spare parts). The Open Call participant would have to provide the hardware for scanning the environment, the software to transform the data into useful functionality, and the interface that the project can link to.

●  AI models: Models that can be federated, personalised and executed on Edge devices in all the different use cases. Filtering 3D video data frames for minimal representation would be helpful for the federated learning approach. For the healthcare use case, AI models together with anonymized data from cardiovascular patients are requested, to be verified against the project’s AI models.

●  Security and Privacy: Mobile OS (Android)-focused security and privacy mechanisms for handling user data, and novel IoT authentication & authorisation schemes for resource-constrained environments.
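As referenced in the Edge and 5G Infrastructure item above, the sketch below shows what a minimal Edge App could look like: a small, stateless HTTP service exposing a health endpoint that an orchestrator can probe, packaged into a Docker image for x86. The port and endpoint name are illustrative assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal Edge App skeleton (illustrative): a stateless HTTP service that
# could be dockerized for x86 and placed on edge devices by an orchestrator.
class EdgeApp(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":  # liveness probe for the orchestrator
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EdgeApp).serve_forever()
```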

1.3.2 EXEMPLARY CONTRIBUTIONS FOR THE AGRICULTURE USE CASE

In the agriculture use case, a semi-autonomous tractor will traverse a field and perform a simulated task, such as ploughing or mowing. A camera setup mounted on top of the vehicle will provide environmental information to drive it around the field and to detect potential obstacles that could endanger the vehicle. The vehicle will collect environmental information and update its internal knowledge using AI algorithms to react to these situations. If an unknown situation occurs and the vehicle does not know how to react, it will interact with other entities in the field (e.g., other vehicles or drones) or use the knowledge of a human operator to overcome the situation. To demonstrate the scalability of the developed solutions, the agriculture use case is looking for additional solutions that go beyond the technology available in the project, using these new technologies as proof that the results can be applied to a fleet of vehicles, or to a combination of vehicles and drones providing additional sensory information to the overall setup. These can be combinations of real drone hardware and/or simulated entities representing other agricultural vehicles and drones. New sensory information or other IoT technologies for improved human-machine interaction are also being investigated to improve the performance of semi-autonomous and future autonomous agricultural solutions. Examples of such contributions are:

●  Digital Twin Tooling: Digital twins of physical objects enable advanced simulations without having the actual objects available for testing purposes. In many test situations, the actual hardware is not available, or only in very limited quantities. Digital twins can, e.g., be extended with Hardware-in-the-Loop testing equipment and provide multiple interfaces for interacting with the technologies developed inside the IntellIoT project, to demonstrate the functionality of the system without having access to the real hardware. Additionally, digital twins enable continuous testing, whereas deploying solutions on the real hardware reduces the testing capabilities significantly. Within the agriculture use case, the team is looking for an advanced simulation environment capable of demonstrating functionalities of semi-autonomous agricultural vehicles (e.g., tractors, harvesters, but potentially also drones) and of using these digital twins to demonstrate the scalability of the solutions and the cooperation capabilities between multiple vehicles to fulfil the assigned missions.

●  Drones: Drones are increasingly used in the agricultural domain to provide, among other things, additional sensory information to the vehicles deployed in the field. The aim of the IntellIoT project is to provide collaborative IoT, where multiple entities in the field (be it a fleet of vehicles, or drones and vehicles) interact with each other by exchanging data and learn from the exchanged data to overcome situations that cannot be handled with the limited data available to a single entity. The challenge here is to achieve the correct interaction between the drone and the designated vehicle. Additionally, exchanging data over reliable data channels (e.g., 5G) will also be investigated.

●  Smart Farming Solutions: Hardware and software solutions for smart farming could be integrated with the IntellIoT framework (e.g., via the HyperMAS) to support the agriculture use case. Solutions could range, for example, from the integration of in-situ sensor networks or aerial imagery sensors to the analysis of the collected data to improve farming efficiency. These new IoT technologies can also cover other topics, including improved human-machine interaction, where a human operator can interact more effectively with vehicles or drones located in the field and either gain better control over the vehicles or acquire more detailed information about the activities and status of the machines.


1.3.3 EXEMPLARY CONTRIBUTIONS FOR THE HEALTHCARE USE CASE

In the healthcare use case, we will implement privacy-preserving AI solutions leveraging federated learning approaches to provide personalized interventions to patients and support them and their clinicians with the effective management of the disease. Data will be collected from patients with a range of devices and will be leveraged to design, develop and apply interventions. Our goal is to propose and evaluate interventions, models and workflows that may help improve the currently available options for the home monitoring and support of heart failure patients, for better outcomes. We welcome innovative contributions with the potential to enhance the solutions provided to clinicians and patients in the IntellIoT ecosystem, including new models and datasets for model training and validation, and innovative devices and wearables that could provide valuable information on the health status of the patients. Examples of such contributions are:

●  Next generation Medical AI devices: Hardware (medical devices) and software (e.g., analytics) to integrate with the IntellIoT framework and support the healthcare use case through patient monitoring and guidance.

●  AI development/models relevant in our ecosystem: AI models aiming to support the monitoring and management of patients with heart failure.

●  Data & analytics: Access to data (e.g., historical and anonymized medical data or live data from sensors) as well as software (e.g., machine learning models that can be utilized to analyse the medical data).

●  Wearables: Hardware (e.g., smart garments) and software to integrate them into the IntellIoT framework to provide data for support in the healthcare use case.

1.3.4 EXEMPLARY CONTRIBUTIONS FOR THE MANUFACTURING USE CASE

In the manufacturing use case, a robot arm transports workpieces between the machines and storage locations involved in the production process. A camera mounted on the head of the robot provides images of the scene the robot is working in, both to the controlling AI and, if needed, to the helping operator. Scaling the use case up, alternative transport technologies would be required to transport different workpieces, e.g., workpieces of different shapes (from very small to very large, round, square-cut and anything in between), different weights (from very light to very heavy), different materials (from soft to hard, from robust to fragile), or different aggregation states (e.g., liquids, or hot, not-yet-hardened plastic parts). Additionally, in a bigger plant with more machines, larger distances will have to be spanned. Flexible storage solutions might be required to store semi-finished products if bottlenecks occur in machine availability. New kinds of machines would be required to cover a larger range of possible production steps, potentially covering additive manufacturing or chemical or biological processing steps. New sensors might be required to detect properties of new materials in use, or to monitor the quality of completed or semi-finished products. Examples of such contributions are:

●  Automated guided vehicle (AGV): Hardware (e.g., an AGV platform) and software to integrate the AGV into the IntellIoT framework (e.g., through HyperMAS adaptation) and to coordinate with other machinery/robots (potentially the AGV could also carry a robot arm).

●  Localization / navigation in the manufacturing IoT environment: Hardware (e.g., an indoor positioning system) and software to integrate with the IntellIoT framework. After integration, this contribution could provide accurate positioning information, for example for AGVs transporting workpieces, or for mobile machines and storage units.

●  Process industry machinery: Hardware and software to adapt the IntellIoT framework to a process industry scenario (e.g., textile or food & beverages), which can meaningfully employ the developed technologies, e.g., to achieve modular designs and thereby contribute to a circular economy.

●  Additive manufacturing machinery: Hardware (e.g., 3D printers) and software to integrate with the IntellIoT framework and contribute with advanced machinery to the manufacturing use case, e.g., by integration with robots.


●  New sensor technologies: Hardware (e.g., radar or LIDAR) and software to integrate with the IntellIoT framework and to support the manufacturing use case, for example to improve the robot arm's interaction with the machinery or to handle workpieces with different material properties.

1.4 Financial support provided

The EC funding budget for Third Parties available for both Open Calls is 860,000 Euro. In each of the Open Calls, a total of four companies will be selected – one contributing to the general framework, and one for each of the three use cases.

The four selected companies each gain access to the IntellIoT project within a 6-month pilot to work with the IntellIoT partners on the use cases. In addition, the selected organizations will receive financial support of up to 150,000 Euro for their efforts in accordance with the selection criterion on economic fairness and as necessary to achieve the objectives of the action.

Within the Open Call 1, the funding available for selected companies is as follows:

-  Up to 150,000 Euro for proposals addressing the general framework

-  Up to 100,000 Euro for proposals addressing one of the three use cases

Within their application, participants have to describe the expected workplan for the 6-month pilot and the budget required for the successful execution of this workplan.

The beneficiaries will receive the funding as a fixed lump sum. The lump sum is a simplified method of settling expenses in projects financed from Horizon 2020 funds. It means that the grantee is not required to present strictly defined accounting documents to prove the costs incurred (e.g., invoices), but is obliged to demonstrate the implementation of the project in line with the milestones set for the project. Simply speaking, this means that we will assess the progress and quality of your work during the Interim and Final Reviews, not your accountancy. The milestones (deliverables and KPIs) will be elaborated at the beginning of the pilot.

The lump sum does not, however, release you from the obligation to collect documentation confirming the costs under fiscal regulations.

In order to receive the grant, an individual Sub-Grant Agreement in accordance with Horizon 2020 funding rules has to be signed between the selected organizations and the IntellIoT consortium. SIEMENS AG, as coordinator of the IntellIoT project, will be responsible for transferring the grants in accordance with the above-mentioned process, and the following installments:

o After contract signature, a prepayment of 50% of the requested funding will be issued, in order to avoid cash-flow problems.

o One month after the end of the pilot phase, and after validation of the internally submitted progress report and the Final Review, the remaining 50% of the requested funding will be transferred.

The Sub-Grant Agreement will include a set of obligations that beneficiaries have towards the European Commission. It is the task of beneficiaries to satisfy these obligations and of the IntellIoT consortium partners to inform beneficiaries about them.

An exemplary set of obligations:

o obligation to submit to any control measures (checks, reviews, audits or investigations) in relation to the participation in the IntellIoT project,

o obligation to keep records,
o obligation to provide information to the Coordinator, the EC or other Consortium Members in order to verify proper implementation of the action and compliance with any other obligations,
o obligation to adhere to the ethics requirements of the project,
o liability for damages.

The selected companies shall be responsible for all possible taxes, wire transfer costs and other possible costs related to the payment of grants.




