CT-01: Emerging on-board processing techniques for Next Generation SatComs
Motivation: One of the crucial requirements for integrating satellites into the 5G and beyond ecosystem is to introduce flexibility into existing satellite systems so that the dynamics of future systems can be taken into account. With the recent advances in digital transparent processors, on-board processing has become a reality, and wideband spectrum will soon be flexibly filtered and channelized. Reconfigurable on-board processors provide several advantages, including flexibility for resource allocation and routing, incorporation of changes in the application parameters (bandwidth, frequency, modulation and coding, phased-array control) and overall cost reduction. Due to these significant advantages, current SES missions already include this capability, and the ratio of on-board processing speed to required power is expected to keep improving in the years to come. However, several issues relating to feasibility, complexity and performance need to be addressed from the perspective of moving some of the current gateway functionalities to the satellite. One of the promising platforms in this direction is the software-defined payload, which can be controlled by the network and can introduce the aforementioned flexibilities.
Objectives: This project aims at critically investigating the amount of gateway (GW) functionality that can be transferred to the satellite. A progressive transfer will be considered, starting from processing blocks that appear later in the chain; novel methodologies will be considered. A comparative evaluation of the on-board complexity and power vis-a-vis the accrued gains and flexibility will be undertaken. Based on the outcome of the analysis, precise hybrid solutions will be proposed for the near and long term, where the digital processing is split between the GW and the payload, with particular emphasis on multi-GW scenarios.
CT-02: Demand-based optimization of multibeam SatCom systems using Active Antennas
Motivation: With the ongoing advances in digital payloads and active antenna structures, it is expected that dynamic beamforming and beamhopping will be feasible in the next generation of satellite systems. Despite the capability of active antennas in terms of reconfiguring radio frequency beams over the coverage zone, there exist several research challenges including the adaptation of the beam patterns and the beam sizes based on uneven traffic demands, the efficient utilization of the available radio frequency and power resources, and their actual deployment.
Objectives: This project aims to accurately model active antennas, to utilize these models for optimizing the beam patterns over the coverage zone of a multibeam satellite system with an uneven traffic distribution, and to select the best algorithms for the design of traffic-demand-based beam patterns. The proposed algorithms will be validated in the SigCom communications lab by utilizing the inputs from SigCom’s recently developed satellite traffic emulator.
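As a minimal illustration of demand-based resource allocation (a toy sketch under assumed demand figures, not the optimization algorithms the project will develop), the following splits a fixed bandwidth budget across beams in proportion to uneven traffic demand:

```python
# Illustrative sketch: proportional bandwidth allocation across multibeam spots.
def allocate_bandwidth(demands_mbps, total_bw_mhz):
    """Split a total bandwidth budget across beams proportionally to demand."""
    total_demand = sum(demands_mbps)
    if total_demand == 0:
        # No demand anywhere: share the budget equally.
        return [total_bw_mhz / len(demands_mbps)] * len(demands_mbps)
    return [total_bw_mhz * d / total_demand for d in demands_mbps]

# Hypothetical example: three beams with uneven demand over the coverage zone.
bw = allocate_bandwidth([100, 300, 600], total_bw_mhz=500)
```

In practice the allocation would be coupled to the antenna model, beam sizes and power constraints; this proportional rule only conveys the demand-matching objective.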
CT-03: Hybrid Satellite-Terrestrial Connectivity Solutions for Emerging IoT/MTC Systems
Motivation: In emerging IoT applications, a huge number of sensors are distributed over a very wide area, and in some cases (e.g., smart farming) they are located in remote areas that are not accessible by terrestrial networks. In this regard, supporting emerging distributed IoT applications via satellite seems promising for providing coverage to wide, rural or otherwise inaccessible areas while concurrently supporting a massive number of heterogeneous devices/sensors. However, for low-cost and resource-constrained IoT nodes, the need for large terminal antennas with sufficient directivity, due to the propagation delay and slotted orbit of GEO satellites, and the need for steerable antennas and suitable waveform designs, due to the highly time-variant channels of MEO/LEO satellites, are crucial research issues. Towards addressing these issues, hybrid satellite-terrestrial connectivity with a few satellite-connected aggregators seems promising for supporting low-cost IoT sensors/devices, since terrestrial communication technologies enable the deployment of low-cost terminals while the satellite-connected aggregation points can provide ubiquitous connectivity.
Objectives: This PhD project aims to investigate novel hybrid satellite-terrestrial techniques/architectures to provide resource-efficient and reliable connectivity to the massive number of low-cost IoT/MTC devices. In the proposed architectures, an intermediate layer of drones will also be considered as mobile gateways, which collect data from the sensors in the field and send data via LEO/MEO satellites. Furthermore, innovative waveform design, multiple access, cross-layer resource allocation, coverage extension, latency reduction and reliability enhancement techniques will be investigated for the IoT via hybrid satellite-terrestrial systems. Moreover, the efficient optimization of spectrum and power resources for resource-constrained IoT nodes will be considered.
CT-04: Machine learning/AI-assisted adaptive resource management and operational optimization in large multibeam SatCom systems
Motivation: With the recent advances in deep learning architectures/algorithms, GPU platforms and open-source Artificial Intelligence (AI)/Machine Learning (ML) tools, the application of AI/ML in wireless networks, including SatCom systems, is attracting increasing attention. For example, ML/AI is already being used in various space applications, including opportunistic weather monitoring, sensor fusion for navigation, and Earth observation. However, several challenges from the algorithmic and implementation perspectives need to be addressed to effectively employ ML/AI techniques for adaptive resource management and operational optimization in SatCom systems.
Objectives: This project will develop novel ML-assisted resource management algorithms to address the challenges of multi-dimensional and large search spaces, and evolving objectives and constraints. For this purpose, a combination of various mathematical tools including ML, optimization and predictive analytics, and open source deep learning platforms will be used.
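To make the learning-based resource management idea concrete, the toy sketch below uses an epsilon-greedy bandit (an illustrative assumption, not the algorithms the project will develop; the reward model and configuration set are hypothetical) to learn which of several resource configurations yields the highest average reward:

```python
import random

# Toy epsilon-greedy sketch of ML-assisted resource selection: the agent
# learns, by trial and error, which carrier configuration pays off best.
def epsilon_greedy(reward_fn, n_arms, rounds=1000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms          # running mean reward per configuration
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]         # incremental mean
    return values

# Hypothetical reward: configuration 2 offers the best average throughput.
best = epsilon_greedy(lambda a, rng: rng.gauss([0.4, 0.6, 0.9][a], 0.1), 3)
```

Real SatCom resource spaces are multi-dimensional and non-stationary, which is exactly why the project targets richer ML and predictive-analytics tools than this single-dimension example.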
CT-05: Dynamic spectrum management for emerging integrated SatCom and 5G networks
Motivation: While terrestrial communications have been progressively allocated more spectrum, SatComs have seen their traditional spectrum being challenged and have been forced to consider increasingly higher bands such as millimeter waves and optical frequencies. If this trend continues, the SatCom spectrum will be so limited that it will not be able to bring much added value to 5G even if the integration is technically feasible. In this regard, one of the promising ways of addressing the spectrum scarcity problem in integrated 5G-SatCom systems is to enable dynamic utilization of the available spectrum between satellite and terrestrial systems. However, dynamic spectrum techniques have been studied only in the context of terrestrial networks, and their feasibility in an integrated SatCom-5G system needs to be investigated. Furthermore, several other challenges, including inter-system interference modeling and mitigation, traffic-aware resource management, power control, and hardware imperfections, need to be addressed for the spectral coexistence of satellite and terrestrial systems in integrated SatCom-5G systems.
Objectives: This PhD project aims at investigating the co-primary bands where both satellite and terrestrial systems are allowed, rather than the traditional bands where SatCom has a primary allocation. It will also focus on mutually beneficial dynamic spectrum management techniques and algorithms, e.g., Licensed Shared Access (LSA), which can lead towards centralized management for integrated satellite-terrestrial 5G systems. Both C and Ka frequency bands will be considered for access/backhaul connectivity. The output will be a SW demonstrator, focusing on the feasibility as well as the performance gain of the integrated system.
CT-06: Satellite-assisted edge caching and multicasting in 5G and beyond networks
Motivation: Satellites can play a crucial role in 5G and beyond networks by providing high data-rate broadcast/multicast services, narrowband services for MTC, offloading signaling/video content from the terrestrial networks, and supporting highly customized and distributed enterprise networks. To achieve these functionalities, edge caching and multicasting have been identified as promising use cases for integrating SatComs into 5G and beyond networks. Moreover, the combination of Digital Transparent Processing with Dynamic Beamforming has brought the required flexibility so that future missions can combine broadband and broadcast services in a single payload. However, there exist several research questions, including the joint optimization of radio, caching/storage and computational resources, efficient cache placement and delivery strategies, updating of cached contents based on content popularity models, and the minimization of the associated latency. Also, existing caching-related works focus on terrestrial networks without considering the advantages of the satellite segment, leading to the need for innovative content placement and cache update schemes in integrated satellite-5G terrestrial systems.
Objectives: The objective of this project is to study the payload resource optimization (e.g., carrier bandwidth, beam design/allocation) and the corresponding caching algorithms that can maximize the cache hit ratio over the coverage area. In terms of payload resources, optimization theory will be used based on properly defined objectives and constraints. In terms of caching, various types of caching, including coded, uncoded, proactive offline, reactive online and cooperative caching, will be investigated and compared. Furthermore, efficient content popularity models and cache update strategies based on these models will be investigated. Moreover, Information-Centric Networking (ICN) will be considered as a candidate scheme for the distribution of contents jointly with the utilization of caches. The concept will be demonstrated through SW simulations drawing on a combination of data, namely the SnT Satellite Traffic emulator for terminal locations and the MovieLens/YouTube databases for content demands.
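As a back-of-the-envelope illustration of the cache-hit-ratio objective (an assumed Zipf popularity law and a simple most-popular placement, not the project's caching algorithms), one can estimate the fraction of requests served from a cache holding the k most popular contents:

```python
# Illustrative sketch: hit ratio of top-k content placement under Zipf demand.
def zipf_popularity(n_contents, s=0.8):
    """Normalized request probabilities following a Zipf law with exponent s."""
    weights = [1.0 / (rank ** s) for rank in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def topk_hit_ratio(popularity, cache_size):
    """Fraction of requests served from a cache holding the k most popular items."""
    return sum(sorted(popularity, reverse=True)[:cache_size])

pop = zipf_popularity(1000)
hit = topk_hit_ratio(pop, cache_size=100)   # cache 10% of the catalogue
```

Even this crude model shows why popularity modeling matters: under a skewed Zipf distribution, caching a small fraction of the catalogue captures a disproportionate share of requests, which is the effect the project's coded and cooperative schemes aim to amplify.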
NT-01: Algorithmic aspects of network slicing over next generation satellite systems
Motivation: Network slicing is envisioned to be a promising enabler for the deployment and operation of future integrated satellite-terrestrial systems, and can be employed on top of the softwarized network infrastructure to enable end-to-end service provisioning. Also, the ongoing evolution of satellite ground segment systems, including gateways and terminals, will enable the application of these technologies in seamlessly integrating SatCom into the 5G and beyond ecosystem. In terms of implementation, there are already widely accepted tools, such as Ryu/ONOS for SDN controllers and Mininet for network simulations, which support the necessary interfaces for the aforementioned functions. However, the algorithmic aspects which will drive the decision-making and the valorization of these capabilities are much less explored. Also, existing studies have focused only on the ground segment of SatCom systems, leading to the need to investigate how these techniques can enable dynamic interactions with next-generation satellite systems while considering the whole SatCom chain.
Objectives: This PhD topic will focus on building the algorithms that control the orchestrator, with the aim of smoothly integrating the advanced capabilities of satellite systems within the 5G transport network. Slice load balancing as well as isolation will be used as the main performance indicators, while the constraints will be derived from the system capabilities, such as multi-satellite terminal connectivity, integrated MEO-GEO systems and flexible payloads. The algorithms will be based on optimization, complemented by ML approaches when optimization becomes computationally infeasible. This PhD project aims to provide a demonstrator based on the SnT CommLab infrastructure, which can cover a combination of GEO and MEO satellite links implemented in real-time HW. On top of this infrastructure, a suitable SDN platform will be integrated to prove proper slice balancing and isolation for dynamic satellite networks, including handovers and payload reconfiguration.
NT-02: SDN and NFV as Key Enablers for SatCom Integration into 5G Ecosystem
Motivation: 5G and beyond networks are expected to support the management of end-to-end services across heterogeneous environments by means of a single (converged) common network, with the help of NFV- and SDN-based technologies. The main vehicle for providing customized 5G network infrastructures for specific applications and services is cloud computing technology extended to include the network infrastructure (cloud networking). The most promising architecture and implementation come from Software Defined Networking (SDN), where networks can be dynamically programmed through centralized control points, and from Network Function Virtualization (NFV), which enables the cost-efficient deployment and runtime of network functions as software only. Network slicing is a service-oriented construct providing “Network as a Service” to concurrent applications. The 5G slices will deliver different SLAs based on a unified pool of resources. Through this paradigm, the specific services can be highly customized, enabling the seamless integration of heterogeneous networks, such as satellite networks, into the 5G ecosystem.
Objectives: The main objective of this project is to virtualize SatCom network functions so that they share the same virtualized core as cellular network functions, ensure compatibility with the SDN/NFV architecture and support network slicing, thus allowing significant benefits in terms of CAPEX reduction and flexible service provisioning. More specifically, the project will aim to research, develop and validate Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies towards the softwarization and virtualization of the future satellite ground segment. The main scenarios of interest include the following: (i) ground segment for 5G multimedia content (eMBB), (ii) ground segment for 5G M2M/IoT data distribution (mMTC), and (iii) ground segment for GovSatCom towards Mission Critical Communication (URLLC).
NT-03: Satellite-assisted Edge/Fog Processing for Latency Reduction and Enhanced QoS in Mission Critical IoT Applications
Motivation: The widely used cloud-processing platform cannot provide timely feedback to end-users in mission-critical IoT applications, which demand low-latency and high-QoS services. On the other hand, edge/fog computing can extend cloud-like functionalities to the network edge, but edge devices are limited in terms of resources (computational, storage and energy). To address the aforementioned issues, emerging MEO/LEO/HEO satellite systems can act as an important platform to deliver the large amount of data generated by IoT/MTC sensors/devices to the cloud much faster than the edge devices of IoT systems, thus enabling the application of sophisticated techniques, including ML/AI, on the cloud side. Mission-critical IoT applications face challenges in terms of latency, reliability, energy efficiency and throughput. Compared to terrestrial-based solutions, High Throughput Satellites (HTS) can provide higher capacity, speed and coverage by utilizing spot-beam technology, frequency reuse across multiple beams and higher frequency bands. Furthermore, the amount of information to be conveyed between the end-devices and the cloud can be drastically reduced by utilizing satellite-assisted Augmented Reality (AR) to display contextually aware information in the operative's field of view for crisis management applications.
Objectives: This PhD project will explore the benefits of edge/fog processing and the role of satellite in enabling collaborative edge-cloud processing for reducing end-to-end latency and for enhancing QoS in mission critical applications. In mission critical applications, the important performance metrics to be enhanced are latency, low-power connectivity/energy efficiency and ultra-high reliability. To enhance these performance metrics, various innovative techniques including situation-aware data acquisition, distributed queueing-based access control, prioritized channel access, edge-side data preprocessing, data compression, aggregation and offloading techniques will be investigated in the considered scenarios.
NT-04: Flow management for resilient backhauling in 5G and beyond integrated satellite-terrestrial networks
Motivation: The existing terrestrial backhauling connectivity can be complemented with a very high-speed multicast-enabled satellite link directly from MEO/GEO satellites to the cellular sites. Such a satellite-assisted backhauling has been identified as one of the most promising use cases for the integration of 5G and SatCom to meet the requirements of resilience and high availability in future networks. However, satellite and terrestrial links have very different characteristics in terms of latency and coverage. In that sense, a proper traffic classification and flow management algorithm is needed in order to exploit the best of both worlds. For example, flows that are related to caching or file transfer can be effectively routed over satellite whereas telepresence has to go over the terrestrial network. This ecosystem becomes even more complex considering the wide range of service requirements along with the capabilities of various orbits, e.g., MEO vs GEO. To this end, it is essential to optimally allocate the terrestrial and satellite capacities in a way that the total network utility will be maximized under both the failure and non-failure conditions of the terrestrial link rather than the conventional approaches of simply replacing the failed terrestrial link with a satellite link or offloading overflowing traffic via satellite in the peak period.
Objectives: The main objective of this project is to utilize the well-established SDN tools for devising algorithms that can effectively classify and manage the various flows of backhaul traffic. With this SDN-enabled approach, the end-to-end routing across the terrestrial and satellite components can be computed centrally, and then can be rearranged dynamically at the flow-level granularity in the cases of link congestion and failure. The end KPIs will be the QoS at the intended receiver in the sense that the throughput and latency of each flow should respect the agreed SLAs. For the algorithmic part, optimization theory will be used to optimize the flow management in an offline fashion, whereas operational prediction and adaptation will be studied through Bayesian ML techniques.
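The flow classification idea can be sketched as follows (the flow classes, RTT figures and thresholds are illustrative assumptions, not the project's algorithms): latency-tolerant bulk flows ride the high-capacity satellite link, while latency-critical flows stay terrestrial.

```python
# Minimal sketch of latency-aware flow steering between backhaul paths.
SAT_RTT_MS = 550       # typical GEO round-trip time (assumed for illustration)
TERR_RTT_MS = 30       # assumed terrestrial round-trip time

def steer_flow(flow):
    """Return 'satellite' or 'terrestrial' for a flow dict with a latency SLA."""
    if flow["max_rtt_ms"] >= SAT_RTT_MS:
        return "satellite"      # SLA tolerates the GEO path: offload it
    if flow["max_rtt_ms"] >= TERR_RTT_MS:
        return "terrestrial"
    raise ValueError("SLA tighter than any available path")

flows = [
    {"name": "cache_prefetch", "max_rtt_ms": 2000},   # bulk transfer, tolerant
    {"name": "telepresence",   "max_rtt_ms": 150},    # interactive, critical
]
routes = {f["name"]: steer_flow(f) for f in flows}
```

An SDN controller would apply such a policy centrally and re-evaluate it at flow-level granularity upon congestion or link failure, which is where the optimization and Bayesian ML components of the project come in.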
NT-05: Modeling, optimization and reconfiguration of hybrid MEO/GEO satellite networks for integrated service delivery
Motivation: All satellite services, including broadband, multimedia, TV broadcast and Internet for both fixed and mobile applications, need efficient global coverage and reliable transmission systems. In many cases, one satellite/orbit may not be able to provide global coverage, thus leading to the need to coordinate satellites operating in different orbits, i.e., hybrid constellations. However, most of the existing satellite constellations consist of satellites of the same orbital type. In order to utilize the benefits of both orbits, hybrid constellations seem promising for next-generation SatCom systems. In this direction, SES has become the first operator to have both MEO and GEO systems at its disposal. The integration of these two systems on a service level is non-trivial, and it raises a number of research challenges in terms of designing the backbone and enhanced networks of a hybrid constellation to achieve the objective of seamless global coverage. Due to the design complexity and cost issues, it is crucial to achieve this seamless global coverage with minimal use of space and ground resources.
Objectives: This topic will address this hybrid constellation looking across the protocol stack. From a PHY/MAC perspective, a system-level simulator will provide insights into optimizing the system (e.g., coverage, beam pattern, power, bandwidth, inter-satellite links) based on throughput and availability. Going into the NET layer, flexible SDN techniques will be employed to guarantee the uninterrupted delivery of the network streams to the terminal while maintaining acceptable packet latency and jitter guarantees. Finally, system dimensioning aspects will be considered to guarantee the required constraints from an SLA point of view.
NT-06: Cloud-assisted coordinated gateway processing techniques based on Distributed Antenna Systems
Motivation: With the evolution of multibeam systems, a terabit of capacity from a single mission might soon be within reach. However, increasing the user-link capacity unavoidably requires finding ways to increase the feeder-link capacity as well. Unfortunately, both are heavily regulated, which means that the only way forward is to use the existing spectrum more efficiently. In this direction, one possible approach is to move the feeder links to the Q/V bands, but this requires a high level of site diversity to compensate for the outages caused by atmospheric effects, thus leading to the need to deploy multiple gateways which can reuse the feeder-link frequencies. However, this most widely accepted technique of increasing the number of GWs complicates the operation of advanced transmission techniques such as precoding and beam hopping, which benefit from centralized processing. Also, under full frequency-reuse schemes, the potential of multibeam processing is limited due to the intra-system interference created by multiple GWs. Furthermore, the implementation of precoding and beamforming in multi-gateway systems has to address various issues such as intra-cluster, inter-cluster and inter-feeder-link interference, and has to strike a fair balance between the growth in data demand and the available feeder-link resources.
Objectives: This project will investigate the concept of splitting the radio frequency (RF) processing from the baseband (BB) processing and employing a distributed GW antenna system coordinated through a centralized cloud-assisted BB processor. The distributed antenna system should be able to fully reuse the available feeder-link spectrum. The main challenges to be addressed will be distributed synchronization and centralized coordination/processing, while the main KPI will be the performance of the feeder uplink. Also, suitable interference mitigation techniques, which can be employed in a distributed manner, will be investigated to ensure the performance guarantees of multi-GW systems.
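To illustrate the interference-nulling principle behind centralized precoding (a textbook zero-forcing example on a toy 2x2 channel, assumed for illustration; the project targets distributed, multi-GW designs under realistic impairments):

```python
# Pure-Python 2x2 zero-forcing precoding sketch: invert the channel so that
# each beam's signal arrives free of cross-beam (intra-system) interference.
def zf_precode_2x2(H):
    """Return the unnormalized zero-forcing precoder W = H^-1 for a 2x2 channel."""
    (a, b), (c, d) = H
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("channel matrix is singular; ZF not applicable")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1.0, 0.3], [0.2, 1.0]]     # toy channel with cross-beam interference
W = zf_precode_2x2(H)
HW = matmul_2x2(H, W)            # effective channel is (close to) identity
```

In a multi-GW deployment, computing such a precoder requires channel state and symbol synchronization across gateways, which is precisely the distributed-coordination challenge the project addresses.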
ST-01: Model-based Simulation of Novel Network Technologies and Application Protocols
Motivation: With the advent of 5G networks, several attempts are underway to integrate terrestrial networks (5G, fiber-optic) with SatCom. SatCom facilitates ubiquitous coverage of the terrestrial (5G) networks and enhanced mobility anywhere on the ground, at sea and in the air. Despite the extensive efforts in hybridizing network systems and providing end-to-end connectivity, these systems are still largely built independently of one another. This leads to challenges for the efficiency, programmability and agility of hybrid network systems. The management of terrestrial and satellite systems is also done via independent network management infrastructures. Even though network simulations at the level of communication protocols have been widely studied for either satellite or terrestrial networks, managing federated satellite/terrestrial networks necessitates capabilities to simulate the system-level behaviors of these networks. The complex requirements from clients, combined with the emergence of new technologies such as SDN, pose several new challenges for the simulation of hybrid network systems.
Objectives: The project will develop a framework for the system-level simulation of hybrid network systems. We will leverage model-based systems engineering (MBSE) as the main vehicle for expressing the system-level behaviors of hybrid network systems at the design stage. In particular, we will develop a domain-specific language (DSL) to enable systems engineers to conveniently capture the services that need to be simulated even when the service cross-cuts multiple communication networks. Model transformations will be utilized for deriving executable simulators from the behavioral models captured using the DSL. To maximize the likelihood of revealing faults via simulation, we will be leveraging search-based software engineering [N3]; this allows us to automatically explore the vast space of possible simulation scenarios and exercise those that maximize the desired coverage and diversity criteria. The main novelty of the ST-01 project is enabling simulation of the system-level behaviors of satellite and terrestrial networks.
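The search-based exploration of simulation scenarios can be sketched with a greedy diversity heuristic (a toy stand-in, under assumed feature vectors, for the search-based software engineering techniques [N3] the project will leverage): from a pool of candidate scenarios, iteratively pick the one farthest from those already selected.

```python
# Greedy sketch of diversity-driven scenario selection for simulation.
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_diverse(candidates, k):
    """Greedily pick k scenarios maximizing distance to those already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(euclid(c, s) for s in chosen))
        chosen.append(best)
    return chosen

# Hypothetical scenarios as (traffic load, link delay) vectors, 0..1 normalized.
pool = [(0.1, 0.1), (0.12, 0.1), (0.9, 0.9), (0.1, 0.9), (0.9, 0.1)]
picked = select_diverse(pool, k=3)
```

Near-duplicate scenarios (here, the second candidate) are skipped in favor of ones that exercise different regions of the scenario space, which is the coverage-and-diversity intuition behind the search.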
ST-02: Model-driven run-time monitoring and verification of ground control systems
Motivation: Run-time verification (RV) is a technique that monitors a system during its execution, as well as the surrounding environment, detects possible violations of requirements (e.g., abnormal behaviors), and plans suitable corrective actions (e.g., triggering alarms before critical failures happen). RV complements design-time verification and is one of the most suitable techniques for the verification of highly dynamic and pervasive software systems, which are executed in uncertain and variable environments. The development of ground control systems (GCSs) relies on software models, which capture the behavior of the main system components and of the environment; both these models and the system requirements specifications (e.g., end-customer demand) can change over time. In these contexts, RV can be lifted, for efficiency reasons, to the model level, leading to run-time model verification. As the executing system evolves over time, run-time model verification requires 1) the models to be kept alive, and 2) the requirements specifications against which the models are checked to adapt based on the intrinsic evolution of the system and its environment. Existing approaches for keeping the models alive at run time rely on model inference techniques based on logs; however, they suffer from scalability issues, due to the intrinsic computational complexity of the problem, leading to out-of-memory errors or impractically long execution times when processing very large logs.
Objectives: The proposed research stream aims to develop a novel approach for run-time verification of models of GCSs. This approach will tackle the challenge of accommodating incomplete and evolving models and specifications in the RV process. We will define a pattern-based language for specifying properties of GCS behaviors (e.g., constraints on tele-command parameters); we will operationalize the semantics of such high-level properties into low-level constraints on models of the system’s executions; we will develop scalable run-time verification techniques both for the offline setting (i.e., when a full execution log is available) and for the online setting (i.e., when a continuous stream of events and data has to be processed); we will provide
techniques and tools to support the diagnostics of requirement violations. The proposed run-time verification techniques will leverage novel scalable model inference techniques (both for the offline and online setting), based on a divide and conquer approach for inferring models from system-level and component-level logs. The core enabling techniques and methodologies for such an approach are incremental verification (to efficiently analyze the part of a system affected by a change), model-driven engineering (to transform models and verify constraints on them), machine learning (leveraging historical data to deal with incomplete models and specifications), and Big-Data technologies (to cope with large-scale system models).
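A minimal offline run-time verification check can be sketched as follows (an illustrative toy, with hypothetical event names; the project's pattern language and inference techniques are far richer): verify the response property "every tele-command is eventually acknowledged" over an execution log.

```python
# Toy offline RV sketch: report tele-commands never followed by an ack.
def check_response(log, trigger="TC_SENT", response="TC_ACK"):
    """Return the indices of trigger events that were never answered."""
    pending, violations = [], []
    for i, event in enumerate(log):
        if event == trigger:
            pending.append(i)
        elif event == response and pending:
            pending.pop(0)           # FIFO matching of commands to acks
    violations.extend(pending)       # whatever is still pending is a violation
    return violations

trace = ["TC_SENT", "TC_ACK", "TC_SENT", "TELEMETRY", "TC_SENT", "TC_ACK"]
unacked = check_response(trace)      # one command is never acknowledged
```

An online monitor would process the same logic over a continuous event stream, flagging pending commands once a deadline expires rather than at end-of-log, which is one of the offline/online distinctions the project will address.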
ST-03: Dynamic Adaptive Framework for IoT-enabled Disaster Management Systems
Motivation: The Internet of Things (IoT) and the omnipresence of sensors provide several opportunities to build intelligent systems that can improve our lives in many different ways. A notable example is a disaster management system that monitors a large geographical area through a network of sensors to sense any disaster (e.g., fire, floods, hurricanes, earthquakes) as early as possible. Such systems necessarily depend on the presence of an underlying communication system. In disaster situations, such communication systems must provide reliable service by transmitting a large volume of data in an efficient, effective and flexible way. Recently, software-defined networks (SDN) have started to enable such flexible and effective communication systems by transferring the control of networks from localized fixed-behavior controllers distributed over a set of switches into centralized and programmable software controllers. For an IoT system built on SDN, a major challenge is to design software controllers that can effectively and efficiently reconfigure and adapt the underlying network to always keep the quality of service at a desired level. Although SDN has received considerable attention in the recent literature on networks, developing dynamic reconfiguration controllers that account for the tradeoffs among multiple quality-of-service criteria, e.g., effectiveness, efficiency and flexibility, requires further and multidisciplinary research.
Objectives: The purpose of this project is to develop efficient and effective automated reconfiguration techniques for SDN to improve the reliability of IoT systems. To achieve this goal, we will develop a dynamic and self-adaptive control loop that monitors and controls the SDN at runtime. Our control loop continuously checks the underlying SDN for congestion and delays, and if congestion and delays are present, we use search-based optimization techniques to automatically reconfigure the SDN in a minimal way to resolve congestion and minimize delays. Further, we will rely on active machine learning techniques utilizing simulation and actual data to predict congestion and delays beforehand and reconfigure the network preemptively. Some recent approaches study dynamic reconfiguration of SDN to maximize quality of service. Chiang et al. formulate a new optimization problem to find optimal routing paths for group communication traffic. Gay et al. propose a local search-based segment routing method for networks with unexpected failures. Huang et al. present a dynamic routing algorithm to maximize network throughput under link-capacity and user-demand constraints. None of these lines of work, however, consider
or optimize the configuration of an SDN for multiple quality of service criteria simultaneously. The problem of configuration for the purpose of optimizing multiple criteria has been studied in prior research threads for design-time software development. These studies, however, are geared toward offline optimization of system design or architecture, and cannot address the challenge of online and dynamic SDN reconfiguration. The main novelty of the ST-03 project is providing dynamic adaptation techniques for SDN, which simultaneously account for multiple quality of service criteria.
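The multi-criteria reconfiguration problem can be illustrated with a tiny exhaustive search (a toy stand-in, with assumed paths, capacities and weights, for the search-based optimization the project targets): for each flow, pick a path that jointly minimizes a weighted sum of congestion and delay.

```python
import itertools

# Exhaustive-search sketch of multi-criteria SDN reconfiguration.
def best_assignment(flows, paths, capacity, w_congestion=1.0, w_delay=0.01):
    """Assign each flow demand to a path, minimizing congestion + weighted delay."""
    best, best_cost = None, float("inf")
    for assignment in itertools.product(paths, repeat=len(flows)):
        load = {p: 0.0 for p in paths}
        delay = 0.0
        for demand, path in zip(flows, assignment):
            load[path] += demand
            delay += paths[path]          # per-flow path latency (ms)
        congestion = sum(max(0.0, load[p] - capacity[p]) for p in paths)
        cost = w_congestion * congestion + w_delay * delay
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best

paths = {"terrestrial": 30, "satellite": 550}    # latency per path (assumed)
capacity = {"terrestrial": 100, "satellite": 500} # capacity in Mbps (assumed)
flows = [80, 60, 40]                              # flow demands in Mbps
plan = best_assignment(flows, paths, capacity)
```

Real SDN configuration spaces are far too large for exhaustive enumeration, which is why the project turns to search-based heuristics; this example only makes the competing criteria concrete.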
ST-04: AI-driven field testing of software security properties in edge/fog communication nodes
Motivation: The peculiarities of edge (or fog) communication nodes complicate the verification of software security properties. Indeed, edge nodes are deployed in unsecured areas (e.g., physically reachable by attackers) and are characterized by resource constraints, which facilitate the successful delivery of common security attacks such as denial of service or man-in-the-middle. Also, edge applications (i.e., the applications running on edge nodes) cannot be fully trusted, either because they are vulnerable or because they might be malicious. For these reasons, engineers cannot foresee, at development time, all the possible execution conditions of edge nodes, which makes software testing performed at development time ineffective. Example case studies are Smart IoT Gateways and Smart Road Side Units (SRSUs) for autonomous driving. Smart IoT Gateways, for example, are installed in rural areas, are connected to satellite and terrestrial communication systems, and use a significant number of interacting software components, which are individually updated via satellite. Software or transmission faults, as well as security attacks, may lead to failed, partial, or delayed upgrades; since the status of the system after an upgrade might be uncertain, security properties should be periodically verified on the deployed node. Similar problems might also affect SRSUs.
Objectives: The objective of the project is to develop a framework capable of identifying vulnerabilities in edge applications by continuously testing their security properties in the field, i.e., while the system is in operation. The framework will automate the execution of penetration testing activities (i.e., simulating an attacker who aims to discover the vulnerabilities of the system). The project will advance the state of the art, since existing penetration testing toolsets require human intervention to select and tailor the attacks to be performed. It will also extend the body of work on testing automation in the field, which is preliminary and ignores security vulnerabilities. When testing is performed in the field, it is not sufficient to automate the triggering of inputs; it is also necessary to automatically identify the testing targets (to reduce computational costs) and to address the potential drawbacks of the testing activities performed (e.g., reaching an erroneous system state). For example, it is not enough to automate a command injection attack; it is also necessary to automatically prioritize the target interfaces and to automatically prevent countereffects, such as altering a local data storage because of the injected command.
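The target-prioritization and side-effect concerns above can be sketched in a few lines. The following Python toy ranks candidate interfaces by a made-up risk heuristic (external exposure, how recently the component was updated, whether it exposes state-changing operations) and can exclude tests whose side effects would require restoring system state. The interface list, attribute names, and weights are purely illustrative assumptions, not the project's actual scoring model.

```python
# Hypothetical inventory of testable interfaces on an edge node.
INTERFACES = [
    {"name": "mqtt_broker", "external": True,  "days_since_update": 2,  "mutating": True},
    {"name": "admin_cli",   "external": False, "days_since_update": 90, "mutating": True},
    {"name": "status_api",  "external": True,  "days_since_update": 30, "mutating": False},
]

def risk_score(iface):
    """Toy heuristic: external exposure and recently updated code weigh most."""
    score = 2.0 if iface["external"] else 0.0
    score += 1.0 / (1 + iface["days_since_update"])  # fresher code = riskier
    score += 0.5 if iface["mutating"] else 0.0
    return score

def plan_tests(interfaces, allow_mutating=False):
    """Order targets by risk; optionally drop tests with side effects
    (those would need automatic state restoration afterwards)."""
    targets = [i for i in interfaces if allow_mutating or not i["mutating"]]
    return sorted(targets, key=risk_score, reverse=True)

for target in plan_tests(INTERFACES, allow_mutating=True):
    print(target["name"], round(risk_score(target), 2))
```

In the envisioned framework such a ranking would be learned and updated from field observations rather than hand-coded, and mutating tests would be paired with automated state restoration instead of being skipped.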
ST-05: Automated software security auditing of IoT gateways
Motivation: Security vulnerabilities in IoT gateways (e.g., distributed denial of service (DDoS), malicious code injection, buffer overflow) may have grave repercussions, such as data integrity violations, denial of service, and loss of commercial confidence in the service provider. One way to mitigate such consequences is to address security vulnerabilities as early as possible in the software development process, by including a security auditing activity in the process itself to locate vulnerabilities in source code, identify their causes, and fix them. State-of-the-art techniques for software security auditing mainly rely on static program analysis; however, the complexity of the software architecture of IoT gateways (which provides several functionalities exposed as microservices and relies on application frameworks and libraries) limits the scalability and accuracy of existing program analysis techniques. As a consequence, these techniques yield imprecise results, generating many false warnings and audit reports with irrelevant information.
Objectives: The goal of this project is to develop an automated, scalable, and effective solution for auditing the source code of IoT gateways to identify and fix security vulnerabilities. More specifically, we will develop scalable code analysis solutions supporting the microservice-based architecture of IoT gateways as well as the analysis of third-party libraries used by microservices; we will define strategies for detecting vulnerabilities, including the identification of sinks (i.e., security-sensitive program statements such as database queries) and potentially critical functions (e.g., functions that may lead to a DoS). The proposed approach will leverage the synergistic combination of program analysis (to build an abstraction of the code), meta-heuristic search (to identify vulnerabilities), and machine learning (to suggest software fixes in the audit report). The main novelty of the ST-05 project is enabling automated, scalable and effective security auditing in the context of microservice-based architectures, for code bases that heavily rely on third-party libraries and frameworks.
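To make the notion of sink identification concrete, here is a minimal sketch using Python's standard `ast` module: it walks a program's syntax tree and flags calls to a hypothetical list of security-sensitive functions whenever an argument is not a plain constant (i.e., possibly attacker-influenced). The sink list and the sample program are illustrative assumptions; the project's analysis would be far richer (taint propagation, framework models, microservice boundaries).

```python
import ast

# Hypothetical sink list: security-sensitive callees such as
# database queries and shell commands.
SINKS = {"execute", "system", "popen"}

def find_sinks(source):
    """Return (line, callee) pairs for sink calls with non-constant args."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            if isinstance(node.func, ast.Attribute):
                callee = node.func.attr
            elif isinstance(node.func, ast.Name):
                callee = node.func.id
            else:
                continue
            # Flag only calls where some argument is not a literal constant.
            if callee in SINKS and any(
                not isinstance(arg, ast.Constant) for arg in node.args
            ):
                hits.append((node.lineno, callee))
    return hits

sample = (
    "import os\n"
    "def handler(user_input, db):\n"
    "    db.execute('SELECT 1')\n"                                # constant: ignored
    "    db.execute('SELECT * FROM t WHERE id=' + user_input)\n"  # flagged
    "    os.system(user_input)\n"                                 # flagged
)
print(find_sinks(sample))  # → [(4, 'execute'), (5, 'system')]
```

Such a syntactic pass would serve only as the cheap first stage that builds the code abstraction; deciding whether a flagged sink is actually reachable with tainted data is where the meta-heuristic search comes in.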