We are happy to support strong PhD candidates in any area of computer science.
Applicants interested in collaborating with us more closely are invited to check the following list of open PhD projects, representing some of our main current research interests.
Modeling and control of software performance
Response time, throughput, and utilization are extra-functional properties of software that are especially relevant in resource-constrained environments such as mobile phones and the Internet of Things. Ideally, one would like applications that automatically meet given user-defined performance requirements. This requires a mechanism that can identify the current state of the software system and predict its future behaviour under a range of assumptions about the environment, together with an algorithm that returns the optimal configuration meeting the desired performance target. The candidate will have the opportunity to work on the development of self-adaptive methods for software performance using a range of techniques, including black-box representations based on machine learning and white-box analytical models built from first principles.
software performance engineering; self-adaptive systems; predictive modelling
E. Incerto, M. Tribastone, and C. Trubiani, “Software performance self-adaptation through efficient model predictive control,” in 32nd ACM/IEEE International Conference on Automated Software Engineering (ASE), 2017. http://cse.lab.imtlucca.it/~mirco.tribastone/papers/ase2017.pdf
E. Incerto, M. Tribastone, and C. Trubiani, “Combined vertical and horizontal autoscaling through model predictive control,” in 24th International European Conference on Parallel and Distributed Computing (EURO-PAR), 2018. http://cse.lab.imtlucca.it/~mirco.tribastone/papers/europar18.pdf
E. Incerto, A. Napolitano, and M. Tribastone, “Moving horizon estimation of service demands in queuing networks,” in 26th IEEE International Symposium on the Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 2018. http://cse.lab.imtlucca.it/~mirco.tribastone/papers/mascots18.pdf
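As a flavour of the white-box analytical side, the following sketch picks the smallest number of replicas meeting a response-time target. It assumes, for illustration only, that each replica behaves as an M/M/1 queue with evenly shared load; the function and parameter names are hypothetical.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

def min_replicas(arrival_rate, service_rate, target):
    """Smallest number of identical replicas (load shared evenly) whose
    per-replica M/M/1 response time meets the target.
    Assumes target > 1/service_rate, the minimum achievable time."""
    n = 1
    while mm1_response_time(arrival_rate / n, service_rate) > target:
        n += 1
    return n
```

A model predictive controller would re-solve a problem of this kind at every control step, using the latest estimates of the arrival rate and service demand.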
Coarse-graining of dynamical systems
Dynamical systems are a fundamental mathematical model to describe and predict the behavior of natural as well as engineered processes. Our capability to gain relevant insights is, however, hindered by the large dimensionality of such models when they describe systems characterized by a large degree of complexity. Coarse graining, model reduction, and model abstraction are among the terms with which a wide range of disciplines refer to the problem of simplifying a given dynamical system into a smaller one that preserves key observables of interest to the modeler. The candidate will have the opportunity to work on the development of new coarse-graining methods and algorithms, with applications to models in various disciplines including automation, computer science, statistical physics, and systems biology.
coarse graining; model reduction; dynamical systems
Mirco Tribastone (SYSMA), Guido Caldarelli (NETWORKS), and Diego Garlaschelli (NETWORKS)
S. Tognazzi, M. Tribastone, M. Tschaikowski, and A. Vandin, “Backward invariance for linear differential algebraic equations,” in 57th IEEE Conference on Decision and Control (CDC), 2018.
L. Cardelli, M. Tribastone, M. Tschaikowski, and A. Vandin, “Maximal aggregation of polynomial dynamical systems,” Proceedings of the National Academy of Sciences, vol. 114, no. 38, pp. 10029–10034, 2017.
L. Cardelli, M. Tribastone, A. Vandin, and M. Tschaikowski, “ERODE: A tool for the evaluation and reduction of ordinary differential equations,” in Tools and Algorithms for the Construction and Analysis of Systems – 23rd International Conference, TACAS, 2017.
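As a toy illustration of exact lumping (not the algorithms of the papers above), consider two species with symmetric linear dynamics: their sum evolves autonomously, so the two-variable model can be replaced by a one-variable one without error.

```python
def euler(f, x0, dt, steps):
    """Explicit Euler integration of x' = f(x)."""
    x = list(x0)
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

k = 0.5
# full model: two species decaying at the same rate
full = lambda x: [-k * x[0], -k * x[1]]
# reduced model: the lumped variable s = x1 + x2 obeys s' = -k * s
reduced = lambda s: [-k * s[0]]

xf = euler(full, [1.0, 2.0], 0.01, 1000)
xr = euler(reduced, [3.0], 0.01, 1000)
# the lumped variable reproduces the sum of the originals exactly
assert abs((xf[0] + xf[1]) - xr[0]) < 1e-9
```

The research challenge addressed by these projects is to find such lumpings automatically, and for far more general (e.g., polynomial) dynamics.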
Automatic vulnerability disclosure with coevolutionary algorithms
Symbolic execution is a powerful technique to spot corner cases, e.g., vulnerabilities, in the semantics of a program. It replaces the standard semantics (referring to specific values) with a symbolic one (manipulating abstract expressions). Unfortunately, symbolic execution does not scale to large programs. For this reason, hybrid techniques have been proposed (e.g., concolic testing and symbolic backward execution). The goal of this project is to combine the symbolic analysis of a program with a test execution environment driven by an evolutionary optimization strategy. The symbolic analysis is applied to obtain a compact (thus computable) specification of the conditions that a test must satisfy to trigger a certain vulnerability. The evolutionary algorithm, in turn, drives the refinement process that, starting from some random tests, leads to convergence toward the desired ones. The convergence criterion is based on the optimization of a fitness function derived from the symbolic specification.
Formal methods; vulnerability analysis; security testing; evolutionary algorithms; white-box testing; binary analysis
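A minimal sketch of the fitness-driven refinement idea, assuming a hypothetical path condition x² − 7x + 10 = 0 obtained from symbolic analysis; the fitness is the branch distance to satisfying that condition, and the search uses plain mutation and selection.

```python
import random

def vulnerable_branch(x):
    # stand-in for the path condition exposed by symbolic analysis:
    # the "bug" is reachable only when x*x - 7*x + 10 == 0
    return x * x - 7 * x + 10

def fitness(x):
    # branch distance: 0 means the test input triggers the vulnerability
    return abs(vulnerable_branch(x))

def search(seed=0, population=20, generations=200):
    """Evolve random integer test inputs toward fitness 0."""
    rng = random.Random(seed)
    pop = [rng.randint(-50, 50) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            return pop[0]  # a concrete input triggering the vulnerability
        # keep the fittest half and mutate it to refill the population
        survivors = pop[: population // 2]
        pop = survivors + [s + rng.randint(-3, 3) for s in survivors]
    return None
```

In the project, the fitness function would be derived automatically from the symbolic specification rather than hand-written as here.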
VeriOSS: A Blockchain for Open Source Software Verification
The Software Supply Chain (SSC) is a cornerstone of industrial society, on which many other Supply Chains (SCs) depend. The continuous demand for, and integration of, computing systems in SCs is pushing the development and distribution of software. To cope with this growing demand, many companies include open source software (OSS) in their software products. OSS has many advantages; for example, it prevents the software producer from acquiring a strong bargaining position over the consumer. The flip side, however, is that the producer of OSS has no obligation to maintain, improve, or fix the software. All in all, OSS ranges from small-scale projects, with limited or even no security plan, to community projects that release periodic security updates. Such heterogeneity makes it difficult to understand the actual risks of integrating OSS into one's own project. From a methodological point of view, the project aims at answering the following questions: (i) under what conditions is formal verification a valuable asset in the SSC? (ii) can we design a mechanism based on economic rewards that pushes participants to find and fix bugs in OSS? (iii) can blockchain technology be used to implement a decentralized framework for the formal verification of security properties of OSS? From a practical point of view, the project aims at designing and implementing a blockchain service for the security analysis and patching of OSS, where developers and security analysts cooperate efficiently.
Formal methods; software verification; blockchain; DLT; contract-driven development; mobile code security
Verified Compiler for SDN
The Software Defined Networking (SDN) paradigm decouples network control and forwarding functions, and allows programming the network as a whole by abstracting the underlying infrastructure from applications and network services. Recently there has been a growing interest in high-level languages for SDN that make it possible to implement new network services easily and correctly. To ensure the correctness of these programs, many papers propose to use formal verification techniques at the source level. However, programs written in these languages pass through many compilation steps targeting specific hardware platforms, and there is no guarantee that the generated code satisfies the same properties as the source code: for example, a violation of safety may be due to an incorrect interaction between the compiled code and the device hardware. The goal of this project is to devise a verification technique that operates directly on the compiled code running on a device and that is able to: (i) automatically infer a model representing the code behaviour on a specific platform; (ii) check for violations of given properties as the model is being constructed; (iii) attribute each violation found to its cause (the compiler or the hardware).
Software Defined Networking; Network verification; Translation Validation
Translation Validation for Security
High-level languages provide a variety of abstractions and mechanisms (e.g., types, modules, automatic memory management) that enforce good programming practices and ease the task of writing correct and secure code. However, these high-level abstractions do not always have counterparts when a program is compiled into a low-level language. This discrepancy can be dangerous when source-level abstractions are used to enforce security properties: if the target language does not provide any mechanism to preserve such security properties, the resulting code is vulnerable to attacks. The emerging field of formally secure compilation aims at guaranteeing that security properties at the source level are preserved at the object level, even when considering active attackers that can tamper with the data or the control flow of a program. Currently, many papers propose to manually prove (with the help of a proof assistant), once and for all, that the compiler is security-preserving. Although these manual proofs are very effective, they require huge efforts in terms of time and resources, even for simple languages. Instead of proving that a compiler is security-preserving once and for all, a possible alternative is to use translation validation to prove that the compilation of a specific program preserves all security properties. The goal of this project is to design and implement a translation validation technique to automatically check that a given compilation run does not break security.
Secure compilation; Hyperproperties; Translation Validation
Analysis of rollback-recovery schemas on a reversible model
Among the techniques for fault-tolerant systems, recovery techniques allow a system, in case of a fault, to roll back to a correct state preceding the faulty one and to resume execution from there.
To formalise such systems, formal models of reversible programming languages are studied.
The goal of this project is to study the relationship between existing reversible languages with controlled reversibility and the various rollback-recovery schemas, especially causal ones, proposed in the literature.
Main reference: I. Lanese, C.A. Mezzina, A. Schmitt, and J.B. Stefani. Controlling Reversibility in Higher-Order Pi. In Proc. 22nd Int. Conf. Concurrency Theory (CONCUR), LNCS 6901, Springer, 2011.
Programming abstractions for communication-centered systems
Computing systems are nowadays highly dependent on communication due to their distributed nature, and their interaction patterns are often highly complex and error prone. It is therefore crucial to devise programming mechanisms that focus on communication for the specification of reliable systems. This topic concentrates on the design of programming language abstractions, based on principles such as abstraction and modularity, for the development of communication-centered systems.
Main reference: Hans Hüttel, Ivan Lanese, Vasco T. Vasconcelos, Luís Caires, Marco Carbone, Pierre-Malo Deniélou, Dimitris Mostrous, Luca Padovani, António Ravara, Emilio Tuosto, Hugo Torres Vieira, and Gianluigi Zavattaro. Foundations of Session Types and Behavioural Contracts, ACM Computing Surveys 49(1), 2016.
Quantitative analysis of variability-intensive software systems
In variability-intensive software systems, the choice of parameters may affect both the functional and nonfunctional properties of the program. Models may help reason about properties such as response time, throughput, and utilization of a system. However, typical approaches do not scale because computations are not reused across the many configurations encountered in real-world models. We wish to study effective, scalable techniques to tackle these issues.
Main reference: Matthias Kowal, Max Tschaikowski, Mirco Tribastone, and Ina Schaefer. Scaling size and parameter spaces in variability-aware software performance models. Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE 2015).
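The reuse idea can be sketched as follows, with hypothetical feature names and a stand-in per-feature analysis that is memoized so it runs once across all configurations instead of once per configuration (this illustrates the motivation, not the paper's algorithm).

```python
from functools import lru_cache
from itertools import product

FEATURES = ["cache", "compress", "encrypt"]  # hypothetical feature names
calls = {"count": 0}  # counts how often the expensive analysis runs

@lru_cache(maxsize=None)
def analyze_feature(feature):
    """Stand-in for an expensive per-feature performance analysis;
    memoization shares its result across all configurations."""
    calls["count"] += 1
    return {"cache": 2.0, "compress": 5.0, "encrypt": 8.0}[feature]

def response_time(config):
    """Response time of one configuration: base cost plus the
    (shared) cost of each enabled feature."""
    return 1.0 + sum(analyze_feature(f) for f, on in zip(FEATURES, config) if on)

# analyse all 2^3 configurations; each feature is analysed only once
times = [response_time(c) for c in product([False, True], repeat=3)]
assert calls["count"] == len(FEATURES)
```

Without sharing, the analysis would run a number of times exponential in the number of features; the research question is how to achieve comparable reuse for realistic, non-additive performance models.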
Simplification techniques for dynamical systems
Dynamical systems are widely used quantitative models in many branches of science and engineering. Unfortunately, the number of equations required to describe complex real-world models is typically very large. This reduces the intelligibility of the model and poses significant computational problems for its analysis. We seek automatic techniques to simplify models in a rigorous manner.
This project may combine theoretical research, algorithm implementation, and experimental evaluation on large-scale real-world models and benchmarks.
Main reference: Luca Cardelli, Mirco Tribastone, Max Tschaikowski, and Andrea Vandin, Symbolic Computation of Differential Equivalences. Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2016).
Statistical model checking for (software) product lines
Recently, much effort has been put into making formal modelling languages and related analysis techniques amenable to product lines. The challenge is to handle their inherent variability, due to which the number of possible products to be analysed may be exponential in the number of features that can be installed. We tackle this problem using efficient statistical analysis techniques such as distributed statistical model checking.
We wish to further investigate on the combination of formal languages and statistical analysis techniques to provide a rich framework for the quantitative analysis of product lines.
Main reference: Maurice ter Beek, Axel Legay, Alberto Lluch Lafuente, Andrea Vandin. Statistical Analysis of Probabilistic Models of Software Product Lines with Quantitative Constraints, 19th International Software Product Line Conference (SPLC 2015).
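A minimal sketch of the statistical approach, assuming a toy discrete-time Markov chain and a reachability property; instead of enumerating the state space, the probability is estimated from simulation runs, with a normal-approximation confidence half-width.

```python
import math
import random

def simulate(p_up, steps, rng):
    """One run of a toy DTMC: a walker on states 0..3 moves up with
    probability p_up, else down; property: reach state 3 within `steps`."""
    state = 0
    for _ in range(steps):
        state = min(state + 1, 3) if rng.random() < p_up else max(state - 1, 0)
        if state == 3:
            return True
    return False

def estimate(p_up=0.7, steps=10, runs=10_000, seed=42):
    """Monte Carlo estimate of the reachability probability."""
    rng = random.Random(seed)
    hits = sum(simulate(p_up, steps, rng) for _ in range(runs))
    p_hat = hits / runs
    # 95% confidence half-width (normal approximation)
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / runs)
    return p_hat, half
```

For product lines, one would additionally vary the enabled features across runs, which is where the combination with formal variability modelling becomes essential.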
SOS Semantics for Stochastic Calculi
We study unifying frameworks to provide operational semantics for many stochastic process algebras, including variants useful for modeling quantitative aspects of behaviour. The model is obtained by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label.
Main reference: Rocco De Nicola, Diego Latella, Michele Loreti, Mieke Massink, A Uniform Definition of Stochastic Process Calculi. IMT Working paper 2012
Behavioural Semantics of Quantitative Process Algebras
We study how the semantics of different models, ranging from fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), and fully stochastic processes (ACTMC) to nondeterministic and probabilistic processes (MDP) and nondeterministic and stochastic processes (CTMDP), can be brought into a unifying view. We extend this uniform treatment to behavioural equivalences and want to analyse the differences with the many known ones in the literature.
Main reference: Marco Bernardo, Rocco De Nicola, Michele Loreti, A Uniform Framework for Modelling Nondeterministic, Probabilistic, Stochastic, or Mixed Processes and their Behavioral Equivalences, IMT Working paper 2012.
Design and programming principles for software ensembles
Software ensembles are a new generation of self-aware, self-adaptive, autonomic software components conceived to run over highly variable (often unpredictable) computational environments. Novel design and programming principles for the development of software ensembles need to be investigated and tailored for emerging programming languages.
Main reference: Rocco De Nicola, Gianluigi Ferrari, Michele Loreti, Rosario Pugliese, A Language-based Approach to Autonomic Computing, FMCO 2012.
Programming abstractions for trust
Trust and reputation systems are increasingly used to support decision making in e-commerce applications, as well as in other fields including ad-hoc, sensor, and P2P networks. The integration of different trust and reputation frameworks with the management of, and interaction with, IT systems needs to be investigated. The existence of many reputation systems with remarkable differences calls for frameworks for modelling, analysing, implementing, and comparing them, as well as for integrating them into real IT systems where trust is a critical aspect, such as cloud-based ones.
Main reference: Celestini, De Nicola, Tiezzi. Network-aware Evaluations of Reputation Systems. Technical report, IMT, 2012.
Policy-based system management
Policy-based mechanisms are increasingly used to manage different aspects of IT systems, such as access control, security, reliability, or performance. There is a need to develop formal languages based on industrial standards, and the corresponding reasoning tools, to support the analysis of policies. Another crucial aspect to be investigated is the integration of access policies with the behavioural aspects of systems (e.g., to regulate (self-)adaptation).