Today, people increasingly rely on data for decision making, and the amount of IoT data collected is growing exponentially. This has great potential to benefit society; however, it also poses challenges for data transmission, storage and analytics.
This PhD project is concerned with investigating the synergies and trade-offs between data compression and analysis. Specifically, we intend to develop techniques for efficiently executing analytics directly on compressed data. We will also explore how existing compression schemes can be optimised to best support analytics while maintaining good compression.
This PhD study is conducted by Aaron Hurst and supervised by Associate Professor Qi Zhang and Professor Daniel Enrique Lucani Rötter from the Department of Electrical and Computer Engineering.
Implementation security is critical for translating the formal guarantees of cryptographic algorithms from theory to practice, so that cryptographic software remains secure after deployment. This PhD project aims to develop secure cryptographic primitives and protocols resilient to implementation attacks, by combining concrete security analysis with a theoretically well-founded approach to defenses against those attacks. The goal will be achieved by gaining insights into realistic adversarial behavior through the evaluation and enhancement of implementation attacks, and through research on the design of effective countermeasures and algorithms employing them. As concrete use cases, we have so far considered cache-timing and fault attacks against the implementation of elliptic curve cryptography in OpenSSL, the study of hedging as a fault-attack countermeasure, and threshold signing protocols to protect Fiat-Shamir signature schemes in the classical and post-quantum settings.
This PhD project is conducted by Akira Takahashi, and the main supervisor is Associate Professor Claudio Orlandi from the Department of Computer Science.
In simulations of turbulent fluid flows (a flow regime characterized by fluctuations and chaotic-looking motion), direct numerical solution of the governing equations is highly computationally expensive and hardly feasible in most industrial projects. As a remedy, one can solve the equations only for the mean values of quantities and model the effect of the fluctuations on the mean (the so-called RANS method). Despite the great deal of effort devoted to physics-based models in the past, these models still face several limitations and shortcomings. With recent advances in data-driven and, in particular, machine learning (ML) techniques, and the broader availability of data generated by simulations of turbulent flows, there is a high potential that a well-trained ML algorithm can lead to superior modeling capabilities. In this project, this potential will be investigated to determine how much an ML model trained on high-fidelity data can improve the accuracy of low-fidelity approaches in turbulent flow simulations.
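For context, the standard Reynolds decomposition behind the RANS method, and the closure problem it creates, can be written as follows (this is the usual incompressible formulation, stated as background rather than as this project's specific model):

```latex
% Reynolds decomposition: each quantity splits into a mean and a fluctuation
u_i = \bar{u}_i + u_i'
% Averaging the incompressible Navier-Stokes equations yields the RANS equations
\bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j\, \partial x_j}
  - \frac{\partial \overline{u_i' u_j'}}{\partial x_j}
```

The Reynolds stress term \(\overline{u_i' u_j'}\) is where the effect of the fluctuations enters the mean-flow equations; it is this unclosed term that physics-based models, and potentially ML models, must approximate.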
This PhD study is conducted by Ali Amarloo and is supervised by Associate Professor Mahdi Abkar and Assistant Professor Pourya Forooghi, both from the Department of Mechanical and Production Engineering, and Associate Professor Alexandros Iosifidis from the Department of Electrical and Computer Engineering.
Deep learning models have achieved tremendous success over the past decade in various domains such as computer vision, natural language processing and healthcare. However, deep learning models typically consist of many interconnected layers containing millions of parameters, which makes them quite resource-intensive and slow to execute. The goal of this project is to develop novel methods that can make deep learning models more lightweight, for instance, dynamic inference methods that can dynamically adjust the computation graph of deep neural networks during the inference phase. Methods developed during this project can help deep learning models run efficiently across time- and resource-constrained environments such as mobile devices, edge computing systems and IoT sensor networks.
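As a concrete illustration of one dynamic inference technique, the sketch below implements early exiting with auxiliary classifiers: after each stage of a network, a small exit head scores the classes, and inference stops as soon as the prediction is confident enough, skipping the remaining stages. This is a minimal toy sketch with random weights, not the project's actual method; all names and shapes are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_forward(x, stages, exit_heads, threshold=0.9):
    """Run the network stage by stage; after each stage an auxiliary exit
    head scores the classes, and if the top softmax probability reaches
    the threshold, the remaining (more expensive) stages are skipped."""
    h = x
    for i, (W, head) in enumerate(zip(stages, exit_heads)):
        h = np.tanh(W @ h)                      # one toy stage of computation
        p = softmax(head @ h)                   # auxiliary classifier at exit i
        if p.max() >= threshold or i == len(stages) - 1:
            return int(p.argmax()), i           # (prediction, exit taken)

# Toy usage with random weights: 3 stages, 4 classes, 8-dimensional input.
rng = np.random.default_rng(0)
stages = [rng.standard_normal((8, 8)) for _ in range(3)]
exit_heads = [rng.standard_normal((4, 8)) for _ in range(3)]
pred, exit_idx = early_exit_forward(rng.standard_normal(8), stages, exit_heads)
```

Lowering the threshold trades accuracy for speed: easy inputs leave at the first exit, while harder ones pay for the full network.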
This project is conducted by Arian Bakhtiarnia and supervised by Associate Professor Alexandros Iosifidis from the Department of Electrical and Computer Engineering.
Zero-knowledge proofs are integral for deploying privacy-preserving cryptocurrencies and other blockchain applications as they represent a fundamental building block for proving statements about confidential data. The most popular framework for such proofs is based on cryptographic pairings defined over elliptic curves, where pairing-based zero-knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) underlie private transactions.
The main aim of my project is to investigate techniques for developing a formally verified, efficient software library for pairing-based cryptography, as a means to support current blockchain projects relying on zero-knowledge proofs.
A verified implementation fosters trust in the blockchain, increases the robustness of the system, and decreases the required maintenance.
Modelling techniques are an integral part of the physical and engineering sciences. However, the creation of models by hand is a time-consuming and complex task that requires deep knowledge of the domain. In recent years, there has been increased interest in applying machine learning techniques such as deep learning to solve problems in science that are not amenable to first-principles modelling techniques.
An important modeling application is creating models of dynamical systems which can be used to numerically simulate the evolution of the system in time. The goal of this PhD project is to develop techniques for approximating the dynamics of a system using only sparse measurements of the true system.
Concretely, we aim to develop ways of constructing models using neural networks that incorporate prior knowledge to ensure a higher degree of generalization and interpretability, compared to plain neural networks.
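One common baseline for approximating dynamics from measurements (shown purely as an illustration, not necessarily this project's approach) is to estimate time derivatives from sampled trajectories and fit them against the state, in the spirit of sparse identification of dynamical systems. A minimal sketch for a linear system, with all values synthetic:

```python
import numpy as np

# Toy sketch: recover linear dynamics dx/dt = A x from sampled trajectory
# measurements. Derivatives are estimated by finite differences and the
# dynamics matrix is fitted by least squares.
A_true = np.array([[0.0, 1.0],
                   [-1.0, -0.1]])        # a damped harmonic oscillator

dt = 0.01
x = np.array([1.0, 0.0])
traj = [x]
for _ in range(1000):                    # forward-Euler simulated "measurements"
    x = x + dt * (A_true @ x)
    traj.append(x)
traj = np.array(traj)

dxdt = (traj[1:] - traj[:-1]) / dt       # finite-difference derivative estimates
X, *_ = np.linalg.lstsq(traj[:-1], dxdt, rcond=None)
A_fit = X.T                              # estimated dynamics matrix, close to A_true
```

Replacing the linear least-squares fit with a neural network, and constraining it with prior physical knowledge, is where approaches like the one pursued in this project go beyond such a baseline.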
This PhD study is conducted by Christian Møldrup Legaard and supervised by Professor Peter Gorm Larsen from the Department of Electrical and Computer Engineering.
Deep Learning is currently the leading paradigm in Machine Learning. It is extensively used by researchers and practitioners worldwide, showing unprecedented performance in applications like image analysis, speech/audio recognition, automatic translation and medical data analysis, to name a few. However, despite the large efforts and financial investments devoted every year to developing Deep Learning-related technologies, much of our current knowledge about these models is based on intuition. Deep Learning models are mostly used as black boxes with an enormous number of parameters that can be effectively “tuned” to memorize mappings from available training samples (e.g. images or time series) to human-expert-provided labels. In this project, we address two main challenges: 1) understanding neural network design (also referred to as the network architecture or its topology) and training efficiency and effectiveness, and 2) the interpretability of their predictions.
This PhD study is conducted by Frederik Hvilshøj and supervised by Associate Professor Alexandros Iosifidis from the Department of Engineering and Professor Ira Assent from the Department of Computer Science.
The goal of the project is to design novel schemes which provide both compression and security by using the advanced signal processing techniques. The proposed schemes will be implemented in IoT devices and a prototype will be delivered. The performance of the proposed scheme will be tested and evaluated through standard randomness tests, and energy constraints and computational requirements will be considered.
This PhD study is conducted by Gajraj Kuldeep and supervised by Associate Professor Qi Zhang from the Department of Electrical and Computer Engineering.
This PhD project aims at providing a novel protocol design for data compression in massive-scale storage systems with a focus on the privacy of the stored data. We have coined this concept Dual Deduplication, as it carries out data deduplication and modifications at both the end user and the Cloud to balance privacy and compression. The proposed protocol is flexible, so as to be effective for a wide variety of systems with different privacy and computation considerations. The privacy of the data is achieved by information-theoretic transformations rather than cryptographic methods, achieving strong privacy guarantees against a wider range of possible adversaries. Finally, this project aims to use these concepts and system designs to deliver secure file sharing in a network with untrusted cloud and clients.
This PhD project is conducted by Hadi Sehat and supervised by Deputy Head of Department for talent development and external funding Daniel Rötter from the Department of Electrical and Computer Engineering.
A Digital Twin (DT) is a system that incorporates different techniques to increase the value of the corresponding physical system. One of the key elements of a digital twin is a model, which is at the core of digitalizing the physical system. However, it is non-trivial to obtain a model from first principles and to simulate it. Thus, this project focuses on using reduced-order modeling to construct and simulate models of the physical system. The specific systems studied in this project are an incubator and a UR robot, where the incubator has relatively simple dynamical properties while the dynamics of the UR robot are more complex.
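A standard reduced-order modeling technique is proper orthogonal decomposition (POD), which extracts a low-dimensional basis from simulation snapshots via the singular value decomposition. The sketch below is a generic illustration of the idea, not this project's specific pipeline:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Columns of `snapshots` are state vectors of the full system; POD
    keeps the r leading left singular vectors as a reduced basis."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduce_and_reconstruct(snapshots, r):
    Phi = pod_basis(snapshots, r)     # reduced basis of r modes
    coeffs = Phi.T @ snapshots        # project each snapshot onto the basis
    return Phi @ coeffs               # lift back to the full state space
```

If the snapshot matrix is (approximately) low rank, a handful of modes reproduces the full state, and the expensive dynamics can then be simulated in the small coordinate space spanned by those modes.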
This PhD study is conducted by Hao Feng. The main supervisor is Peter Gorm Larsen, and the co-supervisors are Alexandros Iosifidis and Cláudio Gomes, all from the Department of Electrical and Computer Engineering.
Generative models in manifold learning seek to describe high-dimensional data as lying near a submanifold of a high-dimensional Euclidean space, where the submanifold is described via charts, that is, embeddings of a low-dimensional space into the data space. The learned submanifold gives a useful description of the data if the data points either lie on the submanifold or can be de-noised, that is, projected onto the submanifold in a meaningful way.
A subset A of Euclidean space is said to have reach r if every point with distance less than r from A has a unique nearest point in A. Such points thus have a meaningful projection to A, obtained by projecting to this unique nearest point.
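A toy example may help: the unit circle in the plane has reach 1, since every point at distance less than 1 from the circle has a unique nearest point on it, while the centre, at distance exactly 1, is equidistant from the whole circle. The sketch below (illustrative only) approximates the nearest-point projection by brute force on a dense sample of the circle:

```python
import numpy as np

# The unit circle A in R^2 has reach 1: any point x with dist(x, A) < 1
# projects to a unique nearest point of A, whereas the centre (at distance
# exactly 1) has no unique nearest point. We approximate A by a dense
# sample and the projection by brute-force nearest neighbour.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def nearest_point(x, A):
    """Return the nearest sampled point of A to x, and the distance to it."""
    d = np.linalg.norm(A - x, axis=1)
    return A[d.argmin()], d.min()

p, dist = nearest_point(np.array([0.5, 0.0]), circle)   # well inside the reach
```

Here `p` is (up to the sampling resolution) the point (1, 0), the unique nearest point on the circle, at distance 0.5.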
The project aims to study generative models by estimating the reach of a learned submanifold A, and then comparing this with the distance of data points from A. If many data points are further from A than its reach, then from our point of view A does not satisfactorily describe the data.
The theoretical part of the project describes reach in new geometrical terms; the practical part is aimed at producing algorithms to compute reach and the associated nearest-point projections, and to use these on concrete data sets with generative models, determining whether the data is satisfactorily described by the model.
This PhD study is conducted by Helene Hauschultz and the main supervisor is Associate Professor Andrew du Plessis from the Department of Mathematics.
During the past decade, deep learning has seen a rise in popularity largely due to improvements in predictive performance. However, the size and complexity of neural networks is also rapidly increasing, making training of such networks computationally too expensive for most individuals or smaller research teams. An attempt at alleviating this challenge is Knowledge Distillation.
Knowledge distillation is a procedure for transferring knowledge from one neural network (teacher) to another smaller neural network (student) without significant loss in predictive performance, by utilizing the generalization performance of the teacher network.
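For concreteness, the classical distillation objective (in the style of Hinton et al.'s original formulation; shown as background, not as this project's contribution) combines a hard-label cross-entropy with a cross-entropy against the teacher's temperature-softened predictions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T                                     # temperature softening
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of (1) cross-entropy against the hard labels and
    (2) cross-entropy against the teacher's softened output distribution."""
    p_student = softmax(student_logits)
    hard = -np.log(p_student[np.arange(len(labels)), labels]).mean()
    p_teacher = softmax(teacher_logits, T=T)
    log_p_student_T = np.log(softmax(student_logits, T=T))
    soft = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * T * T  # T^2 rescaling
    return alpha * hard + (1.0 - alpha) * soft
```

A higher temperature T exposes more of the teacher's "dark knowledge" (the relative probabilities of wrong classes), which is what the student exploits beyond the hard labels alone.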
The practical benefits of knowledge distillation have been demonstrated many times in controlled settings, but the theoretical justification is largely absent, and empirical evidence on real-world data is lagging behind as well. Thus, the overall aim of this project is two-fold: 1) to provide theoretical results on how, when and why distillation techniques work, and 2) to apply distillation techniques to real-world datasets and models.
This PhD study is conducted by Kenneth Borup and supervised by Associate Professor Lars Nørvang Andersen from the Department of Mathematics and Professor Henrik Karstoft from the Department of Electrical and Computer Engineering.
The goal of the project is to design technologies and architectures for future storage and communication systems which can handle the increased data loads, using compression to decrease the amount of data actually stored without loss of information.
This PhD study is conducted by Lars Nielsen and supervised by Deputy Head of Department for talent development and external funding Daniel Rötter from the Department of Electrical and Computer Engineering.
Communication networks, storage and analytics systems are facing tremendous challenges when handling the increasing data traffic generated by end users and the billions of sensing and actuating devices to be connected to the Internet of Things (IoT). The expected data volumes far exceed the capacity of the current software and hardware infrastructure, and of state-of-the-art methods of transfer, storage and analysis. One of the key technologies to tackle this unprecedented growth is compression, which helps in every stage of the data pipeline. Transferring fewer bytes relieves the communication networks, requires less energy and results in faster uploads and downloads. Storing compressed data takes up less digital storage space, making it faster and more energy-efficient to read and write. Analysis methods that leverage compression can answer more queries per second than their traditional counterparts. Marcell is a second-year PhD student whose research focuses on compressing smart meter readings for transport and storage, and on developing a columnar database that runs SQL directly on compressed data.
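As a toy illustration of analytics running directly on compressed data (a generic sketch, not the actual database design developed in this project), dictionary encoding lets an equality filter be evaluated on small integer codes without ever decompressing the column:

```python
# Dictionary encoding replaces each distinct value with a small integer
# code. An equality filter then needs one dictionary lookup, after which
# the scan compares integers and never materialises the original column.
def dict_encode(column):
    dictionary = sorted(set(column))
    code = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [code[v] for v in column]

def count_equal(dictionary, codes, value):
    """SELECT COUNT(*) WHERE col = value, evaluated on the encoded column."""
    try:
        target = dictionary.index(value)   # single lookup in the dictionary
    except ValueError:
        return 0                           # value never occurs in the column
    return sum(1 for c in codes if c == target)

readings = ["low", "high", "low", "mid", "low"]
dictionary, codes = dict_encode(readings)
```

Beyond saving space, comparing fixed-width integer codes instead of variable-length values is exactly why such queries can run faster on the compressed representation than on the raw data.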
This PhD study is conducted by Marcell Fehér and supervised by Deputy Head of department for talent development and external funding Daniel Rötter from the Department of Electrical and Computer Engineering.
This PhD project focuses on the development of novel coding-theory designs for more efficient management, updating, consistency assurance and storage of Internet of Things data at a massive scale.
In particular, the focus is on the integration of (network) coding techniques and data deduplication, two approaches to reducing storage costs that have typically been pursued separately. This work is expected to open a new field at the intersection of traditional coding theory and distributed Cloud technologies and systems. The underlying goal of the results and designs of this project is to develop new technologies for Cloud, Edge and local content management, transmission and consistency assurance.
This PhD study is conducted by Niloofar Yazdani and supervised by Deputy Head of Department for talent development and external funding Daniel Rötter.
Future communication networks and storage systems must be able to handle increasing traffic generated by end users, the strict requirements for 5G communications, and billions of sensing and actuating devices connected to the Internet of Things.
This PhD project seeks to develop data compression techniques that help our communication infrastructure adapt to these new, stricter requirements. In particular, we look both to improve current compressors and to design new ones. A key goal is to ensure that the infrastructure can handle the increasing volumes of data, but we also aim to provide additional advantages over the state of the art: for example, low added latency and efficient random access to the compressed data.
This PhD study is conducted by Rasmus Vestergaard and supervised by Deputy Head of Department for talent development and external funding Daniel Rötter from the Department of Electrical and Computer Engineering.
The design of correct software is an essential facet of the engineering of dependable systems. The widespread use of cyber-physical systems (CPSs) will increase further, and many software engineers will soon face situations where the consequences of programming errors become severe. Using formal methods, this project aims at practical techniques and tools to develop such high-quality software using proof-based techniques. Whereas significant advances are being made in the development of powerful theorem provers, current tools require highly trained formal methods experts and are not accessible to typical software engineers. The major challenge faced by all proof-based verification is the rapidly increasing complexity of formal proofs, even for moderately complex software that is common in practice.
This project investigates how formal proof technologies can be made accessible to a wide range of software engineers by integrating them tightly into CPS modelling and development environments.
This PhD study is conducted by Simon Thrane Hansen and supervised by Associate Professor Stefan Hallerstede from the Department of Electrical and Computer Engineering.