
Work Package 4

'Big Data' describes the massive and varied amounts of data generated by online activities, location-based services and many other applications.

Our research aims to extract and make sense of Big Data by providing the means to identify high-quality data sources, handle noise, and describe the quality of the analysis itself. The goal is to give the domain expert or end user transparent access to Big Data, with explanatory components that aid understanding.

We work on the following research activities, which together define, extract, manage and evaluate relevant and transparent knowledge from Big Data.

  1. Big Data analysis quality, reliability and information content.
    As Big Data stems from a variety of sources, quality standards and noise are core issues for the reliability and validity of data analysis. We devise quality measures for data and data analysis methods. For domain experts working with Big Data, we provide explanatory components that transparently expose information about the data analyses, so that experts can verify and validate findings. We study active learning methods to adapt Big Data analytics to the needs of the application while reducing the load on the domain expert in training the methods (a minimal sketch of such an active learning loop follows this list).
  2. Handling complex textual information in Big Data.
    With recent advances in Natural Language Processing, we can tap into the many text sources available in the Big Data age. Varying and dynamically changing language use, as well as diversity in document length and complexity, require methods capable of extracting semantic concepts and organising them in representations that are easily accessible to the user. In particular, we consider the problem of data leak prevention, where sensitive content is identified before text documents are published (a toy baseline is sketched after this list).
  3. Efficient scalable Big Data analysis algorithms.
    Existing tools for the analysis of Big Data generally assume a single-core computing model. This is in stark contrast to current computers, which feature multi-core CPUs as well as graphics cards (GPUs) that increasingly support general-purpose computation, i.e. a much wider range of operations than the graphical operations for which GPUs were originally developed. We render large-scale analysis practical for Big Data by exploiting the characteristics of modern hardware, especially the inherent parallelism of CPUs and GPUs (see the parallelism sketch after this list). By leveraging today’s standard computers, Data Science at Big Data scale becomes widely accessible without the need for costly investments in specialised hardware.
  4. Performance evaluation and benchmarking for Big Data scale.
    To establish and consolidate Big Data research, we provide evaluation methodology and benchmarks. Existing evaluation setups are poorly suited to the study of large-scale algorithms, both in terms of satisfying information needs and in terms of running times. As a result, current methods are typically not subjected to realistic empirical studies, limiting our understanding of the state of the art. We establish procedures as well as benchmarking data and workloads, and make them available to the research community.
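
To make the active learning idea in activity 1 concrete, the following minimal sketch shows a plain uncertainty-sampling loop in Python. It is a hypothetical toy example (assuming scikit-learn and synthetic data), not the methods developed in the work package: the model is retrained on a small labelled pool, and only the instances it is least confident about are passed to the expert for labelling.

    # Toy uncertainty-sampling loop (illustrative only, not the WP4 method).
    # The "expert" is simulated here by the known labels y.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labelled = list(range(50))              # small initial labelled pool
    unlabelled = list(range(50, len(X)))    # the rest is unlabelled

    model = LogisticRegression(max_iter=1000)
    for _ in range(10):                                   # 10 query rounds
        model.fit(X[labelled], y[labelled])
        proba = model.predict_proba(X[unlabelled])
        uncertainty = 1.0 - proba.max(axis=1)             # least-confident score
        query = [unlabelled[i] for i in np.argsort(uncertainty)[-10:]]
        labelled.extend(query)                            # expert labels only these
        unlabelled = [i for i in unlabelled if i not in query]

    print("accuracy with", len(labelled), "labels:", model.score(X, y))

The point of the loop is that the expert only ever labels the queried instances rather than the full data set, which is where the reduction in labelling load comes from.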
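
For the data leak prevention task in activity 2, a naive baseline is to scan a document for sensitive patterns before it is released. The sketch below is purely hypothetical (fixed regular expressions for e-mail addresses and CPR-like numbers); the work package targets semantic concepts that such fixed patterns cannot capture.

    # Toy data-leak check (illustrative baseline, not the WP4 approach):
    # flag documents containing e-mail addresses or CPR-like numbers.
    import re

    SENSITIVE_PATTERNS = {
        "cpr_number": re.compile(r"\b\d{6}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def find_leaks(text):
        """Return (pattern_name, match) pairs of sensitive content in the text."""
        return [(name, m.group()) for name, pattern in SENSITIVE_PATTERNS.items()
                for m in pattern.finditer(text)]

    doc = "Contact jane.doe@example.org, CPR 010190-1234, before publication."
    print(find_leaks(doc))    # both hits should be flagged before release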
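
Activity 3 builds on parallelism that standard machines already provide. The sketch below is again a hypothetical example (not a WP4 algorithm): it distributes a brute-force nearest-neighbour search over all CPU cores using only Python's standard library, and the same pattern extends to GPUs through general-purpose GPU computing frameworks.

    # Minimal multi-core sketch (illustrative only): run a CPU-bound
    # nearest-neighbour search serially and then on all available cores.
    from multiprocessing import Pool, cpu_count
    import time

    DATA = [(i * 0.37 % 100.0, i * 0.73 % 100.0) for i in range(50000)]

    def nearest(query):
        """Brute-force nearest neighbour of one query point in DATA."""
        qx, qy = query
        return min(DATA, key=lambda p: (p[0] - qx) ** 2 + (p[1] - qy) ** 2)

    if __name__ == "__main__":
        queries = [(q * 1.1 % 100.0, q * 2.3 % 100.0) for q in range(400)]

        t0 = time.time()
        serial = [nearest(q) for q in queries]       # one core
        t1 = time.time()
        with Pool(cpu_count()) as pool:              # all cores
            parallel = pool.map(nearest, queries)
        t2 = time.time()

        print(f"serial: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s  cores: {cpu_count()}")
        assert serial == parallel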

Collaboration between Engineering and Computer Science on a PhD project in Deep Learning

DIGIT supports a new collaboration between Dr. Alexandros Iosifidis, Associate Professor of Machine Learning and Computer Vision in the Department of Engineering, and Dr. Ira Assent, Professor in Data-Intensive Systems in the Department of Computer Science at Aarhus University, Denmark. With backgrounds in kernel methods and neural networks for classification and regression, and in explanatory components and data analysis methods, respectively, they join forces to address open questions in Deep Learning methods.

Collaboration between Work Package 4 and other work packages