Resource-Efficient Distributed Computer Systems for Data-Intensive Applications
I research adaptive resource management methods that aim to make it easier to run data-intensive applications efficiently on computing infrastructures ranging from resource-constrained devices to large-scale clusters. I enjoy working with collaborators and partners, taking an iterative systems research approach.
Today, many organizations have to deal with very large volumes of collected data, be it to search through billions of websites, to recommend songs or TV shows to millions of users, to accurately identify genetic disorders by comparing terabytes of human genomic data, to monitor current environmental conditions in urban areas using large sensor networks, or to detect fraudulent behavior in millions of business transactions. For this, businesses, scientific institutions, municipalities, and other organizations large and small deploy data-intensive applications onto scalable, fault-tolerant distributed systems and large-scale distributed computing infrastructures. Prominent examples of such scalable systems include distributed dataflow and workflow systems, scalable storage and database systems, and parallel systems for machine learning and graph processing.
Meanwhile, the computing infrastructures used for data-intensive applications are becoming increasingly diverse, distributed, and dynamic. Beyond the data center, there are more and more edge and fog resources as well as IoT devices. This makes it possible to run applications closer to data sources and users, allowing for lower latencies, improved security and privacy, and reduced energy consumption for wide-area networking, but it also creates distinctly heterogeneous new computing environments. At the same time, data centers themselves are becoming more diverse. In public clouds, users can choose among hundreds of different virtual machine types, including instances optimized for compute, memory, storage, or accelerated computing. Similarly, dedicated cluster infrastructures at larger organizations are becoming more heterogeneous. Scientists at universities, for instance, often have access to several clusters, each potentially comprising multiple different types of machines, such as machines equipped with large amounts of memory or with graphics processors.
The data-intensive applications that run on these distributed systems and infrastructures are created by many different users. These include software engineers, system operators, data analysts, machine learning specialists, scientists, and domain experts, yet typically also users without a strong background in parallel computer architectures and networking, distributed systems theory and implementation, or efficient data management and processing. However, it remains very difficult to run scalable systems on today’s diverse and dynamic distributed computing environments in such a way that applications provide the required performance and dependability, yet also run efficiently. Users are largely left alone with the question of how many and which resources they should allocate for their applications. Meanwhile, configuring distributed systems so that they run as required on a particular computing infrastructure is frequently not straightforward even for expert users. Anticipating the behavior of systems on a given infrastructure is inherently difficult: it depends on many factors, there are usually numerous configuration options and large parameter spaces, and environments and workloads often change dynamically over time.
As a result, running data-intensive applications efficiently is hard, especially when there are given requirements for an application’s performance and dependability. In fact, users often resort to overprovisioning to ensure that their requirements are met. I would even argue that, while high-level programming abstractions and distributed runtimes have made it easier to develop data-intensive applications, managing systems and infrastructures efficiently has become more difficult over the last decade. Abundant evidence of low resource utilization, limited energy efficiency, and severe failures among applications deployed in practice backs up this claim. Meanwhile, computing’s environmental footprint already rivals that of aviation and is projected to rise sharply over the next few decades. Therefore, as an increasing number of data-intensive applications are developed and deployed in businesses, in the sciences, and by municipalities and governments, it is absolutely critical, from both an economic and an ecological point of view, that computing infrastructures are used efficiently.
The main objective of my work is to support organizations and users in making efficient use of computing infrastructures for their applications. Towards this goal, I work with collaborators and partners to develop methods, systems, and tools that make it easier to implement, test, and operate resource-efficient and resilient distributed systems. Ultimately, we aim to realize systems that adapt fully automatically to diverse computing infrastructures, dynamic workloads, and high-level performance and reliability requirements.
In line with this, we work towards adaptive resource management for data-intensive applications that run on distributed computing infrastructures, from small IoT devices to large clusters of virtual resources.
Central to realizing adaptive, data-driven resource management according to high-level objectives and constraints are techniques that enable effective modeling of the performance, dependability, and efficiency of applications, and thereby their optimization. In addition, we investigate methods for resource-efficient monitoring, profiling, and experimentation.
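To give a concrete flavor of this kind of data-driven resource management, consider the following minimal sketch in Python. It is an illustration under simplifying assumptions, not our actual implementation: all numbers and names are hypothetical. The sketch fits a simple gray-box scale-out model to a handful of profiling runs of an application and then selects the smallest cluster size that is predicted to meet a given runtime target.

    # Sketch: fit a gray-box runtime model to a few profiling runs,
    # then pick the smallest cluster size predicted to meet a target.
    import numpy as np

    # Hypothetical profiling data: (number of nodes, measured runtime in s).
    runs = [(2, 410.0), (4, 223.0), (8, 131.0), (12, 102.0)]

    def features(n):
        # Serial fraction + parallelizable work (~1/n) + coordination (~log n).
        return [1.0, 1.0 / n, np.log(n)]

    X = np.array([features(n) for n, _ in runs])
    y = np.array([t for _, t in runs])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares model fit

    def predicted_runtime(n):
        return float(np.dot(features(n), coef))

    def smallest_allocation(target_seconds, max_nodes=64):
        # Return the cheapest allocation predicted to meet the target, if any.
        for n in range(1, max_nodes + 1):
            if predicted_runtime(n) <= target_seconds:
                return n
        return None

    print(smallest_allocation(120.0))

In practice, such models must of course also account for dynamic workloads, heterogeneous machine types, and changing environments, which is precisely where adaptive, continuously updated approaches come in.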
I mostly do empirical systems research. Therefore, together with collaborators and partners, I evaluate new ideas by implementing them prototypically in the context of relevant open-source systems (such as Flink, Kubernetes, and FreeRTOS) and by conducting experiments on actual hardware, with representative applications and real-world data. For this, we use state-of-the-art infrastructures, including diverse cluster infrastructures, private and public clouds, as well as IoT devices and sensors.
I believe iterative processes and short feedback cycles are vital for research. Consequently, I follow a multi-stage approach to research: first presenting new ideas in focused workshops and work-in-progress tracks of conferences, then submitting rigorous results to the main tracks of renowned international conferences, and finally compiling more extensive findings into comprehensive journal articles. At the same time, I am convinced that it is essential to also be involved in applied and interdisciplinary research projects, to directly experience relevant problems and to uncover new opportunities for well-motivated, impactful research.
Believing in the value of scientific discourse and feedback, I frequently collaborate with researchers from other institutions and actively participate in the international scientific community, taking on academic service roles. Moreover, I make results available to the public as widely and as early as possible through openly accessible publications, open-source software prototypes, and research-based university teaching.