Umut Acar
Associate Professor
Office: 9101 Gates and Hillman Centers
Email: umut@cmu.edu
Phone: (412) 268-6791
Department: Computer Science Department
Administrative Support Person: Matt McMonagle
Research Interests: Programming Languages; Algorithms and Complexity
Advisees: Jatin Arora, Colin McDonald, Pengyu Liu, Mingkuan Xu
CSD Courses Taught: 15-210 (Spring 2024)

Research/Teaching Statement

My main areas of interest are programming languages, parallel computing, and algorithms. In my research, I aim to raise the level of abstraction at which computer scientists reason about problems and develop algorithms and software. To this end, I develop abstractions and design the supporting language constructs, algorithms, and software systems.

Programming languages. Programming languages aim to fill the large gap between the level at which we humans reason (as manifested, for example, by mathematics) and the tedious code of instructions required by the computer. They do so by offering us abstractions with which we can organize and express our thoughts, and by translating our thoughts, expressed as programs, into code suitable for computers to execute. For reasons of efficiency, we as computer scientists have thus far resorted to low-level abstractions (abstractions closer to the level of computers than to that of humans) for expressing computation. But today, as computer systems become architecturally more complex, it has become increasingly difficult to design software that performs well with such low-level abstractions. The problem is exacerbated by increased demands on the capability and quality of software, as it performs many critical tasks and handles sensitive information. I therefore develop higher-level abstractions, programming languages, and systems that enable creative thought and expression while also ensuring efficiency. My research in this area has thus far focused on dynamic or incremental computation, where systems interact with dynamically changing data, and on parallel computation, where multiple processors can be used to perform a task simultaneously.

Parallel computing. The turn of the 21st century may be remembered as a momentous point in the history of computing, as the single-chip, multiple-processor (multicore) computer started replacing the sequential computer, the mainstay of computing until then. Unfortunately, many of today's programming languages, algorithms, and software systems are not suitable for use with parallel hardware. To take advantage of parallelism, we need new programming languages, algorithms, and software systems. Specific problems to be tackled include reliability (a.k.a. fault tolerance or resilience), scheduling for scalable efficiency, and control of communication costs between processors and memory. To keep the level of abstraction high without sacrificing performance, I work on these problems using a broad methodology that combines techniques and tools from several areas, including algorithms, programming languages, and systems.
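For concreteness, here is a minimal sketch of the fork-join style of parallel programming referred to above. It is written in OCaml 5 using its standard Domain module; the function and parameter names (sum, cutoff) are illustrative only, and the sketch is not drawn from any particular language or system mentioned in this statement.

  (* A sketch of fork-join parallelism using OCaml 5's standard Domain
     module: sum an array segment by recursively splitting it, running
     one half on a new domain (fork) and the other half on the current
     domain, then combining the two results (join). *)
  let cutoff = 1_000_000

  let rec sum a lo hi =
    if hi - lo <= cutoff then begin
      (* Sequential base case: sum the segment directly. *)
      let s = ref 0 in
      for i = lo to hi - 1 do s := !s + a.(i) done;
      !s
    end else begin
      (* Fork: evaluate the left half on a fresh domain while the
         current domain evaluates the right half, then join. *)
      let mid = lo + (hi - lo) / 2 in
      let left = Domain.spawn (fun () -> sum a lo mid) in
      let right = sum a mid hi in
      Domain.join left + right
    end

  let () =
    let a = Array.init 8_000_000 (fun i -> i mod 10) in
    Printf.printf "%d\n" (sum a 0 (Array.length a))

The sequential cutoff keeps the number of forks small; in OCaml 5 each Domain.spawn creates a relatively heavyweight system-level thread, so the sketch forks only a handful of times on an input of this size.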
Algorithms. By allowing us to reason accurately about the cost (resource usage) of computation, classic models of computation, such as the RAM and PRAM models, have enabled us to design efficient algorithms for many important problems. Many algorithms (e.g., dynamic and parallel algorithms), however, can be difficult to implement and use in practice, because they require implementations to maintain complex invariants expressed in terms of the details of the machine hardware (e.g., memory layout). This theory-practice gap widens as the complexity of the hardware (e.g., non-uniform memory) and of the problems that we face increases. I wish to close this gap by inventing realistic computational models that are higher-level and that simplify the design, analysis, and expression (and thus implementation) of sophisticated algorithms by eliminating hardware-specific details from algorithms; a brief worked example of such machine-independent cost analysis appears at the end of this statement. The challenge is to ensure that such algorithms can remain efficient. In the near future, I am particularly interested in models and algorithms for dynamic and parallel problems.

Understanding software. In the current state of the art, our ability to design software far exceeds our ability to understand its behavior. For example, we may spend a long time (e.g., days) understanding a small piece of code that we wrote in a fraction of that time (e.g., minutes). I am interested in developing techniques for understanding software. To this end, I am pursuing the idea of enabling a conversation between the user and the software, in which the user queries the software and the software responds by "explaining" its work.
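As the brief worked example promised above, consider the recursive parallel sum sketched earlier, analyzed in the standard work-span model (the recurrences below are illustrative and assume the sketch's divide-and-conquer structure). Splitting all the way down and ignoring the fixed sequential cutoff, the work satisfies W(n) = 2 W(n/2) + O(1) = O(n), and the span satisfies S(n) = S(n/2) + O(1) = O(log n), since the two halves run in parallel. A greedy scheduler can therefore execute the computation in roughly O(n/P + log n) time on P processors. The analysis makes no reference to memory layout, cache geometry, or other machine details.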
Publications

Journal Article: Automatic Parallelism Management • 2024 • Proceedings of the ACM on Programming Languages • 8 • Westrick S, Fluet M, Rainey M, Acar UA
Journal Article: Disentanglement with Futures, State, and Interaction • 2024 • Proceedings of the ACM on Programming Languages • 8: 1569-1599 • Arora J, Muller SK, Acar UA
Journal Article: Efficient Parallel Functional Programming with Effects • 2023 • Proceedings of the ACM on Programming Languages • 7 (PLDI) • Arora J, Westrick S, Acar UA
Journal Article: Responsive Parallelism with Synchronization • 2023 • Proceedings of the ACM on Programming Languages • 7 (PLDI) • Muller SK, Singer K, Keeney DT, Neth A, Agrawal K, Lee I-TA, Acar UA
Conference: WARDen: Specializing Cache Coherence for High-Level Parallel Languages • 2023 • Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization (CGO 2023) • 122-135 • Wilkins M, Westrick S, Kandiah V, Bernat A, Suchy B, Deiana EA, Campanoni S, Acar UA, Dinda P, Hardavellas N