Exascale computing power will likely be reached in the next decade. While the precise system architectures are still evolving, one can safely assume that they will be largely based on deep hierarchies of multicore CPUs with similarly deep memory hierarchies, potentially also supported by accelerators. New and disruptive programming models are needed to allow applications to run efficiently at large scale on these platforms. The Message Passing Interface (MPI) has emerged as the de facto standard for parallel programming on current petascale machines, but Partitioned Global Address Space (PGAS) languages and libraries are increasingly being considered as alternatives or complements to MPI. However, both approaches have severe problems that will prevent them from reaching exascale performance. The aim of this proposal is to prepare Message Passing (MP) and PGAS programming models for exascale systems by fundamentally addressing their main current limitations. We will introduce new disruptive concepts to fill the technological gap between the petascale and exascale era in two ways:

• First, innovative algorithms will be developed for both MP and PGAS: fast collective communication in both models, reduced memory consumption in MP, fast synchronization in PGAS, fault tolerance mechanisms in PGAS, and potential strategies for fault tolerance in MP.

• Second, we will combine the best features of MP and PGAS by developing an MP interface that uses a PGAS library as its communication substrate.

The concepts developed will be tested and guided by two applications in the engineering and space weather domains, chosen from the suite of codes in current EC exascale projects. By providing prototype implementations of both MP and PGAS concepts, we will contribute significantly to the advancement of programming models and interfaces for ultra-scale computing systems, and provide stimuli for European research in this vital area.