High-performance computing and big data methods (1 position)
- Development of methods for scaling parallel and distributed training of deep neural networks (with suitable architectures) on large multi-GPU clusters; in particular, scalable data communication and I/O, robustness, and redundancy should be addressed
- Improvement of the convergence properties of distributed learning methods (loss functions, learning rates, features) for data sets from atmospheric physics
- Development of non-standard parameterizations for GPU accelerators (e.g. for irregular/adaptive grids and spherical-shell models such as Earth's atmosphere).
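The first topic centers on collective communication for data-parallel training. As a minimal, purely illustrative sketch (the worker count and gradient shapes are hypothetical, and real systems would use NCCL or MPI collectives rather than a local loop), the core gradient-averaging step that an all-reduce performs can be simulated as:

```python
# Illustrative sketch only: simulates the gradient averaging that an
# all-reduce with a mean reduction performs in data-parallel training.
# Worker count and gradient shapes are hypothetical; production systems
# use NCCL/MPI collectives over the cluster interconnect instead.
import numpy as np

def allreduce_mean(worker_grads):
    """Average gradients across workers; every worker receives the result."""
    stacked = np.stack(worker_grads)  # shape: (n_workers, *grad_shape)
    return stacked.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Each of 4 simulated workers computes a gradient on its local data shard.
    grads = [rng.normal(size=(3,)) for _ in range(4)]
    avg = allreduce_mean(grads)
    print(avg.shape)
```

After this step, all workers apply the identical averaged gradient, keeping model replicas synchronized; scaling this communication efficiently is one focus of the position described above.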