How do we manage and analyze our data?

We will generate increasingly large and complex datasets that require well-defined approaches to management, analysis, and dissemination. We will concentrate expertise in model building, data management, and state-of-the-art statistical methods. We will facilitate collaborative archiving and sharing of research data and establish unified workflows built on open-source solutions that integrate commonly used data processing packages. Our goal is to maximize interoperability and data reuse, minimize redundant method development, and streamline the sharing of data and code.
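As a minimal sketch of what one step in such a unified, shareable workflow could look like: normalize raw trial records into a tidy form before archiving. The field names ("trial", "rate_hz") and the lab label are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical preprocessing step in a shared, open-source workflow.
# Field names and the provenance label are illustrative assumptions.
import json

def preprocess(records, source="lab_A"):
    """Normalize keys, drop incomplete trials, and tag provenance."""
    tidy = []
    for rec in records:
        # Standardize column names so downstream tools interoperate.
        clean = {k.strip().lower(): v for k, v in rec.items()}
        if any(v is None for v in clean.values()):
            continue  # skip incomplete trials
        clean["source"] = source  # keep provenance for data sharing
        tidy.append(clean)
    return tidy

raw = [{"Trial": 1, "Rate_Hz": 12.5}, {"Trial": 2, "Rate_Hz": None}]
print(json.dumps(preprocess(raw)))
```

Keeping such steps as small, composable functions in a common repository is what makes them easy to reuse across labs and to swap into different pipeline frameworks.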

We will serve the computational and storage needs of contemporary machine learning approaches by providing state-of-the-art GPU-based HPC compute nodes, and we will guarantee data storage and accessibility through RADAR long-term archives and broadly accessible large-scale network storage. The platform's application specialists and data scientists will provide support for analysis tools as well as data and workflow management. We will foster interactions between PhD students interested in the computational analysis of neural activity by organizing digital coding clubs and establishing common repositories for data analysis algorithms.
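To illustrate the archiving side, a minimal sketch of preparing a dataset for long-term deposit: bundle basic metadata with an integrity checksum before upload. The metadata fields and the deposit step are assumptions for illustration; an actual RADAR submission would follow that repository's own metadata schema and API.

```python
# Hypothetical pre-archiving step: attach minimal metadata and a
# checksum so a deposited dataset can later be verified. Field names
# are illustrative, not the RADAR metadata schema.
import hashlib
import json

def archive_record(name, payload, creator):
    """Return a minimal metadata record with an integrity checksum."""
    return {
        "name": name,
        "creator": creator,
        # SHA-256 lets the archive (and later users) verify integrity.
        "sha256": hashlib.sha256(payload).hexdigest(),
        "size_bytes": len(payload),
    }

rec = archive_record("session_001.dat", b"spike times...", "lab_A")
print(json.dumps(rec, indent=2))
```

Recording checksums at deposit time is what allows anyone retrieving the data years later to confirm it is byte-identical to what was archived.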