One of the outstanding challenges in neuroscience is to obtain precise insights into the three-dimensional architecture of neurons and the structures that promote complex interconnections between them. BigNeuron (http://bigneuron.org) is the first attempt at creating a comprehensive catalogue of neuron architectures, one that will enable researchers not only to map out the different types of neurons but also to understand the internal connections of the brain. Such ambitious efforts require immense computational resources, including (but not restricted to) high performance computing to support the massive processing of diverse imaging and other datasets. This open, collaborative, and international effort is supported by a number of international organizations, including the high performance computing centers at the Human Brain Project (http://humanbrainproject.eu), the International Neuroinformatics Coordinating Facility (http://www.incf.org), the Oak Ridge Leadership Computing Facility (http://olcf.ornl.gov), Lawrence Berkeley National Laboratory and the National Energy Research Scientific Computing Center (http://nersc.gov), the Julich Supercomputing Centre (http://www.fz-juelich.de/ias/jsc/EN/Home/home_node.html), and Ecole Polytechnique Federale de Lausanne (http://www.epfl.ch). This session will focus on the use of supercomputing and other high performance/distributed computing resources to discuss:
(a) Emerging image analysis challenges as part of the BigNeuron project;
(b) Development of scalable computing frameworks to facilitate high throughput analysis, annotation, and real-time processing of neuronal datasets; and
(c) Support for integrative neuroscience applications that combine multiple experimental observations with neuronal simulations.