Access BigBrain high-resolution data

siibra provides access to high-resolution image data and parcellation maps defined for the 20 micrometer BigBrain space. The BigBrain differs substantially from other templates: its native resolution of 20 micrometer results in about one Terabyte of image data. Still, fetching the template works the same way as for the MNI templates, with the difference that we can specify a reduced resolution or a volume of interest in order to fetch a feasible amount of image data.
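The "about one Terabyte" figure is easy to check with a back-of-envelope calculation. The field of view and voxel depth used below are rough illustrative assumptions, not values queried from siibra:

```python
# Back-of-envelope estimate of the full-resolution BigBrain data size.
# The field of view (~150 x 190 x 150 mm) and 16-bit voxel depth are
# illustrative assumptions, not values taken from siibra.
fov_mm = (150, 190, 150)          # approximate extent per axis in mm
voxel_mm = 0.02                   # native 20 micrometer resolution
bytes_per_voxel = 2               # assuming 16-bit grayscale

n_voxels = 1
for extent in fov_mm:
    n_voxels *= round(extent / voxel_mm)

size_tb = n_voxels * bytes_per_voxel / 1e12
print(f"{n_voxels:.2e} voxels, ~{size_tb:.1f} TB")
```

This lands at roughly one Terabyte, which explains why fetching the full-resolution volume in one piece is not feasible.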

We start by importing siibra and the nilearn plotting module.

import siibra
from nilearn import plotting

By default, siibra fetches the whole brain volume at a reasonably reduced resolution.

space = siibra.spaces['bigbrain']
bigbrain_template = space.get_template()
bigbrain_whole_img = bigbrain_template.fetch()
plotting.view_img(bigbrain_whole_img, bg_img=None, cmap='gray')


To see the full resolution, we may specify a bounding box in physical space. You will learn more about spatial primitives like points and bounding boxes in Locations in reference spaces. For now, we just define a volume of interest from two corner points in the histological space. We specify the points with their string representations, which can conveniently be copy-pasted from the interactive viewer siibra-explorer. Note that the coordinates can also be specified as 3-tuples, among other formats.

voi = siibra.locations.BoundingBox(
    point1="-30.590mm, 3.270mm, 47.814mm",
    point2="-26.557mm, 6.277mm, 50.631mm",
    space=space
)
bigbrain_chunk = bigbrain_template.fetch(voi=voi, resolution_mm=0.02)
plotting.view_img(bigbrain_chunk, bg_img=None, cmap='gray')
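Even this small volume of interest contains a substantial number of voxels at full resolution. A quick sketch of the expected chunk dimensions, computed from the corner coordinates above (the actual fetched shape may differ slightly, depending on how siibra snaps the bounding box to the voxel grid):

```python
# Estimate the voxel dimensions of the fetched chunk from the two
# corner points of the volume of interest (coordinates in mm).
point1 = (-30.590, 3.270, 47.814)
point2 = (-26.557, 6.277, 50.631)
resolution_mm = 0.02  # full 20 micrometer resolution

shape = tuple(
    round((b - a) / resolution_mm) for a, b in zip(point1, point2)
)
n_voxels = shape[0] * shape[1] * shape[2]
print(shape, f"~{n_voxels / 1e6:.1f} million voxels")
```

A few million voxels amount to only a few megabytes, which is why such chunks can be fetched at the native 20 micrometer resolution.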


Note that since both fetched volumes are spatial images with well-defined transformations between their voxel and physical spaces, we can plot them directly superimposed on each other:

plotting.view_img(
    bigbrain_chunk,
    bg_img=bigbrain_whole_img,
    cmap='magma',
    cut_coords=tuple(voi.center)
)
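The superimposition works because each fetched image carries a 4×4 affine matrix mapping voxel indices to physical coordinates in the reference space. A minimal numpy sketch of this mapping, using an illustrative affine (0.02 mm isotropic voxels with a hypothetical origin offset, not the actual BigBrain affine):

```python
import numpy as np

# Illustrative voxel-to-physical affine: 0.02 mm isotropic voxels and a
# hypothetical origin offset (NOT the actual BigBrain affine).
affine = np.array([
    [0.02, 0.0,  0.0, -30.590],
    [0.0,  0.02, 0.0,   3.270],
    [0.0,  0.0,  0.02, 47.814],
    [0.0,  0.0,  0.0,    1.0 ],
])

# Voxel index (0, 0, 0) maps to the affine's translation column.
voxel = np.array([0, 0, 0, 1.0])
physical = affine @ voxel
print(physical[:3])

# The inverse affine maps a physical point back to voxel indices; this
# is how plotting tools align volumes with different resolutions.
point_mm = np.array([-26.557, 6.277, 50.631, 1.0])
voxel_idx = np.linalg.inv(affine) @ point_mm
print(np.round(voxel_idx[:3]).astype(int))
```

Because both the downsampled whole-brain volume and the full-resolution chunk carry their own consistent affines, a viewer can resolve any physical coordinate in either image and render them in register.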