Access BigBrain high-resolution data

siibra provides access to high-resolution image data and parcellation maps defined for the 20 micrometer BigBrain space. The BigBrain is very different from other templates: its native resolution is 20 micrometer, resulting in about one Terabyte of image data. Yet, fetching the template works the same way as for the MNI templates, with the difference that we can specify a reduced resolution or a volume of interest to fetch a feasible amount of image data.

We start by importing siibra and the nilearn plotting module.

import siibra
from nilearn import plotting

By default, siibra will fetch the whole brain volume at a reasonably reduced resolution.

space = siibra.spaces['bigbrain']
bigbrain_template = space.get_template()
bigbrain_whole_img = bigbrain_template.fetch()
plotting.view_img(bigbrain_whole_img, bg_img=None, cmap='gray')
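
If you want to control the downsampling yourself, fetch() also accepts the resolution_mm keyword used further below. The following is a minimal sketch; the value of 0.64 mm is only an illustrative choice, and siibra may warn or refuse if the requested resolution would make the download too large.

# explicitly request a coarse resolution for the whole-brain volume
# (0.64 mm is an arbitrary illustrative value, not a recommendation)
bigbrain_coarse_img = bigbrain_template.fetch(resolution_mm=0.64)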


To see the full resolution, we may specify a bounding box in physical space. You will learn more about spatial primitives like points and bounding boxes in Locations in reference spaces. For now, we just define a volume of interest from two corner points in the histological space. We specify the points with a string representation, which can be conveniently copy-pasted from the interactive viewer siibra-explorer. Note that the coordinates can also be specified as 3-tuples, and in other ways.

voi = siibra.locations.BoundingBox(
    point1="-30.590mm, 3.270mm, 47.814mm",
    point2="-26.557mm, 6.277mm, 50.631mm",
    space=space
)
bigbrain_chunk = bigbrain_template.fetch(voi=voi, resolution_mm=0.02)
plotting.view_img(bigbrain_chunk, bg_img=None, cmap='gray')
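
As mentioned above, the same volume of interest can also be defined from plain 3-tuples of coordinates (in millimeters) rather than strings. This is just a sketch of the alternative syntax, using the same corner points and space as before.

# equivalent bounding box defined from 3-tuples instead of coordinate strings
voi_from_tuples = siibra.locations.BoundingBox(
    point1=(-30.590, 3.270, 47.814),
    point2=(-26.557, 6.277, 50.631),
    space=space
)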


Since both fetched image volumes are spatial images with a properly defined transformation between their voxel and physical spaces, we can plot them correctly superimposed on each other:

plotting.view_img(
    bigbrain_chunk,
    bg_img=bigbrain_whole_img,
    cmap='magma',
    cut_coords=tuple(voi.center)
)


Next we select a parcellation which provides a map for BigBrain, and extract labels for the same volume of interest. We choose the cortical layer maps by Wagstyl et al. (https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000678). Note that by specifying -1 as the resolution, siibra will fetch the highest possible resolution.

layermap = siibra.get_map(space='bigbrain', parcellation='layers')
mask = layermap.fetch(fragment='left hemisphere', resolution_mm=-1, voi=voi)
mask
<nibabel.nifti1.Nifti1Image object at 0x7f6ba76891c0>
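
The returned mask is a regular NIfTI image whose voxel values encode the cortical layers. As a quick check, we can list which label values occur in the fetched chunk; this is a small sketch using numpy on the nibabel image data.

import numpy as np

# distinct label values (cortical layer labels) present in the fetched chunk
np.unique(np.asarray(mask.dataobj))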

Since we operate in physical coordinates, we can plot both image chunks superimposed, even if their resolutions are not exactly identical.

plotting.view_img(mask, bg_img=bigbrain_chunk, opacity=.2, symmetric_cmap=False)


siibra can help us assign a brain region to the position of the volume of interest. This is covered in more detail in Anatomical assignment. For now, just note that siibra can work with spatial objects from different template spaces. Here, it automatically warps the centroid of the volume of interest to MNI space for the location assignment.

julich_pmaps = siibra.get_map(space='mni152', parcellation='julich', maptype='statistical')
assignments = julich_pmaps.assign(voi.center)
assignments
   input structure  centroid                  volume  region                  map value
0  0                (-20.82, -22.92, 74.17)   10      Area 4a (PreCG) left     0.067354
1  0                (-20.82, -22.92, 74.17)   15      Area 6d1 (PreCG) left    0.021094
2  0                (-20.82, -22.92, 74.17)   17      Area 6d3 (SFS) left      0.000002
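
The warping that happens implicitly here can also be performed explicitly. The following sketch assumes that siibra point objects provide a warp() method accepting a target space specification; if your siibra version differs, consult its documentation.

# warp the volume-of-interest centroid from BigBrain to MNI152 space
# (assumes Point.warp() accepts a space specification)
centroid_mni = voi.center.warp('mni152')
print(centroid_mni)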


1 micron scans of BigBrain sections across the brain are available as VolumeOfInterest features. The result is a high-resolution image structure, just like the BigBrain template.

hoc5l = siibra.get_region('julich 2.9', 'hoc5 left')
features = siibra.features.get(
    hoc5l,
    siibra.features.cellular.CellbodyStainedSection
)
# let's see the names of the found features
for f in features:
    print(f.name)
#1255: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1307: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1345: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1402: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1454: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1499: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1561: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
#1600: selected 1 micron scans of BigBrain histological sections (v1.0) (cell body staining)
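
Instead of indexing into the list by position, as done below, a section can also be picked by its name, which contains the section number shown above. This is a plain Python sketch based on those printed names.

# select the section whose name contains "#1402"
section1402 = next(f for f in features if "#1402" in f.name)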

Now fetch the 1 micron section at a lower resolution, and display it in 3D space.

section1402 = features[3]
plotting.plot_img(
    section1402.fetch(),
    bg_img=bigbrain_whole_img,
    title="#1402",
    cmap='gray'
)

<nilearn.plotting.displays._slicers.OrthoSlicer object at 0x7f6bb2baef70>

Let's fetch a crop inside hoc5l at full resolution. We intersect the bounding box of hoc5l with the section.

hoc5_bbox = hoc5l.get_boundingbox('bigbrain').intersection(section1402)
print(f"Size of the bounding box: {hoc5_bbox.shape}")

# this is quite large, so we shrink it
voi = hoc5_bbox.zoom(0.1)
crop = section1402.fetch(voi=voi, resolution_mm=-1)
plotting.plot_img(crop, bg_img=None, cmap='gray')
Size of the bounding box: (13.716000000000001, 0.020000000000003126, 16.086666666666666)

<nilearn.plotting.displays._slicers.OrthoSlicer object at 0x7f6bb2a4c310>
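
Since the fetched crop is a standard NIfTI image, it can be written to disk for inspection in external viewers or for further processing. The file name below is only a placeholder.

# save the full-resolution crop to disk (placeholder file name)
crop.to_filename("hoc5l_crop.nii.gz")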
