Sample clearing methods such as CLARITY or SCALE [refs] enable imaging of very large, fixed specimens without the need for physical sectioning. In combination with lightsheet microscopy [refs], it is possible to scan entire volumes such as adult mouse brains at single-cell resolution within just a few hours [refs]. Since clearing protocols preserve endogenous fluorescent proteins (Fig. ??) and are compatible with most staining methods, including single-molecule fluorescent in-situ hybridization [ref], these acquisitions are powerful tools for whole-organ and whole-organism studies.
However, to acquire an entire sample, many large, overlapping three-dimensional (3d) image tiles need to be collected, typically amounting to many terabytes in size (Fig. ??). Due to sample-induced scattering of the lightsheet in the direction of illumination [scat1,2], 3d image tiles are typically acquired twice while alternating illumination from opposing directions to achieve full coverage (Fig. ??). Similarly, emitted light is distorted by the sample, effectively limiting the maximal imaging depth at which useful data can be collected. Sample-induced light refraction additionally causes depth- and wavelength-dependent aberrations in the acquired images. We therefore developed the BigStitcher software, which enables user-friendly import, interactive handling of multi-terabyte image data, fast and precise alignment, as well as deconvolution and real-time fusion of large, high-dimensional datasets. BigStitcher additionally supports alignment of multi-tile acquisitions taken from different physical orientations, so-called multi-tile views, thus effectively doubling the size of cleared specimens that can be imaged [fig].
Microscopy acquisitions are saved in a multitude of vendor-specific or custom formats that store images along with important metadata. For efficient import of datasets, we developed the user-friendly AutoLoader based on the Bio-Formats library [ref], which enables automatic, cross-file import of most formats, including approximate tile positions and non-standardized values such as illumination directions and potential sample rotation. We additionally support interactive placement of image tiles in regular grids. The image data can be directly accessed through memory-cached, virtual loading and is ideally resaved into an efficient multiresolution, blocked, compressed HDF5 format [ref]. Interactive visualization and processing of the terabyte-sized image data are enabled by building our solutions on BigDataViewer [ref] and memory-cached ImgLib2 data structures [ref].
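The multiresolution idea behind such a resaved format can be illustrated with a minimal sketch, assuming simple 2×2 averaging between levels (the actual HDF5 implementation is additionally blocked, compressed, and three-dimensional; all names here are illustrative):

```python
def downsample_2x(img):
    """Average 2x2 blocks of a 2-d image (list of lists) to halve resolution."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_pyramid(img, levels):
    """Return a list of progressively downsampled versions of img,
    from full resolution (level 0) to the coarsest level."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample_2x(pyramid[-1]))
    return pyramid

# a tiny 4x4 example image with four uniform quadrants
img = [[1, 1, 3, 3],
       [1, 1, 3, 3],
       [5, 5, 7, 7],
       [5, 5, 7, 7]]
pyr = build_pyramid(img, 3)
```

Coarse levels like these are what make instant low-resolution previews and fast processing of terabyte-sized data possible: operations can start on the smallest level and refine on demand.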
Although cleared samples are highly transparent [fig ??], light scattering remains an issue when imaging centimeters deep into fixed tissue [fig ??]. Dual-sided illumination can therefore significantly increase the lateral sample size for which high-resolution image data can be collected [fig ??]. However, it requires imaging each 3d tile twice, once from each illumination direction, although most tiles contain useful information from only one direction. To address this problem, we estimate image sharpness at the lowest pre-computed resolution level and automatically suggest the best illumination direction for each tile and channel [methods, fig ??].
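The selection logic can be sketched as follows, assuming a simple gradient-based sharpness score (`sharpness` and `pick_illumination` are hypothetical names; the actual method operates on the lowest pre-computed resolution level of 3d tiles):

```python
def sharpness(img):
    """Mean squared intensity difference between horizontally adjacent
    pixels -- a simple proxy for image sharpness/contrast."""
    total, n = 0.0, 0
    for row in img:
        for x in range(len(row) - 1):
            d = row[x + 1] - row[x]
            total += d * d
            n += 1
    return total / n

def pick_illumination(tile_left, tile_right):
    """Return 0 if the left-illuminated acquisition is sharper, else 1."""
    return 0 if sharpness(tile_left) >= sharpness(tile_right) else 1

# the tile imaged from the left keeps its contrast;
# imaged from the right, scattering has washed it out
left  = [[0, 10, 0, 10], [10, 0, 10, 0]]
right = [[4, 5, 4, 5], [5, 4, 5, 4]]
```

Because the score is computed on the coarsest pyramid level, the suggestion is nearly free even for terabyte-sized acquisitions.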
To compute the location of each image tile, starting from the microscope stage positions used during acquisition, we developed an optimized image stitching algorithm. It is tailored to very large datasets, which may be acquired on a non-regular grid and may contain empty tiles as well as multiple independent objects. The algorithm first computes the shifts between all pairs of overlapping image tiles, followed by outlier removal to identify incorrect pairwise overlaps, and finally a globally optimal determination of tile positions.
Since acquisitions typically consist of hundreds of tiles, each many gigabytes in size and with varying image content [Fig ??], we compute the pairwise overlap using the parameter-free, Fourier-based Phase Correlation Method [ref, ref]. It computes all possible shifts between two images at once, and intensity peaks in the resulting Phase Correlation Matrix correspond to shifts with high correlation [Supplementary Fig/Note ??]. To accommodate large image sizes, we support computing the Phase Correlation Matrix on precomputed, downsampled images while localizing high-correlation peaks with sub-pixel precision using a three-dimensional quadratic fit [ref]. Using simulations, we show that computing times can be reduced 100-fold while achieving pairwise alignment errors below 1 pixel [Supplementary Fig/Note ??]. The resulting pairwise registrations (links) can be pre-filtered by minimum correlation and by distance from the metadata-defined positions, during which the remaining links and corresponding overlaps can be interactively displayed [Supplementary Fig ??]. To support a wide variety of image types, pairwise overlaps can optionally be computed using feature matching [ref] or the iterative Lucas-Kanade method [ref] [Supplementary Note ??].
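A minimal NumPy sketch of the Phase Correlation Method for two 2d images, without the downsampling and sub-pixel quadratic fit described above (illustrative only, not the ImgLib2 implementation):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation mapping image b onto image a:
    the inverse FFT of the normalized cross-power spectrum is the Phase
    Correlation Matrix, whose brightest peak marks the shift."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    pcm = np.fft.ifft2(cross).real          # Phase Correlation Matrix
    peak = np.unravel_index(np.argmax(pcm), pcm.shape)
    # peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, pcm.shape))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, -3), axis=(0, 1))  # b is a cyclically shifted copy
```

Here `np.roll(b, (-5, 3), axis=(0, 1))` would recover `a`, and the method reports exactly that shift; in practice, each candidate peak must additionally be verified by cross-correlation of the actual overlap because of the wrap-around ambiguity.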
To compute the final position of each image tile without propagating errors, we extend the concept of minimizing the distance between all image tiles as defined by all remaining links [bioinf]. We detect incorrect links that prevent a coherent placement of image tiles using a compound metric and iterative computation of globally optimal image placements [methods]. To address the problem of placing image tiles for which no link could be computed (e.g. empty images), and to coherently place multiple ‘islands’ of connected images, we developed a new optimization strategy. We introduce the concept of strong and weak links, where strong links are defined by computed image overlaps while weak links are defined by approximately known image positions (e.g. from metadata). We first identify groups of images that are connected by strong links and compute their positions relative to each other individually. Image positions within these groups are then fixed, and the final positions of all tiles are computed by minimizing the distances defined by the weak links [fig, methods, supplement?]. We thereby developed an algorithm that allows as-good-as-possible placement of image tiles in potentially non-regular acquisition grids with only sparse image content.
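The strong/weak link strategy can be illustrated with a 1-d sketch. The placement rule here is deliberately simplified (strong-link offsets are propagated directly and each connected group is then rigidly shifted onto its metadata positions) rather than the full least-squares global optimization; all names are illustrative:

```python
def optimize_tiles(n_tiles, strong, metadata):
    """Two-stage 1-d placement sketch.
    strong:   list of (i, j, offset) links with x[j] - x[i] ~ offset
    metadata: approximate position (weak link) for every tile"""
    # --- group tiles connected by strong links (union-find) ---
    parent = list(range(n_tiles))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j, _ in strong:
        parent[find(i)] = find(j)
    groups = {}
    for t in range(n_tiles):
        groups.setdefault(find(t), []).append(t)
    # --- stage 1: place tiles inside each group via strong links ---
    x = [None] * n_tiles
    for members in groups.values():
        x[members[0]] = 0.0
        changed = True
        while changed:                      # propagate known offsets
            changed = False
            for i, j, off in strong:
                if x[i] is not None and x[j] is None:
                    x[j] = x[i] + off; changed = True
                elif x[j] is not None and x[i] is None:
                    x[i] = x[j] - off; changed = True
    # --- stage 2: rigidly shift each group onto its weak links ---
    for members in groups.values():
        shift = sum(metadata[t] - x[t] for t in members) / len(members)
        for t in members:
            x[t] += shift
    return x

# two connected 'islands' plus one empty tile that only has metadata
positions = optimize_tiles(
    5,
    strong=[(0, 1, 10.0), (2, 3, 10.0)],    # x1-x0 ~ 10, x3-x2 ~ 10
    metadata=[0.0, 11.0, 30.0, 41.0, 60.0])
```

Note how the empty tile 4, which has no strong link, still lands at its metadata position, and each island keeps its internally computed spacing while being anchored by the weak links.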
To estimate and correct for various forms of disturbances such as sample-induced light refraction, wavelength-dependent aberrations, or camera chip rotation [ref], we support an optional, easy-to-use interest-point-based alignment step that supports affine transformations. We automatically extract interest points and apply a variation of the iterative closest point algorithm [ref], combined with our new global optimization algorithm, which is able to correct for smaller rigid and affine distortions. If autofluorescence levels are high enough, this step can also correct for major effects of chromatic aberration [methods, fig, supplement].
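The iterative closest point idea can be sketched in 2d, restricted here to pure translation for brevity (the actual step estimates rigid and affine transformations; all names are illustrative):

```python
def icp_translation(moving, fixed, iterations=20):
    """ICP restricted to translation: repeatedly match each moving point
    to its nearest fixed point, then shift the whole moving set by the
    mean residual of those matches."""
    pts = [list(p) for p in moving]
    for _ in range(iterations):
        dx = dy = 0.0
        for p in pts:
            # nearest neighbour in the fixed interest-point set
            q = min(fixed, key=lambda f: (f[0]-p[0])**2 + (f[1]-p[1])**2)
            dx += q[0] - p[0]
            dy += q[1] - p[1]
        dx /= len(pts)
        dy /= len(pts)
        for p in pts:
            p[0] += dx
            p[1] += dy
    return pts

fixed  = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
moving = [(x + 1.5, y - 0.5) for x, y in fixed]  # slightly displaced copy
aligned = icp_translation(moving, fixed)
```

As long as the initial displacement is small relative to the point spacing, as is the case after stitching, the nearest-neighbour matches are correct and the iteration converges onto the fixed set.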
Since emitted light is distorted by the sample, the maximum imaging depth is limited. To overcome this problem, we rotate samples and acquire them from opposing directions. We implemented a new multiview alignment algorithm that is able to register these massive multi-tile views, where each view represents a large set of aligned image tiles from one physical orientation. We developed software to first segment interest points in virtually fused, downsampled images of each multi-tile view, a step that can also be performed interactively. After applying the approximate rotation as defined by the microscope, we identify corresponding interest points using geometric local descriptor matching [refs]. We developed an optimized translation-invariant matching based on geometric hashing [methods, ref] that drastically improves matching speed and robustly aligns these large volumes, effectively doubling the imaging depth of any sample [table, fig].
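The principle of translation-invariant descriptor matching can be sketched as follows: each point is described by the offsets to its nearest neighbours (unchanged when the whole set is translated), and descriptors are hashed so that candidate correspondences are found by table lookup instead of exhaustive comparison. This is an illustration of the idea, assuming points already rotated into a common frame, not the actual geometric hashing implementation:

```python
def descriptor(p, points, k=2, grid=1.0):
    """Translation-invariant descriptor of point p: offsets to its k
    nearest neighbours, rounded onto a coarse grid so similar local
    constellations hash to the same key."""
    nn = sorted((q for q in points if q != p),
                key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)[:k]
    return tuple(sorted((round((q[0]-p[0]) / grid),
                         round((q[1]-p[1]) / grid)) for q in nn))

def match(points_a, points_b):
    """Hash descriptors of set A, then look up each point of set B;
    points with identical local constellations become correspondences."""
    table = {}
    for p in points_a:
        table.setdefault(descriptor(p, points_a), []).append(p)
    pairs = []
    for p in points_b:
        for q in table.get(descriptor(p, points_b), []):
            pairs.append((q, p))
    return pairs

a = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0), (20.0, 20.0)]
b = [(x + 7.0, y - 2.0) for x, y in a]   # the same beads, translated
pairs = match(a, b)
```

Because lookup in the hash table is constant-time per point, matching scales with the number of interest points rather than with the number of point pairs, which is what makes the approach fast on these large volumes.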
The aligned dataset, as well as all intermediate steps, is interactively displayed in the BigDataViewer. The user has the option to verify and interact with the alignment process at any time to confirm and potentially guide proper reconstruction of complicated datasets [supplement]. For downstream analysis of the data, a growing number of BDV plugins can be used directly on the reconstructed dataset [refs]. Fusion of the data into 3d images is enabled by automatically or manually defining bounding boxes on the dataset, which can comprise regions of interest or the entire dataset [supplement]. We implemented an algorithm that allows these bounding boxes to be subsequently fused at full resolution, or downsampled in real time, by combining multithreaded fusion of the currently visible plane with blockwise multi-resolution loading [supplement].
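Weighted fusion of overlapping tiles can be sketched in 1d, assuming a hypothetical triangular blending weight per tile (the actual fusion is 3d, multithreaded, and blockwise):

```python
def fuse(tiles, length):
    """Fuse 1-d tiles into one output by weighted averaging: each tile
    contributes with a weight that ramps down towards its borders, so
    seams in the overlap regions are smoothed out.
    tiles: list of (offset, samples)."""
    acc = [0.0] * length
    wsum = [0.0] * length
    for offset, samples in tiles:
        n = len(samples)
        for i, v in enumerate(samples):
            # triangular weight: highest at the tile centre, low at edges
            w = 1.0 - abs(i - (n - 1) / 2.0) / ((n - 1) / 2.0 + 1.0)
            pos = offset + i
            if 0 <= pos < length:
                acc[pos] += w * v
                wsum[pos] += w
    return [a / w if w > 0 else 0.0 for a, w in zip(acc, wsum)]

# two overlapping tiles of a constant-intensity sample fuse seamlessly
tile_a = (0, [5.0] * 6)
tile_b = (4, [5.0] * 6)
fused = fuse([tile_a, tile_b], 10)
```

The normalization by the accumulated weights means a constant sample stays constant across tile borders, which is the defining property of a seamless blend.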
Deconvolution based on measured point spread functions (PSFs) is an established method to increase contrast and resolution in light microscopy acquisitions [refs]. PSFs are typically measured using fluorescent beads embedded alongside the sample, since in lightsheet microscopy actual PSFs often differ significantly from theoretically computed ones due to the variable precision of lightsheet alignment in every experiment [ref]. However, embedding of fluorescent beads is challenging due to the complex clearing protocol. Here, we developed an embedding protocol for fluorescent beads in polymerization solution that enables measurement of realistic PSFs right after the actual imaging process [materials]. We extended existing deconvolution code to handle multi-tile acquisitions [ref], enabling BigStitcher to perform deconvolution on selected bounding boxes, and we show that image quality can be significantly improved [fig].
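The standard Richardson-Lucy iteration that underlies such deconvolution can be sketched in 1d with a measured PSF (illustrative only; the actual multi-tile code operates on 3d bounding boxes):

```python
def convolve(signal, kernel):
    """'Same'-size 1-d convolution with zero padding outside the signal."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for k, kv in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                s += signal[j] * kv
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy update:
    estimate <- estimate * ((observed / (estimate * psf)) * flipped psf)."""
    est = [max(v, 1e-9) for v in observed]        # positive initial guess
    flipped = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [o / max(b, 1e-9) for o, b in zip(observed, blurred)]
        corr = convolve(ratio, flipped)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]                           # a small measured blur
truth = [0.0] * 4 + [10.0] + [0.0] * 4            # a point source (bead)
observed = convolve(truth, psf)                   # what the camera records
restored = richardson_lucy(observed, psf)
```

The iteration progressively concentrates the blurred intensity back into the point source, which is exactly the contrast and resolution gain reported for the deconvolved bounding boxes.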
BigStitcher is a comprehensive software package that enables efficient and automatic processing of large cleared samples that can range up to many terabytes in size. It addresses major unsolved issues such as easy import, management of very large images and of datasets acquired on a non-regular grid, globally optimal alignment of sparse datasets, illumination selection, multiview alignment of multi-tile acquisitions, PSF extraction, and interactive fusion. Additionally, the user has the option to intervene at almost any point during the reconstruction and manually correct results; these decisions are supported by a comprehensive visualization of the alignment process [supplement]. Automatic reconstruction of even large datasets can be achieved within tens of minutes, and BigStitcher clearly outperforms existing software in terms of performance, functionality, and user interaction [table]. While BigStitcher was developed for the reconstruction of cleared samples, it also supports two-dimensional acquisitions, standard confocal and widefield image datasets, as well as multi-tile and multiview lightsheet acquisitions of fixed [fig] and expansion microscopy samples [fig]. It supports ImageJ macro recording for most of its functionality and can thus easily be automated [supplement]. BigStitcher is open source and provided as a Fiji [ref] plugin with comprehensive documentation (http://imagej.net/BigStitcher).