Main

The function of biological tissues is strongly linked to their composition and organization. Advances in imaging and spatial molecular profiling technologies make it possible to address these questions by interrogating tissue architecture with ever-growing comprehensiveness, resolution and sensitivity1,2. Existing spatial molecular profiling methods quantify DNA, RNA, protein and/or metabolite abundances in situ3,4. Several of these technologies employ light microscopy, providing spatial resolution of morphological features at length scales from the subcellular to entire organisms. Spatial omics technologies are advancing rapidly, and individual data modalities and methods feature distinct advantages and limitations, such as trade-offs in spatial resolution, the extent of molecular multiplexing and detection sensitivity. The ability to efficiently integrate and then operate on data from different spatial omics modalities promises to be instrumental for the construction of holistic views of biological systems.

While progress has been made in the analysis of individual spatial omics datasets, integration of uni- and multimodal spatial omics data entails important practical challenges not sufficiently addressed by existing solutions5,6,7 (Extended Data Table 1, Supplementary Note 1 and Supplementary Table 1). Even basic operations, such as loading datasets into analysis pipelines in a coherent manner, are hampered by the diversity in data types (for example, tabular data for sequencing and tens- to hundreds-of-gigabyte dense arrays for images) and file formats (for example, technology-specific vendor formats). In addition, individual spatial omics modalities can differ vastly in spatial resolution, and the spatial regions for data acquisition in a tissue are often not aligned. Thus, to integrate such data, they must be appropriately transformed and aligned to a common coordinate system (CCS), which is a prerequisite for the establishment of global common coordinate frameworks (CCFs)8. Finally, untangling the complexity of multimodal spatial omics datasets requires expert knowledge and motivates approaches that enable large-scale interactive data exploration and annotation. Thus, to unlock the full potential of emerging spatial multiomics studies2,9, there is a need for computational infrastructures to store, explore, analyze and annotate data across the full breadth of spatial omics technologies with a unified programmatic interface.

The SpatialData framework enables the findable, accessible, interoperable, reusable (FAIR)10 integration of multimodal spatial omics data. A language-independent storage format increases the interoperability of data sources, while the Python library standardizes access to, and operations across, different data types. The SpatialData format supports all major spatial omics technologies and derived quantities (Fig. 1a,c, Supplementary Note 2 and Supplementary Table 2). Briefly, spatial datasets are represented using five primitive elements: Images (raster images), Labels (for example, raster segmentation masks), Points (for example, molecular probes), Shapes (for example, polygon regions of interest, array capture locations and so on) and Tables (for example, molecular quantifications and annotations) (Supplementary Tables 2 and 3). The file format also tracks coordinate transformation or alignment steps applied to individual datasets. Dataset collections can be stored within a single SpatialData store, thereby facilitating joint integrative analyses. The SpatialData format builds on the Open Microscopy Environment–Next-Generation File Format (OME–NGFF) specifications and leverages the Zarr file format (Supplementary Fig. 1), thereby offering performant, interoperable access for both traditional file system- and cloud-based storage11,12 (Supplementary Note 3).
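
For illustration, a minimal sketch of how these primitives can be assembled and serialized with the Python library is shown below; the data are synthetic, all element names are hypothetical and constructor details (for example, the table argument of SpatialData) may differ between library versions.

```python
import numpy as np
from anndata import AnnData
from spatialdata import SpatialData
from spatialdata.models import (
    Image2DModel,
    Labels2DModel,
    PointsModel,
    ShapesModel,
    TableModel,
)

# Raster elements: a (c, y, x) image and a (y, x) segmentation mask.
image = Image2DModel.parse(np.random.rand(3, 512, 512), dims=("c", "y", "x"))
labels = Labels2DModel.parse(np.zeros((512, 512), dtype=np.uint16), dims=("y", "x"))

# Vector elements: molecule coordinates and circular capture locations.
points = PointsModel.parse(np.random.rand(1000, 2) * 512)
circles = ShapesModel.parse(
    np.array([[128.0, 128.0], [256.0, 256.0]]), geometry=0, radius=32.0
)

# A Table annotating the two capture locations with molecular quantifications.
table = TableModel.parse(
    AnnData(
        X=np.random.poisson(1.0, size=(2, 10)).astype(np.float32),
        obs={"region": ["circles", "circles"], "instance_id": [0, 1]},
    ),
    region="circles",
    region_key="region",
    instance_key="instance_id",
)

sdata = SpatialData(
    images={"image": image},
    labels={"labels": labels},
    points={"points": points},
    shapes={"circles": circles},
    table=table,
)
sdata.write("example.zarr")  # one OME-NGFF-compliant Zarr store for the collection
```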

Fig. 1: Design overview and core functionality of SpatialData.
figure 1

a, The SpatialData storage format represents raw and derived data from a wide range of spatial omics technologies in a unified manner. The format builds on five primitive elements (SpatialElements), which are serialized to a Zarr store in an OME–NGFF-compliant manner. b, The SpatialData Python library implements operations for data access, alignment, queries and aggregation of spatial datasets. Coordinate transformations can be specified to align multiple modalities to a CCS, allowing for deployment of spatial queries and aggregation operators across modalities. c, SpatialData is compatible with common data formats, including vendor-specific file formats. Collections of datasets can be stored in a single Zarr store and are represented as a SpatialData object. d, Datasets stored in SpatialData format can be annotated interactively using the integrated napari-spatialdata plugin; SpatialData provides functionality for the generation of both interactive and static plots. e, SpatialData implements a PyTorch Dataset class, thereby facilitating the training of deep learning models directly from SpatialData objects. f, SpatialData builds on established standards and software, thereby providing interoperability with existing multimodal analysis approaches including Squidpy15, Scanpy14, MONAI23 and scvi-tools24, among others.

The SpatialData Python library represents this format as SpatialData objects in memory, which support lazy loading of larger-than-memory data (Fig. 1b). The library also provides reader functions for widely used spatial omics technologies (Fig. 1c and Supplementary Table 3), as well as versatile functionalities for manipulating and accessing SpatialData objects and for defining CCSs of biological tissues8. Briefly, each individual dataset is associated with a modality-specific coordinate transformation (Fig. 1b) that includes affine transformations and composite operations. Once aligned, a collection of datasets can be queried (Extended Data Fig. 1) and aggregated (Extended Data Fig. 2)—for example, using spatial annotations at diverse scales (cells, grids, anatomical regions) and both within and across modalities. The query and aggregation interfaces also allow for the creation of new datasets grouped by biologically informed factors from large dataset collections, thereby facilitating exploration, selective data sharing and access.
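
As a minimal sketch of the query interface (assuming a SpatialData object named sdata whose elements share a coordinate system named "global"), a single bounding-box query crops all elements of the collection at once:

```python
from spatialdata import bounding_box_query

# Crop images, labels, points and shapes to the same rectangular region,
# expressed in the shared "global" coordinate system; coordinates are
# interpreted after applying each element's registered transformation.
crop = bounding_box_query(
    sdata,
    axes=("x", "y"),
    min_coordinate=[5000, 5000],
    max_coordinate=[6000, 6000],
    target_coordinate_system="global",
)
```

The result is again a SpatialData object, so queries, aggregations and downstream analyses compose naturally.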

SpatialData has a napari plugin for interactive annotation (napari-spatialdata; Fig. 1d and Extended Data Fig. 3). The napari-spatialdata plugin can be used for the interactive definition of spatial annotations such as drawing regions of interest, or to define landmarks for guiding multidataset registration. Static figures and graphics can be created using the spatialdata-plot library (Extended Data Fig. 4).

The SpatialData library integrates seamlessly with the Python ecosystem by building on standard scientific Python data types. We have implemented a PyTorch Dataset class to effectively train deep learning models directly from SpatialData objects (Fig. 1e, Supplementary Note 4 and Extended Data Fig. 5). Further, thanks to the modular nature of the data representation, analysis packages in the scverse13 ecosystem such as Scanpy14, Squidpy15 and scvi-tools16 can be used for analysis of SpatialData objects (Fig. 1f and Supplementary Fig. 2). Taken together, the SpatialData framework provides infrastructure for the integration and analysis of spatial omics data.
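
As a minimal, hypothetical sketch of this pattern (the library's own ImageTilesDataset in spatialdata.dataloader is more complete, and its constructor arguments have evolved across versions), image tiles can be cut on demand from a single-scale image element via the query interface:

```python
import torch
from torch.utils.data import Dataset
from spatialdata import bounding_box_query


class TileDataset(Dataset):
    """Hypothetical minimal tile dataset over a SpatialData image element."""

    def __init__(self, sdata, centers, tile_size=64, image_key="image"):
        self.sdata = sdata
        self.centers = centers  # list of (x, y) tile centers
        self.half = tile_size // 2
        self.image_key = image_key

    def __len__(self):
        return len(self.centers)

    def __getitem__(self, idx):
        x, y = self.centers[idx]
        # Lazily crop one tile; assumes a single-scale (c, y, x) image.
        tile = bounding_box_query(
            self.sdata.images[self.image_key],
            axes=("x", "y"),
            min_coordinate=[x - self.half, y - self.half],
            max_coordinate=[x + self.half, y + self.half],
            target_coordinate_system="global",
        )
        return torch.as_tensor(tile.data.compute())
```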

To illustrate the utility of SpatialData for multimodal integration and analysis, we used the framework to represent and process data from a breast cancer study that combines hematoxylin and eosin (H&E) images and 10x Genomics Visium and Xenium assays17. The study comprises two in situ sequencing datasets (Xenium) and one spatial transcriptomics dataset (10x Visium CytAssist) from consecutive sections of a breast cancer tumor. First we used napari-spatialdata to define landmark points present in all datasets, followed by alignment of all three datasets using transformations to define a CCS (Fig. 2a). As a result of the alignment, SpatialData enabled us to identify the common spatial area, which can be accessed using SpatialData queries across datasets.

Fig. 2: Alignment and integrative analysis of three spatial datasets from breast cancer.
figure 2

a, Registration of two breast cancer Xenium replicate (rep.) slides, one Visium slide and their corresponding H&E images to a CCS based on interactively selected landmarks. b, Illustration of how spatial annotations can be transferred across datasets using the CCS. From top to bottom, spatial annotations derived from multiple datasets, including histological regions (H&E image), tumor clones (Visium-derived copy number aberrations) and cell types (Xenium and scRNA-seq). Spatial annotations, represented by different spatial elements (polygons, circles, molecules), can be transferred between datasets via the CCS. c, SpatialData queries facilitate cross-modality aggregation, quality control and benchmarking. Left and middle, cell-type fractions in Xenium computed at circular regions corresponding to Visium quantification locations; right, cell-type fraction estimates from deconvolution methods based on Visium data (using cell2location). d, Use of SpatialData queries for arbitrary geometrical quantifications. Shown are cell-type fraction estimates obtained in Xenium (derived from the paired scRNA-seq dataset) and Visium (cell2location estimates) at annotated ROIs and clones as in b. e, Comparison of gene expression quantification in Xenium and Visium using SpatialData aggregations at Visium capture locations. Left, scatter plot of the correlation coefficient of aggregated gene expression quantifications between Xenium replicates (x axis) versus that between Xenium and Visium (y axis). Shown are gene expression quantifications for 313 genes (dots) present in both Xenium and Visium. Color denotes log expression in Xenium replicate 1. Right, visualization of aggregated expression levels at Visium locations for FOXA1 (top) and UCP1 (bottom). Color bars denote raw counts.

Next we used the collective information from all three datasets to create a shared set of spatial annotations. Briefly, we selected four regions of interest (ROIs) based on histological features present in the H&E image using napari-spatialdata (Extended Data Fig. 6). We then used genome-wide transcriptome information in Visium to estimate copy number states (using CopyKat18) and to annotate major genetic subclones. Finally we annotated cell types in two Xenium replicates by transferring cell-type labels from an independent breast cancer single-cell RNA sequencing (scRNA-seq) atlas19 (ingest, implemented in scanpy14; Fig. 2b).

To exemplify how SpatialData can be used to transfer spatial annotations between datasets, we considered the masks from Visium capture locations and aggregated cell-type information from the overlapping Xenium cells to estimate cell-type fractions at each location. For comparison we also considered a deconvolution-based analysis of Visium counts (using cell2location20) with the same scRNA-seq-derived cell types19 as reference. We observed high concordance of cell-type abundance estimates between Xenium replicates (median Pearson’s R = 0.88 across Visium locations) and overall good agreement between Xenium- and deconvolution-based estimates (median Pearson’s R = 0.69).

Analogous to the aggregation at Visium locations, we considered ROIs defined from H&E and areas defined by the union of subclone locations from Visium (Fig. 2d and Supplementary Fig. 3a). Again we quantified cell-type fractions within each region, either directly using cell count fractions from Xenium or via deconvolution of the corresponding Visium data. The two Xenium replicates showed high concordance of cell-type fractions, and the Xenium- and Visium-derived estimates were consistent with one another.

As a second aggregation use case we compared expression estimates for individual genes at Visium capture locations using either Xenium or Visium data. We again used the transferred Visium capture locations to aggregate individual molecule counts from Xenium into the Visium masks (Fig. 2e and Supplementary Fig. 3b). As expected, the aggregated counts were highly concordant between Xenium replicates (median Pearson’s R = 0.62; Fig. 2e and Supplementary Fig. 3c–e) and, to a lesser extent, between Xenium and Visium counts (median Pearson’s R = 0.48; Supplementary Fig. 3c–e). We also noted a direct relationship between overall transcript abundance and the agreement between different tissue sections and technologies (Fig. 2e).

In sum, these examples illustrate the flexibility of the aggregation functionality, which can be applied between SpatialElements of different kinds (points, circular capture locations, cells and larger anatomical ROIs) to transfer diverse types of spatial annotation (cell expression, cell-type fractions). Further examples and advanced use cases of SpatialData aggregation operations are discussed in Extended Data Fig. 2.

SpatialData facilitates the processing of a wide range of uni- and multimodal datasets. The online documentation of SpatialData comes with vignettes that illustrate additional use cases. For example, we illustrate how SpatialData can serve as a backend to facilitate the training of deep learning models (Extended Data Fig. 5 and Supplementary Note 4), and to conduct downstream analysis using spatial interpretation tools such as Squidpy (Supplementary Fig. 2). As a starting point for using SpatialData in conjunction with different technologies, we also currently provide preformatted SpatialData objects from >40 datasets acquired by eight technologies (Supplementary Table 2). Interactive annotation can be performed on both single- and multimodality datasets. Finally we explored how SpatialData can align multiple fields of view into a global reference coordinate system by mapping 12 Visium slides to a large prostate section (Extended Data Fig. 7). Further information, including comprehensive documentation of the SpatialData Python library, tutorials, example datasets and a contributor guide, is available online (https://spatialdata.scverse.org).

Here we present SpatialData, a flexible, community standards-based framework for storage, processing and annotation of data from virtually any spatial omics technology available to date. The ability to flexibly create common coordinate systems by aligning datasets is a critical cornerstone for establishing comprehensive CCFs, which will unlock new analysis approaches that facilitate robust comparison and reuse of samples across studies. In conclusion, the flexibility and readily accessible solutions provided by the SpatialData framework enable new possibilities in analysis and enhance the reproducibility of integrated spatial analyses.

As the uptake of SpatialData continues to grow, its utility will increase further. Ongoing developments (discussed in Supplementary Notes 5 and 6) extend the interoperability of SpatialData with R/Bioconductor21, provide support for multiscale point and polygon representations—such as polygonal meshes and five-dimensional volumetric images (that is, czyx images with an additional time component)—and support cloud-based data access both programmatically and via the visualization tool Vitessce22. In summary, SpatialData provides an open and universal data framework for spatial omics.

Methods

SpatialData framework

The SpatialData framework comprises a core package, spatialdata, and the associated satellite packages napari-spatialdata, spatialdata-io and spatialdata-plot, compatible with Python 3.9 and above. All code is available on GitHub as part of the scverse organization and is licensed under the permissive ‘BSD 3-Clause License’. The project structures inherit from the scverse cookiecutter and the napari plugin cookiecutter, thus implementing unit tests and precommit checks in a continuous integration setting. The documentation is built using Sphinx and hosted on Read the Docs. It includes application programming interface (API) descriptions, example notebooks and a table with links to downloadable spatial omics datasets. Each dataset can be downloaded in full (.zip) or accessed directly from the cloud (public S3 storage). Documentation, tutorials and sample data can be found in the links below.

We also provide a contribution guide and technical design document to encourage adoption. Users can reach out to the core development team via the GitHub Issues bug-tracking system. To encourage collaboration between the imaging and scverse communities we have created a public chat stream on the imagesc Zulip messaging platform: https://imagesc.zulipchat.com/#narrow/stream/329057-scverse.

SpatialData framework dependencies

The framework depends on routinely used Python libraries. In detail, the spatialdata package depends on networkx, numpy (scientific stack), anndata (single-cell data), dask-image, multiscale-spatial-image, ome-zarr-py, spatial-image, xarray, xarray-schema, xarray-spatial, zarr (raster spatial data), geopandas, pyarrow, pygeos, shapely (vector spatial data), fsspec, rich, tqdm, typing_extensions (utilities) and torch (deep learning, optional dependency).

The satellite packages spatialdata-io, spatialdata-plot and napari-spatialdata require additional dependencies; we refer the reader to the Reporting Summary for a complete list, and to the pyproject.toml and setup.cfg files of the corresponding GitHub repositories for the most up-to-date list, as the packages and their dependencies continuously evolve.

All packages in the SpatialData framework are routinely published to PyPI via GitHub Actions and, as such, pip can be used readily to install the software and all its dependent libraries. Conda support is in preparation.

Raw human breast cancer Xenium and Visium data

We downloaded the raw data from https://www.10xgenomics.com/products/xenium-in-situ/preview-dataset-human-breast.

Loading Xenium and Visium datasets into SpatialData

The 10x Xenium and Visium readers from spatialdata-io were used to read the data into SpatialData objects. For the Xenium datasets, the DAPI channel was stored as a multiscale Image, cell and nuclei segmentation masks and boundaries were stored as Shapes elements, and the transcripts were stored as Points. The metadata and count matrices were stored as a Table in the SpatialData object. For the Visium dataset, the H&E image was stored as a multiscale Image, the array capture areas (circles) were stored as Shapes and the count matrix and annotations were stored in the Table.
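
In code, loading reduces to one reader call per dataset; the paths below are placeholders and reader-specific options are omitted.

```python
from spatialdata_io import visium, xenium

# Convert vendor output folders into SpatialData objects.
sdata_xenium = xenium("data/xenium_rep1/")  # DAPI image, masks, transcripts, table
sdata_visium = visium("data/visium/")       # H&E image, capture circles, table

# Write each dataset to a single self-contained Zarr store.
sdata_xenium.write("xenium_rep1.zarr")
sdata_visium.write("visium.zarr")
```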

Cell-type annotation of Xenium replicates

We annotated cells from Xenium replicates using a publicly available scRNA-seq breast cancer atlas19 comprising nine malignant and normal cell types and 29 subtypes. After subsetting the atlas to the 313 genes present in the Xenium panel, we applied the ingest method for label transfer as implemented in the Scanpy package (v.1.9)14 to annotate cells from the Xenium replicates. We transferred major cell-type labels first (coarse grained) and then, within each class, we mapped minor cell types (fine grained). In the current analysis only major cell types are shown. The nine major cell types are B cells, cancer-associated fibroblasts (CAFs), cancer epithelial, endothelial, myeloid, normal epithelial, plasmablasts, perivascular-like cells (PVL) and T cells.
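
The label transfer step can be sketched as follows; the variable names (adata_ref for the atlas, adata for the Xenium cells) and the label column are hypothetical, and normalization of the two count matrices is omitted.

```python
import scanpy as sc

# Subset the reference atlas to the genes shared with the Xenium panel.
adata_ref = adata_ref[:, adata.var_names].copy()

# ingest requires a fitted embedding on the reference.
sc.pp.pca(adata_ref)
sc.pp.neighbors(adata_ref)
sc.tl.umap(adata_ref)

# Project the Xenium cells onto the reference and transfer the labels.
sc.tl.ingest(adata, adata_ref, obs="celltype_major")
```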

Alignment to create common coordinate systems

We selected three landmark points on the images from the two Xenium replicates and the Visium dataset. Landmark points must be selected on each image in the same order, such that there is a one-to-one spatial correspondence between the sets of points. Xenium replicate 1 was used as the reference to which Xenium replicate 2 and Visium were aligned using the SpatialData function align_elements_using_landmarks. We used napari-spatialdata to annotate the landmark points and to view the results of the alignment. Internally, Dask’s lazy loading and Zarr’s multiscale representation made it possible to performantly explore and zoom into the datasets, even on a low-memory device such as a standard laptop.
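
A sketch of this step, with hypothetical element names, is shown below; the function estimates a similarity transformation (rotation, scaling and translation) from the paired landmarks and registers it under a new coordinate system.

```python
from spatialdata.transformations import align_elements_using_landmarks

# Landmark Shapes were drawn interactively in napari-spatialdata.
affine = align_elements_using_landmarks(
    references_coords=sdata.shapes["xenium_rep1_landmarks"],
    moving_coords=sdata.shapes["xenium_rep2_landmarks"],
    reference_element=sdata.images["xenium_rep1_image"],
    moving_element=sdata.images["xenium_rep2_image"],
    reference_coordinate_system="global",
    moving_coordinate_system="global",
    new_coordinate_system="aligned",
)
# Both elements are now mapped into the shared "aligned" coordinate system.
```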

Computation of cell-type fractions for Visium

Following alignment, the shared area between each cell from the Xenium replicates and each Visium location was computed. Cell-type fractions were then computed for each Visium location based on the surface fraction of the location covered by each cell type. This was done using the SpatialData aggregate function with fractions=True, and was performed separately for Xenium replicates 1 and 2.
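
In code this corresponds to a single call of the aggregation API; the element and column names below are hypothetical, and the cell-type labels are assumed to be stored as a column of the Shapes element.

```python
from spatialdata import aggregate

# Aggregate the categorical per-cell annotation "cell_type" of the Xenium
# cell boundaries into the Visium circles; with fractions=True the result
# reports, for each circle, the fraction of its surface covered by each type.
ct_fractions = aggregate(
    values=sdata.shapes["xenium_rep1_cell_boundaries"],
    by=sdata.shapes["visium_circles"],
    value_key="cell_type",
    agg_func="sum",
    fractions=True,
    target_coordinate_system="aligned",
)
```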

Cell-type deconvolution using cell2location

We used cell2location (v.0.1.3)20 to estimate cell-type fractions at Visium locations, with the aforementioned breast cancer atlas as the reference. For this task we operated on the subset of 313 genes present in the Xenium replicates, subsetting the Visium dataset and the breast cancer atlas to those genes. We used the default parameters as suggested in the cell2location tutorial (https://cell2location.readthedocs.io/en/latest/notebooks/cell2location_tutorial.html). The analysis can be found at https://github.com/scverse/spatialdata-notebooks/tree/main/notebooks/paper_reproducibility. For visualization, only cell types contributing at least 5% per Visium capture location were taken into account; the quantities at each location were then normalized to sum to 1.
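
For the visualization step, the thresholding and renormalization described above amount to a few array operations; fractions below is a hypothetical locations × cell-types matrix of cell2location abundance estimates with nonzero rows.

```python
import numpy as np

# Convert abundances to per-location fractions.
fractions = fractions / fractions.sum(axis=1, keepdims=True)

# Discard cell types contributing <5% at a location, then renormalize
# so that each location again sums to 1.
fractions[fractions < 0.05] = 0.0
fractions = fractions / fractions.sum(axis=1, keepdims=True)
```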

ROI selection with napari-spatialdata

Following alignment, four ROIs were selected based on the H&E image from the Visium dataset using the napari-spatialdata plugin, and these ROIs were then added to the aligned Xenium replicates. Each ROI was selected based on its distinct microanatomical characteristics and then labeled manually based on the underlying cell-type composition from the Xenium replicates.

Clone detection on Visium using CopyKat

We used CopyKat (v.1.1.0)18 with default parameters to estimate copy number states from the Visium count matrix, followed by hierarchical clustering, which identified three major clusters among the locations labeled as ‘aneuploid’; these three clusters were used as genetic subclones. We also transferred clone labels to overlapping cells from Xenium replicates; these labels were stored as a SpatialData Table element. This analysis was conducted separately in R (see the notebooks repository: https://github.com/scverse/spatialdata-notebooks/tree/main/notebooks/paper_reproducibility).

Visium’s AnnData table was saved in .h5ad AnnData format14,25 for loading and analysis in R, and clone labels were then transferred back to SpatialData via .h5ad. There are ongoing efforts in the Bioconductor community to enable direct loading of AnnData tables into R from Zarr, such as anndataR26, which would obviate the need for exporting as .h5ad (HDF5 format) when completed.
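
The round trip can be sketched as follows; the file names and the clone column are hypothetical.

```python
import anndata as ad

# Export the Visium table for analysis in R.
sdata.table.write_h5ad("visium_table.h5ad")

# ...run CopyKat and clustering in R, writing an updated .h5ad...

# Re-attach the clone labels to the SpatialData table.
annotated = ad.read_h5ad("visium_table_with_clones.h5ad")
sdata.table.obs["clone"] = annotated.obs["clone"].values
```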

ROI cell-type fractions

We next computed, for each ROI and clone, the fractions of cell types for the cells contained within them. The SpatialData aggregation APIs offer a convenient interface to compute these metrics, regardless of whether what is being aggregated is a set of circles or polygons, and whether the target region is a polygonal ROI or a set of circles defining a particular clone.

Transcript aggregations

For each Visium capture location we aggregated the transcripts from the Xenium replicates falling into that location; we performed this analysis for Xenium replicates 1 and 2 separately. This yielded two aggregated count matrices that were saved as separate layers in the table of the Visium SpatialData object.
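
A sketch of this aggregation is shown below; the element and key names are hypothetical, and the return type of aggregate has differed across library versions (an AnnData in early releases, a SpatialData holding a table in later ones).

```python
from spatialdata import aggregate

# Count the Xenium transcripts (a Points element with a categorical gene
# column, assumed to be named "feature_name") falling inside each circle.
agg = aggregate(
    values=sdata.points["xenium_rep1_transcripts"],
    by=sdata.shapes["visium_circles"],
    value_key="feature_name",
    agg_func="count",
    target_coordinate_system="aligned",
)

# Store the per-location counts as an extra layer of the Visium table;
# assumes the row order matches that of the existing table (for older
# versions returning an AnnData, use agg.X instead of agg.table.X).
sdata.table.layers["xenium_rep1_counts"] = agg.table.X
```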

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.