
Missing tiles in Xenium H&E image #235

Open
marcovarrone opened this issue Nov 6, 2024 · 4 comments

Comments

@marcovarrone

I ran xenium() with default parameters on official data from 10x Genomics (such as https://www.10xgenomics.com/datasets/ffpe-human-breast-with-custom-add-on-panel-1-standard).

However, I noticed that when I plotted the H&E image using the spatialdata-plot package, there were some missing tiles, as shown in the following image:

[Image: H&E image rendered with spatialdata-plot, showing black missing tiles]

The missing tiles are identical across repeated runs of xenium(), so it doesn't look like a random drop.

I checked the original .ome.tif image in QuPath and there are no missing tiles.
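As a side note (my illustration, not part of the original report): one way to confirm that the drop is deterministic is to scan the rendered array for all-zero tiles programmatically. A minimal numpy sketch, with a made-up tile size and synthetic image:

```python
import numpy as np

def find_black_tiles(img: np.ndarray, tile: int = 256) -> list[tuple[int, int]]:
    """Return (row, col) indices of tiles whose pixels are all zero."""
    h, w = img.shape[:2]
    black = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            if not img[r : r + tile, c : c + tile].any():
                black.append((r // tile, c // tile))
    return black

# synthetic example: a white image with one zeroed-out tile
img = np.full((512, 512, 3), 255, dtype=np.uint8)
img[256:512, 0:256] = 0  # simulate a missing tile
print(find_black_tiles(img))  # [(1, 0)]
```

Running this on the array produced by two separate xenium() runs and comparing the returned indices would show whether the same tiles go missing each time.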

Package versions:

  • spatialdata = 0.2.5
  • spatialdata-io = 0.1.5
  • spatialdata-plot = 0.2.7
@marcovarrone marcovarrone changed the title Holes in Xenium H&E image Missing tiles in Xenium H&E image Nov 6, 2024
@LucaMarconato
Member

Hi @marcovarrone, strange bug. Which H&E image are you referring to? The "Supplemental: Post-Xenium H&E image (OME-TIFF)" from Tissue sample 1 or Tissue sample 2?

@LucaMarconato
Member

Ok, sample 1.

I tried opening the data with Photoshop (ImageJ gave an error), and there seem to be some artifacts/pixel repetitions exactly where spatialdata-plot (or napari-spatialdata) shows the black squares:

[Image: full H&E image in Photoshop, with artifacts where the black squares appear]

Here I zoom into one of those areas:

[Image: zoom into one of the artifact areas]

It therefore seems to be an error in the data.

Luckily, FIJI doesn't show any artifact in the first downscaled image. Therefore, for this particular dataset I'd consider manually loading the .ome.tif file and constructing the multiscale image from scale1 instead of by downscaling scale0. You can check the functions xenium_aligned_image() and _add_aligned_images() in xenium() for some code to start from.
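To illustrate the key mechanism (my sketch, not from the thread): tifffile can read a specific pyramid level directly via its level argument, which is what lets you skip the corrupt scale 0. A self-contained example that writes a tiny two-level pyramid and reads back scale 1 (the file name and sizes are made up):

```python
import tempfile
from pathlib import Path

import numpy as np
import tifffile

# write a tiny two-level pyramid: full resolution plus one 2x-downscaled level
path = Path(tempfile.gettempdir()) / "pyramid_demo.ome.tif"
scale0 = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
scale1 = scale0[::2, ::2]  # naive 2x downscale, for illustration only

with tifffile.TiffWriter(path) as tif:
    tif.write(scale0, subifds=1)      # main image, announcing 1 sub-resolution level
    tif.write(scale1, subfiletype=1)  # the reduced-resolution level

# read the first downscaled level directly, never touching scale 0
img = tifffile.imread(path, level=1)
print(img.shape)  # (128, 128, 3)
```

The real H&E file already contains the pyramid levels, so only the imread(..., level=1) part applies there.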

@LucaMarconato
Member

This is scale 1 opened with FIJI:

[Image: scale 1 of the H&E image in FIJI, without artifacts]

@LucaMarconato
Member

LucaMarconato commented Feb 12, 2025

Here is example code achieving what I described above (parsing scale 1 of the H&E image instead of scale 0) and using that scale to compute the DataTree (multiscale image); I quickly wrote it starting from xenium_aligned_image().

import tifffile
from pathlib import Path
import pandas as pd
from spatialdata.models import (
    Image2DModel,
)
from spatialdata.transformations.transformations import Affine
import xmltodict
from spatialdata import SpatialData
from napari_spatialdata import Interactive

image_path = Path(
    "/Users/macbook/embl/projects/basel/spatialdata-sandbox/xenium_rep1_io/data/xenium/outs/Xenium_V1_FFPE_Human_Breast_IDC_With_Addon_he_unaligned_image.ome.tif"
)
alignment_file = Path(
    "/Users/macbook/embl/projects/basel/spatialdata-sandbox/xenium_rep1_io/data/xenium/outs/Xenium_V1_FFPE_Human_Breast_IDC_With_Addon_he_imagealignment.csv"
)
image_models_kwargs = {"chunks": (1, 4096, 4096), "scale_factors": [2, 2, 2, 2]}

assert image_path.exists(), f"File {image_path} does not exist."
assert alignment_file.exists(), f"File {alignment_file} does not exist."

# here we read the data fully in-memory; it requires only 1.14 GB of memory
image = tifffile.imread(image_path, level=1)

# get the metadata for scale0 (the OME-XML sizes are strings, so cast them to int)
with tifffile.TiffFile(image_path, is_ome=True) as tif:
    ome_metadata = xmltodict.parse(tif.ome_metadata)
    pixels = ome_metadata["OME"]["Image"]["Pixels"]
    sizes = {
        "x": int(pixels["@SizeX"]),
        "y": int(pixels["@SizeY"]),
        "c": int(pixels["@SizeC"]),
    }

# after manually examining the metadata, we know that the axes are ordered (y, x, c)
dims = ["y", "x", "c"]
c_coords = ["r", "g", "b"]

# the alignment CSV contains a 3x3 affine matrix in homogeneous coordinates
alignment = pd.read_csv(alignment_file, header=None).values
transformation = Affine(alignment, input_axes=("x", "y"), output_axes=("x", "y"))

image = Image2DModel.parse(
    image,
    dims=dims,
    transformations={"global": transformation},
    c_coords=c_coords,
    **image_models_kwargs,
)

SpatialData.init_from_elements({"he_image": image}).write("he_image.zarr", overwrite=True)

sdata = SpatialData.read("he_image.zarr")
Interactive(sdata)
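A brief aside on the alignment step above (my illustration, not from the thread): the Xenium image alignment CSV holds a 3x3 affine matrix in homogeneous coordinates, so mapping a pixel coordinate by hand is a single matrix-vector product. A sketch with a hypothetical matrix (the scale and translation values are made up):

```python
import numpy as np

# hypothetical alignment matrix: scale by 0.5, then translate by (100, 200)
alignment = np.array([
    [0.5, 0.0, 100.0],
    [0.0, 0.5, 200.0],
    [0.0, 0.0, 1.0],
])

def apply_affine(matrix: np.ndarray, x: float, y: float) -> tuple[float, float]:
    """Map an (x, y) coordinate through a 3x3 homogeneous affine matrix."""
    out = matrix @ np.array([x, y, 1.0])
    return out[0], out[1]

print(apply_affine(alignment, 10, 20))  # (105.0, 210.0)
```

This is the same transformation that the Affine model applies for you when you pass the CSV matrix with input_axes=("x", "y") and output_axes=("x", "y").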

One note: we currently do not have a general function that takes a .ome.tiff file as input and parses it into a DataArray or DataTree object, but we want to add such an API to spatialdata-io. In this regard, @lucas-diedrich is developing a general reader for images (.tiff, not yet .ome.tiff), and that effort could be the starting point for an .ome.tiff reader as well. Stay tuned 😁
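In the meantime, the OME-XML size lookup from the snippet above can also be done with the standard library alone, in case xmltodict is not available. A sketch on a minimal, hand-written OME header (the size values here are invented for illustration):

```python
import xml.etree.ElementTree as ET

# minimal OME-XML fragment, hand-written for illustration
ome_xml = """<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">
  <Image ID="Image:0">
    <Pixels SizeX="40867" SizeY="31318" SizeC="3" DimensionOrder="XYCZT" Type="uint8"/>
  </Image>
</OME>"""

# OME-XML uses a default namespace, so element lookups must qualify it
ns = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2016-06"}
pixels = ET.fromstring(ome_xml).find("ome:Image/ome:Pixels", ns)
sizes = {axis: int(pixels.get(f"Size{axis.upper()}")) for axis in ("x", "y", "c")}
print(sizes)  # {'x': 40867, 'y': 31318, 'c': 3}
```

With a real file, you would feed tif.ome_metadata into ET.fromstring instead of the hand-written string.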

Napari view for the code above:

[Image: napari view of the written he_image.zarr]
